\section{Introduction} \label{sec:intro} The origin of Ultra-High Energy Cosmic Rays (UHECR, with energies~$E>10^{18}$~eV) remains a mystery. Experimental results \cite{HPphoton,photonreview,Augerphoton} suggest that the UHECR flux is composed predominantly of hadronic primary particles. As charged particles, they suffer deflections in cosmic magnetic fields and do not point back directly to their sources. Indirect evidence of their origin must be used instead: the precise measurement of the energy spectrum, an estimation of the mass composition and its evolution with energy, and angular anisotropies are the three main handles on disentangling this almost century-old problem. Due to the interaction with the cosmic microwave background, UHECR suffer energy losses which limit their propagation distance \cite{GZK1,GZK2}. This ``GZK horizon'', indications of which have already been observed in the UHECR spectrum \cite{hiresGZK,augericrcSpectrum,AugerScience}, depends very sensitively on the energy and mass of the cosmic ray. In order to discern between different source scenarios, and to disentangle source characteristics from the effects of propagation, a precise knowledge of the energies of UHECR is crucial. Constraints on the composition of the cosmic ray flux at the highest energies will supply additional fundamental insight. Due to the low fluxes at ultra-high energies, the detection of UHECR can only be achieved by measuring {\it Extensive Air Showers} (EAS), cascades of secondary particles resulting from the interaction of the primary cosmic rays with the Earth's atmosphere. The measurement of the cosmic ray energy, flux, and mass composition relies on an understanding of this phenomenon. Two main EAS detection techniques have been developed over the years (see \cite{NaganoWatson} for a review): {\it surface detectors} (SD) detect the particle flux of an EAS at a particular stage of the shower development; {\it fluorescence detectors} (FD) measure the shower development through nitrogen fluorescence emission induced by the electrons in the shower. The modeling of EAS through Monte Carlo simulations is needed in both fluorescence and surface detector experiments in order to interpret the data. We will show that hadronic EAS can be characterized, to a remarkable degree of precision, by only three parameters: the primary energy $E$, the depth of shower maximum $X_{\rm max}$, and an overall normalization of the muon component, which we call $N_{\mu}$. This is what we will call {\it air shower universality} \cite{univicrc}. The parameters $X_{\rm max}$ and $N_{\mu}$ are linked to the mass of the primary particle, ranging from proton to iron, and are subject to shower-to-shower fluctuations; proton showers have a larger depth of shower maximum than iron showers, while iron showers contain $\sim 40$\% more muons than those induced by protons. Once measured, $N_{\mu}$ and $X_{\rm max}$ have to be compared with simulations to infer the cosmic ray composition and place constraints on hadronic interaction models. Previous studies have demonstrated that the energy spectra and angular distributions of electromagnetic particles \cite{Nerling,Giller}, as well as the lateral distribution of energy deposit close to the shower core \cite{Gora}, are all universal, i.e.\ they are functions of $E$, $X_{\rm max}$, and the atmospheric depth $X$ only.\footnote{The dependence on $X$ and $X_{\rm max}$ is commonly put in terms of the shower age $s$.
We will use a different parameter, $DX$, which is better suited to our purpose.} For studies of shower universality in the context of ground detectors, see \cite{billoirphoton,GAP1,GAP2}. EAS induced by photons show somewhat different properties, due to the absence of a hadronic cascade. Hence, it remains to be investigated to what extent the hadronic EAS universality studied here applies to photon showers. By sampling the longitudinal development of the electromagnetic shower component close to the core, fluorescence detectors measure both $X_{\rm max}$ and $E$. The systematic uncertainty in the energy $E$ is typically 25\%, mainly due to the uncertainties in the air fluorescence yield. A surface detector only samples the properties of an EAS at a given stage of the shower development and at several points at different distances $r$ from the shower axis. Rather than using the signal integrated over all distances, a quantity which shows large fluctuations, Hillas \cite{Hillas} proposed to use the signal at a given distance $r$ from the shower axis, $S(r)$, as a measure of the shower {\it size}, connected with the primary energy. The distance where experimental uncertainties in the {\it size} determination are minimized (the optimal distance $r_{\rm opt}$ \cite{Newton}) is mainly determined by the experiment geometry, i.e. the spacing between surface detectors. $S(r_{\rm opt})$ is then related to the primary energy of the incoming cosmic ray using Monte Carlo simulations. This calibration carries large systematic uncertainties, stemming from the hadronic models and the unknown primary cosmic ray composition. In this paper, we will show how to use air shower universality to determine the calibration of a surface detector in a model-independent way. The signal $S(r_{\rm opt})$ is the sum of two components: an electromagnetic part which is well-understood and to a good approximation depends only on $E$ and $X_{\rm max}$ of the shower; and a muon part which, in addition to $E$ and $X_{\rm max}$, depends on the model and primary composition in terms of an overall normalization. The muon fraction can be determined by requiring that the shape of the zenith angle dependence of $S(r_{\rm opt})$ at a fixed energy, which depends on the muon normalization $N_{\mu}$, match the observed one. This method determines the energy scale of the experiment as well as the average number of muons produced in the air showers at a given energy. Subsequently, we will apply air shower universality to data collected by a {\it hybrid experiment}, which combines the fluorescence technique with a surface detector. In this case, the calibration of the surface detector can be done almost independently of hadronic models and composition by using a small subset of the data (hybrid events) which are simultaneously measured by both the fluorescence and the surface detector. Applying our method to hybrid data yields an event-by-event measurement of the muon content of the shower. This can be used as an independent cross-check of the measurement from the surface detector alone. Since the electromagnetic contribution to the signal varies with zenith angle, a hybrid measurement of $N_{\mu}$ at different zenith angles probes whether the electromagnetic part is described correctly by simulations, a key ingredient in our study. Conversely, the surface detector energy scale obtained with the universality-based method offers a cross-check of the hybrid calibration of the surface detector, which uses the fluorescence energy measurement.
In this work, we will not use data from any experiment; rather, we will use the Pierre Auger Observatory as a case study. First results from this method applied to Auger data have already been presented in \cite{augericrcNmu}. While we adopt the specifications of this experiment, the method presented here can be applied to any other surface detector (for example, AGASA \cite{agasa}) or hybrid experiment (for example, Telescope Array \cite{TA}). The paper is organized as follows: in \refsec{univ} we explain air shower universality, verifying it with the two standard high energy hadronic models used in cosmic ray physics (QGSJetII and Sibyll); in \refsec{viol} the limits of air shower universality are shown; \refsec{CIC} presents the method of obtaining $N_{\mu}$ from the surface detector and determining the surface detector energy scale; in \refsec{toyMC} we validate the method using a simple Monte Carlo approach; \refsec{hybrid} shows how the approach can be applied to hybrid events; finally, the application to other experiments is discussed in \refsec{otherexp}; we conclude in \refsec{disc}. \section{Extensive air shower universality at large core distances} \label{sec:univ} The results presented in this paper were obtained from a library of simulated EAS. We used CORSIKA 6.500 \cite{corsika} with hadronic interaction models QGSJetII \cite{qgsjet,qgsjet2} and Fluka \cite{fluka} (proton and iron primaries, energies $10^{17.8}-10^{20}$~eV), and Sibyll \cite{sibyll,sibyll2} / Fluka as well as QGSJetII / Gheisha \cite{gheisha} (proton at $10^{19}$~eV). For each primary/energy combination, we simulated 80 showers at each of 7 zenith angles ranging from $0^{\circ}$ to 60$^{\circ}$. Statistical thinning was employed in the simulations as described in \cite{Kobal}, at a thinning threshold of $\varepsilon=10^{-6}$. Using lookup tables generated with GEANT4 simulations \cite{geant4}, we calculated the average response of a cylindrical water Cherenkov detector (height 1.2~m, cross section 10~m$^2$, similar to the type used in the Pierre Auger Observatory) to each shower particle hitting the ground. See \refsec{otherexp} for a discussion of the applicability to other experiments. The signals were calculated in two different approaches: 1.)~{\it Ground plane signals:} The response is calculated for a realistic water tank on the ground. 2.)~{\it Shower plane signals:} The response is calculated for a fiducial flat detector (with the same average particle response as the water tank) placed in the plane orthogonal to the shower axis ({\it shower plane}). Signals calculated in the shower plane procedure are not affected by detector geometrical effects, and are therefore independent of the zenith angle. For details on the signal calculation, see appendix \ref{sec:SP}. The shower plane signals will be useful to verify air shower universality, while the ground plane signals will be needed for the application to a realistic experiment (in our case the Pierre Auger Observatory). Due to the statistical thinning procedure employed in the shower simulation, particles were collected in a sampling area of width 0.1 in $\log_{10}\:r$ centered around the shower core distance $r$ considered. This ensures, for a wide range of slopes of the lateral distribution, that the median radius of the energy deposit in the sampling area is indeed $r$.
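For illustration, the radial bounds of this sampling area follow directly from the stated width of 0.1 in $\log_{10}\:r$; the following minimal Python sketch (the function name is ours, not part of the original analysis) makes the geometry explicit:
\begin{verbatim}
def sampling_annulus(r):
    """Radial bounds of the particle-collection area: width 0.1 in
    log10(r), centered on the core distance r (same units as r)."""
    return r * 10.0**(-0.05), r * 10.0**(0.05)

print(sampling_annulus(1000.0))   # approximately (891 m, 1122 m)
\end{verbatim}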
Signals are calculated in 18 azimuthal sectors, and normalized relative to the signal deposited by a vertically incident muon (VEM), a standard practice in surface detectors using the water Cherenkov technique. \begin{figure}[t] \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Sem-DG-p-Fe.eps} \end{minipage} \hfill \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Sempure-prof.eps} \end{minipage} \caption{{\it Left panel:} Simulated electromagnetic shower plane signals at $r=1000$~m for proton (red dots) and iron showers (blue circles) at $10^{19}$~eV as a function of $DX$. The showers are simulated with QGSJetII/Fluka at discrete zenith angles spanning 0$^{\circ}$ to $60^{\circ}$. {\it Right panel:} Simulated electromagnetic shower plane signals vs. $DX$ for different primaries and hadronic models, relative to the prediction for proton showers when using QGSJetII/Fluka.} \label{fig:Sem} \end{figure} To describe the stage of the shower development, we use the variable $DX$, defined as the distance from the detector to the shower maximum measured along the shower axis (in $\mbox{g}/\mbox{cm}^2$). For a tank on ground at a distance $r$ from the shower axis, $DX$ is: \begin{equation} DX = X_{\rm gr}\:\sec\theta -X_{\rm max} - r\:\cos\zeta\:\tan\theta\:\rho_{\rm air} \label{eq:DX} \end{equation} where $X_{\rm gr}$ is the vertical depth of the atmosphere, $\theta$ is the zenith angle, $X_{\rm max}$ is the slant depth of shower maximum, and $\zeta$ is the azimuthal angle in the shower plane such that $\zeta=0$ corresponds to a tank below the shower axis. $\rho_{\rm air}\approx 10^{-3}\mbox{g}/\rm cm^3$ is the density of air at ground level (see also \reffig{asym_sketch} in the appendix). Often, we will consider signals averaged over azimuth. In this case, $DX$ is simply given by: \begin{equation} DX = X_{\rm gr}\:\sec\theta - X_{\rm max} \label{eq:DX-Xmax} \end{equation} \subsection{Electromagnetic and muon shower plane signals} \label{sec:signals} At large core distances ($r \gtrsim 100$~m), the particle flux of EAS at ground is dominated by electromagnetic particles ($e^+$, $e^-$, $\gamma$) and muons. Throughout the paper, we include the signal from the electromagnetic products of in-flight muon decay in the muon contribution, separating it from the `pure' electromagnetic part. \reffig{Sem} (left panel) shows the electromagnetic shower plane signals $S_{\rm EM}$ of simulated proton and iron showers at $10^{19}$~eV as a function of $DX$ (in $\mbox{g}/\mbox{cm}^2$) for a core distance of 1000~m. For each tank, we calculate the corresponding $DX$ via \refeq{DX} using the azimuth angle of the tank. Since zenith angle dependent detector geometry effects are removed in the shower plane treatment, we are able to compare the signals from a wide range of zenith angles. The electromagnetic signal shows a strong evolution with $DX$, reaching a maximum at $DX_{\rm peak}$ and being rapidly attenuated at larger $DX$. $DX_{\rm peak}$ depends on core distance, being $0\:\mbox{g}/\mbox{cm}^2$ very close to the core and $\approx 200\:\mbox{g}/\mbox{cm}^2$ at 1000~m. This shift of the maximum is only mildly dependent on $r$ in the range $400-1600$~m and can be naturally explained by diffusion of electromagnetic particles away from the shower axis. Note that the overall electromagnetic signal as well as its evolution are slightly different for protons and iron.
This is apparent in the right panel of \reffig{Sem}, where the ratio of the signals obtained from different primary/model combinations to proton-QGSJetII is shown as a function of $DX$. The differences between models are around 5--10\%, smaller than the deviation between proton and iron. This result is an extension to large $r$ of previous results \cite{Gora,Giller,Giller2} on the universality of the electromagnetic EAS component at small core distances. We address the difference ($\sim$ 15\%) in $S_{\rm EM}$ between protons and iron in \refsec{viol}. \begin{figure} \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Smu-DG-p-Fe.eps} \end{minipage} \hfill \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Smu-prof.eps} \end{minipage} \caption{{\it Left panel:} Simulated muon signals $S_{\mu}$ at $r=1000$~m vs. $DX$ for the same $10^{19}$~eV proton (red dots) and iron showers (blue circles) as in \reffig{Sem}. Note that $S_{\mu}$ includes the contribution from muon decay products. {\it Right panel:} Simulated muon signals vs. $DX$ for different primaries and hadronic models relative to the muon signal predicted for proton showers when using QGSJetII/Fluka. Note the difference in scale compared to \reffig{Sem}.} \label{fig:Smu} \end{figure} \reffig{Smu} (left panel) shows the evolution of the muon signal $S_{\mu}$ with $DX$ (again for proton and iron showers at $E=10^{19}$~eV and $r$=1000~m). $S_{\mu}$ shows a distinctly different behavior: it peaks at $DX_{\rm peak} \approx 400\:\mbox{g}/\mbox{cm}^2$, and it is attenuated much more slowly than $S_{\rm EM}$. As expected, there is a dependence of the absolute normalization of the signals on the primary particle and hadronic model, which is clearly seen in \reffig{Smu} (right panel) where we again show $S_{\mu}$ obtained for different primaries and models relative to that of proton-QGSJetII. As for $S_{\rm EM}$, only differences in normalization and not in shape are apparent. We verified that the primary- and model-independence of the electromagnetic and muon signal evolution holds for shower core distances between 100~m and 1000~m. \subsection{Ground signal parameterization} \label{sec:param} In the previous section, we have shown that the evolution of the shower plane signals at a given shower core distance is only very weakly dependent on the primary particle or hadronic model considered. Therefore, a simple parameterization of the signals is possible. In this work, since we use the Pierre Auger Observatory as a case study, we perform such a parameterization for $r$=1000~m. It has been shown \cite{augericrcS1000} that the main observable in the surface detector of the Pierre Auger Observatory, $S(1000)$, is indeed a good measurement of the azimuth-averaged signal of particles at a core distance of $r=1000$~m.
Hence, we separately parameterize the azimuth-averaged ground plane electromagnetic and muon signal at $10^{19}$~eV ($S_{\rm EM}(1000)$ and $S_{\mu}(1000)$, the total predicted signal being the sum of both), using the incomplete gamma, or Gaisser-Hillas-type function: \begin{equation} S(1000,\:DX) = S_{\rm max}\left ( \frac{DX-X_0}{DX_{\rm peak}-X_0} \right )^{\alpha} \exp \left ( \frac{DX_{\rm peak} - DX}{\lambda} \right ), \;\; \alpha \equiv \frac{DX_{\rm peak}-X_0}{\lambda}\; \label{eq:gh} \end{equation} The four free parameters of this function are: $S_{\rm max}$ (the peak signal at 1000~m); $DX_{\rm peak}$ (the slant depth, measured relative to the shower maximum, at which the peak signal is reached); $\lambda$ (the attenuation length after the maximum); and $X_0$ (an additional shape parameter). \reffig{Smufit} shows the results of the fit for the muon signal. We have simultaneously fitted the predictions for different primaries (proton, iron) and different models (QGSJetII, Sibyll), keeping a separate normalization ($S_{\rm max}$) for each, while $\lambda$ and $DX_{\rm peak}$ are common to all. $X_0$ is not fitted but fixed to $-200\:\mbox{g}/\mbox{cm}^2$. The resulting parameters are summarized in \reftab{fitpar}, with $S_{\rm max}$ given for proton-QGSJetII. For the other model/primary combinations, we define a {\it relative} muon normalization given by $N_{\mu} = S_{\rm max}/S_{\rm max;ref}$, where we take proton-QGSJetII as the reference $S_{\rm max;ref}$. The $N_{\mu}$ for different models and primaries are listed in \reftab{sh2sh}. \begin{figure}[t] \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Smu-DG-fits.eps} \caption{Parameterization of the muon ground plane signal ($E=10^{19}$~eV) at $r=1000$~m using Gaisser-Hillas functions (\refeq{gh}). Red dots (crosses) denote proton-QGSJetII (proton-Sibyll) showers, blue circles (asterisks) are iron-QGSJetII (iron-Sibyll). The normalization is left free for each model/primary combination, while the other parameters are common to all.} \label{fig:Smufit} \end{minipage} \hfill \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{SemGP-fits.eps} \caption{Electromagnetic ground plane signals (proton$-$dots; iron$-$crosses; QGSJetII) in zenith angle bands (color-coded). Proton and iron signals have been scaled symmetrically. A linear function is fit to the signal separately for each zenith angle band.} \label{fig:Semfit} \end{minipage} \end{figure} In the case of the electromagnetic signal, we have to take into account the detector geometrical effects, which cause differences in the signals from showers at two different zenith angles with the same $DX$ (see appendix~\ref{sec:SP}). Hence, we have to find a parameterization for $S_{\rm EM}(DX,\:\theta)$. The first step is to parameterize, for each of the 7 simulated zenith angles, the dependence of $S_{\rm EM}$ on $DX$; a linear function is found to be sufficient due to the limited $DX$ range at a fixed $\theta$. We scaled the proton and iron signals by $1+\alpha$ and $1-\alpha$, respectively (with $\alpha\lesssim 0.06$), to account for the deviations from universality. The deviations in the ground plane signals are slightly smaller than those shown in the previous section for the shower plane signals. \reffig{Semfit} shows the results of the fits together with the direct Monte Carlo results, for the 7 fixed values of zenith angle.
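For illustration, \refeq{gh} translates directly into code. The following Python sketch (function names are ours, not part of the original analysis) evaluates the muon parameterization with the parameters of \reftab{fitpar} given below:
\begin{verbatim}
import numpy as np

def gaisser_hillas_signal(DX, S_max, DX_peak, X0, lam):
    """Gaisser-Hillas-type parameterization of S(1000, DX), eq. (gh)."""
    alpha = (DX_peak - X0) / lam                         # shape exponent of eq. (gh)
    base = np.maximum((DX - X0) / (DX_peak - X0), 0.0)   # signal vanishes below X0
    return S_max * base**alpha * np.exp((DX_peak - DX) / lam)

# Muon signal, proton-QGSJetII at 10^19 eV (parameters from Table fitpar):
def S_mu_ref(DX):
    return gaisser_hillas_signal(DX, S_max=15.6, DX_peak=302.4,
                                 X0=-200.0, lam=1109.0)

print(S_mu_ref(302.4))   # at DX = DX_peak the function returns S_max = 15.6 VEM
\end{verbatim}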
In the second step, we fit a Gaisser-Hillas-type function (of $DX$) to $S_{\rm EM}(DX,\theta)$ for each $X_{\rm max}$ considered. The Gaisser-Hillas function is fitted to 7 equal-weight data points ``predicted'' from the 7 linear fits of the first step. This is equivalent to a parameterization of the dependence of $S_{\rm EM}$ on $\theta$ at a fixed $X_{\rm max}$, using an intermediate variable $DX(X_{\rm max},\theta)$. \reftab{fitpar} gives the results for $X_{\rm max} = 750\:\mbox{g}/\mbox{cm}^2$ and $X_{\rm gr}=875\:\mbox{g}/\mbox{cm}^2$. The second step may be applied to any value of $X_{\rm max}$, yielding a continuous function $S_{\rm EM}(\theta,DX(X_{\rm max},\theta))$ which depends on $\theta$ both explicitly and implicitly via $DX$. By contrast, since muons deposit a signal which is proportional to their pathlength in the water tank, the tank acts as a volume detector: the smaller projected area at higher zenith angle is canceled by the longer average tracklength. Hence, the average muon signal $S_{\mu}(DX)$ does not show an explicit $\theta$ dependence. \begin{table}[b] \center \begin{tabular}{cc|c|c|c|c} \hline & & ~$S_{\rm max}$ [VEM]~ & ~$DX_{\rm peak}$ [$\mbox{g}/\mbox{cm}^2$]~ & ~$X_0$ [$\mbox{g}/\mbox{cm}^2$]~ & $\lambda$ [$\mbox{g}/\mbox{cm}^2$] \\ \hline $S_{\rm EM}(1000)$ & ($X_{\rm max}=750\:\mbox{g}/\mbox{cm}^2$) & 22.5 & 103.0 & -540.6 & 102.7 \\ $S_{\mu}(1000)$ & & 15.6 & 302.4 & -200 & 1109 \\ \hline \end{tabular} \caption{Fit parameters of the Gaisser-Hillas parameterization (\refeq{gh}) of the universal electromagnetic and muon signal at $10^{19}$~eV. The electromagnetic parameterization is for a fixed $X_{\rm max}$ and $X_{\rm gr}=875\:\mbox{g}/\mbox{cm}^2$. For $S_{\mu\:\rm max}$, the value of proton-QGSJetII is given (for the other primaries and models relative to proton-QGSJetII, see \reftab{sh2sh}).} \label{tab:fitpar} \end{table} At a fixed energy (here, 10~EeV), the parameterization presented above determines the average ground signal of a shower (at $r=1000$~m, azimuth-averaged): \begin{equation} S(1000) = S_{\rm EM}(\theta,\:DX(X_{\rm max},\theta)) + N_{\mu}\cdot S_{\mu;\rm ref}(DX(X_{\rm max},\theta)) \label{eq:Sparam} \end{equation} Here, $S_{\rm EM}$ denotes the parameterized electromagnetic signal, and $S_{\mu;\rm ref}$ is the reference muon signal which we take to be proton-QGSJetII. Hence, there are only three free parameters describing the average shower at this energy: the zenith angle $\theta$; the depth of shower maximum $X_{\rm max}$; and the normalization of the muon signal $N_{\mu}$ (relative to proton-QGSJetII). We used the library of proton and iron showers with energies of $10^{18}-10^{20}$~eV to investigate the energy dependence of the evolution of $S_{\rm EM}$ and $S_{\mu}$ with $DX$. The electromagnetic signal normalization shows an energy scaling of $S_{\rm max; EM}\propto E^{0.97}$ (see also \refsec{viol}), while for the muon signal $S_{\rm max; \mu}\propto E^{\alpha}$ with $\alpha=0.9\dots 0.95$, depending on the hadronic model. All other fit parameters in \refeq{gh} are independent of the primary energy in this energy range, for both $S_{\rm EM}$ and $S_{\mu}$, to within 5\%.
Hence, \refeq{Sparam} can be straightforwardly extended to other energies: \begin{eqnarray} S(1000,E) &=& S_{\rm EM}(10\:\mbox{EeV},\theta,\:DX(X_{\rm max},\theta))\: \left ( \frac{E}{10\:\mbox{EeV}} \right )^{0.97} \nonumber\\ &+& N_{\mu}(E)\cdot S_{\mu;\rm ref}(10\:\mbox{EeV}, DX(X_{\rm max},\theta)) \label{eq:SparamE} \end{eqnarray} As the energy scaling of $S_{\mu}$ is slightly model-dependent, we treat it as an unknown and define $N_{\mu}(E)$ as the muon normalization at the energy $E$ with respect to the proton-QGSJetII reference at the fixed energy of 10~EeV. \subsection{Shower fluctuations} \label{sec:fluct} In addition to the overall behavior of the signals with $DX$ parameterized above, both electromagnetic and muon signals show fluctuations around the mean value. \reffig{sh2sh} shows the relative deviations of the ground plane $S_{\rm EM}$ (left panel) and $S_{\mu}$ (right panel) from the parameterization for proton and iron showers ($10^{19}$~eV, QGSJetII). These distributions contain showers from all zenith angles; no dependence of the relative fluctuations on zenith angle has been found. Note that the proton and iron electromagnetic signals are slightly shifted from 0 due to the universality violation (\reffig{Sem}), whereas the deviations are centered around 0 for the muon signals (we used the corresponding muon signal normalizations for proton/iron). \begin{figure}[t] \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{shfluct-em.eps} \end{minipage} \hfill \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{shfluct-mu.eps} \end{minipage} \caption{{\it Left panel:} Distribution of the relative deviations of the electromagnetic ground plane signals of showers at $10^{19}$~eV (QGSJetII) from the parameterization (\refsec{param}). The red (solid) line is for proton, while the blue (dashed) is for iron. {\it Right panel:} The same for the muon ground plane signals.} \label{fig:sh2sh} \end{figure} The spread of the distributions shown in \reffig{Sem} and \reffig{Smu} has a contribution from the artificial fluctuations due to the {\it thinning} procedure used in the simulations. The fluctuations due to thinning can be estimated from vertical showers: since we expect the same signal in all azimuth sectors, thinning fluctuations are expected to be the dominant source of the variance between sectors. We find $\sigma_{\rm thin}= 6.5$\% for $S_{\rm EM}$ and 4\% for $S_{\mu}$. We can then subtract the uncorrelated thinning variance from the total signal fluctuations ($\sigma_{\rm sh2sh}^2 = \sigma_{\rm tot}^2 - \sigma_{\rm thin}^2$) to obtain the shower-to-shower fluctuations. Note that since we compare shower signals with the parameterization of the average signal {\it at the same distance to shower maximum} $DX$, the signal fluctuations shown here are not caused by the fluctuations in the depth of shower maximum. The latter will induce additional fluctuations (mainly in the electromagnetic signal) that can be straightforwardly calculated by convolving the fluctuations in $X_{\rm max}$ with the signal parameterization.
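Putting the pieces together, \refeq{SparamE} can be sketched numerically as follows (Python, building on the Gaisser-Hillas helper above; note that this sketch uses the azimuth-averaged \refeq{DX-Xmax}, applies the electromagnetic parameters of \reftab{fitpar}, which are strictly valid only for $X_{\rm max}=750\:\mbox{g}/\mbox{cm}^2$, and neglects the explicit detector-geometry dependence of $S_{\rm EM}$ on $\theta$; it is a sketch under these assumptions, not the full parameterization):
\begin{verbatim}
import numpy as np

def DX(theta, Xmax, Xgr=875.0):
    """Azimuth-averaged distance to shower maximum, eq. (DX-Xmax) [g/cm^2]."""
    return Xgr / np.cos(theta) - Xmax

def S1000(E_EeV, theta, Xmax, N_mu):
    """Average ground signal at r = 1000 m, eq. (SparamE), in VEM.
    N_mu is defined relative to proton-QGSJetII at 10 EeV, so the
    (model-dependent) muon energy scaling is absorbed into N_mu(E)."""
    dx = DX(theta, Xmax)
    S_em = gaisser_hillas_signal(dx, S_max=22.5, DX_peak=103.0,
                                 X0=-540.6, lam=102.7)
    return S_em * (E_EeV / 10.0)**0.97 + N_mu * S_mu_ref(dx)

# Example: 10 EeV, 38 deg, Xmax = 750 g/cm^2, proton-QGSJetII muon content:
print(S1000(10.0, np.radians(38.0), 750.0, N_mu=1.0))
\end{verbatim}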
\begin{table}[b] \center \begin{tabular}{c|c|c|c|c|c} \hline & RMS($S_{\rm EM}$)$/S_{\rm EM}$ & RMS($S_{\mu}$)$/S_{\mu}$ & $\langle X_{\rm max} \rangle$ (10~EeV) & $\tau_X$ & $N_{\mu}$\\ \hline {\bf Proton} & & & & & \\ QGSJetII & 7.9\% & 11.8\% & 787.8$\:\mbox{g}/\mbox{cm}^2$ & 25.4$\:\mbox{g}/\mbox{cm}^2$ & 1 \\ Sibyll & 8.9\% & 12.4\% & 795.8$\:\mbox{g}/\mbox{cm}^2$ & 24.1$\:\mbox{g}/\mbox{cm}^2$ & 0.87 \\ \hline {\bf Iron} & & & & & \\ QGSJetII & 5.4\% & 3.5\% & 708.7$\:\mbox{g}/\mbox{cm}^2$ & 10.9$\:\mbox{g}/\mbox{cm}^2$ & 1.40 \\ Sibyll & 4.8\% & 4.0\% & 696.5$\:\mbox{g}/\mbox{cm}^2$ & 10.2$\:\mbox{g}/\mbox{cm}^2$ & 1.27 \\ \hline \end{tabular} \caption{Relative shower-to-shower fluctuations of the electromagnetic and muon signals and parameters of the $X_{\rm max}$ distribution derived from QGSJetII and Sibyll showers at $10^{19}$~eV. The muon signal normalization $N_{\mu}$ relative to proton-QGSJetII for the different models is also shown. Note the differences in the absolute value of $\langle X_{\rm max} \rangle$ and $N_{\mu}$, while the fluctuations are model-independent.} \label{tab:sh2sh} \end{table} We also parameterized the distribution of $X_{\rm max}$ for different primaries and models, using the following functional form: \begin{equation} \frac{dN}{dX_{\rm max}} \propto x^4\:e^{-x},\;\;0 < x < \infty;\quad x = \frac{X_{\rm max}-\langle X_{\rm max} \rangle}{\tau_X}+5, \label{eq:dNdXmax} \end{equation} where $\langle X_{\rm max} \rangle$ denotes the mean depth of shower maximum, and $\tau_X$ is related to the RMS of the distribution via $\tau_X = \mbox{RMS}(X_{\rm max}) / \sqrt{5}$. This asymmetric distribution is found to be a good fit to the $X_{\rm max}$ distributions for different primaries and models. \reftab{sh2sh} summarizes the magnitude of fluctuations in $S_{\rm EM}$ and $S_{\mu}$ as well as $\tau_X$ for protons and iron using different hadronic models. Clearly, the shower-to-shower fluctuations in signal as well as $X_{\rm max}$ are independent of the hadronic model considered, but depend quite strongly on the primary particle. Hence, if measured, fluctuations can serve as a robust, model-independent indicator of composition. Our simulations predict that these fluctuations depend only very weakly on energy. \section{Limits of universality} \label{sec:viol} The main discernible deviation from the universality approach adopted here is the difference in electromagnetic signal between proton and iron showers. This difference, which we refer to as {\it universality violation}, is larger than the differences found in the overall energy deposit in the atmosphere for proton and iron showers (for which in fact one finds the opposite effect: the so-called {\it missing energy} is larger for iron showers, \cite{EngelAstropart}). Since we include muon decay products in the muon signal, this deviation is unrelated to the differences in muon content between protons and iron. \reffig{Nemratio} shows the ratio of the number flux of electromagnetic particles for different combinations of primaries/models to the reference (proton/QGSJetII) as a function of $DX$ (again $r$=1000~m and $E=10^{19}$~eV). The difference between proton and iron showers is much smaller than in the case of the signals (\reffig{Sem}), pointing to a slightly harder energy spectrum for electromagnetic particles at large $r$ in iron showers compared to proton showers. We also found that the discrepancy becomes smaller at smaller core distances.
We have verified that the differences are independent of the details of nuclear fragmentation of the primary iron nuclei. This means that the nuclear binding of the 56 nucleons is not important for the EAS development. In other words, the superposition model holds, i.e., an iron shower can be considered as a superposition of 56 proton showers at $1/56$th of the primary energy. \begin{figure}[t] \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Nempure-profratios-1000.eps} \caption{Number flux of electromagnetic particles in the shower plane at $r=1000$~m for different primaries and hadronic models at $10^{19}$~eV, relative to that of proton-QGSJetII.} \label{fig:Nemratio} \end{minipage} \hfill \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Smax-scaling.eps} \caption{The parameter $S_{\rm max}$ of the parameterization \refeq{gh} of electromagnetic shower plane signals as a function of primary energy, for different core distances $r$. Power-law fits and the resulting exponents are indicated.} \label{fig:Semscaling} \end{minipage} \end{figure} This implies that the universality violation is due to a violation of strict linear energy scaling of the electromagnetic signal in hadronic shower simulations: if the electromagnetic signal scales as $E^{\alpha}$, $\alpha < 1$, then the signal of an iron shower will be a factor of $56^{1-\alpha}$ larger than that of a proton shower at the same energy. In order to explain the observed difference of $\sim15$\% in the shower plane signals, we would infer $\alpha\sim 0.97$ (indeed, $56^{0.03}\approx 1.13$). We have parameterized the electromagnetic signals for different energies and indeed found that the amplitude $S_{\rm max}$ of the signal (\refeq{gh}) scales as $E^{0.97}$ at $r=1000$~m, with $\alpha$ approaching 1 as $r\rightarrow0$ (\reffig{Semscaling}). Note that, by parameterizing the complete evolution of the signal with $DX$, we take out the effects of the energy dependence of $X_{\rm max}$. This violation of perfect energy scaling of the electromagnetic signal can be due to several reasons. The injection rate of energy into the electromagnetic part via $\pi^0$ decay as well as the energy spectrum of secondary $\pi^0$ might evolve with primary energy. In addition, the NKG theory of pure electromagnetic showers also predicts a slight deviation from perfect energy scaling of the particle flux on ground. These effects are currently under investigation. \section{Determining the muon normalization using the constant intensity method} \label{sec:CIC} One of the main challenges of a cosmic ray surface detector is to convert the ground signal $S(r)$ into a primary energy. As mentioned in \refsec{param}, the universality-based signal parameterization has three free parameters. Apart from the zenith angle $\theta$ which is well measured along with the signal \cite{augericrcS1000}, the depth of shower maximum $X_{\rm max}$ and muon normalization $N_{\mu}$ (with respect to the reference signal at the fixed energy of 10~EeV) remain to be determined. Once these are known, \refeq{SparamE} provides a one-to-one mapping of ground signal and energy, i.e. a model-independent energy scale of the experiment. The mean depth $\langle X_{\rm max} \rangle$ of showers has been measured as a function of energy from experiments using the air fluorescence technique, e.g. HiRes \cite{hiresXmax} and Auger \cite{augericrcXmax}.
The knowledge of $\langle X_{\rm max} \rangle$ is important as it determines the average distance to shower maximum $DX$ for a given zenith angle, and the electromagnetic signal evolves strongly with $DX$. The overall precision of these $\langle X_{\rm max} \rangle$ measurements is better than $20\:\mbox{g}/\mbox{cm}^2$, and this small uncertainty in $\langle X_{\rm max} \rangle$ has only a limited effect on the estimated electromagnetic signal. \reffig{Sem-sectheta} shows the limited effect of varying $X_{\rm max}$ by $\pm 14\:\mbox{g}/\mbox{cm}^2$ (the current measurement uncertainty at 10~EeV reported by Auger \cite{augericrcXmax}) on $S_{\rm EM}$ as a function of $\sec\theta$. \begin{figure}[t] \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{Sem-sectheta.eps} \caption{Parameterized electromagnetic signal at $10^{19}$~eV vs. $\sec\theta$ with $X_{\rm max} = 750\:\mbox{g}/\mbox{cm}^2$ (black line, the dots indicate the signal parameterized at each zenith angle). The shaded band shows the effect on $S_{\rm EM}$ of a variation of $X_{\rm max}$ by $\pm 14\:\mbox{g}/\mbox{cm}^2$.} \label{fig:Sem-sectheta} \end{minipage} \hfill \begin{minipage}[t]{0.48\textwidth} \center \includegraphics[width=\textwidth]{CICmethod.eps} \caption{{\it Upper panel:} the signal parameterization at 10~EeV \refeq{Sparam} vs. $\sec\:\theta$ for different $N_{\mu}$ (black/solid$-$1.1, red/dashed$-$0.5, blue/dash-dotted$-$2.0). {\it Lower panel:} histograms of the number of events above the parameterized signal in equal exposure bins, obtained from a Monte Carlo data set with a true $N_{\mu}$ of 1, for the different $N_{\mu}$ values shown in the upper panel.} \label{fig:CICmethod} \end{minipage} \end{figure} The main uncertainty in determining the energy scale of surface detectors is thus in the value of $N_{\mu}$. Fortunately, one can make use of the different behavior of $S_{\rm EM}$ and $S_{\mu}$ with $DX$ (and hence, $\sec\theta$) to measure $N_{\mu}$ via the {\it constant intensity method} (\reffig{CICmethod}): dividing the data set into equal exposure bins in zenith angle, i.e., bins of $\sin^2\theta$, a correct signal-to-energy converter should yield the same number of events in each bin with measured signal greater than the parameterized signal at a fixed energy. This is due to the isotropy ($\theta$-independence) of the cosmic ray flux, which requires that the number of events $N(>E)$ above a fixed energy $E$ should be equal in equal exposure bins. \reffig{CICmethod} (upper panel) shows the zenith angle dependence of the signal (\refeq{Sparam}) for a fixed energy of 10$^{19}$ eV and different values of $N_\mu$. Apart from the overall change in signal, it is evident that the smaller the $N_\mu$, the steeper the $\theta$ dependence. We now divide a simulated ground detector data set with a ``true'' $N_{\mu}(10^{19}\rm eV) =1$ (see \refsec{toyMC} for details on the simulation) into equal exposure bins in zenith angle. Given a muon normalization, we calculate the number of events in each bin that are above a given reference energy (here $E_{\rm ref}$=10$^{19}$~eV), according to \refeq{Sparam} with the given $N_{\mu}$. We then adjust $N_{\mu}(E_{\rm ref})$ in the signal parameterization \refeq{Sparam} to the value which gives an equal number of events $N(>S(E_{\rm ref}, \theta))$ in each zenith angle bin (lower panel in \reffig{CICmethod}).
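A self-contained numerical sketch of this procedure is given below (Python, building on the helpers above). The event generation loosely follows the toy simulation of \refsec{toyMC}, with the $X_{\rm max}$ distribution of \refeq{dNdXmax} sampled via its underlying gamma distribution; the fixed $\langle X_{\rm max}\rangle$, the 10\% resolution, the zenith range, and the scan grid are illustrative simplifications, not the settings of the actual analysis:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def sample_Xmax(mean_Xmax, tau, size):
    """Draw X_max from eq. (dNdXmax): x ~ Gamma(shape=5), so that
    <X_max> is the mean and tau*sqrt(5) the RMS."""
    return (rng.gamma(5.0, size=size) - 5.0) * tau + mean_Xmax

# Toy data set: E^-2.9 spectrum above 3 EeV, isotropic flux up to 60 deg,
# proton-QGSJetII X_max parameters (Table sh2sh), "true" N_mu = 1.
n_ev = 100000
E = 3.0 * rng.random(n_ev)**(-1.0 / 1.9)
theta = np.arcsin(np.sqrt(rng.random(n_ev) * np.sin(np.radians(60.0))**2))
Xmax = sample_Xmax(787.8, 25.4, n_ev)
S_meas = S1000(E, theta, Xmax, N_mu=1.0) * rng.normal(1.0, 0.1, n_ev)

def cic_chi2(N_mu, n_bins=10):
    """Chi^2 of the counts above the parameterized 10 EeV signal against
    a flat distribution in equal-exposure (sin^2 theta) bins."""
    S_ref = S1000(10.0, theta, 750.0, N_mu)   # fixed <X_max>, E_ref = 10 EeV
    edges = np.arcsin(np.sqrt(np.linspace(0.0, np.sin(np.radians(60.0))**2,
                                          n_bins + 1)))
    counts, _ = np.histogram(theta[S_meas > S_ref], bins=edges)
    return np.sum((counts - counts.mean())**2 / counts.mean())

# Scan N_mu and fit the parabola of eq. (chi2) around the minimum:
grid = np.linspace(0.6, 1.8, 25)
a, b, c = np.polyfit(grid, [cic_chi2(n) for n in grid], 2)
print("N_mu =", -b / (2.0 * a), "+/-", 1.0 / np.sqrt(a))
\end{verbatim}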
Clearly, too low a value of $N_{\mu}$ results in an excess of events at high $\theta$ (the parameterized signal has a too steep attenuation with $\sec\theta$), whereas too high an $N_{\mu}$ results in a deficit of high zenith angle events ($\sec\theta$ attenuation too shallow). In this calculation we used the $\langle X_{\rm max} \rangle$ at 10~EeV reported in \cite{augericrcXmax} to calculate the ground plane signals. Note that an $N_{\mu}$ of 1.1 gives a flat distribution, whereas the ``true'' $N_{\mu}$ used is 1.0. This bias in the $N_{\mu}$ measurement will be addressed in the next section. For a range of $N_{\mu}$ values, we then calculate the $\chi^2$/dof of the event histogram relative to a flat distribution in $\sin^2\theta$. Fitting a parabola to the function $\chi^2(N_{\mu})$ yields the best-fit $N_{\mu\:\rm fit}$ and its error $\sigma_{N_{\mu}}$: \begin{equation} \chi^2(N_{\mu}) = \chi^2_{\rm min} + \left ( \frac{N_{\mu}-N_{\mu\:\rm fit}}{\sigma_{N_{\mu}}} \right)^2 \label{eq:chi2} \end{equation} For a data set comparable to current Auger statistics ($\sim$11~000 events above 3~EeV \cite{augericrcSpectrum}), we expect a statistical error of $\sigma_{N_{\mu}} = 0.1$. Once $N_{\mu}$ is known, the knowledge of $S_{\rm EM}$ (within the uncertainty of $\sim \pm 6$\% due to universality violation) determines a model-independent energy scale, with a statistical error of $\sigma_{N_{\mu}}\cdot S_{\mu;\rm ref}$ around 4\%. The constant intensity method can be extended to other energies, using the energy-dependent parameterization \refeq{SparamE} in \refsec{param}. This yields a measurement of $N_{\mu}(E)$, comparable to the measurement of $\langle X_{\rm max} \rangle$ in its sensitivity to the primary composition. \reftab{errors} contains a summary of the expected statistical and systematic errors from current and upcoming experiments. In \reffig{Nmu-elrate} we show possible results of this measurement: the integral measurement of $N_{\mu}(E)$ (solid black line, corrected for the bias, see \refsec{toyMC}) with a 1$\sigma$ statistical error band (shaded) after a three-year Auger exposure. Here, we took $N_{\mu}(E) = 1.2 (E/10\:\mbox{EeV})^{0.85}$ as the fiducial value. As the cosmic ray spectrum drops rapidly with energy ($\propto E^{-3}$), the average energy of cosmic rays above a given energy threshold is very close to that threshold. Hence, for slow changes of the cosmic ray composition with energy, the $N_{\mu}$ value determined from the constant intensity method will reflect the actual average value of $N_{\mu}$ for cosmic rays at that energy (this will be shown in the next section). In the case of an abruptly changing composition, the measured $N_{\mu}(E)$ will clearly show evidence for this. However, the interpretation of the integral $N_{\mu}$ measurement in terms of composition will have to rely on a modeling of the composition evolution in this case. In addition, the measurement of $N_{\mu}(E)$ can place constraints on hadronic models, whose predictions are shown as lines in \reffig{Nmu-elrate}. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{Nmu-elrate-sim.eps} \caption{The measured muon normalization $N_{\mu}$ as a function of energy (thick black line) with statistical error band expected from a three-year Auger exposure (shaded), for a fiducial $N_{\mu}(E) = 1.2\: (E/10\:\mbox{EeV})^{0.85}$.
Also shown are model predictions for iron (upper two lines, blue; solid$-$QGSJetII, dashed$-$Sibyll) and proton showers (lower two lines, red; solid$-$QGSJetII, dashed$-$Sibyll).} \label{fig:Nmu-elrate} \end{figure} \begin{table}[b] \center \begin{tabular}{c|c|c} \hline & ~Muon normalization $N_{\mu}$~ & ~Energy scale~ \\ \hline {\bf Statistical error} & & \\ current Auger & 0.1 & 4\% \\ \hline {\bf Systematic errors} & & \\ $\langle X_{\rm max} \rangle$ uncertainty (14$\:\mbox{g}/\mbox{cm}^2$ \cite{augericrcXmax}) & +0.05 / -0.07 & +0.5\% / -2\% \\ Universality violation & +0.01 / -0.04 & +3\% / -4\% \\ $N_{\mu}$ bias & $\lesssim$10\% & $\lesssim 5$\% \\ \hline \end{tabular} \caption{Expected statistical (for current Auger exposure) and systematic errors on the muon normalization $N_{\mu}$ and energy scale $S(\theta=\theta_0,\:E=10\:\rm EeV)$ at 10~EeV.} \label{tab:errors} \end{table} \section{Validating the constant intensity method} \label{sec:toyMC} To benchmark and validate the determination of $N_{\mu}$ via the constant intensity method, we simulate realistic data sets based on our parameterization of the ground signal (\refsec{param}) and its fluctuations (\refsec{fluct}). The fluctuations in signal as well as $X_{\rm max}$ could have an impact on the measurement of $N_{\mu}$, since only the average values are used to infer $N_{\mu}$ (\refsec{CIC}). Additionally, a mixed composition of the cosmic ray beam could bias the measurements. The purpose of this section is to quantify systematic uncertainties of the method described above. The calculation of a simulated data set proceeds as follows. Event energies are drawn from a spectrum $dN/dE \propto E^{-2.9}$ in the range $10^{17.8}-10^{20.2}$~eV, while the zenith angle is drawn from an isotropic distribution ($\theta< 70^{\circ}$). The primary particle type (proton or iron) is chosen at random according to a given mixture. The depth of shower maximum is then drawn from the distribution \refeq{dNdXmax} with the parameters for the given primary (we adopt the parameters from QGSJetII; this has no influence on our conclusions). A value of $N_{\mu}$ is assigned according to the primary. With $E$, $\theta$, $X_{\rm max}$, and $N_{\mu}$ given, the ground signal can be determined via \refeq{Sparam} (we scale $S_{\rm EM}$ with $E^{0.97}$, and $S_{\mu}$ with $E^{0.9}$). The two signal components are fluctuated according to the primary (see \reftab{sh2sh}). Finally, we cut events according to a simple trigger depending on the ground signal, and apply signal reconstruction uncertainties as reported by the Auger Observatory \cite{augericrcS1000}. The main characteristics of the reconstruction of $S(1000)$ are that it is unbiased at large signals $S(1000) \gtrsim 10$~VEM, and that bias and variance increase quickly for signals below 10~VEM. A large set of simulated data sets showed that the error calculation according to \refeq{chi2} is a good estimator for the variance of the $N_{\mu}$ measurement. However, we found that the constant intensity method yields a systematic shift to higher $N_{\mu}$ values of about 5--10\%. This can be explained by trigger effects and fluctuations. Due to the attenuation of the signal with zenith angle at a fixed energy, the resolution gets worse at large zenith angles. Additionally, upward fluctuations above the trigger threshold are more important at high zenith angles.
These two effects, in the presence of a steep spectrum, produce a zenith angle-dependent enhancement of the number of events reconstructed above a given energy. This tends to {\it flatten} the constant intensity curve (\reffig{CICmethod}), which leads to a higher estimated $N_{\mu}$ value. The bias in $N_{\mu}$ is mainly determined by the experimental resolution and trigger effects. It depends slightly on the primary composition, ranging from 4\% for pure iron to 8\% for a mixed composition, due to the differing magnitudes of shower-to-shower fluctuations. The unknown composition and imperfect knowledge of the experimental characteristics lead to an uncertainty in the bias which should be included in the systematic error on $N_{\mu}$. We assume that this error will be smaller than the absolute value of the bias, hence $\lesssim$10\%. Further systematics of $N_{\mu}$ are the violation of universality in the electromagnetic ground signal ($\sim\pm 6$\%, \refsec{param}), and the uncertainty in the value of $\langle X_{\rm max} \rangle$, which translates into a further uncertainty in the electromagnetic signal. \reftab{errors} gives a summary of the systematic errors derived for $N_{\mu}$ and the energy scale (i.e., $S(\theta=\theta_0,\:E=10\; \rm EeV)$, we take $\theta_0=38^{\circ}$, close to the median zenith angle of an isotropic cosmic ray flux), all evaluated at 10~EeV. We stress that since the universality violation is smaller closer to the shower core, this method exhibits significantly smaller systematics when applied to a surface detector measuring the signal at $r=600$~m, instead of 1000~m as considered here. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{Nmu-Escale.eps} \caption{Constraints in the $N_{\mu}$-energy scale plane placed by three independent measurements: the constant intensity method (black dot with error bar); vertical hybrid events (blue solid line with shaded error band); inclined hybrid events (red dashed lines). An error of 25\% in the fluorescence energy scale is indicated (vertical lines). All values are calculated for a three-year Auger-equivalent exposure. The fiducial $N_{\mu}$ is 1.2.} \label{fig:Nmu-Escale} \end{figure} \section{Cross-checks and independent hybrid measurements} \label{sec:hybrid} The constant intensity method described above is independent of the primary composition and hadronic interaction models (within systematics), and also independent of other energy calibrations (e.g., fluorescence telescopes). However, it relies on a good understanding of the electromagnetic part of air showers as well as detector bias and resolution (\refsec{toyMC}). Hence, it is desirable to cross-check the results of the method with independent data. Hybrid experiments, which simultaneously measure the ground signal as well as fluorescence energy and $X_{\rm max}$ on an event-by-event basis, allow several cross-checks of the $N_{\mu}$ measurement. Due to the uncertainty in the energy scale of the fluorescence detector ($\sim$ 25\%), we introduce a scaling factor ($f_{FD}$) of the measured fluorescence energy. Using the parameterized electromagnetic signal $S_{\rm EM}(DX(\theta,X_{\rm max}),\theta,E)$ (\refsec{param}), with $E=f_{FD} E_{FD}$ and $X_{\rm max}$ given by the fluorescence measurement, we can, for a single event, determine the muon signal $S_{\mu}$ at a fixed core distance by subtracting the electromagnetic component from the total signal (see \refeq{SparamE}).
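For a single hybrid event, this subtraction reads, schematically (Python, reusing the hypothetical helpers sketched above; $f_{FD}$ and the FD-measured energy and $X_{\rm max}$ are inputs from the fluorescence reconstruction):
\begin{verbatim}
def hybrid_muon_signal(S1000_meas, E_FD_EeV, Xmax_FD, theta, f_FD=1.0):
    """Muon signal of one hybrid event: measured ground signal minus the
    parameterized EM component, evaluated at the rescaled FD energy
    E = f_FD * E_FD and the FD-measured X_max (cf. eq. SparamE)."""
    dx = DX(theta, Xmax_FD)
    S_em = gaisser_hillas_signal(dx, S_max=22.5, DX_peak=103.0,
                                 X0=-540.6, lam=102.7)
    return S1000_meas - S_em * (f_FD * E_FD_EeV / 10.0)**0.97
\end{verbatim}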
The muon normalization is then given by $N_{\mu} = S_{\mu}/S_{\mu; \rm ref}$, where $S_{\mu; \rm ref}$ is the reference parameterized muon signal (proton-QGSJetII) at 10~EeV. Additionally, we can divide the hybrid data into two sets: vertical and inclined events, with zenith angles smaller and larger than 60$^{\circ}$, respectively. In inclined events the electromagnetic signal at ground is essentially negligible, and the ground signal allows for a direct measurement of the muon signal. We can then calculate the mean measured $\langle N_{\mu} \rangle$ for vertical and inclined events as a function of $f_{FD}$. \reffig{Nmu-Escale} shows the results for vertical (blue line with shaded error band) and inclined (red lines) events for an experiment with three years of Auger-equivalent exposure. In order to measure $\langle N_{\mu} \rangle$ with hybrid events we clearly need to constrain $f_{FD}$. The black dot in \reffig{Nmu-Escale} corresponds to the result of the constant intensity method described in the previous sections. It constrains $N_{\mu}$ {\it as well as} the energy scale. The fluorescence energy scale and its current uncertainty ($f_{FD}=1.0\pm0.25$) are indicated in the graph by vertical lines. The crossing point of the three $N_{\mu}$ measurements is an important cross-check of the universality-based method: only a correct description of the evolution of the electromagnetic and muon ground signals will lead to a unique crossing point. The value of $f_{FD}$ that corresponds to the crossing point is a powerful fluorescence-independent measurement of the energy scale. The statistical uncertainty of this measurement is much smaller than the current uncertainty on the fluorescence energy scale, and thus provides a sensitive cross-check. At the same time, experimental efforts to reduce the systematic fluorescence energy uncertainty are in progress \cite{Airfly}. Hybrid events offer several further ways to place constraints on hadronic models. For example, since $S_{\mu}$ and $DX$ are measured independently for each hybrid event, the behavior of $S_{\mu}(DX)$ can be inferred, which contains information on the energy spectrum of muons in UHE air showers. In addition, the measured fluctuations of $N_{\mu}$ allow for model-independent constraints on the primary composition, if the reconstruction uncertainties are well understood. Furthermore, observed anomalous $N_{\mu}$ values in single events can be used to search for non-hadronic primaries. Photon showers have a muon component of about $1/10$th that of a proton shower and thus could leave a distinctive signature in the observed $N_{\mu}$. However, the sensitivity of this method to photon showers remains to be studied quantitatively with simulations of photon showers. \section{Application to other experiments} \label{sec:otherexp} The methodology and results presented so far have been specialized to the case of Auger. We now discuss the applicability to other current and future experiments. The main experimental characteristic determining this applicability is the ratio of the electromagnetic and muon contributions to the shower size observable ($S(1000)$ in the case of Auger). If the muon contribution is very small, the signal becomes essentially independent of $N_{\mu}$, and only the knowledge of $\langle X_{\rm max} \rangle$ is needed to predict the average signal in a model-independent way.
An experiment operating in this regime (such as AGASA \cite{AGASAsim}) is able to experimentally verify the electromagnetic signal predicted by simulations. Conversely, if the muon contribution dominates even at small zenith angles, the attenuation (i.e., $\theta$-dependence) of the signal becomes very small, and a model-independent separation of the electromagnetic and muon components is impossible. The energy scale determination using the constant intensity method is then not applicable. However, the attenuation curve can still be measured experimentally and compared with the prediction from simulations, which depends on the energy spectrum of muons produced in EAS. In addition, hybrid events with an independent energy measurement allow for a measurement of the absolute muon signal normalization with respect to simulations. The ratio of muon to electromagnetic signal is determined by three factors in the experimental setup: {\it 1.)~The detector type:} thin scintillator detectors have equal response to all minimum ionizing particles and thus operate as particle counters. The measured number flux of particles is dominated by electromagnetic particles. Shielding can, however, make scintillator detectors sensitive to muons as well. By contrast, muons deposit a large signal in water Cherenkov detectors. In this case, the ratio of area to height of the water volume determines the EM$/\mu$ ratio (flatter tanks yielding a larger ratio). {\it 2.)~The detector spacing:} the spacing determines the distance at which the signal is measured \cite{Newton}. Since the lateral distribution of the muon signal is more spread out than the electromagnetic signal, increasing the spacing will increase the relative muon contribution in the measured particle flux. {\it 3.)~The stage of shower evolution probed:} this depends on the height above sea level of the experiment and the range of primary energy observed. Showers observed very far from the shower maximum have a small electromagnetic component. Due to these differing characteristics, the ground signal will be dominated by the electromagnetic part in some experiments (e.g., AGASA, EASTop, Telescope Array), whereas the muon signal will contribute significantly at others (e.g., Auger, Haverah Park). A possible quantitative criterion for the applicability of the constant intensity method is the significance of the signal attenuation observed (e.g., $S(\theta=0^{\circ})/S(\theta=60^{\circ})$) with respect to the statistical and systematic errors in the signal determination. This criterion corresponds to an upper limit on the relative muon signal contribution, which has a very weak dependence on zenith angle. \section{Discussion and conclusion} \label{sec:disc} We have shown how Monte Carlo predictions of ground signals can be used to determine the energy scale of surface detector experiments, independently of the cosmic ray composition and hadronic interaction models. This method overcomes the otherwise unavoidable systematics of surface detectors due to the unknown cosmic ray composition. In addition, it allows for a clean measurement of the number of muons in extensive air showers. In light of the recent detection of a possible GZK feature in the UHECR spectrum, the energy scale of cosmic ray experiments is of crucial importance to distinguish between different UHECR source scenarios. Hence, it is desirable to determine the energy scale with several methods.
The measurement of the surface detector energy scale presented here is completely independent of the energy scale determined from fluorescence detectors, and contains different systematic uncertainties. In this paper, we explored only a single surface detector observable, the signal $S(r)$ at a fixed distance from the shower axis. The methodology can be extended to parameterize the signal at different distances and azimuth angles. Such an extended parameterization can then be compared with each detector station in a given event, increasing the number of observables for each event. Ideally, perhaps in combination with other observables like the rise time \cite{risetime1,risetime2,Healy}, this could be used to break the degeneracy of $N_{\mu}$ and energy on an event-by-event basis for a surface detector alone. It is important to note, however, that air shower universality, the basis of this methodology, can be violated by new mechanisms in hadronic interactions in EAS. Recently, the hadronic interaction model EPOS has been introduced \cite{Pierog,PierogAIPC}. While the predictions for the depth of shower maximum are within the range of the previous models considered here, EPOS shows considerable deviations in the ground signal predictions: at 10$^{19}$ eV, the EPOS electromagnetic signal seems to be $\sim 20$\% larger than in the other models, while the predicted muon signal is 50--70\% higher. These differences are due to the production of secondary baryon-antibaryon pairs in the GeV range, which is strongly enhanced in EPOS. These baryons then produce more muons and a flatter lateral distribution of the signal compared to the other models. These predictions, while violating air shower universality, can be constrained by observations using the methodology presented here: by separately parameterizing the electromagnetic and muon signals predicted by EPOS, one can infer the relative muon normalization with respect to EPOS which is required by the data. We would like to point out that EPOS can be compared with cosmic ray data at lower energies (e.g. KASCADE \cite{kascade}), as was done in the past with the QGSJetII and Sibyll2.1 models. In addition, accelerator experiments \cite{NA61} are underway to measure the baryon pair production at the relevant energies. One might hope that, once the magnitude of the baryon-antibaryon production is understood, hadronic models will converge to a universal prediction of the electromagnetic part as shown here for QGSJetII and Sibyll. Very generally, the methodology presented here allows for a clean comparison of Monte Carlo simulations with air shower data, by separating shower evolution effects from primary composition and high-energy interactions. By applying air shower universality, current and future experiments have the potential to tightly constrain high energy hadronic models, as well as the energy scale and mass composition of the cosmic ray beam. \section*{Acknowledgments} We would like to thank the members of the Pierre Auger Collaboration, in particular Katsushi Arisaka, David Barnhill, Pierre Billoir, Jim Cronin, Ralph Engel, Matt Healy, and Markus Risse, for support and helpful discussions related to this work. We are grateful to the IN2P3 computing center in Lyon, where the shower library used for this paper was generated. Aaron Chou's work is supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359 and by the NSF under NSF-PHY-0401232.
Lorenzo Cazon acknowledges support from the Ministerio de Educacion y Ciencia of Spain. This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grants NSF PHY-0114422 and NSF PHY-0551142 and an endowment from the Kavli Foundation and its founder Fred Kavli.
\section{Introduction} \label{introduction}In recent years there has been considerable interest in the breakdown of Lorentz invariance, as a phenomenological possibility \cite{alan} in the context of various quantum field theories as well as modified gravity and string theories\cite{kraus,arkani,grip,tajac}. The existence of Lorentz violation leads to a plethora of new high energy effects \cite{alan,glashow} with interesting implications for neutrino experiments as well as high energy cosmic ray phenomena. Furthermore, for particular parameterizations of Lorentz violation, observations of such high energy effects can also place bounds on their strength. In this note we discuss Lorentz violating effects within a new formulation of Quantum Electrodynamics (QED), where the familiar QED in Coulomb gauge emerges as an effective low energy Lagrangian in a theory where the masslessness of the photon is related to the spontaneous breakdown of Lorentz symmetry (SBLS). The spontaneous breakdown of Lorentz symmetry is an old idea\cite{bjorken} and has been considered in many different contexts\cite{book}, particularly in the generation of the internal symmetries observed in particle physics, although many formulations of the idea look contradictory (see \cite{kraus,jenkins} for some recent criticism). Our point is that an adequate formulation of the SBLS should start from a fundamental vector field Lagrangian by itself rather than from an effective field theory framework containing a (finite or infinite) set of the primary fermion interactions where the vector fields appear as the auxiliary fields for each of these fermion bilinears\footnote{There is no need, in essence, for the physical SBLS to generate ``composite'' vector bosons (related to fermion bilinears) which could mediate the gauge-type binding interactions in Abelian or non-Abelian theories\cite{bjorken,book}. This conclusion most clearly follows from the proper lattice formulation \cite{randjbar} where Lorentz invariance is explicitly broken at the very beginning.}. Actually, the symmetry structure of the existing theories, such as the QED or Yang-Mills Lagrangians, seems to be fully consistent with such a point of view \cite{cfn}. The vector field gauge-type transformation, say for QED, of the form \begin{equation} A_{\mu }(x)\rightarrow A_{\mu }(x)+n_{\mu }\text{ \ \ \ \ \ \ }(\mu =0,1,2,3), \label{vector} \end{equation} can be identified by itself as a pure SBLS transformation with the vector field $A_{\mu }(x)$ developing some constant background value $n_{\mu }$\footnote{Remarkably, it was argued a long time ago \cite{picasso} that just the invariance of QED under the transformations (\ref{vector}) with a gauge function linear in the co-ordinates ($A_{\mu }\rightarrow A_{\mu }+\partial _{\mu }\omega $, $\omega =n_{\mu }x^{\mu }$) implies that the theory contains a genuine zero mass vector particle.}. The point is, however, that this Lorentz symmetry breaking does not manifest itself in any physical way, because the invariance of QED under the transformation (\ref{vector}) converts the SBLS into gauge degrees of freedom of the massless photon. This in essence is what we call the non-observability of the SBLS of type (\ref{vector}).
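To make this non-observability explicit, here is a short check (in our sign conventions; a supplementary illustration rather than part of the original argument) that a constant shift of $A_{\mu }$ is compensated by a fermion phase. Under $A_{\mu }\rightarrow A_{\mu }+n_{\mu }$ together with $\psi \rightarrow e^{-ien\cdot x}\psi $ one finds \begin{eqnarray*} F_{\mu \nu } &\rightarrow &F_{\mu \nu }\qquad \text{(since }n_{\mu }\text{ is constant),} \\ \overline{\psi }\,i\gamma ^{\mu }\partial _{\mu }\psi &\rightarrow &\overline{\psi }\,i\gamma ^{\mu }\partial _{\mu }\psi +e\,n_{\mu }\overline{\psi }\gamma ^{\mu }\psi , \\ -eA_{\mu }\overline{\psi }\gamma ^{\mu }\psi &\rightarrow &-eA_{\mu }\overline{\psi }\gamma ^{\mu }\psi -e\,n_{\mu }\overline{\psi }\gamma ^{\mu }\psi , \end{eqnarray*} so the two induced terms cancel and the QED Lagrangian is left invariant.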
In this connection it was recently shown \cite{cfn} that gauge theories, both Abelian and non-Abelian, can be obtained from the requirement of the physical non-observability of the SBLS (\ref{vector}), caused by the Goldstonic nature of vector fields, rather than from the standard gauge principle. It is instructive here to compare this QED case with the free massless (pseudo-) scalar triplet theory ($\partial ^{2}\phi ^{i}=0$), which is invariant under a similar spontaneous symmetry breaking transformation \begin{equation} \phi ^{i}(x)\rightarrow \phi ^{i}(x)+c^{i}\qquad (i=1,2,3), \label{scalar} \end{equation} where the $c^{i}$ are arbitrary constants. Again this symmetry transformation corresponds to zero-mass excitations of the vacuum, which might be identified (in some approximation) with physical pions. However, in marked contrast with the vector field case (\ref{vector}), where renormalizable interactions of the form $\bar{\psi}\gamma _{\mu }\psi A^{\mu }$ are invariant under the transformation (\ref{vector}) (accompanied by $\psi \rightarrow e^{-ien\cdot x}\psi $), the scalar field theory invariant under (\ref{scalar}) ends up being a trivial theory. The only way to have it as an interacting theory is to add additional states such as the $\sigma $ field as in the famous $\sigma $-model\cite{GMlevy} of Gell-Mann and Levy. A question then is whether one can have an alternative formulation of QED where the symmetry transformation in (\ref{vector}) is realized in a manner analogous to the $\sigma $-model and, if so, what kind of new physics it leads to in addition to the familiar successes of QED. It is quite clear that the simplest way to retain the explicitly covariant form of the vector Goldstone boson transformation (\ref{vector}) is to enlarge the existing Minkowskian space-time to higher dimensions, with our physical world assumed to be located on a three-dimensional Brane embedded in the high-dimensional bulk. However, while technically it is quite possible to start, say, with the spontaneous breakdown of the 5-dimensional Lorentz symmetry $SO(1,4)\rightarrow SO(1,3)$ \cite{Li} to generate an ordinary 4-dimensional Goldstone vector field $A_{\mu }(x)$, a serious problem for such theories is how to achieve the localization of this field on the flat Brane associated with our world \cite{rubakov}. We therefore take an alternative path: we start with a general massive vector field theory in an ordinary 4-dimensional space-time. The only restriction imposed is the requirement that the four-vector $A_{\mu }$, in order to describe a spin-1 particle, must satisfy the Lorentz condition\footnote{This supplementary condition is in fact imposed as an off-shell constraint, independent of its equation of motion\cite{ogi}.} \begin{equation} \partial _{\mu }A^{\mu }(x)=0 \label{spin} \end{equation} In this connection, it seems important to note that in what follows we deal with condensation of the physical vector field itself, rather than with a condensation of the scalar component in the 4-vector $A_{\mu }(x)$, as might occur in the general case when the supplementary Lorentz condition (\ref{spin}) is not imposed. We show that this leads to a non-linear $\sigma $-type model for QED, where the photon emerges as a vector Goldstone boson related to the spontaneous breakdown of Lorentz symmetry down to its spatial rotation subgroup $SO(1,3)\rightarrow SO(3)$ at some high scale $M$.
The model appears to coincide with ordinary QED taken in the Coulomb gauge in the limit where the scale $M$ goes to infinity. For finite values of $M$, there appear an infinite number of nonlinear photon interaction and self-interaction terms properly suppressed by powers of $M$. These terms violate Lorentz invariance and could have interesting implications for physics. \section{The Spin-1 Vector Fields and Physical SBLS} \label{abelian} Let us consider a simple Lagrangian for the neutral vector field $A_{\mu }(x)$ and one fermion $\psi (x)$ with dimensionless coupling constants ($\lambda $ and $e$) \begin{equation} \mathcal{L}=-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }+\frac{\mu ^{2}}{2}A_{\mu }^{2}-\frac{\lambda }{4}(A_{\mu }^{2})^{2}+\overline{\psi }(i\gamma \partial -m)\psi -eA_{\mu }\overline{\psi }\gamma ^{\mu }\psi \label{Lagr} \end{equation} where $F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }$ denotes the field strength tensor for the vector field $A_{\mu }=(A_{0},A_{i})$ (and we denote $A_{\mu }A^{\mu }\equiv A_{\mu }^{2}$ and use similar shorthand notation, e.g. $(\partial _{\mu }A_{i})^{2}\equiv \partial _{\mu }A_{i}\partial ^{\mu }A_{i}$, later). The free part of the Lagrangian is taken in the standard form so that in this case the Lorentz condition (\ref{spin}) automatically follows from the equation for the vector field $A_{\mu }$. The $\lambda A^{4}$ term is added to implement the spontaneous breakdown of Lorentz symmetry $SO(1,3)$ down to $SO(3)$ or $SO(1,2)$ for $\mu ^{2}>0$ and $\mu ^{2}<0$, respectively. Note that contributions of the form $A^{4}$ (and higher)\footnote{In fact, one might add one more term of the form $A_{\mu }A^{\nu }\partial _{\nu }A^{\mu }$, making the neutral vector field Lagrangian (\ref{Lagr}) the most general parity-conserving theory with only terms of dimension $\leq 4$. However, in the ground state of interest here, this extra term vanishes and leads to the same physics (see below).} naturally arise in string theories\cite{alan,alan1}. Writing down the equations of motion for the vector and fermion fields \begin{eqnarray} \partial ^{2}A_{\mu }-\partial _{\mu }\partial ^{\nu }A_{\nu }+\mu ^{2}A_{\mu }-\lambda A_{\mu }A_{\nu }^{2}-e\overline{\psi }\gamma _{\mu }\psi &=&0 \label{vec} \\ (i\gamma \partial -m)\psi -eA^{\mu }\gamma _{\mu }\psi &=&0 \label{fer} \end{eqnarray} then taking the 4-divergence of Eq.~(\ref{vec}), and requiring that the Lorentz condition (\ref{spin}) be fulfilled, one comes to the equation \begin{equation} \lambda \partial _{\nu }(A^{\mu }A_{\mu })~=0 \label{cond} \end{equation} which should be satisfied identically. Otherwise, it would represent by itself one more supplementary condition (in addition to the equations of motion and the Lorentz condition), implying that the field $A_{\mu }$ has fewer degrees of freedom than is needed for describing all its three possible spin states. This is definitely inadmissible. Furthermore, the solution $\lambda =0$ is also not acceptable for $\mu ^{2}<0$ since in this case the Hamiltonian of the theory has no lower bound. Thus the only solution to Eq.~(\ref{cond}) for the physical massive vector field corresponds generally to the case \begin{equation} \lambda \neq 0,\text{ \ \ \ \ }A_{\mu }^{2}=M^{2} \end{equation} where $M^{2}=\frac{\mu ^{2}}{\lambda }$ stands for a constant parameter with dimensionality of mass squared.
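As a quick cross-check of this vacuum solution (our own sketch, not part of the original derivation), a computer algebra system confirms that the vector field potential is extremized exactly at $A_{\mu }^{2}=\mu ^{2}/\lambda $:
\begin{verbatim}
import sympy as sp

a, mu2, lam = sp.symbols('a mu2 lam', positive=True)  # a = A_mu A^mu

# Potential read off from the Lagrangian: V(a) = -mu^2/2 * a + lam/4 * a^2
V = -mu2 / 2 * a + lam / 4 * a**2

# The stationary point dV/da = 0 sits at a = mu^2/lam, i.e. A_mu^2 = M^2,
assert sp.solve(sp.diff(V, a), a) == [mu2 / lam]
# ... and it is a minimum, since d^2V/da^2 = lam/2 > 0.
assert sp.diff(V, a, 2) == lam / 2
\end{verbatim}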
The general Lagrangian (\ref{Lagr}) now takes the form \begin{equation} \mathcal{L}_{SBLS}=-\frac{1}{4}F^{\mu \nu }F_{\mu \nu }+\overline{\psi }(i\gamma \partial -m)\psi -eA^{\mu }\overline{\psi }\gamma _{\mu }\psi +const, \label{lagr1} \end{equation} with the important constraint \begin{equation*} A_{0}^{2}-A_{i}^{2}=M^{2}, \end{equation*} from which it is evident that the vector field $A_{\mu }$ appears massless, while its vev leads to the actual SBLS. We have in fact obtained a nonlinear $\sigma $-type model for QED, which we now develop in more detail. Remarkably, there is no other solution to our basic equation (\ref{cond}) inspired solely by the spin-1 requirement for the massive vector field (\ref{spin}). Furthermore, the condition (8) arises regardless of the sign of $\mu ^{2}$, leading to the global minimum of the theory (given by the constant term in $\mathcal{L}_{SBLS}$), which lies lower than in the case where the vector field has zero vev. However, for the right-sign mass term ($\mu ^{2}>0$) in the starting Lagrangian (\ref{Lagr}), the Lorentz symmetry always breaks down to its spatial rotation subgroup $SO(1,3)\rightarrow SO(3)$. This is the main result of the paper, whose implications we study now\footnote{Note that some models of QED with a nonlinear condition $A_{0}^{2}-A_{i}^{2}=M^{2}$ have been previously considered \cite{nambu,venturi,alan1}. In refs. \cite{nambu,venturi}, this condition was used as a pure gauge choice whereas in our case it is a dynamical constraint (stemming from the spin-1 requirement for the vector field) since our basic massive vector field Lagrangian does not have gauge invariance. In Ref.\cite{alan1}, this condition appeared as a symmetry breaking condition in the string-inspired models (called ``bumblebee models'') with an effective negative-sign mass square term for the vector field. A crucial difference between the present work and the work of ref. \cite{alan1} is that in our case the dynamical spin-1 requirement (\ref{spin}) leads to the constraint in Eq.(8) for the vector field regardless of the sign of its mass term. Apart from the fact that for the right (positive) sign of $\mu ^{2}$ one has the global minimum of the theory for the SBLS of type $SO(1,3)\rightarrow SO(3)$, this requirement also excludes any ghost-like modes in the model. As a result, our model is fundamentally different, with a very different effective Lagrangian at low energies.}. \section{ Nonlinear $\protect\sigma $ Model for QED} The above considerations allow us to argue that the spin-1 vector field $A_{\mu }$ can self-consistently be presented in the Lorentz symmetry phase as the massive vector field mediating the fermion (and any other matter) interactions in the framework of massive QED, or, conversely, in the physical SBLS phase (\ref{lagr1}) as the basic condensed field producing massless Goldstone states which then are identified with physical photons. Taking the characteristic SBLS parameter $M^{2}$ positive ($M^{2}>0$) one comes to the breakdown of the Lorentz symmetry to its spatial rotation subgroup with the vector field space-components $A_{i}$ ($i=1,2,3$) as the Goldstone fields.
Their Lagrangian immediately follows from Eq.(\ref{lagr1}), which after using the Lorentz condition (\ref{spin}) and elimination of the vector field time-component $A_{0}$ looks like \begin{eqnarray} \mathcal{L}_{QED\sigma } &=&-\frac{1}{2}(\partial _{\mu }A_{\nu })^{2}+\overline{\psi }(i\gamma \partial -m)\psi -eA^{\mu }\overline{\psi }\gamma _{\mu }\psi \notag \\ &=&\frac{1}{2}(\partial _{\mu }A_{i})^{2}-\frac{1}{2}\frac{(A_{i}\partial _{\mu }A_{i})^{2}}{M^{2}+A_{i}^{2}}+ \label{lagr2} \\ &&+\overline{\psi }(i\gamma \partial -m)\psi +eA_{i}\overline{\psi }\gamma _{i}\psi \notag \\ &&-e\sqrt{M^{2}+A_{i}^{2}}\overline{\psi }\gamma _{0}\psi \notag \end{eqnarray} We now expand the newly appearing terms in powers of $\frac{A_{i}^{2}}{M^{2}}$ and also make the appropriate redefinition of the fermion field $\psi $ according to \begin{equation} \psi \rightarrow e^{ieMx_{0}}\psi \end{equation} so that the mass-type term $eM\overline{\psi }\gamma _{0}\psi $, appearing from the expansion of the fermion current time-component interaction in the Lagrangian (\ref{lagr2}), will be exactly cancelled by an analogous term stemming now from the fermion kinetic term. After this redefinition, and collecting the linear and nonlinear (in the $A_{i}$ fields) terms separately, we arrive at the Lagrangian \begin{eqnarray} \mathcal{L}_{QED\sigma } &=&\frac{1}{2}(\partial _{\mu }A_{i})^{2}+\overline{\psi }(i\gamma \partial -m)\psi +eA_{i}\overline{\psi }\gamma _{i}\psi - \\ &&-\frac{1}{2}\frac{(A_{i}\partial _{\mu }A_{i})^{2}}{M^{2}}\left( 1-\frac{A_{i}^{2}}{M^{2}}+\cdots \right) \notag \\ &&-e\frac{A_{i}^{2}}{2M}\left( 1-\frac{A_{i}^{2}}{4M^{2}}+\cdots \right) \overline{\psi }\gamma _{0}\psi \notag \end{eqnarray} where we have retained the former notation for the fermion $\psi $ and omitted the higher nonlinear terms for the photon. Additionally, the Lorentz condition for the spin-1 vector field (\ref{spin}) now reads as follows: \begin{equation} \partial _{i}A_{i}-\frac{A_{i}\partial _{0}A_{i}}{M}\left( 1-\frac{A_{i}^{2}}{2M^{2}}+\cdots \right) =0 \label{spin2} \end{equation} The Lagrangian (12), together with the modified Lorentz condition (\ref{spin2}), completes the $\sigma $-model construction for quantum electrodynamics. We will call this model $QED\sigma $. The model contains only two independent (and approximately transverse) vector Goldstone boson modes which are identified with the physical photon, and in the limit $M\rightarrow \infty $ it is indistinguishable from conventional QED taken in the Coulomb gauge. In this limit the 3-dimensional analog of the ``goldstonic'' gauge transformations (\ref{vector}), accompanied by the proper phase transformation of the fermion, \begin{equation} A_{i}(x)\rightarrow A_{i}(x)+n_{i},\ \ \ \ \ \ \ \psi \rightarrow e^{-ien_{i}x_{i}}\psi \ \ \ \ \ \ \ (i=1,2,3) \end{equation} emerges as an exact symmetry of the Lagrangian in Eq. (12), as one would expect in the pure Goldstonic phase. While $QED\sigma $ coincides with conventional QED in Coulomb gauge in the limit $M\rightarrow \infty $, it differs from it in several ways. First, apart from an ordinary photon-fermion coupling, our model generically includes an infinite number of nonlinear photon interaction and self-interaction terms which become active at high energies comparable to the SBLS scale $M$. Second and more important, the nonlinear photon interaction terms in the Lagrangian Eq.
(12) break Lorentz invariance in a very specific way, depending only on a single parameter $M$, unlike many recent parameterizations of Lorentz breaking which involve more than one new parameter. Furthermore, all the non-linear photon-fermion (photon-matter in general) interaction terms are C, CP and CPT non-invariant. This should have interesting implications for particle physics and cosmology, such as high precision measurements involving atomic systems, breaking of C and CP invariance in electromagnetic processes, extra contributions to neutral meson oscillations and, especially, the implications of CPT-violating effects on the matter-antimatter asymmetry in the early universe. We will pursue these implications in a separate publication. One immediate point to note is that the dispersion formula for light propagation still remains the same (i.e.\ $\omega _{k}^{2}-|\vec{k}|^{2}=0$). \section{ Conclusion} To summarize, we have started with the observation\cite{cfn} that the gauge-type transformations (\ref{vector}) with a gauge function linear in the co-ordinates can be treated as transformations of the spontaneously broken Lorentz symmetry, whose pure Goldstonic phase is presumably realized in the form of the known QED. Exploring this point of view and starting from the general massive vector field theory, we have constructed a full theoretical framework for the physical SBLS, including its Higgs phase as well, in terms of the properly formulated nonlinear $\sigma $-type model. For the first time we have proposed a purely fundamental Lagrangian formulation without referring to the effective four-fermion interaction ansatz dating back to the pioneering work of Bjorken \cite{bjorken}. In this connection, one might conclude that the whole non-linear Lagrangian $\mathcal{L}_{QED\sigma }$ (12), with a massless photon provided by the spontaneous breakdown of Lorentz invariance, is in some sense a more fundamental theory of electromagnetic interactions than the usual QED\footnote{Indeed this origin for the masslessness of the photon seems to be more general and deep than the usually postulated gauge symmetry. Despite the essentially non-renormalisable character of the Lagrangian $\mathcal{L}_{QED\sigma }$ (12), one does not expect the radiative corrections to generate a mass for the photon; otherwise one would have to admit that the radiative corrections lead to a breakdown of the original Lorentz symmetry in the starting Lagrangian (\ref{Lagr}) or (\ref{lagr1}), which is hardly imaginable.}. This theory, while coinciding with quantum electrodynamics at low energies, generically predicts striking new phenomena beyond conventional QED at high energies comparable to the SBLS scale $M$: an infinite number of nonlinear photon-photon and photon-matter interactions which explicitly break relativistic invariance, and C, CP and CPT symmetry. \section*{Acknowledgments} We would like to thank Gia Dvali, Oleg Kancheli, Alan Kostelecky, Gordon Moorhouse, Valery Rubakov, David Sutherland and Ching Hung Woo for useful discussions and comments. One of us (J.L.C.) is grateful for the kind hospitality shown to him during his summer visit (June-July 2003) to the Department of Physics and Astronomy at Glasgow University, where part of this work was carried out. Financial support from GRDF grant No. 3305 is also gratefully acknowledged by J.L.C. and R.N.M. R. N. M. is supported by National Science Foundation Grant No. PHY-0354401.
\section{Introduction} Throughout we shall assume that all graphs are simple. For a positive integer $r$ we let $K_r$ denote a complete graph on $r$ vertices and we let $K_{r,r}$ denote a balanced complete bipartite graph with $r$ vertices in each part. A \emph{triangle} in a graph $G$ is a subgraph isomorphic to $K_3$. The starting point for this work is the following classical theorem, one of the first results in extremal graph theory. \begin{theorem}[Mantel \cite{Mantel}]\label{mant} If $G$ is a graph on $n$ vertices with $|E(G)| > \frac{1}{4}n^2$, then $G$ contains a triangle. \end{theorem} To see that this bound is best possible, observe that when $n$ is even, the complete bipartite graph $K_{\frac{n}{2},\frac{n}{2}}$ has $\frac{n^2}{4}$ edges but no triangle. In this article we consider a colourful variant of the above. Let $G_1, G_2, G_3$ be three graphs on a common vertex set $V$ and think of each graph as having edges of a distinct colour. Define a \emph{rainbow triangle} to be three vertices $v_1, v_2, v_3 \in V$ so that $v_i v_{i+1} \in E(G_i)$ (where the indices are treated modulo~3). We will be interested in determining how many edges force the existence of a rainbow triangle. Is it true that if $|E(G_i)| > \frac{1}{4}n^2$ for $1 \le i \le 3$, then there exists a rainbow triangle? By taking $G_1 = G_2 = G_3$ we return to the setting of Mantel's Theorem. In general, however, the answer to this question is negative, as shown by the following construction. Let $n$ be an integer and let $0 < t < \frac{1}{2}$ have the property that $tn$ is an integer. Let $V$ be a set of $n$ vertices, and partition $V$ into $\{A,B,C\}$ where $|B| = |C| = tn$ and $|A| = (1 - 2t)n$. Construct three graphs $G_1, G_2, G_3$ on $V$ as follows: Let $G_1$ consist of a clique on $A$ plus a clique on $B$, let $G_2$ consist of a clique on $A$ plus a clique on $C$, and let $G_3$ consist of all edges except for those with both ends in $A$ (see Figure~\ref{fig:ABC}). \begin{figure} \centering \includegraphics[width=180pt]{ABC_a} \caption{Graphs $G_1, G_2, G_3$ on $V=A\cup B\cup C$, with edges of $G_1$, $G_2$, $G_3$ depicted by colors red, green and blue respectively.} \label{fig:ABC} \end{figure} A simple check reveals that there is no rainbow triangle for this triple of graphs. Furthermore $|E(G_1)| = |E(G_2)| = {n - 2tn \choose 2} + {tn \choose 2} = \frac{2-8t + 10t^2}{4}n^2 - \frac{1-t}{2}n$ while $|E(G_3)| = {n \choose 2} - {n - 2tn \choose 2} = \frac{8t - 8t^2}{4} n^2 - tn$. It is easy to verify that $2-8t + 10t^2>1$ and $8t - 8t^2>1$ whenever $t$ satisfies $\frac{1}{2}-\frac{1}{2\sqrt 2} < 0.147 < t < 0.155 < \frac{2-\sqrt{3/2}}{5}$. Thus, if we pick $0.147<t<0.155$, then for every sufficiently large $n$ (such that $tn$ is an integer) there are graphs $G_1, G_2, G_3$ on a common set of~$n$ vertices without a rainbow triangle that satisfy $|E(G_i)| > \frac{1}{4} n^2$ for $1 \le i \le 3$. However, a slight increase in the number of edges forces the occurrence of a rainbow triangle. Throughout the paper we fix the value $\tau = \frac{4 - \sqrt{7}}{9}$, so $\tau^2 \approx 0.0226$, and $\frac{1 + \tau^2}{4} = \frac{26 - 2\sqrt{7}}{81} \approx 0.2557$. \begin{theorem} \label{main} Let $G_1, G_2, G_3$ be graphs on a common set of $n$ vertices. If $|E(G_i)| > \frac{1 + \tau^2}{4}n^2 $ for $1 \le i \le 3$, then there exists a rainbow triangle.
\end{theorem} Only after finishing the paper did we learn about the work of A.~Diwan and D.~Mubayi~\cite{dhruv} (see also \url{https://faculty.math.illinois.edu/~west/regs/turancol.html}). They consider two-colored variants of Tur\'an's theorem, prove a couple of them and pose a problem about a three-colored version of Mantel's theorem. Thus, the above theorem is an asymptotically tight solution to their problem. Theorem~\ref{main} is sharp in the sense that $\tau^2$ cannot be replaced by a smaller constant. To see this, note that $t = \tau$ is the unique solution to the quadratic equation $2 - 8t + 10t^2 = 8t - 8t^2$ with $0 < t < \frac{1}{2}$. For this number $\tau$ both sides of this quadratic equation are equal to $1 + \tau^2$. It follows from the construction shown above (taking $n$ large and $t$ close to $\tau$) that for every $\epsilon > 0$ there exist simple graphs $G_1, G_2, G_3$ on a common set of~$n$ vertices without a rainbow triangle that satisfy $|E(G_i)| > (\frac{1 + \tau^2}{4} - \epsilon) n^2$ for $1 \le i \le 3$. We were also able to use some of the ideas in our proof to obtain a new short proof of Mantel's Theorem. We include this proof at the beginning of the main section since it provides a nice example of a technique later used to prove Theorem \ref{main}. Since $\tau^2$ is not rational, there does not exist a graph $G$ with $|V(G)| = n$ and $|E(G)| = \frac{1 + \tau^2}{4}n^2$, and thus there is no finite tight example for our problem. This inconvenience is removed in the setting of graph limits and graphons: a growing sequence constructed as in the previous paragraph would converge to three graphons, each with density $\frac{1 + \tau^2}{2}$ and without a rainbow triangle. In this setting, Razborov's flag algebra machinery may give an alternative proof of our result, and be useful in extending it. Indeed, a flag-algebra proof has already been obtained (independently from us) by E.~Culver, B.~Lidick\'y, F.~Pfender, and J.~Volec~\cite{flags-manu}. Their proof even gives a precise characterization of all extremal configurations with a sufficiently large number of vertices. In further exploring this area of ``rainbow extremal graph theory'', each approach has its pluses and minuses; ours has the advantage of being verifiable by hand. We suggest some potentially interesting directions to proceed via the following problems. \begin{problem} For what real numbers $\alpha_1, \alpha_2, \alpha_3 > 0$ is it true that every triple of graphs $G_1, G_2, G_3$ satisfying $|E(G_i)| > \alpha_i n^2$ must have a rainbow triangle? \end{problem} Tur\'an's Theorem generalizes Mantel's Theorem by proving that for every integer $r \ge 2$, a simple $n$ vertex graph with more than $(1 - \frac{1}{r-1}) \frac{n^2}{2}$ edges has a $K_r$ subgraph. Analogously, one may consider the following. \begin{problem} For every positive integer $r$, what is the smallest real number $\delta_r$ so that whenever $G_1, \ldots, G_{ {r \choose 2} }$ are graphs on a common set of $n$ vertices with $|E(G_i)| \ge \delta_r n^2$ for every $1 \le i \le {r \choose 2}$ there exists a rainbow $K_r$, i.e., a set of $r$ vertices and one edge from each $G_i$ that together form a clique on this set of vertices? \end{problem} We can also consider this problem for the number of graphs (colours) being different from~$\binom r2$, with an appropriately modified notion of ``rainbow''.
For $r=3$ and more than three graphs the answer is $\delta_r = 1/4$, with the extremal configuration being all graphs identical complete bipartite graphs (Theorem 1.2 of~\cite{KSSV}). When the number of graphs is less than~$\binom r2$, one can study the existence of other colour patterns. For $r=3$ and two graphs a problem with such a flavor was considered in~\cite{DMM}. We will finish this section by a sample of other results and conjectures that can be described as rainbow. Perhaps historically first is a result of B\'ar\'any~\cite{Barany} in combinatorial geometry. He obtained a rainbow (also termed colourful) version of Carath\'eodory's theorem; see also~\cite{GKblog}. More recent, and closer to our present topic, is the study of rainbow Erd\H{o}s-Ko-Rado theorems. Let us use $f(n,r,k)$ to denote the EKR number -- the smallest $m$ such that every $r$-uniform hypergraph with $n$~vertices and $m$~edges has a matching of size~$k$. (Recall that the classical Erd\H{o}s-Ko-Rado theorem states that $f(n,r,2) = \binom{n-1}{r-1}$ whenever $n \ge 2r$.) Aharoni and~Howard~\cite{rainbowEKR} conjecture the following rainbow version: \begin{conjecture}[Aharoni, Howard] Let $H_1$, \dots, $H_k$ be $r$-uniform hypergraphs on the same set of~$n$~vertices, each having $f(n,r,k)$ hyperedges. Then there is a rainbow matching: a matching $\{e_1, \dots, e_k\}$ such that $e_i \in E(H_i)$ for $i=1, \dots, k$. \end{conjecture} In~\cite{rainbowEKR} this conjecture is discussed for hypergraphs that are balanced $r$-partite and it is proved there with this restriction for $r=3$. We finish with a conjecture motivated by Dirac's condition for hamiltonicity. \begin{conjecture}[Aharoni] Given graphs $G_1$, \dots, $G_n$ on the same vertex set of size~$n$, each having minimum degree at least $n/2$, there exists a rainbow Hamilton cycle: a cycle with edge-set $\{e_1, \dots, e_n\}$ such that $e_i \in E(G_i)$ for $i=1, \dots, n$. \end{conjecture} \section{Proof} A key idea in the proof of Theorem \ref{main} will be to analyze the structure of the subgraphs induced by pairs of edges in a matching of a graph. We can use a similar (but simpler) approach to obtain a short proof of Mantel's Theorem. \begin{lemma}\label{count} Let $G$ be a graph and $P$ be the set of pairs of distinct vertices $\left\lbrace x,y\right\rbrace \subseteq V(G)$ such that $N(x) \cap N(y) \neq \emptyset$. If $M$ is a maximal matching in $G$, then~$\left| P\right| \geq \left| E(G) \right| - \left| M \right|$. \end{lemma} \begin{proof} Let $M = \left\lbrace e_1, e_2, \dots, e_k \right\rbrace$ be a maximal matching of $G$. Since $M$ is maximal, we know that every edge $e \in E(G)$ has at least one endpoint in common with an edge of $M$. For $e \in E(G) \setminus M$, let $s(e)$ be the smallest integer such that $e \cap e_{s(e)} \neq \emptyset$, and take $f(e) = e \triangle e_{s(e)}$. It is easy to see that $f:E(G) \setminus M \to P$ is an injective function, and the result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{mant}] Let $G$ be a triangle-free graph and $M$ a maximum matching of $G$. Since $G$ has no triangles, a pair of vertices with a common neighbour cannot be an edge, so $\left| P \right| + \left| E(G) \right| \leq \binom{n}{2}$, and by Lemma \ref{count} we have $\left| E(G)\right| - \frac{1}{2}n \leq \left| E(G) \right| - \left| M \right| \leq \left| P\right|$. By combining these inequalities, we get $2\left| E(G) \right| \leq \binom{n}{2} + \frac{1}{2}n$, and so $\left| E(G) \right| \leq \frac{1}{4}n^2$. \end{proof} There are a great many proofs of Mantel's Theorem and we borrow ideas from a few.
In particular, the following lemma, which we require, is a variant of an ``entropy minimizing'' proof (\cite{Erdos-Turan}, see also \cite{Aigner-Turan}). For a graph $G$ and a set $X \subseteq V(G)$ we let $G[X]$ denote the subgraph of $G$ induced on $X$ and we let $e_G(X) = |E(G[X])|$. If $Y \subseteq V(G)$ is disjoint from~$X$ we let $e_G(X,Y) = | \{ xy \in E(G) \mid \mbox{$x \in X$ and $y \in Y$} \}|$. As usual, if the graph $G$ is clear from context, we drop the subscript $G$. \begin{lemma} \label{bipman} Let $G$ be a graph and let $\{Z_0, Z_1\}$ be a partition of $V(G)$. If for $i=0,1$, every $z \in Z_i$ has the property that $N(z) \cap Z_{1 - i}$ is a clique, then \[ e(Z_0,Z_1) \le e(Z_0) + e(Z_1) + \tfrac{1}{2} \left( |Z_0| + |Z_1| \right). \] \end{lemma} \begin{proof} We say that two vertices $z,z' \in Z_i$ (with the same $i$ for both) are \emph{twins} if they have the same closed neighbourhood, $N[z] = N[z']$. Observe that being twins is an equivalence relation. Now we choose a graph $G$ so that \begin{enumerate}[(1)] \item \label{opt1} $e(Z_0,Z_1) - e(Z_0) - e(Z_1)$ is maximum; and \item \label{opt2} the total number of pairs of vertices that are twins is maximum (subject to (\ref{opt1})). Recall that if a pair of vertices are twins, then they both are in~$Z_0$ or both in~$Z_1$. \end{enumerate} Observe that it is sufficient to verify the desired bound for this~$G$. Now we fix $i \in \{0,1\}$ and let $w',w'' \in Z_i$ be adjacent. Consider the graph $G'$ ($G''$) obtained from $G$ by deleting $w'$ ($w''$) and adding a new vertex and making it a twin of $w''$ ($w'$). It is immediate from this construction that both $G'$ and $G''$ satisfy the condition that $N(z) \cap Z_{1-j}$ is a clique for every $j = 0,1$ and $z \in Z_j$. If one of $G'$ or $G''$ is superior to the original graph $G$ for the first optimization criterion~(\ref{opt1}) we have a contradiction with the choice of~$G$. It follows that all three graphs $G$, $G'$, and $G''$ are tied relative to this criterion. If $w'$ and $w''$ are not twins, then one of $G'$ or $G''$ is superior to $G$ relative to the second optimization criterion~(\ref{opt2}). To see this, observe that if $x, y \in Z_{1-i}$ are twins in~$G$, then they are also twins in~$G'$ and in~$G''$. If the twin-equivalence class of~$w'$ ($w''$) has $a'$ ($a''$) elements, then $G'$ loses $a'-1$ twin-pairs and gains $a''$ new ones; thus if $a'' \ge a'$, then $G'$ is superior to~$G$, a contradiction to the choice of~$G$. (If $a' > a''$, we use $G''$.) It follows that $w'$ and $w''$ must be twins. As being twins is an equivalence relation, we conclude that the graph~$G$ is a disjoint union of complete graphs. Consider a component $H$ of $G$ with $|V(H) \cap Z_0| = \ell$ and $|V(H) \cap Z_1| = m$. In this case the sets $C = E(H) \cap E(Z_0,Z_1)$ and $D = E(H) \setminus C$ satisfy $$ |D| - |C| = {\ell \choose 2} + {m \choose 2} - \ell m = \tfrac{1}{2}(\ell - m)^2 - \tfrac{\ell + m}{2} \ge -\tfrac{\ell + m}{2} $$ and the lemma follows by summing these inequalities over all components. \end{proof} Any counterexample to Theorem~\ref{main} would immediately imply the existence of large counterexamples by way of ``blowing up'' vertices. More precisely, suppose that $G_1, G_2, G_3$ contradict the theorem, and let $k$ be a positive integer. Now replace every vertex $v$ by a set $X_v$ consisting of $k$ isolated vertices, and for each graph $G_i$, replace every edge $uv$ by all possible edges between the sets $X_u$ and $X_v$.
This operation magnifies the number of vertices by a factor of $k$ and the number of edges in each $G_i$ by a factor of $k^2$, and thus yields another counterexample. Moreover, if $\min_{1 \le i \le 3} \frac{|E(G_i)|}{n^2} - \frac{1+\tau^2}{4} = \epsilon$ then this property will also be preserved. So, the resulting graphs on~$kn$ vertices will exceed the bound by $\epsilon k^2n^2$ edges ($\epsilon$ is positive, as $\tau^2$ is irrational). The condition in Theorem \ref{main} implies $|E(G_i)| + |E(G_j)| \ge \tfrac{1 + \tau^2}{2} n^2$ for all $1 \le i<j\le 3$. Hence, by the above observation, to prove Theorem~\ref{main} it suffices to establish the following result. \begin{lemma}\label{main2} Let $G_1, G_2, G_3$ be graphs on a common set $V$ of $n \ge 1$ vertices. If \[ |E(G_i)| + |E(G_j)| \ge \tfrac{1 + \tau^2}{2} n^2 + \tfrac{3}{2}n \] holds for every $1 \le i < j \le 3$, then there exists a rainbow triangle. \end{lemma} The statement of the above lemma replaces a bound on the number of edges in each graph $G_i$ by a bound on the sum of the number of edges in any two such graphs. This adjustment will allow us to forbid certain types of induced subgraphs of a possible minimal counterexample, as demonstrated by Lemma \ref{no3pm} below. To proceed, we need some further notation. For the remainder of this article we will be focusing on the proof of Lemma~\ref{main2}, so we will always have three graphs $G_1, G_2, G_3$ on a common set of vertices $V$. Abbreviating our usual notation, if $X \subseteq V$ and $1 \le i \le 3$ we will let $e_i( X) = e_{G_i}(X)$ and we define $e(X) = e_1(X) + e_2(X) + e_3(X)$. Similarly, if $Y \subseteq V$ is disjoint from $X$, then we let $e_i(X,Y) = e_{G_i}(X,Y)$ and we define $e(X,Y) = e_1(X,Y) + e_2(X,Y) + e_3(X,Y)$. We also let $E_i = E(G_i)$. \begin{lemma}\label{no3pm} A counterexample to Lemma~\ref{main2} with $n$ minimum does not contain a nonempty proper subset of vertices $X$ for which $G_i[X]$ has a perfect matching for all $1 \le i \le 3$. \end{lemma} \begin{proof} Let $G_1$, $G_2$, $G_3$ be a minimal counterexample to Lemma~\ref{main2}. Suppose (for a contradiction) that there is a set of vertices $X$ such that every colour induces a graph with a perfect matching on $X$. Let $|X| = \ell$ and let $M$ be a perfect matching in the graph $G_3[X]$. If $xx' \in M$ and $y \in V \setminus \{x,x'\}$, then $e_1(y, \{x,x' \} ) + e_2(y, \{x,x'\}) \le 2$ (otherwise there would be a rainbow triangle). Summing this over all edges of $M$ and $y \in V \setminus X$ gives us $e_1(X, V \setminus X) + e_2(X, V \setminus X) \le \ell(n- \ell)$. If $uu', vv' \in M$ and $uv \in E_1\cap E_2$, then $uv', vu' \not \in E_1 \cup E_2$. This implies that $e_1(\{u,u'\}, \{v,v'\}) + e_2(\{u,u'\}, \{v,v'\}) \le 4$ and thus, on average, a pair of vertices of $X$ not matched together by $M$ contributes at most $1$ to $e_1(X) + e_2(X)$; obviously a pair forming an edge of $M$ contributes at most~$2$. Hence $e_1(X) + e_2(X) \le \frac{\ell}{2} + {\ell \choose 2} = \frac{\ell^2}{2}$. Therefore \begin{align*} |E(G_1 - X)| + |E(G_2-X)| &\ge |E(G_1)| + |E(G_2)| - \ell (n-\ell) - \tfrac{\ell^2}{2} \\ &\ge \tfrac{1+\tau^2}{2}n^2 + \tfrac{3}{2}n - \ell n + \tfrac{\ell^2}{2} \\ &\ge \tfrac{1+\tau^2}{2} (n-\ell)^2 + \tfrac{3}{2}(n - \ell). \end{align*} It follows from the same argument applied to the other two pairs of colours that the graphs $G_1-X$, $G_2-X$, and $G_3-X$ form a smaller counterexample, contradicting minimality.
\end{proof} \begin{figure}[ht] \centering \includegraphics[width=310pt]{Obs1_a} \caption{The three situations described in the first part of Observation \ref{digonobs}. Colors $i, j$ and $k$ are depicted by green, red and blue respectively. If an edge is not depicted, it is not present; if it is dashed, it may be present.} \label{fig:Obs1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=260pt]{Obs2_a} \caption{The two situations described in the second part of Observation \ref{digonobs}. Colors $i$, $j$ and $k$ are depicted by green, red and blue respectively. If an edge is not depicted, it is not present; if it is dashed (gray), it may be present in any color (as long as no rainbow triangle appears).} \label{fig:Obs2} \end{figure} \begin{observation} \label{digonobs} Let $G_1, G_2, G_3$ be a counterexample to Lemma~\ref{main2} for which $n$ is minimum and let $X, X' \subseteq V$ be disjoint sets with $|X| = |X'| = 2$. Suppose also that $n \ge 5$. If $e(X) = e(X') = 2$ and $\{i,j,k\} = \{1,2,3\}$ then we have: \begin{enumerate} \item If $e_i(X) = e_j(X) = e_i(X') = e_j(X') = 1$, one of the following holds (see Figure~\ref{fig:Obs1}): \begin{enumerate} \item $e_{k}(X,X') = 0$, or \item $e_{k}(X,X') = 1$ and $e_{i}(X,X'), e_{j}(X,X') \le 2$, or \item $e_{k}(X,X') = 2$ and $e_{i}(X,X') = e_{j}(X,X') = 0$. \end{enumerate} \item If $e_i(X) = e_j(X) = e_i(X') = e_k(X') = 1$, one of the following holds (see Figure~\ref{fig:Obs2}): \begin{enumerate} \item $e(X,X') \le 4$, or \item $e(X,X') = 5$ where $e_{i}(X,X') = 3$ and $e_{j}(X,X') = e_{k}(X,X') = 1$. \end{enumerate} \end{enumerate} \end{observation} \begin{proof} The first part follows from the observation that the graph induced on $X \cup X'$ cannot have two nonadjacent edges of colour $k$ (otherwise Lemma~\ref{no3pm} would be violated) and a straightforward case analysis (relying on the assumption that there is no rainbow triangle). To show the second part, recall that Lemma~\ref{no3pm} implies there is no pair of vertices adjacent in all three colors. We make a similar case analysis, obtaining that the only configuration with $e(X,X') = 5$ is the one depicted in Figure \ref{fig:Obs2} $(b)$. \end{proof} \begin{proof}[Proof of Lemma \ref{main2}] Suppose (for a contradiction) that Lemma~\ref{main2} is false, and choose a counterexample $G_1, G_2, G_3$ with common vertex set $V$ so that $n = |V|$ is minimum. It follows that $n > 5$ (otherwise the given bound on~$|E(G_i)| + |E(G_j)|$ is greater than $2\binom n2$). Recall that by Lemma~\ref{no3pm} there does not exist a pair of vertices adjacent in all three graphs $G_1, G_2, G_3$. Say that a set $X \subseteq V$ with $|X|=2$ is a \emph{digon} if $e(X) = 2$. Now, choose a maximum-sized collection $M$ of pairwise disjoint digons. For $1 \le i < j \le 3$ define $M_{i,j}$ to be the subset of $M$ consisting of those digons $X$ so that $e_i(X) = e_j(X) = 1$. For every $1 \le i < j \le 3$ let $X_{i,j}$ be the union of the digons in $M_{i,j}$ and let $D = V \setminus (X_{1,2} \cup X_{1,3} \cup X_{2,3})$. \begin{claim} The set $D$ satisfies the following. \begin{enumerate} \item If $x,x' \in D$ then $e(x,x') \le 1$. \item If $X \in M_{i,j}$ and $y \in D$ satisfy $e(X,y) \ge 3$, then $e_k(X,y) = 0$ (with $\{i,j,k\} = \{1,2,3\}$). \item For every $X \in M$ there is at most one vertex $y \in D$ for which $e(X,y) = 4$.
\end{enumerate} \end{claim} \begin{proof}[Proof of Claim~1] The first part follows from the maximality of $M$. For the second part, if $X=\{x,x'\}$ and, say, $xy \in E_k$ then $x'y \notin E_i \cup E_j$ (and if $xy$ carries a second colour from $\{i,j\}$, then $x'y \notin E_k$ either, as otherwise $\{x,x',y\}$ would span a rainbow triangle), thus $e(X,y) \le 2$. For the last part, assume again $X=\{x,x'\} \in M_{i,j}$. For contradiction, suppose there are distinct $y,y' \in D$ such that $e(X,y) = e(X,y') = 4$. As no edge is contained in all three subgraphs, both $\{x,y\}$ and $\{x',y'\}$ are digons, so we may use them in~$M$ instead of $\{x,x'\}$, contradicting the maximality of $M$. \end{proof} \bigskip The plan for the rest of the proof is to use our understanding of the structure of a minimal counterexample (as given by Observation~\ref{digonobs} and by Claim~1) to derive several inequalities for $|X_{1,2}|$, $|X_{2,3}|$, $|X_{1,3}|$ and~$|D|$. These inequalities will appear as (1)--(3) below. As we will show that they have no solution, we will reach our desired contradiction. In order to simplify calculations we now replace the graphs $G_i$ by graphs having simpler structure (and possibly several rainbow triangles). First off, for every $X \in M$, if there exists $y \in D$ with $e(X,y) = 4$, then delete one edge between $X$ and $y$. (Note that by the above claim this removes at most $\frac{n}{2}$ edges in total.) Next suppose that $\{i,j,k\} = \{1,2,3\}$ and $X,X' \in M_{i,j}$ satisfy $e_{k}(X,X') > 0$. In this case we delete all edges between $X$ and $X'$ (in all three graphs) and then add back three edges between $X$ and $X'$ in $G_i$ and three such edges in $G_j$. Note that for this operation, the first part of Observation~\ref{digonobs} implies that the sum of the edge counts of any two of the graphs does not decrease. Let $G_1', G_2', G_3'$ be the graphs resulting from applying these operations whenever possible. By the above, \[ \min_{1 \le i < j \le 3} |E(G'_i)| + |E(G'_j)| \ge \tfrac{1+\tau^2}{2}n^2 + n. \] Note that the sets $M$, $M_{i,j}$, $X_{i,j}$ and $D$ do not change in going to $G'_1$, $G'_2$, $G'_3$. The graphs $G'_1, G'_2,G'_3$ may have a rainbow triangle. However, each such triangle involves an edge between two digons $X$, $X'$ in the same set $M_{i,j}$ for which $e_{G'_i}(X,X') = e_{G'_j}(X,X') = 3$. Before making our next modification, we pause to construct an auxiliary graph. For every $1 \le i \le 3$, let $\{i,j,k\} = \{1,2,3\}$ and construct a simple graph $H$ (depending on $i$) with vertex set $M_{i,j} \cup M_{i,k}$ by the following rules: \begin{itemize} \item If $X,X' \in M_{i,j}$ are distinct, we add an edge between them if $e_{G_j'}(X,X') \le 3$. We do the same with $k$ in place of~$j$. \item If $X \in M_{i,j}$ and $X' \in M_{i,k}$, we add an edge between them if $e'(X,X') = 5$ (using $e' = e_{G_1'} + e_{G_2'} + e_{G_3'}$). \end{itemize} \begin{claim} The graph $H$ satisfies the hypothesis of Lemma~\ref{bipman} with $Z_0 = M_{i,j}$ and~$Z_1 = M_{i,k}$. \end{claim} \begin{proof}[Proof of Claim~2] For the sake of contradiction assume that there are digons $X,X' \in M_{i,k}$ and $Y \in M_{i,j}$ such that both $YX$ and $YX'$ are edges of~$H$, but $XX'$ is not. It follows that $e_{G'_k}(X,X') = 4$, so the edges between $X$ and~$X'$ have not been modified when constructing graphs $G'_1$, $G'_2$, $G'_3$. Moreover, as $e'(X,Y) = e'(X',Y) = 5$, the second part of Observation \ref{digonobs} describes precisely the structure of edges between $X \cup X'$ and $Y$, and we find a rainbow triangle (not only in~$G'$, but also in~$G$). \end{proof} By Lemma~\ref{bipman} we have $e_H(Z_0) + e_H(Z_1)\ge e_H(Z_0,Z_1)-(|M_{i,j}|+|M_{i,k}|)/2$.
Note that by the definition of $H$, $e_H(Z_0)$ is at most the number of missing edges of colour $j$ in $X_{i,j}$, and $e_H(Z_1)$ is at most the number of missing edges of colour~$k$ in~$X_{i,k}$. Also, $e_H(Z_0,Z_1)$ is the number of pairs $(X,X')$ where $X \in M_{i,j}$ and $X' \in M_{i,k}$ that satisfy $e(X,X') = 5$. Based on this observation, we now construct a new triple of graphs. For each $i = 1, 2, 3$ and for $j<k$ such that $\{i,j,k\}=\{1,2,3\}$ we modify the induced subgraphs of $G_1'$, $G_2'$, $G_3'$ on $X_{i,j} \cup X_{i,k}$: { \narrower For every $X \in M_{i,j}$ and $X' \in M_{i,k}$ we modify the graph between $X$ and $X'$ as follows: If $e(X,X') = 5$ (so, by Observation~\ref{digonobs}, $e_i(X,X') = 3$ and $e_j(X,X') = 1 = e_k(X,X')$) then delete the edges between $X$ and $X'$ of colours $j$ and $k$ and then add back one new edge of colour $j$ or $k$ (to be chosen later) so that the new edge is not parallel to any of the three edges of colour~$i$. Otherwise, if $e(X,X') \le 4$ we rearrange the edges between $X$ and $X'$ so that every $x \in X$ and $x' \in X'$ satisfy $e(x,x') \le 1$. Next, we add to $X_{i,j}$ all missing edges of colour~$j$ and to $X_{i,k}$ all missing edges of colour~$k$. We let $\mathop{\mathrm{loss}}(j,i)$ denote the decrease of the number of edges of~$G_j'$ (thus $\mathop{\mathrm{loss}}(j,i)$ is negative if the number of edges of~$G_j'$ has increased). Similarly for $k$ in place of~$j$. It follows from Lemma~\ref{bipman} and the above discussion that $\mathop{\mathrm{loss}}(j,i) + \mathop{\mathrm{loss}}(k,i) \le (|M_{i,j}| + |M_{i,k}|)/2 \le n/4$. Moreover, we can choose whether to decrease the number of edges of~$G_j'$ or $G_k'$. Thus, we may ensure that $\mathop{\mathrm{loss}}(j,i)<0$ only if $\mathop{\mathrm{loss}}(k,i)\le 0$ (and vice versa). Consequently, we have $$ \mathop{\mathrm{loss}}(j,i) \le n/4 \quad \mbox{and} \quad \mathop{\mathrm{loss}}(k,i) \le n/4. $$ } Summing up, we may arrange the modification process so that each colour class of edges decreases in size by at most $\frac{n}{2}$. So, if we let $G''_1, G''_2, G''_3$ be the graphs resulting from our operation, we have: \[ \min_{1 \le i < j \le 3} |E(G''_i)| + |E(G''_j)| > \tfrac{1+\tau^2}{2}n^2. \] To complete the proof, we will now show that the above density condition is incompatible with the structure of the graphs $G''_i$. Let $G''=\bigcup_{i\le 3}G''_i$. Below we use the notation $e(S)$, $e(S,S')$, $e_i(S)$ for the parameters in the graph $G''$. The construction of the graphs $G''_i$ implies: \begin{claim} \begin{enumerate} \item The subgraph of $G''$ induced on $X_{i,j}$ is complete in colours $i$ and $j$ and empty in the remaining colour. \item If $x \in X_{i,j}$ and $x' \in X_{i,k}$ where $j \neq k$, then $e(x,x') \le 1$. \item If $x,x' \in D$, then $e(x,x') \le 1$. \item \label{denstod} If $X \in M_{i,j}$ and $y \in D$ then $e(X,y) \le 3$ and if $e(X,y) = 3$, then all edges between $X$ and $y$ have colour $i$ or colour $j$. \end{enumerate} \end{claim} Let $a$, $b$, $c$, $d$ be such that $an = |X_{1,2}|$, $bn = |X_{1,3}|$, $cn= |X_{2,3}|$, and $dn = |D|$. We shall assume (without loss of generality) that $a \ge b \ge c$, and we note that $a+b+c+d = 1$. Next, we will apply our density bounds to get some inequalities relating $a$, $b$, $c$, and $d$. For the purposes of these calculations, it is convenient to introduce a density function.
For any graph $H$ we define $\mathop{\mathtt{d}}(H) = \frac{ 2 |E(H)| }{ |V(H)|^2 }$. Note that with this terminology we have \[ \min_{1 \le i < j \le 3} \mathop{\mathtt{d}}(G''_i) + \mathop{\mathtt{d}}(G''_j) > 1 + \tau^2. \] First we consider just colours 2 and 3. Note that if $x,y \in V$ are adjacent in both $G_2''$ and $G_3''$, then either $x,y \in X_{2,3}$ or one of these vertices is in $X_{2,3}$ and the other is in $D$. Furthermore, in this last case, if say $y \in D$ and $x \in X_{2,3}$ has $X = \{x,x'\} \in M_{2,3}$, then $e_2(X, y) + e_3(X,y) \le 3$. It follows from this that $|E(G_2'')| + |E(G_3'')| \le {n \choose 2} + {cn \choose 2} + \frac{1}{2} cdn^2 \le \frac{n^2}{2} + c^2 \frac{n^2}2 + cd \frac{n^2}{2}$. Multiplying this inequality through by $\frac{2}{n^2}$ then gives the useful bound \begin{equation} \label{011} c^2 + cd \ge \mathop{\mathtt{d}}(G_2'') + \mathop{\mathtt{d}}(G_3'') - 1 \ge \tau^2. \end{equation} Next, we will count edges of~$G_1''$ twice, edges of~$G_2''$ twice, and edges of~$G_3''$ three times. An edge within~$X_{1,2}$ is counted four times in total, an edge within $X_{1,3}$ or within $X_{2,3}$ five times in total, and an edge between $X_{1,2}$ and $X_{2,3}$ (etc.) at most three times. Finally, for $y \in D$ and $X \in M_{2,3}$, we count the two vertex pairs between $y$ and $X$ at most $2+3+3$ times in total, thus we have a contribution of at most $4 |D| |X_{2,3}|$ here. The same count applies for $M_{1,3}$ in place of~$M_{2,3}$; for $M_{1,2}$ we get at most $3 |D| |X_{1,2}|$. This implies $$ 2 |E(G_1'')| + 2|E(G_2'')| + 3 |E(G_3'')| -3 {n \choose 2} \le {an \choose 2} + 2 {bn \choose 2} + 2 {cn \choose 2} + bdn^2 + cdn^2, $$ giving us the bound \begin{equation*} a^2 + 2b^2 + 2c^2 +2bd + 2cd \ge 2 \mathop{\mathtt{d}}(G_1'') + 2 \mathop{\mathtt{d}}(G_2'') + 3 \mathop{\mathtt{d}}(G_3'') - 3. \end{equation*} We can express the right hand side as $\tfrac12 (\mathop{\mathtt{d}}(G_1'')+\mathop{\mathtt{d}}(G_2'')) +\tfrac32 (\mathop{\mathtt{d}}(G_1'')+\mathop{\mathtt{d}}(G_3'')) +\tfrac32 (\mathop{\mathtt{d}}(G_2'')+\mathop{\mathtt{d}}(G_3'')) -3$ and use the lower bound for each of the sums of two densities, yielding \begin{equation} \label{223} a^2 + 2b^2 + 2c^2 +2bd + 2cd \ge \tfrac{1}{2} + \tfrac{7}{2}\tau^2. \end{equation} Finally, $|E(G_1'')| + |E(G_2'')| + |E(G_3'')| - {n \choose 2} \le {an \choose 2} + {bn \choose 2} + {cn \choose 2} + \frac{1}{2}(a+b+c)dn^2$ gives us the inequality (a strict one, as $\tau^2$ is irrational) \begin{equation} \label{111} a^2 + b^2 + c^2 + d(a+b+c) > \tfrac{1}{2} + \tfrac{3}{2}\tau^2. \end{equation} We claim that there do not exist nonnegative real numbers $a,b,c,d$ with $a \ge b \ge c$ and $a+b+c+d = 1$ satisfying the inequalities (\ref{011}), (\ref{223}), and (\ref{111}). To prove this, first note that inequality (\ref{011}) and the quadratic formula imply \begin{equation} \label{easycl} c \ge \frac{-d + \sqrt{d^2 + 4\tau^2}}{2}. \end{equation} Now $b \ge c$ and inequality (\ref{easycl}) imply $b + c + d \ge \sqrt{d^2 + 4\tau^2}$. This gives us the following useful upper bound on $a$ \begin{equation} \label{aupper} a \le 1 - \sqrt{d^2 + 4\tau^2} \le 1 - 2\tau. \end{equation} To get a lower bound on $a$, observe that $a \ge b \ge c$ and inequality (\ref{111}) give us $\frac{1}{2} + \frac{3}{2}\tau^2 < a^2 + b^2 + c^2 + d(1-d) \le 3a^2 + \frac{1}{4}$. It follows that $a \ge \sqrt{ \frac{1}{12} + \frac{1}{2}\tau^2 } \ge 2\tau$.
Combining this lower bound on $a$ with the upper bound (\ref{aupper}) gives the following useful inequality \[ a^2 + (1-a)^2 \le 1 - 4\tau + 8\tau^2. \] The above bound together with (\ref{223}) implies \[\tfrac{1}{2} + \tfrac{7}{2}\tau^2 \le a^2 + (1-a)^2 + b^2 + c^2 - 2bc - d^2 \le 1 -4\tau + 8\tau^2 + (b-c)^2 - d^2. \] However, $\tau$ satisfies the equation $\tfrac{1}{2} + \tfrac{7}{2}\tau^2 = 1 -4\tau + 8\tau^2$ so the above simplifies to give \begin{equation} \label{bmc} b - c \ge d. \end{equation} Note that since $b \ge d$ (which follows from (\ref{bmc}) as $c \ge 0$) we have $1 \ge a + b + d \ge 3d$ and thus $d \le \frac{1}{3}$. At this point we have $c \ge \frac{-d + \sqrt{d^2 + 4\tau^2}}{2}$ and $b \ge \frac{d + \sqrt{d^2 + 4\tau^2}}{2}$ and we will show that this contradicts (\ref{111}). To see this, note that under the assumption $a \ge b \ge c$ and $a+b+c = 1-d$ the quantity $a^2 + b^2 + c^2$ is maximized when $b$ and $c$ are as small as possible and $a$ is as large as possible. Thus \begin{align*} \tfrac{1}{2} + \tfrac{3}{2}\tau^2 &< a^2 + b^2 + c^2 + d(1-d) \\ &\le (1 - d - \sqrt{d^2 + 4\tau^2} )^2 + \tfrac{\left( d + \sqrt{d^2 + 4\tau^2} \right)^2}{4} + \tfrac{\left( -d + \sqrt{d^2 + 4\tau^2} \right)^2}{4} + d(1-d) \\ &= 1 + 2d^2 - d + 6\tau^2 -2(1-d) \sqrt{d^2 + 4\tau^2} \end{align*} Using the identity $\tfrac{1}{2} + \tfrac{9}{2}\tau^2 = 4\tau$ and rearranging gives us \begin{equation} \label{lastineq} 2(1-d) \sqrt{d^2 + 4\tau^2} < 4\tau + 2d^2 - d. \end{equation} The above inequality immediately implies $d > 0$. From here a straightforward calculation gives the contradiction $d > \frac{1 - 2\tau^2 + \sqrt{ (1 - 2\tau^2)^2 +16(1 - 23\tau^2) } }{8} \doteq 0.485 > \frac{1}{3}$. (To check this by hand, square both sides of (\ref{lastineq}) and observe that the left and right sides are degree 4 polynomials in $d$ with matching highest order terms and matching constants; cancel these terms, divide by $d$, apply the quadratic formula and use the fact $9\tau^2-8\tau+1=0$ for simplification.) \end{proof} \bibliographystyle{amsplain}
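As a numerical sanity check of the computations above (our own supplement, outside the formal proof), the defining identities of $\tau$ and the final bound on $d$ can be verified directly:
\begin{verbatim}
from math import sqrt

tau = (4 - sqrt(7)) / 9

# tau is the root of 9t^2 - 8t + 1 = 0 lying in (0, 1/2)
assert abs(9 * tau**2 - 8 * tau + 1) < 1e-12

# At t = tau both edge densities of the construction equal 1 + tau^2
assert abs((2 - 8*tau + 10*tau**2) - (1 + tau**2)) < 1e-12
assert abs((8*tau - 8*tau**2) - (1 + tau**2)) < 1e-12

# The final lower bound on d indeed exceeds 1/3
d_bound = (1 - 2*tau**2
           + sqrt((1 - 2*tau**2)**2 + 16*(1 - 23*tau**2))) / 8
assert d_bound > 1/3
print("tau = %.6f, d_bound = %.4f" % (tau, d_bound))
\end{verbatim}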
\section{Introduction} \label{sec:intro} Zero-shot recognition~\cite{lampert2014attribute,palatucci2009zero} considers whether models trained on a given set of $\seen$ classes $\mathcal{S}$ can extrapolate to a distinct set of $\unseen$ classes $\mathcal{U}$. In generalized zero-shot learning~\cite{chao2016empirical,xian2018zero}, we also want to remember the $\seen$ classes and evaluate over the union of the two sets of classes $\mathcal{T}=\mathcal{S}\cup\mathcal{U}$. Nevertheless, when evaluating existing models in the generalized scenario, the seminal work of Chao~\emph{et al}\bmvaOneDot~\cite{chao2016empirical} highlights that predictions tend to be biased towards the $\seen$ classes observed during training. In this paper, we consider the challenge of mitigating this inherent bias present in classifiers by proposing a bias-aware model. An effective remedy to remove the bias towards seen classes is to calibrate their predictions during inference. Chao~\emph{et al}\bmvaOneDot~\cite{chao2016empirical} propose to reduce the scores for the seen classes, which in turn improves the generalized zero-shot learning performance. Yet, the bias towards seen classes should also be tackled while training classifiers, and not only during the evaluation phase, so that it is addressed from the start. Towards this goal, seen and unseen classes can be addressed separately during training. Liu~\emph{et al}\bmvaOneDot~\cite{liu2018generalized} define two separate training objectives to calibrate the confidence of seen classes and the uncertainty of unseen classes. Atzmon and Chechik~\cite{atzmon2018domain} break the classification into two separate experts, with one model for seen classes and another one for unseen classes. Their COSMO approach provides compelling results at the expense of an additional third expert that combines the predictions. As generalized zero-shot learning considers both seen and unseen classes simultaneously, learners should benefit from mitigating the bias in both directions by considering both sets jointly rather than separately. The main objective of this paper is to mitigate the bias towards seen classes by considering predictions of seen and unseen classes simultaneously during training. To achieve this, we propose a simple bias-aware learner that maps inputs to a semantic embedding space where class prototypes are formed by real-valued representations. We address the bias by introducing (\textit{i}) a calibration for the learner with temperature scaling, and (\textit{ii}) a margin-based bidirectional entropy term to regularize seen and unseen probabilities jointly. We show that the bias towards seen classes is also dataset-dependent, and that not every dataset suffers to the same extent. Finally, we illustrate the versatility of our approach. By relying on a real-valued embedding space, the model can handle different types of prototype representation for both seen and unseen classes, and operate either on real features, akin to compatibility functions, or leverage generated unseen features. Comparisons on four datasets for generalized zero-shot learning show the effectiveness of bias-awareness. All source code and setups are released\footnote{Source code is available at \href{https://github.com/twuilliam/bias-gzsl}{https://github.com/twuilliam/bias-gzsl}}.
\section{Related Work}\label{sec:rw} \paragraph{Generalized zero-shot learning} has been introduced to provide a more realistic and practical setting than zero-shot learning, as models are evaluated on both seen and unseen classes~\cite{chao2016empirical}. This change in evaluation has a large impact on existing compatibility functions designed for zero-shot learning, as they do not perform well in the generalized setting~\cite{chao2016empirical,xian2018zero,changpinyo2020classifier}. Indeed, whether they are based on a ranking loss~\cite{akata2016label,akata2015evaluation,xian2016latent,romera2015embarrassingly,frome2013devise} or synthesis~\cite{changpinyo2020classifier,changpinyo2016synthesized,changpinyo2017predicting}, compatibility functions empirically exhibit a very low accuracy for unseen classes. As identified by Chao~\emph{et al}\bmvaOneDot~\cite{chao2016empirical}, this indicates a strong inherent bias in all classifiers towards the seen classes. To overcome the low accuracy for unseen classes, both Kumar Verma~\emph{et al}\bmvaOneDot~\cite{kumar2018generalized} and Xian~\emph{et al}\bmvaOneDot~\cite{xian2018feature} learn a conditional generative model to generate image features. Once trained, image features of unseen classes are sampled by changing the conditioning of the generative models. Classification then consists of training a one-hot softmax classifier on both real and sampled image features. Having access during training to generated unseen features leads to an increase in unseen class accuracy. Among the different generative models used in generalized zero-shot learning are generative adversarial networks~\cite{xian2018feature,li2019leveraging,felix2018multi}, variational autoencoders~\cite{schonfeld2018generalized,kumar2018generalized} or a combination of both~\cite{xian2019f}. Still, a classifier trained on generated features suffers from a bias towards seen classes because generative models do not fully match the true distribution of unseen classes. In this paper, we strive for a bias-aware classifier, which can behave as a stand-alone model like compatibility functions and also leverage unseen features sampled from a generative model. \paragraph{Addressing the bias} in classifiers remains an open challenge for generalized zero-shot learning. Although Chao~\emph{et al}\bmvaOneDot~\cite{chao2016empirical} identify the critical bias towards seen classes, only a few works try to address it during training. Related works separate the seen and unseen classifications. Liu~\emph{et al}\bmvaOneDot~\cite{liu2018generalized} map both features and semantic representations to a common embedding space. Probabilities are then calibrated separately in this common space to make seen class probabilities confident and reduce the uncertainty of unseen class probabilities. Atzmon and Chechik~\cite{atzmon2018domain} train expert models separately for seen and unseen class predictions. Their predictions are further combined in a soft manner with a third expert to produce the final decision. In this paper, we strive to address the bias by considering seen and unseen class probabilities jointly rather than separately. Having access during training to the joint class probabilities lets the bias-aware model learn how to balance them from the start. 
\section{Method} \label{sec:metho} During training, a generalized zero-shot learner $G:X \rightarrow \mathcal{T}$ is given a training set $\mathcal{D}^{\mathcal{S}}=\{(x_n, y_n), y_n\in \mathcal{S}\}_{n=1}^{N}$, where $x_n \in \mathbb{R}^D$ is an image feature of dimension $D$ and $y_n$ comes from the set $\mathcal{S}$ of $\seen$ classes, with $\mathcal{S} \subset \mathcal{T}$. For each $c \in \mathcal{S}$ there exists a corresponding semantic class representation $\phi(c)\in \mathbb{R}^A$ of dimension $A$. At testing time, $G$ predicts for each sample in the testing set $\mathcal{D}^{\mathcal{T}}=\{x_n\}_{n=1}^{M}$ a label that belongs to $\mathcal{T}$ by exploiting the joint set of $\seen$ and $\unseen$ semantic class representations. This problem formulation can be extended with an auxiliary dataset $\widetilde{\mathcal{D}}^{\mathcal{U}}=\{(\widetilde{x}_n, y_n), y_n\in \mathcal{U}\}_{n=1}^{\widetilde{N}}$, where $y_n$ comes from the set of unseen classes $\mathcal{U}$. $\widetilde{\mathcal{D}}^{\mathcal{U}}$ mimics image features from unseen classes, and is typically sampled from a generative model. The joint set $\{\mathcal{D}^{\mathcal{S}}, \widetilde{\mathcal{D}}^{\mathcal{U}}\}$ now covers both seen and unseen classes. In this paper, we propose a bias-aware generalized zero-shot learner $f(\cdot)$, which can operate during training with only $\mathcal{D}^{\mathcal{S}}$ similar to compatibility functions (Section~\ref{sec:metho:seen}) or the joint set $\{\mathcal{D}^{\mathcal{S}}, \widetilde{\mathcal{D}}^{\mathcal{U}}\}$ similar to classifiers in the generative approach (Section~\ref{sec:metho:unseen}). In both scenarios, the learner includes mechanisms to mitigate the bias towards seen classes. Learning consists of mapping inputs $x$ to their corresponding semantic class representations $\phi(c)$. In other words, the model regresses to a real-valued vector, which describes a class prototype. We denote the set of seen class prototypes as $\Phi^\mathcal{S}=\{\phi(c), c \in \mathcal{S}\}$, unseen class prototypes as $\Phi^\mathcal{U}=\{\phi(c), c \in \mathcal{U}\}$, and their union as $\Phi^\mathcal{T}=\Phi^\mathcal{S}\cup\Phi^\mathcal{U}=\{\phi(c), c \in \mathcal{T}\}$. Usually, the semantic knowledge used for class prototypes corresponds to semantic attributes~\cite{farhadi2009describing,lampert2014attribute}, word vectors of the class name~\cite{palatucci2009zero,frome2013devise}, hierarchical representations~\cite{akata2016label,akata2015evaluation,xian2016latent}, or sentence descriptions~\cite{reed2016learning,xian2018feature}. To exploit this diversity in semantic knowledge, we propose to swap the representation types for seen and unseen prototypes (Section~\ref{sec:metho:swap}). \subsection{Stand-alone classification with seen classes only}\label{sec:metho:seen} We design the bias-aware generalized zero-shot learner as a probabilistic model with two key principles. First, it is calibrated towards $\seen$ classes such that inputs from $\unseen$ classes yield a low confidence prediction at testing time. In return, this reduces the bias towards seen classes for unseen class inputs. Second, it maps inputs to class prototypes in the semantic embedding space. 
Following these two principles, we propose: \begin{equation} \label{eq:pseen} p(c|x,\mathcal{S}) = \exp\left(\dfrac{s\left(f(x), \phi(c)\right)}{T}\right) \bigg/ \sum_{c' \in \mathcal{S}}\exp\left(\dfrac{s\left(f(x), \phi(c')\right)}{T}\right), \end{equation} where $s(\cdot, \cdot)$ is the cosine similarity and $T \in \mathbb{R}_{>0}$ is the temperature scale. When $T = 1$, it reduces to the standard softmax function. When $T > 1$, probabilities spread out. When $T < 1$, probabilities concentrate, approaching a Dirac delta function. Contrary to knowledge distillation~\cite{hinton2014distilling}, we seek to concentrate the probabilities with a low temperature scale for discriminative purposes. Learning the probabilistic model is done by minimizing the cross-entropy loss function over the training set of seen examples $\mathcal{D}^{\mathcal{S}}$: \begin{equation} \label{eq:loss-seen} \mathcal{L}_{\mathrm{s}} = - \frac{1}{N}\sum_{n=1}^{N} \log p(y_n|x_n,\mathcal{S}). \end{equation} This probabilistic model behaves like a compatibility function, because it only sees samples from $\seen$ classes during training. At testing time, the evaluation simply measures the similarity in the embedding space with respect to the union of seen and unseen prototypes $\Phi^\mathcal{T}$. Variants of this prototype-based learner have been proposed in image retrieval~\cite{liu2017sphereface,movshovitz2017no,wen2016discriminative,zhai2018making} and image classification~\cite{liu2018generalized,wu2018improving,snell2017prototypical}. We differ by (\textit{i}) fixing the prototypes to be semantic class representations rather than learning them; (\textit{ii}) learning a mapping from the inputs to the class representations rather than learning a common embedding space; (\textit{iii}) applying a softmax function to provide a probabilistic interpretation of cosine similarities; and (\textit{iv}) calibrating the model with the same temperature scaling for both training and testing. \subsection{Classification with both seen and unseen classes}\label{sec:metho:unseen} In the generative approach to generalized zero-shot learning, samples from unseen classes are generated. We can then use the generated data $\widetilde{\mathcal{D}}^{\mathcal{U}}$ as an auxiliary dataset for calibration and for entropy regularization. In this context, given an input $x$ the probabilistic model learns to predict a class from the union of both $\seen$ and $\unseen$ classes: \begin{equation} \label{eq:punseen} p(c|x, \mathcal{T}) = \exp\left(\dfrac{s\left(f(x), \phi(c)\right)}{T}\right) \bigg/ \sum_{c' \in \mathcal{T}}\exp\left(\dfrac{s\left(f(x), \phi(c')\right)}{T}\right). \end{equation} The only, yet major, difference from eq.~\ref{eq:pseen} resides in the class prototypes that are considered to produce the prediction, while $f(\cdot)$ remains the same model. $p(c|x,\mathcal{S})$ only evaluates over the set of $\seen$ class prototypes $\Phi^\mathcal{S}$, while $p(c|x,\mathcal{T})$ evaluates over the union of seen and unseen class prototypes $\Phi^\mathcal{T}$. In this case, the temperature scaling ensures the model is confident for both seen and unseen classes. This difference also makes the learning distinct from related works (\textit{i.e.},~DCN~\cite{liu2018generalized} or COSMO~\cite{atzmon2018domain}), as they consider seen and unseen classifications separately rather than jointly.
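For concreteness, a minimal PyTorch sketch of this temperature-scaled prototype softmax (our own paraphrase; variable names and shapes are illustrative and not taken from the released code):
\begin{verbatim}
import torch
import torch.nn.functional as F

def prototype_probs(fx, prototypes, T=0.05):
    # fx: (batch, A) embedded inputs f(x); prototypes: (C, A) class
    # prototypes, e.g. the seen set during training or the union of
    # seen and unseen sets at test time.
    sims = F.normalize(fx, dim=-1) @ F.normalize(prototypes, dim=-1).t()
    return F.softmax(sims / T, dim=-1)  # low T sharpens the distribution

# Cross-entropy over seen classes only (the stand-alone loss):
# loss = F.nll_loss(prototype_probs(model(x), phi_seen).log(), y)
\end{verbatim}
The same function serves both regimes: passing the seen prototypes recovers eq.~\ref{eq:pseen}, while passing the union of prototypes recovers eq.~\ref{eq:punseen}.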
Akin to eq.~\ref{eq:loss-seen}, we minimize the cross-entropy loss function on the joint set $\{\mathcal{D}^{\mathcal{S}}, \widetilde{\mathcal{D}}^{\mathcal{U}}\}$ of $\seen$ and $\unseen$ classes: \begin{equation} \label{eq:loss-unseen} \mathcal{L}_{\mathrm{s}+\mathrm{u}} = - \frac{1}{N}\sum_{n=1}^{N} \log p(y_n|x_n,\mathcal{T}) - \frac{1}{\widetilde{N}}\sum_{n=1}^{\widetilde{N}} \log p(y_n|\widetilde{x}_n,\mathcal{T}). \end{equation} This probabilistic model behaves like a classifier used in generative approaches, because it sees samples from both $\seen$ and $\unseen$ classes at both training and testing times, and the partition function normalizes over the union of seen and unseen sets of classes. Having a classification over the union enables regularization in both seen and unseen directions. \paragraph{Bidirectional entropy regularization.} Intuitively, when an image from an unseen class is fed to the classifier, probabilities for seen classes should yield a high entropy, while probabilities for unseen classes should result in a low entropy. In other words, the evaluation over seen classes of an unseen class input should be uncertain, because the image comes from a class the classifier has never encountered during training. Conversely, when an image from a seen class is fed to the classifier, the entropy of the probabilities for unseen classes should be high, while the entropy for seen classes should be low. To encourage this effect, given an image $x$, we compute the normalized Shannon entropy~\cite{shannon1948mathematical} of the probabilistic model $p(c|x,\mathcal{T})$ for both seen and unseen class directions: \begin{align} \mathcal{H}_{\mathrm{s}}(x) = \frac{-1}{|\mathcal{S}|} \sum_{c \in \mathcal{S}} p(c|x,\mathcal{T}) \log p(c|x,\mathcal{T})\textrm{, and~} \mathcal{H}_{\mathrm{u}}(x) = \frac{-1}{|\mathcal{U}|} \sum_{c \in \mathcal{U}} p(c|x,\mathcal{T}) \log p(c|x,\mathcal{T}), \end{align} where $\mathcal{H}_{\mathrm{s}}$ and $\mathcal{H}_{\mathrm{u}}$ are the average entropy for seen and unseen classes, and $|\cdot|$ is the cardinality of the set. For training, we derive a margin-based regularization for both seen and unseen class directions: \begin{align} R_{\mathrm{s}} = \left[ m + \frac{1}{N}\sum_{n=1}^{N} \mathcal{H}_{\mathrm{s}}(x_n) - \frac{1}{\widetilde{N}}\sum_{n=1}^{\widetilde{N}} \mathcal{H}_{\mathrm{s}}(\widetilde{x}_n) \right]_+,\\ R_{\mathrm{u}} = \left[ m + \frac{1}{\widetilde{N}}\sum_{n=1}^{\widetilde{N}} \mathcal{H}_{\mathrm{u}}(\widetilde{x}_n) - \frac{1}{N}\sum_{n=1}^{N} \mathcal{H}_{\mathrm{u}}(x_n) \right]_+, \end{align} where $[\cdot ]_+ = \max(0, \cdot)$. $R_{\mathrm{s}}$ ensures a margin of at least $m$ between the average seen class entropy of seen inputs $x_n$ and generated unseen inputs $\widetilde{x}_n$. In other words, this formulation seeks to minimize $\mathcal{H}_{\mathrm{s}}(x_n)$ and maximize $\mathcal{H}_{\mathrm{s}}(\widetilde{x}_n)$. $R_{\mathrm{u}}$ has a corresponding effect on the unseen class entropy. The final loss function for training then becomes: \begin{equation} \label{eq:loss-su} \mathcal{L}_{\mathrm{f}} = \mathcal{L}_{\mathrm{s+u}} + \lambda_{\mathrm{Ent}} (R_\mathrm{s} + R_\mathrm{u}), \end{equation} where $\lambda_{\mathrm{Ent}} \in \mathbb{R}_{\geq0}$ is a hyper-parameter to control the contribution of the bidirectional entropy.
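A compact PyTorch sketch of this regularizer (again our own paraphrase, with illustrative names):
\begin{verbatim}
import torch

def class_entropies(p, seen_idx, unseen_idx):
    # p: (batch, |T|) probabilities over the union of classes.
    h = -p * torch.log(p.clamp_min(1e-12))
    return (h[:, seen_idx].sum(-1) / len(seen_idx),      # H_s per sample
            h[:, unseen_idx].sum(-1) / len(unseen_idx))  # H_u per sample

def bidirectional_regularizer(p_real, p_gen, seen_idx, unseen_idx, m=0.2):
    # p_real: probabilities for real seen inputs x_n; p_gen: for
    # generated unseen inputs, both evaluated over the union of classes.
    hs_r, hu_r = class_entropies(p_real, seen_idx, unseen_idx)
    hs_g, hu_g = class_entropies(p_gen, seen_idx, unseen_idx)
    R_s = torch.relu(m + hs_r.mean() - hs_g.mean())
    R_u = torch.relu(m + hu_g.mean() - hu_r.mean())
    return R_s + R_u  # weighted by lambda_Ent in the final loss
\end{verbatim}
Note that each margin couples statistics of the real and generated batches, so the two directions are balanced jointly rather than handled by separate models.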
\subsection{Swapping seen and unseen class representations}\label{sec:metho:swap} As presented above, relying on a real-valued embedding space allows mechanisms to mitigate the bias in two scenarios. It also enables swapping class representations for less biased ones. Consider now the case where there exist multiple types of semantic information, which differ by their type of representation and by how expensive they are to collect. For example, attribute descriptions require expert knowledge, while sentence descriptions can be crowd-sourced to non-expert workers. Practically, sentences tend to be less biased than attributes and perform better~\cite{xian2018feature}, but do not offer a comprehensive expert-based explanation~\cite{reed2016learning}. One could then train a model for seen classes on attributes, as they rely on expert-based explanations, and rely for unseen classes on sentences, as they are easier to collect. This results in different representation types for seen and unseen classes. Formally, we assume that we have access to seen prototypes $\{\Phi_A^\mathcal{S}, \Phi_B^\mathcal{S}\}$ with representations from domains $A$ and $B$. For evaluation, we have access to unseen prototypes $\Phi_A^\mathcal{U}$ of domain $A$, but $\Phi_B^\mathcal{U}$ of domain $B$ is absent. The objective is then to learn a mapping $\beta$ from $\Phi_A^\mathcal{S}$ to $\Phi_B^\mathcal{S}$, in order to regress $\hat{\Phi}_B^\mathcal{U}$ from $\Phi_A^\mathcal{U}$ at testing time. We define the mapping as a linear least squares regression problem with Tikhonov regularization, which corresponds to: \begin{equation} \label{eq:lsq} \min_{\beta}\lVert\Phi_B^\mathcal{S} - \beta\Phi_A^\mathcal{S}\rVert_2 + \lambda_\beta \lVert \beta\rVert_2, \end{equation} where $\lambda_\beta$ controls the amount of regularization. Relying on a linear transformation prevents overfitting, as the mapping involves a limited set of class prototypes. During evaluation, we apply $\beta$ to unseen prototypes of domain $A$ to regress their values in domain $B$: $\hat{\Phi}_B^\mathcal{U} = \beta\Phi_A^\mathcal{U}$. Swapping representations then corresponds to regressing from one domain to another.
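This ridge problem has a closed-form solution; a minimal Python sketch (our own, written in a row-vector convention where prototypes are stacked as rows):
\begin{verbatim}
import numpy as np

def fit_swap(phi_A_seen, phi_B_seen, lam=1.0):
    # Rows are classes: phi_A_seen is (|S|, d_A), phi_B_seen is (|S|, d_B).
    # Closed-form Tikhonov-regularized least squares.
    A = phi_A_seen
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                           A.T @ phi_B_seen)  # beta: (d_A, d_B)

# At test time, regress the missing unseen prototypes of domain B:
# phi_B_unseen_hat = phi_A_unseen @ fit_swap(phi_A_seen, phi_B_seen)
\end{verbatim}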
\section{Experimental Details} \label{sec:details} \paragraph{Datasets.} We report experiments on four datasets commonly used in generalized zero-shot learning, \emph{e.g}\bmvaOneDot,~\cite{chao2016empirical,changpinyo2020classifier,xian2018zero,reed2016learning}. For all datasets, we rely on the train and test splits proposed by Xian~\emph{et al}\bmvaOneDot~\cite{xian2018zero}. \textit{\textbf{Caltech-UCSD-Birds 200-2011 (CUB)}}~\cite{WahCUB_200_2011} contains 11,788 images from 200 bird species. Every species is described by a unique combination of 312 semantic attributes to characterize the color, pattern and shape of their specific parts. Moreover, every bird image comes along with 10 sentences describing the most prominent characteristics~\cite{reed2016learning}. 150 species are used as $\seen$ classes during training, and 50 distinct species are left out as $\unseen$ classes during testing. \textit{\textbf{SUN Attribute (SUN)}}~\cite{Patterson2012SunAttributes} contains 14,340 images from 717 scene types. Every scene is also described by a unique combination of 102 semantic attributes to characterize material and surface properties. 645 scene types are used as $\seen$ classes during training, and 72 distinct scene types are left out as $\unseen$ classes during testing. \textit{\textbf{Animals with Attributes (AWA)}}~\cite{lampert2014attribute} contains 30,475 images from 50 animals. Every animal comes with a unique combination of 85 semantic attributes to describe their color, shape, state or function. 40 animals are used as $\seen$ classes during training, and 10 distinct animals are left out as $\unseen$ classes during testing. \textit{\textbf{Oxford Flowers (FLO)}}~\cite{nilsback2008automated} contains 8,189 images from 102 flower plants. Every flower image comes with 10 sentences describing the shape and appearance~\cite{reed2016learning}. 82 flowers are used as $\seen$ classes during training, and 20 distinct flowers are left out as $\unseen$ classes during testing. \paragraph{Feature extraction.} For all datasets, we rely on the features extracted by Xian~\emph{et al}\bmvaOneDot~\cite{xian2018zero}. Image features $x$ come from ResNet101~\cite{he2016deep} trained on ImageNet~\cite{ILSVRC15} and sentence representations are extracted from a 1024-dimensional CNN-RNN~\cite{reed2016learning}. As established by Xian~\emph{et al}\bmvaOneDot~\cite{xian2018zero}, the parameters of ResNet101 and the CNN-RNN are frozen and not fine-tuned during the training phase. No data augmentation is performed either. \paragraph{Evaluation.} We evaluate with calibration stacking as proposed by Chao~\emph{et al}\bmvaOneDot~\cite{chao2016empirical}, which penalizes the seen class probabilities to reduce the bias during evaluation. Following Xian~\emph{et al}\bmvaOneDot~\cite{xian2018zero}, we compute the average per-class top-1 accuracy of $\seen$ classes (denoted as $\mathbf{s}$) and $\unseen$ classes (denoted as $\mathbf{u}$), as well as their harmonic mean $\textbf{H} = (2\times \mathbf{s} \times \mathbf{u}) / (\mathbf{s} + \mathbf{u})$. We report the 3-run average. \paragraph{Implementation details.} In our model, $f(\cdot)$ corresponds to a multilayer perceptron with 2 hidden layers of size 2048 and 1024 to map the features $x$ to the joint visual-semantic embedding space of size $A$. The output layer has a linear activation, while hidden layers have a ReLU activation~\cite{nair2010rectified} followed by Dropout regularization ($p=0.5$)~\cite{srivastava2014dropout}. We train $f(\cdot)$ using stochastic gradient descent with Nesterov momentum~\cite{sutskever2013importance}. We set the following hyper-parameters for all datasets: learning rate of 0.01 with cosine annealing~\cite{loshchilov2017sgdr}, initial momentum of 0.9, batch size of 64, temperature of 0.05, and an entropy regularization term of 0.1 with a margin of 0.2. For AWA, we reduce the learning rate to 0.0001 and increase the entropy regularization to 0.5 while keeping the same margin. When relying on sentence representations, we double the capacity of $f(\cdot)$ with twice the number of hidden units in each layer. We set hyper-parameters on a hold-out validation set and re-train on the joint training and validation sets. The source code uses the Pytorch framework~\cite{paszke2019pytorch}.
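For reference, a small Python sketch of this evaluation protocol (our own paraphrase; the calibration constant and all names are illustrative):
\begin{verbatim}
import numpy as np

def gzsl_eval(scores, y_true, seen_idx, unseen_idx, gamma=0.0):
    # scores: (M, |T|) class scores; calibration stacking subtracts a
    # constant gamma from the seen-class scores before the argmax.
    scores = scores.copy()
    scores[:, seen_idx] -= gamma
    y_pred = scores.argmax(-1)
    def per_class_acc(idx):
        accs = [np.mean(y_pred[y_true == c] == c)
                for c in idx if np.any(y_true == c)]
        return float(np.mean(accs))
    s, u = per_class_acc(seen_idx), per_class_acc(unseen_idx)
    return s, u, 2 * s * u / (s + u)  # harmonic mean H
\end{verbatim}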
\begin{figure}[t] \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[width=.9\linewidth]{images/linkage.pdf} \caption{\textbf{Bias variation} across datasets. When measuring the average linkage between seen and unseen representations, FLO is the most affected while SUN is the least. Thus, the bias towards seen classes differs across datasets.} \label{fig:linkage} \end{minipage}\hfill \begin{minipage}{.48\textwidth} \centering \vspace{7pt} \hfill \begin{overpic}[width=0.30\linewidth]{images/greatgrey.png} \put(-15,30){\rotatebox{90}{Seen}} \put(35,103){\small CUB} \put(22,-9){\footnotesize \textit{great grey}} \end{overpic}~ \begin{overpic}[width=0.30\linewidth]{images/leopard.png} \put(30,103){\small AWA} \put(25,-9){\footnotesize \textit{leopard}} \end{overpic}~ \begin{overpic}[width=0.30\linewidth]{images/lenten-rose.png} \put(30,103){\small FLO} \put(20,-9){\footnotesize \textit{lenten rose}} \end{overpic}\\[6pt] \hfill \begin{overpic}[width=0.30\linewidth]{images/loggerhead.png} \put(-15,21){\rotatebox{90}{Unseen}} \put(21,-9){\footnotesize \textit{loggerhead}} \end{overpic}~ \begin{overpic}[width=0.30\linewidth]{images/bobcat.png} \put(28,-9){\footnotesize \textit{bobcat}} \end{overpic}~ \begin{overpic}[width=0.30\linewidth]{images/primrose.png} \put(8,-9){\footnotesize \textit{pink primrose}} \end{overpic}\\[1pt] \caption{\textbf{Seen and unseen class samples}. Visual differences arise from the global shape (CUB, AWA) or colors (FLO). Yet, their semantic class representation yields a very high pairwise similarity, which creates a high bias.\newline} \label{fig:ex} \end{minipage} \end{figure} \section{Results} \label{sec:res} \paragraph{Bias variation.} To verify whether the bias towards seen classes is dataset-dependent, we measure the average linkage between seen and unseen representations. Concretely, we compute the average of the pairwise cosine similarity between $\Phi^{\mathcal{S}}$ and $\Phi^{\mathcal{U}}$. A high average linkage then refers to a high similarity between seen and unseen representations. Intuitively, a high average linkage is not desirable as unseen representations can easily be confused with seen ones, which makes the generalized zero-shot learning problem harder. Figure~\ref{fig:linkage} depicts the average linkage per dataset. FLO exhibits the highest average linkage while SUN the lowest, with a 1.6 times difference. In other words, classifiers trained on FLO are highly affected by the bias towards seen classes. Figure~\ref{fig:ex} illustrates seen and unseen class samples with a very high pairwise similarity on CUB, AWA and FLO. Visually, these classes can be differentiated by their color or shape. Yet, their semantic representations are very similar, which creates a high bias. Now that we have established that the bias towards seen classes differs across datasets, we can address the bias within generalized zero-shot learners. \paragraph{Temperature scaling.} Figure~\ref{fig:temp} varies the scale of the temperature in eq.~\ref{eq:pseen}. Following related metric learning works (\emph{e.g}\bmvaOneDot,~\cite{wu2018improving,zhai2018making}), we consider the temperature as a hyper-parameter. When the temperature is instead treated as a learnable parameter, the optimization diverges as its value goes to zero to satisfy the loss function. The highest \textbf{H} score occurs at $T=0.05$ on the validation set of all datasets. Performance starts to degrade substantially for $T>0.1$. A temperature below 0.05 can yield even higher scores, but is usually prone to numerical errors. As such, we set $T=0.05$ in all our experiments when training the model with only seen samples (eq.~\ref{eq:loss-seen}) or in combination with generated unseen samples (eq.~\ref{eq:loss-unseen}). We also evaluate modifying $T$ between training and testing phases.
Setting it to 1 during both training and testing, as in a normal softmax, drops \textbf{H} by 43.3\% on AWA, and changing it to 0.05 only at testing drops the score by 25.6\%. Keeping a fixed temperature value ensures $f(\cdot)$ maps inputs to prototypes similarly in training and testing. The temperature value should also be low to promote a more confident and discriminative model that yields narrow probabilities. Hence, the model reduces the bias by having a lower likelihood of classifying an unseen class input as part of a seen class. \begin{figure}[t] \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[width=.9\linewidth]{images/temp.pdf} \caption{\textbf{Temperature scaling} ablation from $T=0.05$ to $T=1$. Temperature values over 0.1 degrade the performance because probabilities start to spread, which makes the model less confident.\newline\newline} \label{fig:temp} \end{minipage}\hfill \begin{minipage}{.48\textwidth} \centering \includegraphics[width=.94\linewidth]{images/reg.pdf} \caption{\textbf{Entropy regularization} in one ($\mathcal{H}_s$ or $\mathcal{H}_u$, hatched) and two ($\mathcal{H}_s + \mathcal{H}_u$, not hatched) directions compared with models without any regularization. Regularizing in only one direction can even have a negative effect. Including both directions consistently improves results by creating a better bias trade-off.} \label{fig:bias} \end{minipage} \end{figure} \paragraph{Entropy regularization.} Figure~\ref{fig:bias} ablates the direction of the margin-based entropy term in eq.~\ref{eq:loss-su}. For this experiment, we rely on unseen class features generated from Cycle-CLSWGAN~\cite{felix2018multi}. When using a unidirectional entropy regularization, the improvement is either very low, or even negative, over a model without any regularization. Interestingly, this negative effect does not depend on the direction, as both $\mathcal{H}_s$ and $\mathcal{H}_u$ are affected when considered individually. Regularizing in only one direction forces the model to compensate for the other direction. Only the bidirectional regularization provides a consistent benefit for all datasets. This positive effect indicates the importance of balancing out both seen and unseen probabilities when mitigating the bias. Regularizing in both directions jointly helps the model learn a correct bias trade-off. \paragraph{Swapping representations.} Table~\ref{tab:swap} presents the different combinations of attribute (\texttt{Att}) and sentence (\texttt{Sen}) representations for training and evaluation. \texttt{Att}-\texttt{Att} and \texttt{Sen}-\texttt{Sen} are the common non-swapped settings. \texttt{Sen}-\texttt{Sen} forms an upper bound as sentences provide better class representations than attributes. Indeed, sentence descriptions exhibit a lower average linkage than attribute descriptions. In a swapped setting, the unseen representations are regressed from representations in another domain based on eq.~\ref{eq:lsq}. A model trained on \texttt{Att} can be improved by 1.2 points at testing time when using \texttt{Sen} to regress the unseen representations. However, a model trained on \texttt{Sen} degrades when using \texttt{Att} to regress unseen representations. Indeed, \texttt{Sen}-\texttt{Att} requires mapping low-dimensional attribute representations of unseen classes to the high-dimensional space of sentence representations on which the classifier has been trained.
\texttt{Sen}-\texttt{Att} then involves dimensionality expansion, which is a harder problem than dimensionality compression in \texttt{Att}-\texttt{Sen}. In the scenario where a model is trained on attributes for seen classes derived from experts, it is possible to leverage sentences for unseen classes derived from crowd-sourcing to further improve the results. \begin{table}[t] \centering \begin{minipage}{.3\textwidth} \centering \tablestyle{6pt}{1.} \begin{tabular}{ccc} \toprule Seen & Unseen & \textbf{H}\\ \cmidrule(lr){1-2}\cmidrule(lr){3-3} \texttt{Att} & \texttt{Att} & 48.5 \\ \texttt{Sen} & \texttt{Att} & 47.4 \\ \texttt{Att} & \texttt{Sen} & 49.7 \\ \texttt{Sen} & \texttt{Sen} & 50.3 \\ \bottomrule \end{tabular} \end{minipage} \begin{minipage}{.68\textwidth} \caption{\textbf{Swapping attribute (\texttt{Att}) and sentence (\texttt{Sen}) representations}. While \texttt{Att}-\texttt{Att} and \texttt{Sen}-\texttt{Sen} are the usual non-swapped evaluation settings, our method can also swap them. When using sentences for unseen classes, it always improves upon attributes in swapped and non-swapped evaluations as they are less biased and more discriminative.} \label{tab:swap} \end{minipage} \end{table} \begin{table}[h!] \centering \tablestyle{2.7pt}{1.0} \begin{tabular}{lcccccccccccc} \toprule \multirow{2}{*}{Method} & \multicolumn{3}{c}{\textbf{CUB}} & \multicolumn{3}{c}{\textbf{SUN}} & \multicolumn{3}{c}{\textbf{AWA}} & \multicolumn{3}{c}{\textbf{FLO}} \\ \cmidrule(lr){2-13} & $\mathbf{u}$ & $\mathbf{s}$ & \cellcolor{Gray}$\mathbf{H}$ & $\mathbf{u}$ & $\mathbf{s}$ & \cellcolor{Gray}$\mathbf{H}$ & $\mathbf{u}$ & $\mathbf{s}$ & \cellcolor{Gray}$\mathbf{H}$ & $\mathbf{u}$ & $\mathbf{s}$ & \cellcolor{Gray}$\mathbf{H}$\\ \midrule DeViSE~\cite{frome2013devise} & 23.8 & 53.0 & \cellcolor{Gray}32.8 & 16.9 & 27.4 & \cellcolor{Gray}20.9 & 13.4 & 68.7 & \cellcolor{Gray}22.4 & 9.9 & 44.2 & \cellcolor{Gray}16.2 \\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 52.2 & 42.4 & \cellcolor{Gray}46.7 & 38.4 & 25.4 & \cellcolor{Gray}30.6 & 35.0 & 62.8 & \cellcolor{Gray}45.0 & 45.0 & 38.6 & \cellcolor{Gray}41.6\\ \midrule SJE~\cite{akata2015evaluation} & 23.5 & 59.2 & \cellcolor{Gray}33.6 & 14.7 & 30.5 & \cellcolor{Gray}19.8 & 11.3 & 74.6 & \cellcolor{Gray}19.6 & 13.9 & 47.6 & \cellcolor{Gray}21.5 \\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 48.1 & 37.4 & \cellcolor{Gray}42.1 & 36.7 & 25.0 & \cellcolor{Gray}29.7 & 37.9 & 70.1 & \cellcolor{Gray}49.2 & 52.1 & 56.2 & \cellcolor{Gray}54.1\\ \midrule LATEM~\cite{xian2016latent} & 15.2 & 57.3 & \cellcolor{Gray}24.0 & 14.7 & 28.8 & \cellcolor{Gray}19.5 & 7.3 & 71.7 & \cellcolor{Gray}13.3 & 6.6 & 47.6 & \cellcolor{Gray}11.5\\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 53.6 & 39.2 & \cellcolor{Gray}45.3 & 42.4 & 23.1 & \cellcolor{Gray}29.9 & 33.0 & 61.5 & \cellcolor{Gray}43.0 & 47.2 & 37.7 & \cellcolor{Gray}41.9 \\ \midrule ESZSL~\cite{romera2015embarrassingly} & 12.6 & 63.8 & \cellcolor{Gray}21.0 & 11.0 & 27.9 & \cellcolor{Gray}15.8 & 6.6 & 75.6 & \cellcolor{Gray}12.1 & 11.4 & 56.8 & \cellcolor{Gray}19.0 \\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 36.8 & 50.9 & \cellcolor{Gray}43.2 & 27.8 & 20.4 & \cellcolor{Gray}23.5 & 31.1 & 72.8 & \cellcolor{Gray}43.6 & 25.3 & 69.2 & \cellcolor{Gray}37.1 \\ \midrule ALE~\cite{akata2016label} & 23.7 & 62.8 & \cellcolor{Gray}34.4 & 21.8 & 33.1 & \cellcolor{Gray}26.3 & 16.8 & 76.1 & \cellcolor{Gray}27.5 & 13.3 & 61.6 & \cellcolor{Gray}21.9 \\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 40.2 & 59.3 & 
\cellcolor{Gray}47.9 & 41.3 & 31.1 & \cellcolor{Gray}35.5 & 47.6 & 57.2 & \cellcolor{Gray}52.0 & 54.3 & 60.3 & \cellcolor{Gray}57.1\\ \midrule DCN~\cite{liu2018generalized} & 28.4 & 60.7 & \cellcolor{Gray}38.7 & 25.5 & 37.0 & \cellcolor{Gray}30.2 & 25.5 & 84.2 & \cellcolor{Gray}39.1 & \deemph{--} & \deemph{--} & \cellcolor{Gray}\deemph{--}\\ \midrule One-hot softmax~\cite{xian2018feature} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a}\\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 43.7 & 57.7 & \cellcolor{Gray}49.7 & 42.6 & 36.6 & \cellcolor{Gray}39.4 & 57.9 & 61.4 & \cellcolor{Gray}59.6 & 59.0 & 73.8 & \cellcolor{Gray}65.6 \\ \texttt{w/ Cycle-CLSWGAN}~\cite{felix2018multi}$\dagger$ & 45.7 & 61.0 & \cellcolor{Gray}52.3 & 49.4 & 33.6 & \cellcolor{Gray}40.0 & 56.9 & 64.0 & \cellcolor{Gray}60.2 & 72.5 & 59.2 & \cellcolor{Gray}65.1\\ \texttt{w/ CADA-VAE}~\cite{schonfeld2018generalized} & 51.6 & 53.5 & \cellcolor{Gray}52.4 & 47.2 & 35.7 & \cellcolor{Gray}40.6 & 57.3 & 72.8 & \cellcolor{Gray}64.1 & \deemph{--} & \deemph{--} & \cellcolor{Gray}\deemph{--}\\ \texttt{w/ f-VAEGAN-D2}~\cite{xian2019f}$\dagger$ & 48.4 & 60.1 & \cellcolor{Gray}53.6 & 45.1 & 38.0 & \cellcolor{Gray}\textbf{41.3} & 57.6 & 70.6 & \cellcolor{Gray}63.5 & 56.8 & 74.9 & \cellcolor{Gray}64.6\\ \texttt{w/ LisGAN}~\cite{li2019leveraging} & 46.5 & 57.9 & \cellcolor{Gray}51.6 & 42.9 & 37.8 & \cellcolor{Gray}40.2 & 52.6 & 76.3 & \cellcolor{Gray}62.3 & 57.7 & 83.9 & \cellcolor{Gray}68.3\\ \midrule COSMO~\cite{atzmon2018domain} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a} & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a} \\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 60.5 & 41.0 & \cellcolor{Gray}48.9 & 35.3 & 40.2 & \cellcolor{Gray}37.6 & 64.8 & 51.7 & \cellcolor{Gray}57.5 & 59.6 & 81.4 & \cellcolor{Gray}68.8\\ \texttt{w/ LAGO}~\cite{atzmon2018probabilistic} & 44.4 & 57.8 & \cellcolor{Gray}50.2 & 44.9 & 37.7 & \cellcolor{Gray}41.0 & 52.8 & 80.0 & \cellcolor{Gray}63.6 & \deemph{n/a} & \deemph{n/a} & \cellcolor{Gray}\deemph{n/a}\\ \midrule \textit{\textbf{This paper}} & 45.1 & 52.5 & \cellcolor{Gray}\underline{48.5} & 41.0 & 30.1 & \cellcolor{Gray}\underline{34.7} & 55.2 & 70.5 & \cellcolor{Gray}\underline{61.9} & 42.6 & 66.6 & \cellcolor{Gray}\underline{52.0}\\ \texttt{w/ f-CLSWGAN}~\cite{xian2018feature} & 50.7 & 49.9 & \cellcolor{Gray}50.3 & 41.1 & 31.6 & \cellcolor{Gray}35.7 & 57.7 & 68.4 & \cellcolor{Gray}62.5 & 53.8 & 76.0 & \cellcolor{Gray}63.0 \\ \texttt{w/ Cycle-CLSWGAN}~\cite{felix2018multi}$\dagger$ & 57.4 & 58.2 & \cellcolor{Gray}\textbf{57.8} & 44.8 & 32.7 & \cellcolor{Gray}37.8 & 61.3 & 69.2 & \cellcolor{Gray}\textbf{65.0} & 69.3 & 79.9 & \cellcolor{Gray}\textbf{74.2}\\ \bottomrule \multicolumn{12}{l}{\footnotesize$\dagger$ Method relies on sentence representations instead of attribute representations for CUB.} \end{tabular} \caption{\textbf{Comparison with the state of the art}, where classifiers are delimited by a horizontal rule and their combination with a generative model is in \texttt{teletype} font. ``n/a'' denotes a non-applicable setting to the method while ``--'' refers to non-reported results in the original paper. 
Compared with one-hot softmax and COSMO, our proposal is a stand-alone method that can also operate with seen class samples only. Compared with the other compatibility functions that also operate in this stand-alone setting, it achieves the best results (\underline{underlined}). When extended with generated unseen class samples, we also improve over other classifiers (\textbf{bold}), leading to state-of-the-art results on the three most biased datasets out of four (see Figure~\ref{fig:linkage}\label{tab:sota}). } \end{table} \paragraph{Comparison with the state of the art.} Table~\ref{tab:sota} compares our bias-aware prototype learner with eight other classifiers. Scores from other classifiers correspond to the performance as reported by the authors in their original papers. First, we consider stand-alone classifiers, which only observe the seen class inputs during training, \textit{i.e.}, without using any generated features. In this setting, our bias-aware formulation outperforms existing compatibility functions~\cite{frome2013devise,romera2015embarrassingly,akata2016label,akata2015evaluation,xian2016latent,liu2018generalized} on all datasets. It is also interesting to note that recent formulations with one-hot softmax~\cite{xian2018feature} or COSMO~\cite{atzmon2018domain} cannot operate in this setting. Indeed, they rely on a discrete label space for classification while we rely on a real-valued embedding space. This enables our formulation to incorporate new unseen classes easily and at near zero cost, similar to compatibility functions. Second, our approach is easily extended with existing generative models to include an auxiliary dataset $\widetilde{\mathcal{D}}^{\mathcal{U}}$ for unseen classes. We select f-CLSWGAN~\cite{xian2018feature} and Cycle-CLSWGAN~\cite{felix2018multi} as the authors provide source code to evaluate on all four datasets. Reproducing the models from their original source code yields results within a reasonable range, \textit{i.e.}, less than a 2-point difference in the \textbf{H} score. We obtain better results with Cycle-CLSWGAN~\cite{felix2018multi} than f-CLSWGAN~\cite{xian2018feature}, which highlights the importance of the quality of the generated unseen class features. Moreover, our method profits more when generated samples better reflect the true distribution. When switching from f-CLSWGAN~\cite{xian2018feature} to Cycle-CLSWGAN~\cite{felix2018multi} on CUB, a one-hot softmax classifier leads to a 2.6\% increase while our bias-aware classifier with a joint entropy regularization yields a 7.5\% increase. We achieve state-of-the-art results on CUB, AWA and FLO. Only on the SUN dataset do the one-hot softmax~\cite{xian2018feature} and COSMO~\cite{atzmon2018domain} provide higher scores. This originates from a lower bias towards seen classes in the SUN dataset (see Figure~\ref{fig:linkage}), which makes a bias-aware model less beneficial. When a dataset exhibits a low bias, separating the model for seen and unseen classes is preferred for equal treatment. Conversely, when a dataset exhibits a high bias, the training of the model should consider seen and unseen classes jointly to balance out their probabilities from the start. Overall, we produce competitive results in both scenarios, especially compared with classifiers without any bias-awareness. \section{Conclusion} The classification of seen and unseen classes in generalized zero-shot learning requires models to be aware of the bias towards seen classes.
In this paper, we present such a model, which calibrates the probabilities of seen and unseen classes jointly during training, and ensures a margin between the average entropy of both seen and unseen class probabilities. Learning consists of regressing inputs to real-valued representations. Relying on a mapping to a real-valued embedding space enables swapping seen and unseen representation types, and evaluating the model in a stand-alone scenario or in combination with generated unseen features. Overall, our proposed bias-aware learner provides an effective alternative to separate classification approaches or classifiers without bias-awareness. \vspace{0.5em} \paragraph{Acknowledgements} {The authors thank Zeynep Akata for helpful initial discussions, and Hubert Banville for feedback. William Thong is partially supported by an NSERC scholarship.}
\section*{Abstract} {\bf In recent years it has become understood that quantum oscillations of the magnetization as a function of magnetic field, long recognized as phenomena intrinsic to metals, can also manifest in insulating systems. Theory has shown that in certain simple band insulators, quantum oscillations can appear with a frequency set by the area traced by the minimum gap in momentum space, and are suppressed for weak fields by an intrinsic ``Dingle damping'' factor reflecting the size of the bandgap. Here we examine quantum oscillations of the magnetization in excitonic and Kondo insulators, for which interactions play a crucial role. In models of these systems, self-consistent parameters themselves oscillate with changing magnetic field, generating additional contributions to quantum oscillations. In the low-temperature, weak-field regime, we find that the lowest harmonic of quantum oscillations of the magnetization is unaffected, so that the zero-field bandgap can still be extracted by measuring the Dingle damping factor of this harmonic. However, these contributions dominate quantum oscillations at all higher harmonics, thereby providing a route to measure this interaction effect. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} Quantum oscillations (QO) of observables as a function of applied magnetic field have been understood as a phenomenon intimately tied to the idea of a Fermi surface ever since they were first discovered~\cite{deHaas1930}. The well-established Lifshitz-Kosevich theory~\cite{Lifshitz1956} of QO directly relates the frequency of the oscillations to extremal cross-sectional areas of the Fermi surface. This has allowed the technique to become a useful tool for examining the geometry of Fermi surfaces in real materials. However, this long-held understanding has been challenged recently by the measurement of quantum oscillations in insulators, notably the strongly-correlated Kondo insulators $\text{SmB}_6$~\cite{Li2014,Tan2015,Hartstein2018} and $\text{YbB}_{12}$~\cite{Liu2018,Xiang2018}, and the insulating phase of $\text{WTe}_2$~\cite{Wang2021}, believed to be an excitonic insulator~\cite{Jia2020}, all of which lack a traditional notion of a Fermi surface entirely. Many theoretical works~\cite{Knolle2015,Baskaran2015,Erten2016,Zhang2016,Pal2016,Pal2017a,Pal2017b,Knolle2017a,Knolle2017b,Erten2017,Riseborough2017,Sodemann2018,Chowdhury2018,Shen2018,Peters2019,Skinner2019,Lu2020,Varma2020,Lee2021,He2021} have been put forward in response, seeking to understand the phenomenon in these specific materials and how QO may arise in insulators more generally. In this second direction, a direct calculation shows that generating QO in insulating systems is actually relatively straightforward---if the minimum band gap both exceeds the cyclotron energy and traces out a nonzero area in the Brillouin zone, then oscillations can be found at the frequency corresponding to this area as though it were a Fermi surface cross section~\cite{Knolle2015,Zhang2016}. Furthermore, it is found that these oscillations come with an intrinsic ``Dingle damping'' factor--an exponential suppression of the form $\exp(-B_0/B)$, where $B$ is the applied magnetic field strength.
In metallic systems the Dingle factor accounts for the effect of disorder; $B_0$ is related to the finite quasiparticle relaxation time~\cite{Shoenberg1984} and will vary between samples, but in an insulator $B_0$ is directly related to intrinsic properties of the gapped band structure itself. This implies that for band insulators QO contain important information about electronic structure just as they do for metals, and fundamental properties of the band structure may be extracted from careful analysis of the field dependence of oscillation amplitudes. The question we examine here is whether this result also holds for systems where the band structure is strongly affected by interactions, such as excitonic and Kondo insulators. In the mean field descriptions of these systems at zero field, the mean field parameters obey self-consistent constraints and determine the form of the bands. When a magnetic field is applied these constraints necessarily introduce $B$-dependence to these parameters, which causes the bands themselves to fluctuate with field and introduces additional contributions to QO not present in `rigid' band insulators. To analyze each of these systems we employ the following general procedure. First we analyze the mean field description of the system at zero field, in particular identifying the self-consistent equations that the mean field parameters must obey. With the introduction of a magnetic field we assume that electronic dispersions are quantized into Landau levels and the mean field parameters acquire $B$-dependent oscillatory components, which for weak fields are small compared to the zero-field parts. We then linearize the self-consistent conditions around the zero-field values and determine the leading effect of the magnetic field on top of the rigid band case. We find very generally that when considering mean field theories the fundamental harmonic of QO of the magnetization is unaffected, and the oscillatory component of the mean field affects second and higher harmonics only. For both excitonic and Kondo insulator models these new contributions to higher harmonics have exactly the same exponential sensitivity to the size of the gap as for the corresponding $B=0$ rigid band insulators, but have a different overall dependence on the field strength, allowing them to be the dominant contribution to all harmonics to which they contribute. The remainder of the paper is organized as follows: We begin in \cref{sec:Band} by examining QO for the case of a rigid band insulator, which is the background around which we linearize in the following sections. In \cref{sec:exciton} we analyze a model excitonic insulator, first applying the mean field approximation at $B=0$, then considering the oscillations the mean field parameter acquires upon introduction of a magnetic field and their contributions to QO. In \cref{sec:Kondo} we then do the same for the case of a Kondo insulator using the mean field slave-boson formalism. \section{Rigid Band Insulator} \label{sec:Band} We first consider a spinless, two-dimensional band insulator at zero temperature described by the Hamiltonian~\cite{Knolle2015} \begin{equation}\label{eq:bandH} H_0 = \sum_\mathbf{k} \Psi^\dagger_\mathbf{k} \begin{pmatrix} \epsilon^c_\mathbf{k} & g \\ g & \epsilon^v_\mathbf{k} \end{pmatrix} \Psi_\mathbf{k}.
\end{equation} Here and throughout the rest of our calculations we set $\hbar=1$, $c$ and $v$ label conduction and valence bands, and $\Psi_\mathbf{k} = (\psi_{c,\mathbf{k}}, \psi_{v,\mathbf{k}})^T$, with $\psi^\dagger_{i,\mathbf{k}}$ and $\psi_{i,\mathbf{k}}$ the creation and annihilation operators for electrons in band $i$. The conduction band dispersion is $\epsilon^c_\mathbf{k}$, which we take to be approximately parabolic in the region of interest, and we set the valence band dispersion to be $\epsilon^v_\mathbf{k} = \epsilon_0 - \eta \epsilon^c_\mathbf{k}$, with $\eta$ a dimensionless constant and $\epsilon_0$ the shift of the valence band relative to the conduction band. The limit $\eta\to0$ yields the flat valence band of a heavy fermion system. We take $\epsilon_0 > 0$ so the conduction and valence bands cross and the interband tunneling amplitude $g$ opens a hybridization gap at the band crossing point. The single-particle energies of the system are then \begin{equation} \label{eq:energies} E^\pm_\mathbf{k} = \frac{1}{2}\left(\epsilon^c_\mathbf{k} + \epsilon^v_\mathbf{k} \pm \sqrt{(\epsilon^c_\mathbf{k}-\epsilon^v_\mathbf{k})^2 + 4g^2}\right), \end{equation} which are shown in \cref{fig:bands}. We assume a ground state with the lower band entirely filled and the upper band empty, so the system is an insulator. In writing this model we have implicitly assumed that the physics of interest is captured by a two-band model. This is expected as long as any additional bands are well separated in energy from the gap opening point. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{Bands.pdf} \caption{An example band structure of the sort we consider. The solid lines show the energies $E^\pm_\mathbf{k}$ in \cref{eq:energies}, while the dashed lines show $\epsilon^c_\mathbf{k}$ and $\epsilon^v_\mathbf{k}$, the two bands in \cref{eq:bandH} prior to hybridization. We have indicated the offset energy $\epsilon_0$ and hybridization gap $2g$, and marked in red the point on the lower band where the gap is minimized. Inset: a 3D view of the bands. The area traced by the minimum gap, setting the QO frequency, is indicated with the red dashed line.} \label{fig:bands} \end{figure} Applying a magnetic field $B$ perpendicular to the system quantizes $\epsilon^c_\mathbf{k}$ into discrete Landau levels (LL), indexed by $n=0,1,2,\dots$, via the replacement $\epsilon^c_\mathbf{k} \to \epsilon^c_n = (n+\gamma)\omega_c$, with cyclotron frequency $\omega_c$ and phase shift $\gamma$. If $\epsilon^c_\mathbf{k} = k^2/2m_c$ exactly, with effective conduction electron mass $m_c$, then this replacement is exact, the cyclotron energy and phase shift are $\omega_c=eB/m_c$ and $\gamma = 1/2$, and $\eta = m_c/m_v$ represents the ratio of effective masses of the two bands. Otherwise this substitution is an approximation valid for weak fields, with $\gamma \in [0,1)$ in general. Within $E^\pm_\mathbf{k}$ this replacement gives the energies $E^\pm_n$, and sums over momentum are replaced by sums over LL index, $\sum_\mathbf{k} \to n_\Phi\sum_{n=0}^\infty$, with $n_\Phi = B/\Phi_0$ the degeneracy of each LL and $\Phi_0 = h/e$ the magnetic flux quantum. Because the hybridization is spatially homogeneous, after these replacements the Hamiltonian only couples corresponding Landau levels in the conduction and valence bands, reflected in the form of $E^\pm_n$. 
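As a concrete illustration, the hybridized Landau levels and the location of the minimum gap can be tabulated directly; the following Python sketch uses parameter values of our own choosing (they are not taken from the text), and the printed level index anticipates the reference value $n^\ast = \epsilon_0/\omega^\ast - \gamma$ defined below:
\begin{verbatim}
import numpy as np

# Illustrative parameters (ours); hbar = 1 and energies are in units
# where eps_0 = 1.
eps0, eta, g, omega_c, gamma = 1.0, 0.2, 0.05, 0.01, 0.5

n = np.arange(200)
eps_c = (n + gamma) * omega_c          # conduction-band Landau levels
eps_v = eps0 - eta * eps_c             # valence-band Landau levels
root = np.sqrt((eps_c - eps_v)**2 + 4*g**2)
E_plus = 0.5*(eps_c + eps_v + root)
E_minus = 0.5*(eps_c + eps_v - root)

omega_star = (1 + eta) * omega_c
print(np.argmin(E_plus - E_minus), eps0/omega_star - gamma)  # 83 vs 82.83
print((E_plus - E_minus).min())                              # ~2g = 0.1
\end{verbatim}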
At zero temperature the free energy of the system is given by the sum over the energies of all occupied states, which for an insulator is the completely filled lower band. With a magnetic field this energy is \begin{equation} \label{eq:OmegaR} \Omega_R(B) = n_\Phi \sum_{n=0}^\infty E^-_n, \end{equation} where the subscript $R$ indicates the bands are rigid, unaffected by changing $B$. We can separate $\Omega_R(B)$ into constant $\Omega_{R0}$ and oscillatory $\tilde\Omega_R(B)$ parts using the Poisson summation formula~\cite{Shoenberg1984}, which for a general function $f$ is \begin{equation} \sum_{n=0}^\infty f(n+\gamma) = \int_0^\infty \dd x\,f(x) \\ + 2\int_0^\infty\dd x\sum_{p=1}^\infty f(x)\cos\left(2\pi p (x-\gamma)\right). \end{equation} Though this system lacks a Fermi momentum and Fermi surface, the momentum which minimizes the band gap characterizes the gapped band structure, and the area in momentum space that it encircles, indicated in \cref{fig:bands}, functions in lieu of a Fermi surface for the purposes of QO. We denote this momentum as $k^\ast$, defined through $\eval{\dd(E^+_\mathbf{k}-E^-_\mathbf{k})/\dd k}_{k=k^\ast} = 0$. From this we define a corresponding (noninteger) reference value of the LL index $n^\ast$ through $\epsilon^c_{\mathbf{k}^\ast} = \epsilon^c_{n^\ast}$, giving $n^\ast = \epsilon_0/\omega^\ast -\gamma$, where we put $\omega^\ast = (1+\eta)\omega_c$. For weak magnetic fields we have $n^\ast \gg 1$, which allows us to find an approximate analytic form of $\tilde\Omega_R(B)$, \begin{equation} \label{eq:OmegatildeR} \tilde\Omega_R(B) \approx \frac{2\abs{g}n_\Phi}{\pi}\sum_{p=1}^\infty \frac{\cos(2\pi p n^\ast)}{p}K_1\left(2\pi p \frac{2\abs{g}}{\omega^\ast}\right) \\ \sim \sqrt{\frac{\abs{g}\omega^\ast}{2}}\frac{n_\Phi}{\pi} \sum_{p=1}^\infty \frac{\cos(2\pi p n^\ast)}{p^{3/2}}e^{-2\pi p \frac{2\abs{g}}{\omega^\ast}}, \end{equation} where $K_i$ are the modified Bessel functions of the second kind. The final expression here uses the asymptotic form $K_i(x) \sim \sqrt{\pi/2x}\,\exp(-x)$ for $x\gg 1$, which further refines the weak field regime to the condition $\omega^\ast \ll 2\abs{g}$, cyclotron energy much smaller than the band gap. This result is very similar in form to the results of Ref.~\cite{Miyake1993}, which examines a system with a superconducting gap. It can be derived from the $T\to 0$ limit of the expression in Ref.~\cite{Knolle2015}, as shown in \cref{sec:comparison}. We see that for weak magnetic fields the harmonics of the band insulator free energy are exponentially suppressed by powers of $\exp(-2\pi \tfrac{2\abs{g}}{\omega^\ast})$, which we identify as the Dingle factor in an insulating system. This factor will function as a small parameter when considering additional oscillatory contributions in the following sections.
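As a quick numerical illustration of this suppression, the harmonic amplitudes in the first line of \cref{eq:OmegatildeR} can be evaluated directly; the Python sketch below uses parameter values of our own choosing (not from the text) and shows that each successive harmonic costs roughly one additional Dingle factor, up to slowly varying prefactors:
\begin{verbatim}
import numpy as np
from scipy.special import kv

# Illustrative parameters (ours); hbar = 1, so Phi_0 = h/e = 2*pi/e.
g, eta, m_c, e = 0.05, 0.2, 1.0, 1.0

for B in (0.02, 0.01):
    omega_star = (1 + eta) * e * B / m_c
    n_phi = e * B / (2 * np.pi)
    amp = [2*abs(g)*n_phi/(np.pi*p) * kv(1, 4*np.pi*p*abs(g)/omega_star)
           for p in (1, 2, 3)]
    print(B, amp[1]/amp[0], amp[2]/amp[1],
          np.exp(-4*np.pi*abs(g)/omega_star))  # ratios track the Dingle factor
\end{verbatim}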
\section{Excitonic Insulator} \label{sec:exciton} We now consider the case of an excitonic insulator~\cite{Cloizeaux1965,Jerome1967,Keldysh1968,Halperin1968}. This type of system is formed from the condensation of excitons with binding energy greater than the inherent band gap of the system, so that the band gap is (predominantly) generated by electron-electron interactions. In the mean field approximation there is a single parameter controlling the insulating properties of the system, the exciton condensate amplitude, which allows for a very simple treatment of QO in the weak field regime. To describe this type of system we start from a two-band, two-dimensional model Hamiltonian for spinless electrons with an interband interaction, \begin{equation} H = \sum_\mathbf{k} \Psi^\dagger_\mathbf{k} \begin{pmatrix} \epsilon^c_\mathbf{k} & 0 \\ 0 & \epsilon^v_\mathbf{k} \end{pmatrix} \Psi_\mathbf{k} - V \sum_{\mathbf{k,k'}} \psi^\dagger_{c,\mathbf{k}}\psi_{v,\mathbf{k}} \psi^\dagger_{v,\mathbf{k'}}\psi_{c,\mathbf{k'}}, \end{equation} where $V$ is the strength of the short-range exciton pairing potential. We decouple the interaction via a mean field approximation neglecting fluctuations, defining the exciton condensate order parameter $\Delta = V\sum_\mathbf{k} \expval{\psi^\dagger_{v,\mathbf{k}} \psi_{c,\mathbf{k}}}$, where $\expval{\cdots}$ denotes the expectation value in the state with a filled valence band and empty conduction band. Though generally complex, we can choose $\Delta$ to be purely real and positive by adjusting the phases of $\psi_{c,v}$. We then obtain the excitonic insulator Hamiltonian \begin{equation} \label{eq:excitonH} H_X = \sum_\mathbf{k} \Psi^\dagger_\mathbf{k} \begin{pmatrix} \epsilon^c_\mathbf{k} & -\Delta \\ -\Delta & \epsilon^v_\mathbf{k} \end{pmatrix} \Psi_\mathbf{k} + \frac{\Delta^2}{V} \end{equation} with $\Delta$ obeying the BCS-type gap equation \begin{equation} \label{eq:gapEqn0} \frac{1}{V} = \sum_\mathbf{k} \frac{1}{\sqrt{\left(\epsilon^c_\mathbf{k}-\epsilon^v_\mathbf{k}\right)^2+4\Delta^2}}. \end{equation} Note that the fermionic part of \cref{eq:excitonH} is the same as \cref{eq:bandH} with $g=-\Delta$. This two-dimensional model and our main results can in principle be extended to three dimensions by including additional dispersion along $k_z$, the direction of the magnetic field, as described in e.g. Ref.~\cite{Shoenberg1984}. Such an extension should not change the fundamental nature of our results. We also note that the role of fluctuations about the mean field order could be considered by extending the mean-field theory (\ref{eq:excitonH}) via standard means~\cite{Vaks1962,Kos2004,Hoyer2018}. We now consider applying a perpendicular magnetic field $B$, which quantizes the electronic dispersion into Landau levels as discussed in \cref{sec:Band}. Because we assume $\Delta$ to be spatially homogeneous, we still have coupling only between corresponding Landau levels in the two bands. In contrast to the rigid band insulator, however, the value of the gap $\Delta$ acquires magnetic field dependence because of its relationship to the electronic energies through \cref{eq:gapEqn0}. We put $\Delta \to \Delta(B) = \Delta_0 + \tilde\Delta(B)$, where $\Delta_0$ is the constant value of the order parameter at zero field, solving \cref{eq:gapEqn0}, $\tilde\Delta(B)$ is the part of the order parameter that varies with changing field, and we assume that $\abs{\tilde\Delta(B)} \ll \Delta_0$. \subsection{Oscillations in the Linearized Theory} The free energy of an excitonic insulator at zero temperature, which we denote $\Omega_X$, is the sum over energies of all states in the lower band, which has the same form as \cref{eq:OmegaR}, plus the energy of the mean field parameter, the second term in \cref{eq:excitonH}. Unlike for the band insulator, the full dependence of $\Omega_X$ on $B$ is partially implicit through $\tilde\Delta(B)$.
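Before linearizing, it may help to see how the zero-field gap equation is actually solved; the following Python sketch (our own illustration, assuming a parabolic conduction band with a constant 2D density of states $\nu$ and UV cutoff $\Lambda$, for which the momentum sum in \cref{eq:gapEqn0} has a closed form) finds $\Delta_0$ for a given $V$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Zero-field gap equation with sum_k -> nu * integral_0^Lam d(eps_c);
# all parameter values are illustrative (ours).
eps0, eta, nu, Lam = 1.0, 0.2, 1.0, 10.0
a = 1 + eta

def rhs(Delta):
    # closed form of the integral of 1/sqrt((a*eps - eps0)^2 + 4*Delta^2)
    return nu/a * (np.arcsinh((a*Lam - eps0)/(2*Delta))
                   + np.arcsinh(eps0/(2*Delta)))

V = 1.0 / rhs(0.05)              # choose V so that Delta_0 = 0.05 solves it
print(brentq(lambda D: rhs(D) - 1.0/V, 1e-3, 1.0))  # recovers 0.05
\end{verbatim}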
To find the first corrections on top of the band insulator result, we expand around $\Delta=\Delta_0$, keeping terms up to second order in oscillatory quantities, assumed to be small: \begin{equation} \label{eq:OmegaX} \Omega_X(B,\Delta) \approx \Omega_{XR} + \pdv{\Omega_{XR}}{\Delta_0}\tilde\Delta + \frac{1}{2}\pdv[2]{\Omega_{XR}}{\Delta_0}\tilde\Delta^2 = \Omega_{XR0} + \tilde\Omega_{XR} + \pdv{\tilde\Omega_{XR}}{\Delta_0}\tilde\Delta + \frac{1}{2}\pdv[2]{\Omega_{XR0}}{\Delta_0}\tilde\Delta^2, \end{equation} where we identify $\Omega_{XR} = \Omega_X(B,\Delta_0)$. The function $\tilde\Omega_{XR}(B)$ is the oscillatory part of $\Omega_{XR}$ and has the same form as $\tilde\Omega_R(B)$ given in \cref{eq:OmegatildeR}, but with the replacement $g\to-\Delta_0$. The mean field gap $\Delta_0$ is, by definition, the value for which the action is stationary with respect to variation in $\Delta$ (\cref{eq:gapEqn0} is equivalent to $\pdv*{\Omega_{XR0}}{\Delta_0} = 0$), so the only term remaining at first order in oscillatory quantities is the rigid band contribution, $\tilde\Omega_{XR}$. Therefore, the next largest contribution comes at second order, given by the final two terms. This is a general implication of mean field theory, independent of the choice of system or mean field being considered. We now consider these next largest terms. For both terms we need the form of $\tilde\Delta(B)$, which can be evaluated by analyzing the gap equation. For $B\neq0$ this has the same form as in \cref{eq:gapEqn0} but with the replacements noted above, i.e.\ $\epsilon^c_\mathbf{k} \to \epsilon^c_n = (n+\gamma)\omega_c$, $\Delta \to \Delta_0 + \tilde\Delta(B)$, etc. We begin by expanding to first order in $\tilde\Delta(B)$: \begin{equation} \frac{1}{V} \approx n_\Phi\sum_{n=0}^\infty \frac{1}{\sqrt{\left(\epsilon^c_n-\epsilon^v_n\right)^2+4\Delta_0^2}} - n_\Phi \sum_{n=0}^\infty \frac{4\Delta_0}{\left(\left(\epsilon^c_n-\epsilon^v_n\right)^2+4\Delta_0^2\right)^{3/2}} \tilde\Delta(B) \equiv \alpha(B) + \beta(B)\tilde\Delta(B). \end{equation} In the second equality we define the two sums as the functions $\alpha(B)$ and $\beta(B)$. We then rewrite these functions in terms of their constant ($\alpha_0$ and $\beta_0$) and oscillatory ($\tilde\alpha$ and $\tilde\beta$) parts, which can be evaluated with the Poisson summation formula, and keep terms only to first order in oscillations, giving \begin{equation} \frac{1}{V} \approx \alpha_0 + \tilde\alpha(B) + \beta_0\tilde\Delta(B). \end{equation} Because the left hand side is a constant, we must have \begin{gather} \frac{1}{V} = \alpha_0 \label{eq:alpha0} \\ \tilde\Delta(B) = -\frac{\tilde\alpha(B)}{\beta_0}. \label{eq:alphabeta} \end{gather} Calculating the explicit forms of $\alpha_0$, $\tilde\alpha(B)$, and $\beta_0$ (see \cref{sec:Poisson}) verifies that \cref{eq:alpha0} is exactly \cref{eq:gapEqn0}, the zero-field gap equation determining $\Delta_0$, and gives the explicit form for $\tilde\Delta(B)$ via \cref{eq:alphabeta}, \begin{equation} \label{eq:Deltatilde} \tilde\Delta(B) = 2\Delta_0\sum_{p=1}^\infty \cos\left(2\pi p n^\ast\right)K_0\left(2\pi p \frac{2\Delta_0}{\omega^\ast}\right) \sim \sqrt{\frac{\Delta_0\omega^\ast}{2}}\sum_{p=1}^\infty \frac{\cos\left(2\pi p n^\ast\right)}{\sqrt{p}} e^{-2\pi p \frac{2\Delta_0}{\omega^\ast}}, \end{equation} where $\omega^\ast = (1+\eta)\omega_c$ as before and the second expression is the asymptotic form for weak fields, $\omega^\ast \ll 2\Delta_0$.
As for the oscillatory part of the free energy, the $p$\textsuperscript{th} harmonic of $\tilde\Delta(B)$ comes with $p$ powers of the Dingle factor. With explicit forms for $\tilde\Delta(B)$ and $\tilde\Omega_{XR}$, the last quantity we need to evaluate is the second derivative of the $B=0$ free energy $\Omega_{XR0}$ with respect to $\Delta_0$. Using the gap equation to simplify, we find \begin{equation} \frac{1}{2}\pdv[2]{\Omega_{XR0}}{\Delta_0} = 2\frac{n_\Phi}{\omega^\ast}. \end{equation} Putting all of the terms together we find the dominant contribution to the free energy at second order in small oscillatory quantities is \begin{equation} \pdv{\tilde\Omega_{XR}}{\Delta_0}\tilde\Delta + \frac{1}{2}\pdv[2]{\Omega_{XR0}}{\Delta_0}\tilde\Delta^2 \sim -\frac{\Delta_0 n_\Phi}{2}\cos(4\pi n^\ast)e^{-4\pi\frac{2\Delta_0}{\omega^\ast}}, \end{equation} where we have kept only the oscillatory terms at lowest order in the Dingle factor and discarded a term that is smaller by a factor of $\omega^\ast/(2\Delta_0) \ll 1$. We see that this contributes to the second harmonic of QO. Comparing this to the $p=2$ term of the rigid band contribution, there is a clear difference in the overall field dependence--the prefactor of the mean field term goes as $B$, whereas the rigid band term goes as $B^{3/2}$, so the rigid band term is smaller by a factor of $\tfrac{1}{2\pi}\sqrt{\tfrac{\omega^\ast}{\Delta_0}} \ll 1$ at small fields. Therefore, for weak fields the oscillations of the mean field order parameter provide the dominant contribution to the second harmonic of the free energy. The contribution from the mean field is likely dominant for all higher harmonics as well. In addition to other terms, such as those acquired by calculating $\tilde\Delta$ at higher orders than the linearized framework presented here, we can write down several terms that have a leading $B$ dependence at a lower power than the corresponding term in $\tilde\Omega_{XR}$. First, in the term in \cref{eq:OmegaX} proportional to $\tilde\Delta^2$, cross terms between the $q$\textsuperscript{th} harmonic of one factor and the $(p-q)$\textsuperscript{th} harmonic of the other give contributions to the $p$\textsuperscript{th} harmonic that also have $p$ powers of the Dingle factor. All such terms have a coefficient that goes as $B$, making them larger than the corresponding term of $\tilde\Omega_{XR}$, going as $B^{3/2}$. Additionally, there will be a term contributing to the $p$\textsuperscript{th} harmonic of the free energy of the form \begin{equation} \pdv[p-1]{\tilde{\Omega}_{XR}}{\Delta_0}\tilde\Delta^{p-1} \sim B^{2-\frac{p}{2}}e^{-2\pi p \frac{2\Delta_0}{\omega^\ast}}, \end{equation} with $\tilde{\Delta}$ given by \cref{eq:Deltatilde}. We see that this goes as $B^{2-p/2}$, which for small $B$ is larger than $B^{3/2}$ for all $p\geq2$, and is larger than $B$ for $p>2$ (it is $B^{1/2}$ for $p=3$), as shown in \cref{fig:Bdependence}. There is no reason why mean field contributions such as these should exactly cancel for any harmonic $p$ above the first--we have shown this explicitly for $p=2$--so for weak fields these mean field terms will dominate for all harmonics $p\geq2$. We pause to emphasize an important feature of the result that we have found: there is only a single dimensionless parameter, $\omega^\ast/\Delta_0$, that controls the size of the quantum oscillations (in both the Dingle damping factor and its multiplicative prefactors) arising from the contributions of both the rigid band and self-consistent mean-field parts of the free energy. 
Rewriting $\omega^\ast/\Delta_0 = B/B_0$, so that $B_0 = m_c \Delta_0 /((1+\eta)e)$, the value of $B_0$, proportional to the product of the hybridization gap and the cyclotron mass, can be used to characterize individual materials. Indeed, fitting measurements of quantum oscillations to the form of the Dingle factor--as done with metallic systems to extract mean free paths--would here allow for a direct experimental determination of this quantity for a given material. For $B \sim B_0$ the rigid band and mean field contributions to the higher harmonics ($p\geq 2$) are of the same size; below this the mean field part dominates, and above this point our approximations begin to break down. Consequently, we see that the oscillations of the gap that we analyze here cannot be ignored whenever they are present--a system with an interaction generated gap is never accurately described by just the corresponding rigid band structure. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{Bdependence.pdf} \caption{Demonstrating the size of the leading dependencies $B^\alpha$ that we find for various terms, for small $B/B_0 = \omega^\ast/\Delta_0$. The rigid band case has $\alpha = 3/2$ for all harmonics, while the contributions from the mean field to the second harmonic are larger, with $\alpha = 1$. There are contributions to the third harmonic with $\alpha = 1/2$, which are larger still.} \label{fig:Bdependence} \end{figure} We now briefly comment on how our results relate to those in Refs.~\cite{Lee2021} and \cite{He2021}, which also analyze oscillations in an excitonic insulator. There are several key differences between what is done there and what we present here, but there is no obvious disagreement. First, they focus on electronic transport via thermally activated electrons and holes and not the thermodynamic properties that are our focus. Second, the calculations there consider a rigid band structure of hybridized particle and hole bands as in \cref{sec:Band}--the fixed hybridization set equal to the excitonic condensate order parameter at $B=0$--and define the gap for $B\neq0$ as the energy difference between the highest energy Landau level in the lower band and the lowest energy Landau level in the upper band. The resulting oscillations of the gap are then more akin to the framework of Ref.~\cite{Knolle2015} than what we find here. Because we consider the free energy, and can therefore discuss only thermodynamic quantities, our results cannot be directly compared to those of Refs.~\cite{Lee2021} and \cite{He2021}. Importantly, our generic conclusion that there can be no interaction contribution to the first harmonic does not apply to such transport quantities, and in general one should indeed expect an additional contribution to the fundamental frequency oscillation of non-thermodynamic quantities. How such contributions would compare to the results of Refs.~\cite{Lee2021} and \cite{He2021} is left for future work. \subsection{Effects of nonzero temperature} All of our calculations so far have been performed for a model at exactly zero temperature. We show in \cref{sec:temperature} that our results apply for a range of nonzero temperatures provided that $T\ll \Delta_0$, well below the transition temperature into the excitonic insulator phase. 
It is clear that the temperature dependence of quantum oscillations in this model must be quite distinct from the typical Lifshitz-Kosevich form, given by the factor \begin{equation} R_T = \frac{p\theta}{\sinh\left(p\theta\right)}, \end{equation} where $p$ labels the harmonic and $\theta = 2\pi^2 k_B T/\omega^\ast$, with $k_B$ Boltzmann's constant. Even for the rigid band structure the oscillations show a temperature dependence that departs from the LK form~\cite{Knolle2015}. We expect further corrections beyond this, arising from the effects of self-consistency of the gap $\Delta_0$ as a function of temperature. Computing this dependence would require considering temperatures of the order of the gap, at which point thermal occupation of the upper band would become relevant. This is a regime for which we do not have any analytic results. We note, however, that the behaviour is potentially quite rich. The typical expectation is for temperature to reduce quantum oscillation amplitudes due to thermal broadening of occupied states near the relevant energy. However, here the fact that $\Delta_0$ diminishes with increasing temperature has the potential to counteract that, by lessening the damping from the Dingle factor. We note that a temperature-dependent gap would force one to reconsider the validity of analyzing just the ``weak field regime'' that we have used so far. For a fixed magnetic field strength and increasing temperature, the ratio $\omega^\ast/\Delta_0$ grows as $\Delta_0$ diminishes, so for any field strength the regime $\omega^\ast \sim \Delta_0$ becomes relevant at some temperature. This is certainly a rich avenue for future work, but is beyond the present scope. \section{Kondo Insulator} \label{sec:Kondo} We now look to the case of Kondo insulators, a class of strongly correlated heavy-fermion systems~\cite{Hewson1993} with narrow band gaps, first identified over 50 years ago~\cite{Menth1969}. We begin with an Anderson model in two dimensions~\cite{Read1983,Auerbach1986,Millis1987,Coleman1987}, describing the coupling of a light conduction band to a heavy valence band, localized by strong interactions. The Hamiltonian is \begin{multline} H = -t\sum_{\langle ij\rangle,\sigma} \left(c^\dagger_{i\sigma}c_{j\sigma} + \text{h.c.}\right) -t_d \sum_{\langle ij\rangle,\sigma}\left(d^{\dagger}_{i\sigma}d_{j\sigma} + \text{h.c.}\right) \\ +\sum_{i,\sigma}V_i \left(c^\dagger_{i\sigma}d_{i\sigma} + d^\dagger_{i\sigma}c_{i\sigma}\right) + \sum_i\left(\epsilon_d n^d_i + U n^d_{i\uparrow}n^d_{i\downarrow}\right). \end{multline} The first line describes the two species of electrons (conduction $c$ and heavy $d$ bands) hopping on a lattice with amplitudes $t$ and $t_d$ respectively, with $t_d < 0$ and $\abs{t_d/t} = \eta \ll 1$. The next term describes interband transitions with amplitude $V_i$, which opens the gap in the spectrum. The final two terms are written in terms of the $d$-electron densities, $n^d_{i,\sigma} = d^\dagger_{i,\sigma}d_{i,\sigma}$ and $n^d_i = n^d_{i,\uparrow} + n^d_{i,\downarrow}$. The $\epsilon_d$ term gives the shift of the heavy electron band relative to the conduction band. The $U$ term is a Hubbard interaction between $d$-electrons, forbidding double occupancy in the large $U$ limit. For $U\to\infty$ this condition can be enforced with the slave-boson formalism: put $d^\dagger_{i\sigma} = f^\dagger_{i\sigma}b_i$, where $f_{i\sigma}$ is a new fermionic degree of freedom, which we refer to as $f$-electrons, and $b_i$ is the slave boson. 
Each site contains either a boson or a single $f$-electron, and the Hubbard term is replaced by $\sum_i\lambda_i(\sum_\sigma f_{i\sigma}^\dagger f_{i\sigma}+b_i^\dagger b_i -1)$, where $\lambda_i$ is a Lagrange multiplier field enforcing the constraint. We assume that the interband interaction is spatially uniform, $V_i = V$, and now employ the mean field approximation $\lambda_i \to \langle\lambda_i \rangle \equiv \lambda$, $b_i \to \langle b_i\rangle \equiv b$, and $b_i^\dagger \to \langle b_i^\dagger\rangle \equiv b$. In the continuum limit, which is a valid approximation when considering weak magnetic fields, we obtain the mean field Hamiltonian \begin{equation} \label{eq:KondoH} H_K =\sum_{\mathbf{k},\sigma}\Psi^\dagger_{\mathbf{k}\sigma} \begin{pmatrix} \epsilon^c_\mathbf{k} & b V \\ b V & \epsilon^f_\mathbf{k} \end{pmatrix} \Psi_{\mathbf{k}\sigma} + \lambda \left(b^2 - 1\right). \end{equation} Here the $f$-band dispersion is $\epsilon^f_\mathbf{k} = \epsilon_d + \lambda - \eta b^2(\epsilon^c_\mathbf{k} - 4t)$, and the limit of immobile heavy fermions, i.e. infinite $f$-band mass, corresponds to $\eta\to 0$. We can identify what we called $\epsilon_0$ in the case of the rigid band insulator with $\epsilon_d + \lambda + 4\eta tb^2$ and what we called $g$ with $b V$. As written here we are considering an even parity coupling between the bands. We could consider an odd parity coupling instead, $\hat{V}(\mathbf{k}) = V \mathbf{d}(\mathbf{k})\cdot \hat{\sigma}$ with $\mathbf{d}(\mathbf{k}) = -\mathbf{d}(-\mathbf{k})$, which results in nontrivial topological properties~\cite{Dzero2010,Dzero2016}. This choice does not affect the nature of the results we present here, however; the zero-field gap appearing in our final results would have a different form reflecting its origin in an odd parity coupling, but the overall form of the expressions in terms of the size of the gap would remain the same. We continue with the simpler case of even parity coupling considered thus far. For the Kondo insulator there are two self-consistent equations allowing us to determine the two mean field parameters $b$ and $\lambda$, contrasting with the excitonic insulator considered in \cref{sec:exciton} which has only one. The first equation is simply the constraint imposed by the Hubbard interaction, which in the mean field approximation becomes \begin{equation} \label{eq:nf0eqn} \sum_{\mathbf{k},\sigma} \expval{f^\dagger_{\mathbf{k}\sigma} f_{\mathbf{k}\sigma}}\equiv n^f_0 = 1-b^2, \end{equation} where we have defined $n^f_0$, the total $f$-electron density at $B=0$. The second constraint follows from the equation of motion for the boson field, which in the mean field approximation becomes a demand that the energy be stationary with respect to variation of $b$, \begin{equation} \label{eq:C0eqn} \sum_{\mathbf{k},\sigma} \expval{\frac{V}{2}\left(c^\dagger_{\mathbf{k}\sigma}f_{\mathbf{k}\sigma}+f^\dagger_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}\right) - \eta b \left(\epsilon^c_\mathbf{k}-4t\right)f^\dagger_{\mathbf{k}\sigma}f_{\mathbf{k}\sigma}} = C_0 + \eta b \left(K^f_0 + 4t\, n^f_0\right) = -\lambda b. \end{equation} Here we have defined two additional functions, \begin{gather} C_0 \equiv \frac{V}{2} \sum_{\mathbf{k},\sigma}\expval{c^\dagger_{\mathbf{k}\sigma}f_{\mathbf{k}\sigma} + f^\dagger_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}} \label{eq:C0}\\ K^f_0 \equiv -\sum_{\mathbf{k},\sigma} \epsilon^c_\mathbf{k} \expval{f^\dagger_{\mathbf{k}\sigma}f_{\mathbf{k}\sigma}}, \label{eq:Kf0} \end{gather} with $C_0$ the interband correlation energy and $\eta K^f_0$ the kinetic energy of the $f$-electrons. 
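To make the structure of these constraints concrete, the following sketch evaluates the ingredients $n^f_0$, $C_0$, and $K^f_0$ of \cref{eq:nf0eqn,eq:C0eqn} for trial values of $(b,\lambda)$ at $B=0$, using the lattice dispersions for concreteness and purely illustrative parameter values; closing the self-consistency loop is then a standard two-variable root search (e.g.\ \texttt{scipy.optimize.fsolve}) on the printed residuals:
\begin{verbatim}
import numpy as np

# Illustrative parameters in units of t = 1; (b, lam) are trial values.
t, eta, V, eps_d = 1.0, 0.05, 0.6, -2.0
b, lam = 0.4, 1.0
N = 200                                   # k-points per direction

k = np.linspace(-np.pi, np.pi, N, endpoint=False)
KX, KY = np.meshgrid(k, k)
eps_c = -2*t*(np.cos(KX) + np.cos(KY))
eps_f = eps_d + lam - eta*b**2*(eps_c - 4*t)

# Lower-band energy and eigenvector (u, v) of [[eps_c, bV], [bV, eps_f]]
Em = 0.5*(eps_c + eps_f) - np.sqrt(0.25*(eps_c - eps_f)**2 + (b*V)**2)
u = -b*V*np.ones_like(eps_c)
v = eps_c - Em
norm = np.hypot(u, v)
u, v = u/norm, v/norm

nf = 2*np.mean(v**2)             # f-density per site (factor 2: spin)
C = 2*V*np.mean(u*v)             # C = (V/2) sum_{k,sigma} <c^dag f + f^dag c>
Kf = -2*np.mean(eps_c*v**2)      # K^f = -sum_{k,sigma} eps_c <f^dag f>

print("n^f - (1 - b^2) =", nf - (1 - b**2))
print("C + eta*b*(K^f + 4t*n^f) + lam*b =", C + eta*b*(Kf + 4*t*nf) + lam*b)
\end{verbatim}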
We now consider applying a perpendicular magnetic field $B$ to the system. As discussed in \cref{sec:Band}, this can be done by replacing energies with their Landau quantized versions, and sums over momentum with sums over LL index. We note here specifically that for a generic anisotropic Kondo system the hybridization gap does not necessarily open at a fixed energy unless the $f$-band is completely immobile, $\eta = 0$. Therefore, for $\eta \neq 0$ the conclusions we arrive at apply generically only in the case of an isotropic system. We assume that the effect of a nonzero field on the mean field parameters is to induce an oscillatory component for each on top of the value determined at $B=0$, as we did for the case of the excitonic insulator. Explicitly, we put \begin{equation} b\to b(B) = b_0 + \tilde{b}(B), \quad \lambda \to \lambda(B) = \lambda_0 + \tilde{\lambda}(B), \end{equation} with $\tilde{b}$ and $\tilde{\lambda}$ the components of the order parameters that vary with changing field and vanish for $B=0$. We assume these vanish continuously as the field is switched off, so we can consider a regime where $\abs{\tilde{b}}$ and $\abs{\tilde{\lambda}}$ are small compared to the zero-field parts. \subsection{Oscillations in the Linearized Theory} The free energy of the Kondo system at zero temperature, $\Omega_K$, is given by the sum over energies of the lower band, plus the final term in \cref{eq:KondoH}, giving an additional contribution from the mean fields. Because we must include spin when discussing a Kondo system, the band contribution to $\Omega_K$ is the same as \cref{eq:OmegaR} but with an additional sum over the spin degree of freedom, amounting to a factor of $2$. As in the case of the excitonic insulator, the free energy has an implicit dependence on $B$ through the mean field functions $\tilde{b}(B)$ and $\tilde{\lambda}(B)$, so we expand around $(b,\lambda)=(b_0,\lambda_0)$ and keep terms up to second order in oscillatory quantities to find the first corrections on top of the band insulator result, \begin{multline} \label{eq:OmegaK} \Omega_K(B,b,\lambda) \approx \Omega_{KR} + \pdv{\Omega_{KR}}{b_0} \tilde{b} + \pdv{\Omega_{KR}}{\lambda_0} \tilde{\lambda} + \frac{1}{2}\pdv[2]{\Omega_{KR}}{b_0}\tilde{b}^2 + \frac{1}{2}\pdv[2]{\Omega_{KR}}{\lambda_0}\tilde{\lambda}^2 + \pdv{\Omega_{KR}}{b_0}{\lambda_0}\tilde{b}\tilde{\lambda}\\ \approx \Omega_{KR0} + \tilde{\Omega}_{KR} + \pdv{\tilde{\Omega}_{KR}}{b_0}\tilde{b} + \pdv{\tilde{\Omega}_{KR}}{\lambda_0}\tilde{\lambda} + \frac{1}{2}\pdv[2]{\Omega_{KR0}}{b_0}\tilde{b}^2 + \frac{1}{2}\pdv[2]{\Omega_{KR0}}{\lambda_0}\tilde{\lambda}^2 + \pdv{\Omega_{KR0}}{b_0}{\lambda_0}\,\tilde{b}\tilde{\lambda}. \end{multline} We define $\Omega_{KR} = \Omega_K(B,b_0,\lambda_0)$ to be the free energy evaluated with rigid bands, which we then separate into constant $\Omega_{KR0}$ and oscillatory $\tilde{\Omega}_{KR}$ parts. The form of $\tilde{\Omega}_{KR}$ is identical to \cref{eq:OmegatildeR} up to an overall factor of $2$ due to spin, the replacement $g\to b_0V$, and $n^\ast = (\epsilon_d + \lambda_0 + 4\eta t b_0^2)/\omega^\ast - \gamma$ with $\omega^\ast = (1+\eta b_0^2)\omega_c$. The vanishing of the first derivatives of $\Omega_{KR0}$ with respect to $b_0$ and $\lambda_0$ is synonymous with working at the level of mean field theory, and as a result we see that the contributions to the free energy from magnetic-field-induced oscillations of the mean field parameters enter at second order in small oscillations. 
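This generic mechanism--stationarity of the mean field eliminating all terms first order in its oscillation--can be verified symbolically in a toy model. In the sketch below the functional forms are arbitrary placeholders (not the Kondo free energy); it simply confirms that the oscillatory shift of a stationary parameter first enters the free energy at second order:
\begin{verbatim}
import sympy as sp

eps, B, D = sp.symbols('epsilon B Delta')
D0 = 1                         # stationary point of the smooth part below

# Toy ingredients (placeholder forms only):
Omega0 = (D - D0)**2           # smooth part, stationary at D0 ("gap equation")
Omega_osc = sp.cos(B/D)        # stand-in oscillatory rigid-band term, O(eps)
Dt = sp.sin(3*B)               # stand-in oscillation of the mean field, O(eps)

Omega = (Omega0 + eps*Omega_osc).subs(D, D0 + eps*Dt)

first = sp.simplify(sp.diff(Omega, eps).subs(eps, 0))
second = sp.simplify(sp.diff(Omega, eps, 2).subs(eps, 0)/2)
print(first)    # -> cos(B): rigid-band oscillation only; Dt has dropped out
print(second)   # -> contains sin(3*B): the mean-field oscillation enters here
\end{verbatim}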
We now seek to determine the size of these terms and their dependence on $B$, as we did for the excitonic insulator. We begin by examining $\tilde{b}(B)$ and $\tilde{\lambda}(B)$. We can evaluate the forms of these functions by analyzing the constraint equations, which for nonzero field have the same form as \cref{eq:nf0eqn} and \cref{eq:C0eqn} but with the standard replacements we have made throughout, \begin{gather} n^f(B) = 1-b(B)^2 \label{eq:nfeqn}\\ C(B) + \eta b(B)\left(K^f(B) + 4t\, n^f(B)\right) = -\lambda(B) b(B), \label{eq:Ceqn} \end{gather} now with $n^f$, $C$, and $K^f$ functions of $B$ both explicitly and through their dependence on $b(B)$ and $\lambda(B)$. We now expand these functions around the rigid band case up to first order in small oscillatory quantities. For $n^f$ we have \begin{equation} n^f(B,b,\lambda) \approx n^f_0 + \tilde{n}^f_R(B) + \pdv{n^f_0}{b_0}\tilde{b}(B) + \pdv{n^f_0}{\lambda_0}\tilde{\lambda}(B), \end{equation} where $n^f_0$ is the $f$-electron density for $B=0$, equal to the constant part of $n^f(B,b_0,\lambda_0)$, and we define $\tilde{n}^f_R(B)$ to be the oscillatory part of $n^f(B,b_0,\lambda_0)$. The same expansion can be done for $C(B)$ and $K^f(B)$, letting us similarly define the quantities $\tilde{C}_R(B)$ and $\tilde{K}^f_R(B)$. We now expand \cref{eq:nfeqn,eq:Ceqn} up to first order in small oscillations. The terms at zeroth order are precisely \cref{eq:nf0eqn,eq:C0eqn}. We are left with the oscillatory components, obeying \begin{equation} \label{eq:blambda} \begin{pmatrix} \tilde{n}^f_R(B) \\ \tilde{C}_R(B) + \eta b_0 \tilde{K}^f_R(B) \end{pmatrix} = -\begin{pmatrix} u_b & u_\lambda \\ v_b & v_\lambda \end{pmatrix} \begin{pmatrix} \tilde{b}(B) \\ \tilde{\lambda}(B) \end{pmatrix}, \end{equation} with \begin{gather} u_b = 2b_0 + \pdv{n^f_0}{b_0}\\ u_\lambda = \pdv{n^f_0}{\lambda_0}\\ v_b = \lambda_0 + \pdv{C_0}{b_0} + \eta\left(K^f_0 + b_0 \pdv{K^f_0}{b_0} + 4t(1-3b_0^2)\right)\\ v_\lambda = b_0 + \pdv{C_0}{\lambda_0} + \eta\,b_0 \pdv{K^f_0}{\lambda_0}. \end{gather} This system of equations can be inverted in general, and doing so gives $\tilde{b}$ and $\tilde{\lambda}$ as linear combinations of $\tilde{n}^f_R$ and $\tilde{C}_R + \eta b_0\tilde{K}^f_R$. The task is then to evaluate these quantities, for which we have explicit expressions. Using \cref{eq:nf0eqn,eq:C0,eq:Kf0} with the standard replacements for the case of $B\neq0$, and evaluating all quantities at $(b,\lambda)=(b_0,\lambda_0)$, we arrive at expressions for $n^f(B,b_0,\lambda_0)$, $C(B,b_0,\lambda_0)$, and $K^f(B,b_0,\lambda_0)$, from which we can extract the functions $\tilde{n}^f_R$, $\tilde{C}_R$, and $\tilde{K}^f_R$ using the Poisson summation formula. 
Using the same methods we employed in \cref{sec:exciton} (see \cref{sec:Poisson}), we find \begin{multline} \label{eq:nftilde} \tilde{n}^f_R(B) \approx -8b_0 V\frac{n_\Phi}{\omega^\ast}\sum_{p=1}^\infty \sin\left(2\pi p n^\ast\right) K_1\left(2\pi p \frac{2b_0 V}{\omega^\ast}\right)\\ \sim -\sqrt{\frac{8b_0V}{\omega^\ast}} n_\Phi\sum_{p=1}^\infty \frac{\sin\left(2\pi p n^\ast\right)}{\sqrt{p}} e^{-2\pi p \frac{2b_0 V}{\omega^\ast}} \end{multline} for the oscillatory part of the $f$-electron density, \begin{multline} \label{eq:Ctilde} \tilde{C}_R(B) \approx -8b_0V^2 \frac{n_\Phi}{\omega^\ast} \sum_{p=1}^\infty \cos\left(2\pi p n^\ast\right) K_0\left(2\pi p \frac{2b_0V}{\omega^\ast}\right)\\ \sim -V\sqrt{\frac{8b_0V}{\omega^\ast}}n_\Phi\sum_{p=1}^\infty\frac{\cos\left(2\pi p n^\ast\right)}{\sqrt{p}} e^{-2\pi p\frac{2b_0V}{\omega^\ast}} \end{multline} for the oscillatory part of the interband correlation, and \begin{equation} \label{eq:Kftilde} \tilde{K}^f_R(B) \approx -\frac{\epsilon_d+\lambda_0+4\eta tb_0^2}{1-\eta b_0^2}\tilde{n}^f_R(B) \end{equation} for the oscillatory part of the $f$-electron kinetic energy. The asymptotic forms in the second lines of \cref{eq:nftilde,eq:Ctilde} apply in the regime where $\omega^\ast \ll 2b_0V$. We see that, as has been true for all oscillatory quantities we have evaluated thus far, $\tilde{n}^f_R$, $\tilde{C}_R$, and $\tilde{K}^f_R$ all have a leading $B^{1/2}$ dependence and the $p$\textsuperscript{th} harmonic is accompanied by $p$ powers of the Dingle factor. From \cref{eq:blambda}, $\tilde{b}$ and $\tilde{\lambda}$ are linear combinations of these functions, so it follows that they share the same $B^{1/2}$ dependence and the same Dingle factor structure, which are also true of the derivatives of $\tilde{\Omega}_{KR}$ appearing in \cref{eq:OmegaK}. Using these insights we can draw important conclusions about the additional oscillatory contributions to the free energy in \cref{eq:OmegaK} without explicitly inverting \cref{eq:blambda}, calculating the constants $u_b, u_\lambda, v_b$, and $v_\lambda$, or taking the second derivatives of $\Omega_{KR0}$. All of the oscillatory quantities comprising these additional terms go as $B^{1/2}$ and their lowest harmonics ($p=1$) are proportional to a single power of the Dingle factor. The largest terms they contribute to \cref{eq:OmegaK} are second order in oscillatory quantities, so they have a coefficient that is linear in $B$ and contribute at the same order in the Dingle factor as the $p=2$ term of $\tilde{\Omega}_{KR}$, which has a coefficient going as $B^{3/2}$. Therefore, for weak fields these new terms are the dominant contribution to the second harmonic of the free energy, and the same sort of argument as at the end of \cref{sec:exciton} suggests that this is true for all higher harmonics as well. We also note that, as in the excitonic insulator case, the behavior of these functions is determined by a single dimensionless parameter, $\omega^\ast/b_0V$, so that when these oscillations are present they provide a non-negligible contribution. \section{Discussion and Conclusion} We have shown that the field-induced oscillatory components of the mean field parameters in excitonic and Kondo insulator models yield qualitatively similar contributions to the oscillatory part of the free energy, and both systems differ from the band insulator in similar ways. 
In both cases oscillations of the mean field order parameters generate the dominant contributions to the second and higher QO harmonics for weak fields, which should have observable consequences in measurements of, e.g., the de Haas-van Alphen effect. In particular, our results demonstrate that measuring the field dependence of these higher harmonics allows one to distinguish between a simple band insulator and an insulating system with bands that are strongly affected by interactions. Additionally, since both the rigid band insulator and mean field contributions to the free energy, and therefore to all thermodynamic quantities, are parametrized by the same dimensionless parameter, the oscillations of the self-consistent mean field parameters are always relevant when present, and they produce a distinct functional dependence on the magnetic field strength for the second and higher harmonics. Importantly, however, several features of the free energy are entirely insensitive to the mean field parameters acquiring weak magnetic field dependence. First, the lowest QO harmonic is unchanged from the behavior predicted by a rigid band model, which is guaranteed since the mean field state is defined as the saddle point of the free energy. Second, there are no changes to the nature of the Dingle factor--exponential sensitivity to the size of the $B=0$ gap is the same as predicted from the theory of QO in a rigid band insulator. Thus, our results demonstrate that for interacting insulators the non-rigidity of the band structure with changing magnetic field strength does not preclude the use of the Dingle damping of QO as a means to measure properties of the gapped band structure at zero field. Though we have focused here on the free energy, it is worth also considering other experimentally accessible quantities. First, the vanishing of mean field contributions to the fundamental frequency oscillation applies only to the free energy and thermodynamic quantities. In general, other quantities like the conductivity may have additional contributions at first order from the effects we study here. Second, there have been a number of works examining the specific heat and thermal transport measured in certain Kondo insulators, which are more akin to what would be expected in metals and have been attributed to neutral in-gap states such as excitons~\cite{Knolle2017a} or impurity bands~\cite{Skinner2019}, or to neutral Fermi surfaces resulting from fractionalized electronic degrees of freedom~\cite{Baskaran2015,Erten2016,Erten2017,Sodemann2018,Chowdhury2018,Varma2020}. Our work here suggests that replacing rigid band structures with mean-field bands dependent on $B$ may have qualitatively important effects in those models that rely on band geometry. Our results emphasize that QOs of the magnetization provide rich detail on the nature of the electronic state of band insulators. Measurements of the Dingle damping factor are particularly valuable. For materials that fall in the category of conventional band insulators--including those where the band gap includes self-consistent mean-field contributions--there should be agreement between the Dingle damping factor of the first harmonic and the electronic hybridization gap in the zero field limit. Disagreement would be an indication of the relevance of physics beyond what is captured by the mean-field models we have considered here. \section*{Acknowledgements} We thank Johannes Knolle for helpful discussions. 
We would also like to acknowledge Brian Skinner, Trithep Devakul, and Yves Hon Kwan for their insightful comments and questions. \paragraph{Funding information} This work is supported by EPSRC Grant No. EP/P034616/1 and by a Simons Investigator Award. \begin{appendix} \section{Comparison with Previous Results} \label{sec:comparison} Here we confirm that our $T=0$ result for the free energy agrees with the $T\to0$ limit of Eq.~(8) in Ref.~\cite{Knolle2015}. The system considered therein assumed an infinite valence band mass, corresponding to $\eta=0$ here. In the notation used here, setting the chemical potential to lie inside the gap, and correcting for a missing factor of 2 and alternating sign in that equation, the oscillatory part of the free energy obtained there is \begin{equation} \tilde{\Omega}_R(T) = 2 T n_\Phi \sum_{p=1}^\infty \frac{(-1)^p}{p}\cos\left(2\pi p \frac{\epsilon_0}{\omega_c}\right)\sum_{n=0}^\infty \exp\left[-\frac{4\pi^2 p T}{\omega_c}\left(n+\frac{1}{2}\right) - \frac{p g^2}{\omega_c T}\frac{1}{n+\frac{1}{2}}\right]. \end{equation} Define the dimensionless quantity $t_n = T(n+1/2)/\omega_c$, so that in the $T\to0$ limit the sum over $n$ becomes an integral over $t$, \begin{equation} \tilde{\Omega}_R(T\to0) \to 2 \omega_c n_\Phi \sum_{p=1}^\infty \frac{(-1)^p}{p}\cos\left(2\pi p \frac{\epsilon_0}{\omega_c}\right) \int_0^\infty \dd t\exp\left[p f(t)\right], \end{equation} where \begin{equation} f(t) = -4\pi^2 t - \frac{g^2}{\omega_c^2 t}. \end{equation} One may recognize the resulting integral as being proportional to a modified Bessel function of the second kind, namely $\frac{g}{\pi\omega_c}K_1\!\left(2\pi p \frac{2g}{\omega_c}\right)$. Alternatively, the integral can be evaluated by the method of steepest descent to directly find the form for $g\gg \omega_c$. The saddle point is given by \begin{equation} f'(t^\ast) = 0 \Rightarrow t^\ast = \frac{g}{2\pi\omega_c}, \end{equation} letting us approximate $f(t)$ as \begin{equation} f(t) \approx f(t^\ast) + \frac{1}{2}f''(t^\ast)(t-t^\ast)^2 \end{equation} inside the integral, which is then of Gaussian form and can be evaluated to give \begin{equation} \tilde{\Omega}_R(T\to0) = \sqrt{\frac{\abs{g}\omega_c}{2}}\frac{n_\Phi}{\pi}\sum_{p=1}^\infty \frac{(-1)^p}{p^{3/2}} \cos\left(2\pi p \frac{\epsilon_0}{\omega_c}\right) e^{-2\pi p \frac{2\abs{g}}{\omega_c}}. \end{equation} Recalling that $n^\ast = \epsilon_0/\omega^\ast - \gamma$ and $\omega^\ast = (1+\eta)\omega_c$, this exactly matches \cref{eq:OmegatildeR} for $\eta=0$ and $\gamma=1/2$. \section{Evaluation of Oscillatory Functions} \label{sec:Poisson} Functions written as a sum over Landau level indices can be divided into oscillatory and non-oscillatory parts using the Poisson summation formula. As a demonstration of the general procedure, here we provide the explicit calculation of the functions $\alpha_0$, $\tilde\alpha(B)$, and $\beta_0$, which then give $\tilde{\Delta}(B)$ as in \cref{eq:alphabeta}. Introduce the notation $E(n+\gamma) \equiv \epsilon_n^c - \epsilon_n^v = (n+\gamma)\omega^\ast - \epsilon_0 = (n-n^\ast)\omega^\ast$, with $n^\ast = \epsilon_0/\omega^\ast - \gamma$. Then we have \begin{multline} \alpha(B) = n_\Phi \sum_{n=0}^\infty \frac{1}{\sqrt{E(n+\gamma)^2 + 4 \Delta_0^2}} \\ = n_\Phi \int_0^\infty \dd x \frac{1}{\sqrt{E(x)^2+4\Delta_0^2}} + 2n_\Phi \int_0^\infty \dd x \sum_{p=1}^\infty \frac{\cos\left(2\pi p (x-\gamma)\right)}{\sqrt{E(x)^2+4\Delta_0^2}}. \end{multline} The first term in the second equality is what we call $\alpha_0$ and the second term is $\tilde\alpha(B)$. 
Putting $\omega^\ast x = k^2/2m_c$ in $\alpha_0$ we find \begin{equation} \alpha_0 = \int_0^\infty \frac{\dd k}{2\pi} k \frac{1}{\sqrt{\left(\epsilon^c_\mathbf{k}-\epsilon^v_\mathbf{k} \right)^2+4\Delta_0^2}}, \end{equation} and we see that setting this equal to $1/V$ as in \cref{eq:alpha0} is precisely equivalent to the $B=0$ gap equation, \cref{eq:gapEqn0}, at least for the isotropic, parabolic dispersion implicitly assumed with this change of variables. Now for $\tilde\alpha(B)$, the change of variables $z = x - \epsilon_0/\omega^\ast = x - n^\ast - \gamma$ gives \begin{equation} \tilde\alpha(B) = \frac{2n_\Phi}{\omega^\ast}\int_{-n^\ast - \gamma}^\infty \dd z \sum_{p=1}^\infty \frac{\cos\left(2\pi p (z+n^\ast)\right)}{\sqrt{z^2 + \left(\frac{2\Delta_0}{\omega^\ast}\right)^2}}. \end{equation} We now assume that many Landau levels are occupied, i.e. $\epsilon_0 \gg \omega^\ast$, implying $n^\ast \gg 1$, which allows us to extend the lower limit of integration to $-\infty$. Rewriting the cosine as a sum of exponentials we then have \begin{equation} \tilde\alpha(B) \approx \frac{n_\Phi}{\omega^\ast}\sum_{p=1}^\infty e^{2\pi i p n^\ast}\int_{-\infty}^\infty \dd z \frac{e^{2\pi i p z}}{\sqrt{z^2+\left(\frac{2\Delta_0}{\omega^\ast}\right)^2}} + \text{ c.c.}, \end{equation} where c.c.\ means the complex conjugate of the given term, and we see that the integral has become a Fourier transform which gives a modified Bessel function of the second kind. Combining the two terms we then arrive at \begin{equation} \tilde\alpha(B) \approx \frac{4n_\Phi}{\omega^\ast}\sum_{p=1}^\infty \cos(2\pi p n^\ast) K_0\left(2\pi p \frac{2\Delta_0}{\omega^\ast}\right). \label{eq:alphatilde} \end{equation} We now need $\beta_0$, the non-oscillatory part of \begin{multline} \beta(B) = -n_\Phi \sum_{n=0}^\infty \frac{4\Delta_0}{\left(E(n+\gamma)^2 + 4\Delta_0^2\right)^{3/2}} \\ = -n_\Phi\int_0^\infty \dd x \frac{4\Delta_0}{\left(E(x)^2 + 4\Delta_0^2\right)^{3/2}} - 2 n_\Phi\int_0^\infty \dd x \sum_{p=1}^\infty \frac{4\Delta_0 \cos\left(2\pi p (x-\gamma)\right)}{\left(E(x)^2 + 4\Delta_0^2\right)^{3/2}}, \end{multline} which is the first term in the second line. Making the same change of variables as above, then similarly extending the lower limit of integration to $-\infty$, we find \begin{equation} \beta_0 = -\frac{4\Delta_0 n_\Phi}{\omega^{\ast 3}} \int_{-\infty}^\infty \dd z \frac{1}{\left(z^2 + \left(\frac{2\Delta_0}{\omega^\ast}\right)^2\right)^{3/2}} = -\frac{2 n_\Phi}{\Delta_0\omega^\ast}. \label{eq:beta0} \end{equation} Combining \cref{eq:alphatilde,eq:beta0} as in \cref{eq:alphabeta} we find precisely the form of $\tilde\Delta(B)$ in \cref{eq:Deltatilde}. \section{Excitonic Insulator Temperature Dependence} \label{sec:temperature} Here we find the leading nonzero temperature corrections to the $T=0$ results presented in the main text for the excitonic insulator. At nonzero $T$ the mean field free energy is \begin{gather} \Omega_X = \frac{\Delta^2}{V} - T \int_{-\infty}^\infty \dd \epsilon \,g(\epsilon) \ln\left(1+e^{-\epsilon/T}\right) \label{eq:OmegaXT} \\ g(\epsilon) = \frac{1}{N}\sum_{\mathbf{k},\alpha} \mathcal{A}(\epsilon-E_\alpha(\mathbf{k})), \end{gather} where $g(\epsilon)$ is the density of states, written in terms of the spectral density $\mathcal{A}$, which is simply a $\delta$-function in the absence of disorder. 
Including a nonzero magnetic field via the prescriptions already discussed and employing the Poisson summation formula we find \begin{equation} g(\epsilon) = \frac{2n_\Phi}{\omega^\ast}\sqrt{\frac{\epsilon^2}{\epsilon^2-\Delta^2}}\Theta(\epsilon^2 - \Delta^2) \left\{1 + 2\sum_{p=1}^\infty \cos\left[2\pi p \left(\frac{2\sqrt{\epsilon^2 - \Delta^2}+\epsilon_0}{\omega^\ast} + \gamma\right)\right]\right\}, \end{equation} where $\Delta = \Delta(B,T)$ is the full field- and temperature-dependent gap function, and $\Theta(x)$ is the Heaviside theta function, which equals $1$ for $x>0$ and vanishes otherwise. Here the theta function gives the gap in the spectrum--there are no states for energies with $\abs{\epsilon}<\abs{\Delta}$. Using this form of the density of states in \cref{eq:OmegaXT} and changing to a new integration variable $\xi$ defined through $\epsilon = \sqrt{\xi^2 + \Delta^2}$ we obtain \begin{multline} \label{eq:OmegaT} \Omega(B,T) = \frac{\Delta^2}{V} - \frac{2n_\Phi T}{\omega^\ast} \int_0^\infty \dd\xi \ln\left[2\left(1+\cosh\left(\frac{\sqrt{\xi^2+\Delta^2}}{T}\right)\right)\right] \\ \times \left\{1 + 2\sum_{p=1}^\infty \cos\left[2\pi p \left(\frac{2\xi + \epsilon_0}{\omega^\ast} + \gamma\right)\right] \right\}. \end{multline} As noted above, the gap $\Delta$ is itself a function of temperature, and for $B=0$ obeys \begin{equation} \frac{1}{V} = \nu \int_0^\infty \dd \xi \frac{\tanh\left(\frac{\sqrt{\xi^2+\Delta^2}}{2T}\right)}{2\sqrt{\xi^2+\Delta^2}}, \end{equation} where $\nu$ is the density of states in two dimensions. We can expand around $T=0$ to give \begin{equation} \tanh\left(\frac{\sqrt{\xi^2+\Delta^2}}{2T}\right) \approx 1-2e^{-\sqrt{\xi^2+\Delta^2}/T}, \end{equation} allowing us to separate the gap equation into a temperature independent ($T=0$) part, determining the zero-temperature value of the gap, $\Delta(T=0)$, and a nonzero temperature part providing a correction to $\Delta$ that is exponentially small for temperatures $T\ll\Delta(T=0)$. (The same can be done for $B\neq0$ as well.) This defines what we mean by the low-temperature regime. Returning now to the free energy, we can approximate the temperature dependent factor in the low temperature regime, \begin{multline} T\,\ln\left[2\left(1+\cosh\left(\frac{\sqrt{\xi^2+\Delta^2}}{T}\right)\right)\right] = \sqrt{\xi^2+\Delta^2} + 2T\ln\left(1+e^{-\sqrt{\xi^2+\Delta^2}/T}\right) \\ \approx \sqrt{\xi^2+\Delta^2} + 2 T e^{-\sqrt{\xi^2+\Delta^2}/T}. \end{multline} With this we then separate \cref{eq:OmegaT} into two terms. It is straightforward to confirm that the $T$-independent term reproduces what is found for the $T=0$ free energy of the excitonic insulator after applying the Poisson summation formula. The second term then contains the entirety of the thermally activated contribution to the free energy, which we see is exponentially suppressed--it is at most of order $T e^{-\Delta(T=0)/T}$. Thus, in the low temperature regime $T\ll\Delta(T=0)$ the zero-temperature calculations we have provided in the main text are accurate up to exponentially small corrections. \end{appendix}
\section{Introduction:} Quasi-particle models of quark gluon plasma (qQGP) are phenomenological models devised to explain the non-ideal behaviour of quark gluon plasma (QGP), seen in lattice simulations of quantum chromodynamics (QCD) at finite temperature \cite{bo.1,bo.1b,fo.1,pn.1} and in relativistic heavy ion collisions. There are various quasi-particle models, which may be broadly classified into two groups according to their approach. One approach, say model-I, was first advocated by Goloviznin and Satz \cite{gs.1,go.1,bu.1,ca.1,pe.1,bl.1}, where the thermodynamics (TD) of the quasi-particle system of quarks and gluons is developed by starting from the standard ideal gas expression for the pressure, and all other TD quantities are obtained from it. The second approach, model-II \cite{ba.1,ba.3,ba.3b,ba.3c,ba.3d,sr.1,yi.1,zh.1,co.1,zu.1,ji.1}, starts from the energy density, which is a defined thermodynamic variable in ensemble theories, and then all other TD quantities are derived from it. In model-I, one needs a certain condition for thermodynamic consistency, which was called the TD consistency relation. In model-II, this is not necessary because the model is consistent with statistical mechanics and thermodynamics from the start \cite{ba.4}. Yet, under certain extra conditions it leads to model-I \cite{ba.1}. All these phenomenological models have a few free parameters. There were also some attempts to unify the above two models \cite{br.1,ga.1} in a thermodynamically self-consistent way. Unfortunately, all models fit the lattice gauge theory (LGT) results by varying the free parameters of the model. Hence, one may not be able to differentiate the models based on the fits to LGT data \cite{ba.4}. Model-I has more than two parameters and model-II has only one. Of course, there are other phenomenological models, like the strongly coupled plasma models sQGP \cite{sh.1}, SCQGP \cite{ba.5}, etc., which were developed to study the non-ideal behaviour of QGP. One looks for these QCD motivated phenomenological models because of the limitations of perturbative QCD (pQCD). Even at temperatures as high as 1000 $T_c$, a pQCD expansion to order $g^6$ is needed to fit the LGT data \cite{fo.1}, and again with one fitting parameter. Moreover, it fails to fit the region from near $T_c$ up to $T \approx 3 T_c$. It is interesting to note that phenomenological models like qQGP and SCQGP with a single parameter are able to fit the LGT data of the Wuppertal-Budapest group \cite{fo.1} from 1.5 $T_c$ to 1000 $T_c$ \cite{ba.4,ba.6} reasonably well. Near $T_c$ ($T < 1.5\, T_c$), models based on plasma may not be applicable, and models like sQGP \cite{sh.1}, monopoles \cite{ch.1}, etc. may become relevant. Here we comment on the extensively used Landau formalism of statistical mechanics as applied to QGP and compare our results with the qQGP models discussed above. We see that the standard statistical mechanics of Landau \cite{la.1} may be used to study QCD motivated quasi-particle models, and that it leads to modified expressions for derived thermodynamic quantities like pressure, entropy, etc., in contradiction to earlier works \cite{gs.1,go.1,bu.1,ca.1,pe.1,bl.1}. Only the expression for the energy density has the ideal gas form; the pressure, entropy, etc. all have an extra temperature dependent term in addition to the ideal gas expression, due to the temperature dependent quasi-particle mass. These results are, however, consistent with Pathria's formalism \cite{pa.1} of statistical mechanics applied to qQGP \cite{ba.1}. 
\section{Landau's formalism:} Many authors of qQGP (model-I) start from the expression for the pressure, $P$, of an ideal gas, claiming to follow Landau's formalism. However, here we would like to point out that this is valid only for particles with constant mass. Of course, this was pointed out and discussed in detail by Gorenstein and Yang \cite{go.1}, but their demand that the expressions for both pressure and energy density must be of the ideal gas form is an over-specification, or an extra constraint. In Pathria's formulation of statistical mechanics of qQGP \cite{ba.1}, the energy density is defined as a statistical average in the canonical or grand canonical ensemble, and all thermodynamic quantities, including the pressure, are derived from it. The energy density is of the ideal gas form, but the pressure is not. The simultaneous demand that both the energy density and the pressure must be of the ideal gas form leads to thermodynamic inconsistency, requiring extra constraints to satisfy the thermodynamic relations. If one wants to develop statistical mechanics and thermodynamics starting from the pressure, one may follow Landau's formalism for a system subjected to external conditions \cite{la.1} (see pages 109-110). In quasi-particle models, the thermal mass, which is a function of temperature, may be taken as an external parameter. That is, the mass is externally controlled, depending on the temperature of the reservoir in the canonical or grand canonical ensemble. Following this concept, we may start with the Gibbs distribution function, \begin{equation} w_n = e^{\alpha + \beta E_n} \,\, , \end{equation} where $E_n(V,\lambda_1, \lambda_2, ...)$ depends on the external parameters $V, \lambda_1, \lambda_2, ...$ . Here, the external parameters may be the volume $V$ and the thermal mass $m_{th}(T)$. $\beta \equiv -1/T$, where $T$ is the temperature of the system, which is equal to the temperature of the reservoir in the canonical ensemble. It must be stressed here that statistical mechanics is a probabilistic theory, and average quantities are related to thermodynamic quantities. One cannot twist and reformulate statistical mechanics for thermodynamics. For example, here one asks what the probability is for the system to have energy, say, $E_r$. The system in the canonical ensemble (CE) is in thermal contact with the reservoir and keeps on exchanging energy, and hence $E_r$ fluctuates around the average value $U = \bar{E_r}$, which we call the thermodynamic internal energy. Once we have $U$, all other thermodynamic quantities like pressure, entropy, etc. may be obtained from the laws of thermodynamics. There is no need for a separate application of statistical mechanics for the pressure and the entropy. We consider here QGP with chemical potential $\mu =0$, and hence the canonical ensemble formalism is sufficient. That is, we consider QGP made up of quasi-particles of quarks and gluons in thermal equilibrium at temperature $T$. In the canonical ensemble $T, V, N$ are the variables, and hence as $T$ changes $m_{th}(T)$ changes; therefore $m_{th}(T)$ acts like an external parameter. $\alpha$ is the normalization factor such that \begin{equation} \sum_n e^{\alpha + \beta E_n} = 1\,\, , \label{eq:g} \end{equation} or, \begin{equation} \alpha = - \ln (\sum_n e^{\beta E_n}) = - \ln Q_N (T,V) \,\,, \label{eq:al} \end{equation} where $Q_N$ is the canonical ensemble partition function. In Ref. \cite{la.1}, Landau formally chose $\alpha = \frac{F}{T}$, where $F$ is the Helmholtz free energy or thermodynamic potential. 
However, at this stage we treat $\alpha$ as a normalization factor (Eq.(\ref{eq:al})), to be fixed later by comparison with the thermodynamic relations. Following Landau \cite{la.1}, we differentiate Eq.(\ref{eq:g}) with respect to its dependences, namely $\alpha$, $\beta$, $V$, etc., \begin{equation} \sum_n e^{\alpha + \beta E_n} \left(\delta \alpha + E_n \delta \beta + \beta \frac{\partial E_n}{\partial \lambda_i} \delta \lambda_i \right) = 0\,\, , \end{equation} where the $\lambda_i$ are external parameters like the volume $V$, the thermal mass, etc. Using the definition of the average, \begin{equation} \delta \alpha + \bar{E} \delta \beta + \beta \overline{\frac{\partial E_n}{\partial \lambda_i}} \delta \lambda_i = 0 \,\, , \end{equation} where the bar refers to the statistical average. It may be further reduced to \begin{equation} \delta \bar{E} - \overline{ \frac{\partial E_n}{\partial V} } \delta V = \frac{\delta (\alpha + \beta \bar{E})}{\beta} + \overline{\frac{\partial E_n}{\partial m}} \frac{\partial m}{\partial \beta} \delta \beta \,\, , \label{eq:var} \end{equation} which may be compared with the thermodynamic relation \begin{equation} \delta U + P \delta V = T \delta S \,\, , \label{eq:td} \end{equation} and we may identify the entropy as \begin{equation} S = -\alpha + \frac{\bar{E}}{T} + \int^T \frac{d\tau}{\tau} \overline{\frac{\partial E_n}{\partial m}} \frac{\partial m}{\partial \tau} \,\, , \end{equation} with $U=\bar{E}$ and $P = - \overline{ \frac{\partial E_n}{\partial V} }$. From $U$ and $S$, we may obtain the Helmholtz free energy $F = U - T S = - P V$ and hence the pressure $P$. Note that all these derived quantities $S$, $F$ and $P$ have an extra temperature dependent term in addition to that of the ideal system. Thus, even the standard notion of entropy is not valid for a quasi-particle system with variable mass. These results are consistent with our earlier formalism \cite{ba.1} following Pathria \cite{pa.1}, with which we were able to fit lattice data using just one parameter \cite{ba.1,ba.3,ba.3b,ba.3c,ba.3d,sr.1,yi.1,zh.1,co.1,zu.1,ji.1}. \section{Landau's formalism for qQGP with bag pressure:} Using Landau's formalism of statistical mechanics with external conditions, we develop a consistent thermodynamics which reduces to our earlier qQGP model, developed following Pathria, where one starts from the energy density and derives all other TD quantities. In this section, we redo the above calculations with a temperature dependent bag pressure or zero-point energy. In the above section, we assumed that the whole energy goes into exciting quasi-particles with thermal masses alone. Here, following Gorenstein and Yang \cite{go.1}, we also introduce a temperature dependent vacuum energy. Hence, $E_n$ depends on another external parameter $B(T)$ in addition to the thermal mass $m(T)$, and we have one more term due to the variation in $B(T)$, which immediately leads to a modification of Eq.(\ref{eq:var}). Let us again start from Landau's steps with the external parameters $m(T)$ and $B(T)$ (the $\lambda_i$) in addition to $V$. Hence, we have \begin{equation} \delta \alpha + \bar{E} \delta \beta + \beta \overline{ \frac{\partial E_n}{\partial V} } \delta V + \beta \overline{ \frac{\partial E_n}{\partial \lambda_i} } \delta \lambda_i = 0 \,\, , \end{equation} which may be reduced to \begin{equation} \delta (T \alpha) = - \left(\frac{\bar{E}}{T} - \alpha \right) \delta T + \overline{ \frac{\partial E_n}{\partial V} } \delta V + \overline{ \frac{\partial E_n}{\partial \lambda_i} } \delta \lambda_i \,\, . 
\label{la.f} \end{equation} There is a summation over the index $i$. In Landau's case, $V$ is the only external parameter and the last term in the above equation is absent. Hence, by choosing $T \alpha = F$ and $\bar{E} = U$, the above reduces to \begin{equation} \delta F = - S \delta T + \overline{ \frac{\partial E_n}{\partial V} } \delta V \,\, , \end{equation} a valid thermodynamic relation. Note that $T \alpha = F$ is an assumption which leads to the thermodynamic relation \begin{equation} \delta F = - S \delta T - P \delta V \,\, . \end{equation} In our case, $\lambda_i$ is $m(T)$ or $B(T)$, which are functions of $T$, and hence Eq.(\ref{la.f}) may be written as \begin{equation} \delta (T \alpha) = - \left(\frac{\bar{E}}{T} - \alpha - \overline{ \frac{\partial E_n}{\partial \lambda_i} } \frac{\partial \lambda_i}{\partial T}\right) \delta T + \overline{ \frac{\partial E_n}{\partial V} } \delta V \,\, . \end{equation} To compare the above equation with valid thermodynamic relations, we have three possibilities. The first one is the same as that of Landau, provided $\overline{ \frac{\partial E_n}{\partial \lambda_i} } \frac{\partial \lambda_i}{\partial T} = 0$, which immediately gives the so-called TD consistency relation, and one arrives at model-I of the quasi-particle models. The second choice would be to take the internal energy to be $U = \bar{E} - T \overline{ \frac{\partial E_n}{\partial \lambda_i} } \frac{\partial \lambda_i}{\partial T}$, which, however, is inconsistent with the definition of the average in statistical mechanics and hence is not acceptable. The third choice is, with a little algebra, to reduce it to the form \begin{equation} \delta \left(T \alpha - T \int^T \frac{d\tau}{\tau} \overline{\frac{\partial E_n}{\partial \lambda_i}} \frac{\partial \lambda_i}{\partial \tau}\right) = - \left(\frac{\bar{E}}{T} - \alpha + \int^T \frac{d\tau}{\tau} \overline{\frac{\partial E_n}{\partial \lambda_i}} \frac{\partial \lambda_i}{\partial \tau} \right) \delta T + \overline{ \frac{\partial E_n}{\partial V} } \delta V \,\, . \end{equation} Now we choose $F = T \alpha - T \int^T \frac{d\tau}{\tau} \overline{\frac{\partial E_n}{\partial \lambda_i}} \frac{\partial \lambda_i}{\partial \tau} = - P V$ and $U = \bar{E}$, and the above equation reduces to the valid thermodynamic relation Eq.(\ref{eq:td}), with $S = \frac{U}{T} - \alpha + \int^T \frac{d\tau}{\tau} \overline{\frac{\partial E_n}{\partial \lambda_i}} \frac{\partial \lambda_i}{\partial \tau}$. Thus, for the pressure, for example, we get \begin{equation} P = - \frac{T}{V} \left( \alpha - \int^T \frac{d\tau}{\tau} \left[\overline{\frac{\partial E_n}{\partial m}} \frac{\partial m}{\partial \tau} + \overline{\frac{\partial E_n}{\partial B}} \frac{\partial B}{\partial \tau} \right] \right) \,\, . \end{equation} Substituting for $E_n$, which includes the vacuum energy contribution \cite{go.1}, the above reduces to \begin{equation} P = P_{id} - B(T) + \frac{T}{V} \int^T \frac{d\tau}{\tau} \left[\overline{\frac{\partial E_n}{\partial m}} \frac{\partial m}{\partial \tau} + V \frac{\partial B}{\partial \tau} \right] \,\, . \end{equation} The above expressions are thermodynamically consistent as they stand. Both $m(T)$ and $B(T)$ are independent functions of $T$ which may be modelled. However, if we impose the extra condition that \begin{equation} \left[ \overline{\frac{\partial E_n}{\partial m}} \frac{\partial m}{\partial \tau} + V \frac{\partial B}{\partial \tau} \right] = 0 \,\, , \label{eq:tcr} \end{equation} then we get qQGP model-I, where $B(T)$ and $m(T)$ are related by the above equation. 
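As a simple numerical illustration of this point, one can compare the pressure of model-II, obtained from the energy density through the thermodynamic relation $\partial(P/T)/\partial T = \varepsilon/T^2$, with the naive ideal gas pressure evaluated at the same thermal mass. The sketch below is a toy calculation (one bosonic species with Boltzmann statistics, a hypothetical thermal mass $m(T) = aT$, natural units, and purely illustrative parameter values), not a fit to any lattice data:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

g_dof, a = 16.0, 2.0     # degeneracy and thermal-mass slope (toy values)

def eps(T):    # ideal-gas energy density of a Boltzmann gas, mass m(T) = a*T
    return (g_dof/(2*np.pi**2))*T**4*(3*a**2*kn(2, a) + a**3*kn(1, a))

def P_id(T):   # naive ideal-gas pressure with the same thermal mass
    return (g_dof/(2*np.pi**2))*T**4*a**2*kn(2, a)

def P_model2(T, T0=0.2):
    # P/T = P(T0)/T0 + int_{T0}^T eps(tau)/tau^2 dtau;
    # the integration constant is fixed by P(T0) = P_id(T0) for illustration.
    I = quad(lambda tau: eps(tau)/tau**2, T0, T)[0]
    return T*(P_id(T0)/T0 + I)

for T in (0.5, 1.0, 2.0):
    print(T, P_model2(T)/P_id(T))   # ratio deviates from 1
\end{verbatim}
The ratio deviates appreciably from unity, illustrating that the pressure of a quasi-particle system with a temperature dependent mass is not of the ideal gas form.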
Eq.(\ref{eq:tcr}) is what they wrongly called the TD consistency relation; it is not necessary, but it simplifies the model. Our model-II corresponds to $B = 0$, which means that all the energy resides in the quasi-particle modes. \section{Conclusions:} Following Landau's formalism of statistical mechanics for a system subjected to external conditions \cite{la.1}, we developed the statistical mechanics and thermodynamics of the quasi-particle system of QGP. We arrive very naturally at our earlier formalism of qQGP \cite{ba.1}, developed using Pathria's formalism of statistical mechanics \cite{pa.1}, where one starts from the expression for the energy density. When we apply the so-called TD consistency relation, we recover the quasi-particle model-I \cite{go.1}. Therefore, the TD consistency relation is not needed to study QGP, and the qQGP model with the TD consistency relation may be regarded as a special case. If one starts from the ideal gas expression for the pressure to develop the thermodynamics of a quasi-particle system, one always ends up with a thermodynamic inconsistency. Thus, Landau's formalism of statistical mechanics with external conditions \cite{la.1} clearly shows that the pressure of a quasi-particle system is not of the ideal gas form, in contradiction with many quasi-particle models (model-I).
\section{Introduction} Swimming magnetic microrobots have attracted much attention in recent years. These robots can be used for operation in hard-to-reach environments of the human body and for performing safety-critical medical operations such as targeted therapy and tissue removal~\cite{peyer2013bio}. Despite recent advances in microfabrication and actuation technologies for swimming microrobots, the systematic design of automatic motion control systems for these magnetic microswimmers remains an open problem to date. Susceptibility to gravity and bodily fluid flows, operation in low-Reynolds-number regimes, the low accuracy of measurement devices at small scales, and actuator saturation are a few of the interesting challenges that arise in closed-loop control of this class of microrobots. In this document, I will first present a brief overview of the dynamical model of planar swimming helical microrobots. Next, I will highlight some of the inherent challenges in automatic control of these robots. Then, I will formulate the straight-line path following control problem (SLPFCP) for a single swimming microrobot subject to control input saturation. Finally, I will propose some further possible avenues for solving the SLPFCP. \section{Dynamical Model} \begin{figure}[h] \centering \includegraphics[width=0.4\linewidth]{./microConfig.png} \caption{A helical microswimmer consisting of a spherical magnetic head attached to a right-handed helix.} \label{fig:micro} \end{figure} The geometry of a helical microrobot is completely determined by the number of turns of the helix $n_\text{h}$, the helix pitch angle $\theta_0$, the helix radius $r_\text{h}$, and the magnetic head radius $r_\text{m}$. Figure~\ref{fig:micro} depicts the configuration of a generic helical microrobot. The frame $x_\text{h} - z_\text{h}$ is the helix coordinate frame, which is attached to the center of the helix $O_\text{h}$. We fix a right-handed inertial coordinate frame in the Euclidean space and denote it by $W$. We denote the unit vectors in the direction of the $x$ and $z$ coordinates of the frame $W$ by $\hat{e}_x$ and $\hat{e}_z$, respectively. Using resistive force theory (RFT), Mahoney \emph{et al.} have derived the dynamical model of a 3D helical microswimmer operating in low-Reynolds-number regimes~\cite{mahoney2011velocity}. In this modeling approach, the velocity of each infinitesimally small segment of the helix is mapped to parallel and perpendicular differential fluid drag forces acting on the segment. Integrating the differential forces in three dimensions, along the length of the helix, the fluidic force and torque acting on the helical part of the robot are obtained. Adding the fluidic forces acting on the head, the dynamical equations of motion of the microswimmer are obtained (see~\cite{mahoney2011velocity} for detailed derivations). For the sake of simplicity, we assume that the microswimmer motion is confined to the $x-z$ plane. We let the position of the center of mass and the velocity of the microswimmer in the inertial frame $W$ be given by $p=[p_\text{x},\,p_\text{z}]^{\top}$ and $v=[v_\text{x},\,v_\text{z}]^{\top}$, respectively. We denote the orientation of the microswimmer in the inertial coordinate frame by $\theta$, which is given by \[ \theta = \text{atan2}(p_\text{z},p_\text{x}). 
\] The dynamics of the planar microswimmer are given by~\cite{mahoney2011velocity} \begin{eqnarray}\label{eq:2D_dyn} \dot{p} & =& A_{\theta}d_{\text{g}} + B_{\theta}u, \end{eqnarray} where \[ A_\theta = R_\theta A_h^{-1} R_{\theta}^{\top},\; B_\theta = -R_\theta A_h^{-1}B_h,\; d_{\text{g}}=-mg \hat{e}_z. \] In the above, $R_{\theta}$ is the rotation matrix from the robot frame to the inertial frame. Also, the constant matrices \begin{equation} \label{eq:ABstruct} A_h = \begin{bmatrix} a_1 & 0 \\ 0 & a_2 \end{bmatrix},\; B_h = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \end{equation} depend on the geometry of the helical microswimmer and the helix drag coefficients in its fluid environment. Moreover, $d_{\text{g}}$ is the gravitational force acting on the robot in the inertial coordinate frame. Finally, $u$ is the frequency of rotation of the microrobot about the helix axis, which is proportional to the frequency of rotation of the actuating uniform magnetic field induced by a set of electromagnetic coils (see Figure~\ref{fig:micro2}). \begin{remark} As is shown in~\cite{mahoney2011velocity}, the constant matrix $B_h$ can be written as \begin{equation}\label{eq:Bh} B_h = ( \xi_{\parallel} - \xi_{\perp} ) B_h^{\prime}, \end{equation} where $\xi_{\parallel}$ and $\xi_{\perp}$ are the helix tangential and normal drag coefficients, and $B_h^{\prime}\in\mathbb{R}^{2\times 1}$ is a constant vector. The difference between $\xi_{\parallel}$ and $\xi_{\perp}$ plays a key role in the control analysis, as discussed later in the document. \end{remark} The microrobot dynamics can then be represented by the nonlinear affine control system \begin{eqnarray}\label{eq:2D_dynb} \dot{x} & =& f(x)+g(x)u, \end{eqnarray} \noindent where $x:=p \in \mathbb{R}^2$ is the state of the system and \[ f(x):= A_{\theta}d_\text{g},\;\; g(x):= B_{\theta},\;\; \theta = \text{atan2}(x_2,x_1). \] \section{Actuation and Sensing Limitations} There are several challenges in automatic control of swimming magnetic microrobots due to the limitations of current actuation and sensing technologies. The most pertinent challenges for controlling the nonlinear system in~\eqref{eq:2D_dynb} are as follows. \begin{figure}[h] \centering \includegraphics[width=0.22\linewidth]{./microConfig2.png} \caption{A rotating uniform magnetic field transduced into forward motion using a helical propeller.} \label{fig:micro2} \end{figure} \begin{itemize} \item[L1] \textbf{Step-out frequency:} The sole propulsion mechanism driving the microswimmer forward is the rotation of the robot about the helix axis\footnote{A classical paper on analysis of helical propulsion of micro-organisms is due to Chwang and Wu~\cite{chwang1971note}.}. This helical rotation is induced by a rotating magnetic field about the helix axis (see Figure~\ref{fig:micro2}). The microrobot body rotation frequency, which is required for maintaining synchrony with the rotating external magnetic field, cannot exceed a certain threshold. This maximum rotational frequency is known as the \emph{step-out frequency}. In particular, we have the constraint \begin{equation} \label{eq:stepout} |u| \leq f_{\text{SO}}, \end{equation} on the control input, where $f_{\text{SO}}$ is the step-out frequency. The step-out frequency for each microswimmer is known \emph{a priori}. 
\item[L2] \textbf{Unknown orientation of the microrobot:} This problem, which becomes more significant in the context of 3D microrobot control, is due to limitations of the optical microscopes used for sensing the position and orientation of the microrobot. In this document, however, we assume that the orientation of the microrobot in the $x-z$ plane is known.
\end{itemize}
\section{Microswimmer Straight Line Path-Following Control Problem}
We would like to solve the following control problem for a magnetic helical microswimmer.

\textbf{Straight Line Path-Following Control Problem (SLPFCP).} Consider a planar magnetic microswimmer whose dynamics are given by~\eqref{eq:2D_dynb} with step-out frequency $f_{\text{SO}}$. Given the direction vector
\[ \hat{e}_{\theta_r} = \begin{bmatrix} \cos(\theta_r) \\ \sin(\theta_r) \end{bmatrix}, \]
for some constant angle $\theta_r$, make the microrobot converge to the line
\begin{equation}\label{eq:line} \mathcal{P}:=\{ p \in \mathbb{R}^2 : p = t\hat{e}_{\theta_r} ,\, t\in \mathbb{R} \}, \end{equation}
and traverse the line with a bounded velocity such that $|u(t)|\leq f_{\text{SO}}$ for all $t\geq 0$.

\textbf{Solution Strategy.} Our solution, which is based on zeroing proper outputs for the microrobot, unfolds in the following three steps.
\begin{itemize}
\item[\textbf{Step 1}] We consider the output
\begin{equation} \label{eq:output} y = \hat{e}^{\perp}_{\theta_r} {^\top}p, \end{equation}
where $\hat{e}^{\perp}_{\theta_r}:=[-\sin(\theta_r),\; \cos(\theta_r) ]^{\top}$ is the unit vector perpendicular to $\hat{e}_{\theta_r}$ (see Figure~\ref{fig:vtvn}). Zeroing the output in~\eqref{eq:output} corresponds to making the robot converge to the line $\mathcal{P}$.
\begin{figure}[h] \centering \includegraphics[width=0.22\linewidth]{./vtvn.png} \caption{Velocity vector of the microswimmer and the unit direction vectors $\hat{e}_{\theta_r}$ and $\hat{e}^{\perp}_{\theta_r}$.} \label{fig:vtvn} \end{figure}
\item[\textbf{Step 2}] We perform a zero dynamics analysis for the output in~\eqref{eq:output}. In particular, we provide necessary and sufficient conditions for the output to have a well-defined relative degree and derive the induced zero dynamics.
\item[\textbf{Step 3}] We cast the control problem as a quadratic program using a proper control Lyapunov function based on the output in~\eqref{eq:output}.
\end{itemize}
\section{Zero Dynamics Analysis}
Let us consider the output $y=h(x)$ given in~\eqref{eq:output} for the control system in~\eqref{eq:2D_dynb}. Let us define $\Delta \theta := \theta - \theta_r$. Taking the derivative of the output along the vector field of the control system in~\eqref{eq:2D_dynb}, we obtain
\begin{equation} \label{eq:ydot} \dot{y} = L_{f}h(x) + L_g h(x) u, \end{equation}
where
\[ L_f h(x) = -mg \Big( \frac{\sin(\theta)\sin(\Delta \theta)}{a_{1}} + \frac{\cos(\theta)\cos(\Delta \theta)}{a_{2}} \Big), \]
and
\[ L_g h(x) = - \Big( \frac{b_{1}}{a_{1}} \sin(\Delta \theta) + \frac{b_{2}}{a_{2}} \cos(\Delta \theta) \Big). \]
\begin{proposition}\label{prop:rel_deg} Consider a helical microswimmer whose dynamics are given by~\eqref{eq:2D_dynb}. The output in~\eqref{eq:output} has well-defined relative degree one for the microswimmer if and only if
\begin{equation} \label{eq:relDeg} \xi_{\parallel} \neq \xi_{\perp}, \end{equation}
where $\xi_{\parallel}$ and $\xi_{\perp}$ are the tangential and normal drag coefficients of the helical microswimmer.
\end{proposition}
\begin{proof} The output in~\eqref{eq:output} has well-defined relative degree one if and only if $L_g h(x)\neq 0$ for all $x \in \mathcal{P}$. Therefore, the well-defined relative degree condition holds if and only if
\[ L_g h(x) \Big|_{x \in \mathcal{P}} \neq 0 \iff - \Big( \frac{b_{1}}{a_{1}} \sin(\Delta \theta) + \frac{b_{2}}{a_{2}} \cos(\Delta \theta) \Big)\bigg|_{x \in \mathcal{P}}\neq 0. \]
On the set $\mathcal{P}$, we have $\Delta \theta=0$. Therefore, the well-defined relative degree condition holds if and only if $b_2\neq 0$. From~\eqref{eq:Bh}, we deduce that the constant $b_2$, which depends on the physical properties of the microswimmer and its ambient environment, is non-zero if and only if~\eqref{eq:relDeg} holds. \end{proof}

\noindent \textbf{Derivation of the Zero Dynamics.} Under the well-defined relative degree condition given by~\eqref{eq:relDeg}, the zero dynamics manifold $\mathcal{Z}$ associated with the output~\eqref{eq:output} is the set $\mathcal{P}$ given by~\eqref{eq:line}. The zero dynamics of the microrobot, when the output~\eqref{eq:output} is zeroed, can be derived as follows. Under the well-defined relative degree condition in~\eqref{eq:relDeg}, the control input
\begin{equation} \label{eq:ustart} u^{\star} = -\frac{L_f h(x)}{L_g h(x)}\bigg|_{x\in \mathcal{P}}, \end{equation}
makes the zero dynamics manifold $\mathcal{P}$ invariant. Consider the coordinate transformation
\begin{equation} \label{eq:stateCoord} \begin{bmatrix} \eta \\ z \end{bmatrix} := \begin{bmatrix} \hat{e}^{\perp}_{\theta_r} {^\top} x \\ \hat{e}_{\theta_r} {^\top} x \end{bmatrix}. \end{equation}
In the new coordinates, the zero dynamics manifold is given by
\begin{equation}\label{eq:zeroDynMnfld} \mathcal{Z} = \{ (\eta , z) : \eta = 0 \}. \end{equation}
Furthermore, it can be shown that the zero dynamics are given by
\begin{equation}\label{eq:preZerDyn2} \dot{z} = \frac{-mg}{a_1 b_2} \big( b_1 \cos(\theta_r) + b_2 \sin(\theta_r) \big). \end{equation}
Therefore, the velocity of the microrobot on the straight line $\mathcal{P}$ is seen to be equal to
\[ \frac{-mg}{a_1 b_2} \big( b_1 \cos(\theta_r) + b_2 \sin(\theta_r) \big). \]
\begin{remark} It is possible to regulate the velocity of the microrobot on $\mathcal{P}$ by a dynamic output stabilization approach. In this document, however, we do not pursue this straight-line maneuvering control problem. \end{remark}
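For a quick numerical sanity check of the analysis above, the following minimal sketch (in Python) verifies the relative degree condition of Proposition~\ref{prop:rel_deg} and evaluates $u^{\star}$ in~\eqref{eq:ustart} together with the line-traversal velocity in~\eqref{eq:preZerDyn2}. All parameter values are hypothetical placeholders, not identified microswimmer parameters.
\begin{verbatim}
import numpy as np

# Hypothetical placeholder parameters: entries of A_h and B_h in the model,
# mass, gravitational acceleration, and the line direction angle.
a1, a2 = 2.0e-6, 3.5e-6
b1, b2 = 4.0e-9, 7.0e-9
m, g = 1.0e-8, 9.81
theta_r = np.pi / 4

def Lf_h(theta):
    # Drift term L_f h(x) of the output derivative.
    d = theta - theta_r
    return -m * g * (np.sin(theta) * np.sin(d) / a1
                     + np.cos(theta) * np.cos(d) / a2)

def Lg_h(theta):
    # Input term L_g h(x); on P (where d = 0) it reduces to -b2/a2.
    d = theta - theta_r
    return -(b1 / a1) * np.sin(d) - (b2 / a2) * np.cos(d)

# Relative degree is well defined on P iff b2 != 0, i.e., xi_par != xi_perp.
assert abs(Lg_h(theta_r)) > 0.0

u_star = -Lf_h(theta_r) / Lg_h(theta_r)   # input rendering P invariant
z_dot = -m * g * (b1 * np.cos(theta_r) + b2 * np.sin(theta_r)) / (a1 * b2)
print(f"u* on P: {u_star:.4g}, line-traversal velocity: {z_dot:.4g}")
\end{verbatim}
Since $\Delta\theta = 0$ on $\mathcal{P}$, the assertion reduces to checking $b_2 \neq 0$, in agreement with Proposition~\ref{prop:rel_deg}.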
\section{Possible Further Avenues for the Microrobot SLPFCP}
There are two possible avenues for continuing further. One is based on CLF-based quadratic programs, as proposed by Ames \emph{et al.} in~\cite{galloway2015torque,ames2014rapidly}. The other one is to solve this control problem as an optimal decision strategy (ODS) using the framework of Spong \emph{et al.} in~\cite{spong1986control,spong1984control}. I believe that the ODS framework, which has a nice geometric flavor due to its ``desired velocity'' assignment in state space, includes the CLF-based quadratic program as a special case in the setting of the microrobot SLPFCP. However, I need to investigate further.
\subsection{Formulating the Control Problem as a CLF-based Quadratic Program}
Let us consider the nonlinear control system~\eqref{eq:2D_dynb}, the output in~\eqref{eq:output} for the system, and the state coordinate transformation in~\eqref{eq:stateCoord}. Using the framework in~\cite{galloway2015torque,ames2014rapidly}, we can consider the control Lyapunov function
\[ V_{\epsilon}(\eta) = \frac{1}{2\epsilon^2} \eta^2, \]
where $\epsilon$ is a positive constant that affects the rate of convergence to the zero dynamics manifold $\mathcal{P}$. Using the control input
\[ u = u^{\star} + \frac{\mu}{L_g h(x)}, \]
\noindent where $u^{\star}$ is the feed-forward term in~\eqref{eq:ustart}, which makes $\mathcal{P}$ forward invariant, we can cast the control design as the following quadratic program.
\begin{equation}\label{eq:CLFQP} \begin{aligned} & \underset{\mu}{\text{min.}} \qquad \mu^{\top}\mu \\ & \text{s. t.} & L_{\bar{f}} V_{\epsilon}(\eta,z) + L_{\bar{g}} V_{\epsilon}(\eta,z)\mu + \frac{c_3}{\epsilon}V_{\epsilon}(\eta,z) \leq 0,\\ & & \frac{\mu}{L_g h(x)} \geq (-f_{\text{SO}} - u^{\star} ),\\ & & \frac{\mu}{L_g h(x)} \leq (f_{\text{SO}} - u^{\star} ). \end{aligned} \end{equation}
\subsection{Formulating the Control Problem as an ODS}
In the ODS setting, the control input is chosen, pointwise in time, as the minimizer of the deviation between the microrobot velocity and a desired reference vector field $v^{\text{d}}(p)$, subject to the step-out constraint:
\begin{equation}\label{eq:ODS} \begin{aligned} \underset{u}{\text{min.}} \; & \big\{ \frac{1}{2} u^{\top}g^{\top} Q g u - ( A_\theta d_\text{g} - v^{\text{d}}(p) )^{\top} B_\theta u \big\} \\ \text{s. t.} & \;\;\; A_{\text{ODS}} u \leq b_{\text{ODS}}, \end{aligned} \end{equation}
\noindent where
\[ v^{\text{d}}(p) = R_{\theta_\text{r}} \begin{bmatrix} \Delta_\text{LOS} \\ -\| p\| \sin(\Delta \theta) \end{bmatrix}, \quad A_{\text{ODS}} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad b_{\text{ODS}} = \begin{bmatrix} f_\text{SO} \\ f_\text{SO} \end{bmatrix}. \]
Note that, along the reference model $\dot{p} = v^{\text{d}}(p)$, the transversal coordinate $\hat{e}^{\perp}_{\theta_\text{r}} {^\top} p$ satisfies
\[ \hat{e}^{\perp}_{\theta_\text{r}} {^\top} \dot{p} = -\| p \| \sin(\Delta\theta) = -\| p \| \big[ \sin(\theta)\cos(\theta_\text{r}) - \cos(\theta)\sin(\theta_\text{r}) \big] = -\big[ p_\text{z} \cos(\theta_\text{r}) - p_\text{x} \sin(\theta_\text{r}) \big] = - \hat{e}^{\perp}_{\theta_\text{r}} {^\top} p, \]
so the distance to the line decays exponentially along the reference model.
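As a sanity check of the ODS formulation, the following minimal sketch (in Python; all numerical values are hypothetical placeholders) computes the pointwise ODS input for the special case where the weight is the identity. Since the decision variable $u$ is scalar and the constraint set is the interval $[-f_{\text{SO}}, f_{\text{SO}}]$, the quadratic program reduces to clipping the unconstrained minimizer of $|A_\theta d_\text{g} + B_\theta u - v^{\text{d}}(p)|^2$:
\begin{verbatim}
import numpy as np

theta_r, Delta_LOS, f_SO = np.pi / 4, 1.0e-3, 24.0   # placeholder values

def rot(th):
    # Planar rotation matrix R_theta.
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

def v_des(p):
    # LOS reference field v^d(p) from the ODS formulation above.
    th = np.arctan2(p[1], p[0])
    return rot(theta_r) @ np.array([Delta_LOS,
                                    -np.linalg.norm(p) * np.sin(th - theta_r)])

def u_ods(p, f_p, g_p):
    # f_p stands for A_theta*d_g and g_p for B_theta, evaluated at p.
    u_unc = g_p @ (v_des(p) - f_p) / (g_p @ g_p)   # unconstrained minimizer
    return np.clip(u_unc, -f_SO, f_SO)             # box constraint |u| <= f_SO
\end{verbatim}
With a general weight $Q \succ 0$, the unconstrained minimizer becomes $(B_\theta^{\top} Q B_\theta)^{-1} B_\theta^{\top} Q (v^{\text{d}} - A_\theta d_\text{g})$; the clipping step is unchanged because the problem remains a scalar convex quadratic over an interval.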
\bibliographystyle{IEEEtran}
\section*{Supplementary Material To ``Quadratic Optimization-Based Nonlinear Control for Protein Conformation Prediction''}
\noindent{\textbf{Notation.}} In these supplementary notes, we will number the equations using (s\#), the references using [S\#], and the figures using SF\#. Therefore, (s1) refers to Equation~1 in the supplementary notes while (1) refers to Equation~1 in the original article. The same holds true for the references and the figures.
\subsection{Entropy-Loss Rate of the Protein Molecule}
There are a few equivalent ways to compute the entropy changes of a protein molecule during folding, including methods based on configurational entropy computations and methods based on thermodynamics-based operational definitions. Here, we will use the Clausius thermodynamics expression in [S1], which is similar to the derivations in~\cite{arkun2010prediction} in that it is based on classical thermodynamics arguments. Based on the Clausius equation, the change of the molecule entropy is given by
\begin{myequation} \Delta S(\pmb{\theta}) = \kappa_0 \Delta \mathcal{G}(\pmb{\theta}), \end{myequation}
\hspace{-1.5ex} where $\kappa_0$ is a temperature-dependent constant and $\mathcal{G}(\cdot)$ is the aggregated free energy of the molecule given by~\eqref{eq:freePot}. Hence,
\begin{myequation} \frac{d}{dt} S(\pmb{\theta}) = \kappa_0 \frac{\partial \mathcal{G}}{\partial \pmb{\theta}} \dot{\pmb{\theta}} = \kappa_0 \frac{\partial \mathcal{G}}{\partial \pmb{\theta}} \big( \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta}) + \mathbf{u}_{c}\big). \label{eq:ineqS} \end{myequation}
\hspace{-1.5ex} Using H\"{o}lder's and the triangle inequalities in~\eqref{eq:ineqS}, it can be seen that
\begin{myequation} |\dot S| \leq \kappa_0 \bigg|\frac{\partial \mathcal{G}}{\partial \pmb{\theta}}\bigg|_1 \bigg( \big|\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})\big|_\infty + \big|\mathbf{u}_{c}\big|_\infty\bigg), \end{myequation}
\hspace{-1.5ex} where $|\cdot|_1$ denotes the vector 1-norm.
\subsection{In-Depth Comparison with the Optimal Control Framework in~\cite{arkun2010prediction,arkun2011protein,arkun2012combining}}
One of the inspirations for our ODS-based control scheme for protein folding in~\eqref{eq:ODS} has been the pioneering work by Arkun and collaborators~\cite{arkun2010prediction,arkun2011protein,arkun2012combining}. Here, we provide an in-depth comparison with our work and state possible extensions of the work by Arkun and collaborators~\cite{arkun2010prediction,arkun2011protein,arkun2012combining} to our nonlinear setting.
\begin{itemize}
\item \textbf{Difference in the studied dynamics:} As we stated in Remark 3.1 of the article, we are considering \emph{nonlinear dynamics} (as opposed to the \emph{linear dynamics} in~\cite{arkun2010prediction,arkun2011protein,arkun2012combining}) in the dihedral angle space of the molecule (as opposed to~\cite{arkun2010prediction,arkun2011protein,arkun2012combining}, where the state vector of the system is the Cartesian position of the $\text{C}_\alpha$ atoms).
\item \textbf{Nonlinear extension of~\cite{arkun2010prediction,arkun2011protein,arkun2012combining}:} In the linear time-invariant (LTI) setup considered by Arkun and collaborators~\cite{arkun2010prediction,arkun2011protein,arkun2012combining}, the control input is penalized by using an infinite-horizon LQR cost function to avoid extremely fast decay of excess entropy. In the context of the KCM framework, a cost function in the dihedral angle space can be written, similar to~\cite{arkun2010prediction,arkun2011protein,arkun2012combining}, as
%
\begin{myequation}\label{eq:ArkunCost} J[\mathbf{u}_c] := \int_{0}^{T_f} \big( \tilde{\pmb{\theta}}(t)^\top \mathbf{Q} \tilde{\pmb{\theta}}(t)+ \rho \mathbf{u}_c(t)^\top \mathbf{P} \mathbf{u}_c(t) \big) dt, \end{myequation}
%
\hspace{-1.5 ex} where $\tilde{\pmb{\theta}}:= \pmb{\theta} - \pmb{\theta}^{\text{ref}}$ is the error from the current to a reference conformation, e.g., the native conformation of the protein. Furthermore, the tunable parameter $\rho>0$ is employed to make a trade-off between avoiding the high-energy regions of the landscape and choosing entropically preferred folding pathways by penalizing high-entropy-loss routes. \underline{The similarities with~\cite{arkun2010prediction,arkun2011protein,arkun2012combining} stop here.} Under an LTI control setup (which is not the case for the nonlinear dynamics in~\eqref{eq:KCM_ode}) and letting $T_f \to \infty$, the solution to~\eqref{eq:ArkunCost} takes the form of the feedback control law $\mathbf{u}_c^\ast(\pmb{\theta})=-\mathbf{K}(\rho)\tilde{\pmb{\theta}}$, where the state feedback gain matrix $\mathbf{K}(\rho)$ is the solution to a proper algebraic Riccati equation (ARE). However, this ARE-based feedback control law is not the minimizer of the cost function in~\eqref{eq:ArkunCost} when the dynamics are governed by~\eqref{eq:KCM_ode}. \\
%
\hspace{2pt} Indeed, if the system has nonlinearities, which is the case in this article, there is a need for solving a Hamilton-Jacobi equation (HJE).
HJEs are partial differential equations whose explicit closed-form solutions generally do not exist, even in the case of a simple nonlinearity (see, e.g., [S3]). \\
%
\hspace{3ex} The HJE for the protein folding dynamics given by~\eqref{eq:KCM_ode} can be obtained as follows. Let us consider the nonlinear dynamics in~\eqref{eq:KCM_ode} and the cost function in~\eqref{eq:ArkunCost}. Defining $\tilde{\pmb{\theta}}:=\pmb{\theta} - \pmb{\theta}^{\text{ref}}$ and $\tilde{\mathbf{G}}(\tilde{\pmb{\theta}}):=\mathcal{J}^\top(\tilde{\pmb{\theta}}+\pmb{\theta}^{\text{ref}})\mathcal{F}(\tilde{\pmb{\theta}}+\pmb{\theta}^{\text{ref}})$, the optimal control input is given by [S4]
%
\begin{myequation} \mathbf{u}_c^{\ast}(\tilde{\pmb{\theta}}) = -\frac{1}{2\rho} \mathbf{P}^{-1} \nabla V(\tilde{\pmb{\theta}})+ \tilde{\mathbf{G}}(\mathbf{0}), \label{eq:optimalCont} \end{myequation}
%
\hspace{-1.5 ex} where $V(\cdot)$ is a twice continuously differentiable function that satisfies the HJE
\begin{myequation} \frac{1}{4\rho} \nabla V^\top \mathbf{P}^{-1} \nabla V - ( \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta}) - \tilde{\mathbf{G}}(\mathbf{0}) ) \nabla V - \tilde{\pmb{\theta}}^\top \mathbf{Q} \tilde{\pmb{\theta}} = 0. \label{eq:HJE} \end{myequation}
%
For solving this numerically demanding partial differential equation, there are a few methods, such as employing artificial neural networks (see, e.g., [S4]).
\item \textbf{Computational load:} Even if we neglect the complexity arising from solving HJEs in a nonlinear setting, the framework in~\cite{arkun2010prediction,arkun2011protein,arkun2012combining} requires \emph{at least twice the computational time} of our framework. This is due to the fact that the framework in~\cite{arkun2010prediction,arkun2011protein,arkun2012combining} requires knowing the final folded structure of the protein to find the proper folding pathways. Our work, on the other hand, does not require knowing this final folded structure. Rather, by following the reference vector field coming from the KCM framework, folding starts from a random structure and converges to a local minimum of the aggregate free energy of the molecule. Therefore, a fair comparison with the nonlinear extension of~\cite{arkun2010prediction,arkun2011protein,arkun2012combining} would be to run the simulations in the dihedral angle space twice; namely, finding the final folded structure $\pmb{\theta}^{\text{ref}}$ using the conventional KCM simulation in the first run, and, in the second run, using the nonlinear extension of the optimal control framework in~\cite{arkun2010prediction,arkun2011protein,arkun2012combining} (as formulated in~\eqref{eq:optimalCont} and~\eqref{eq:HJE}) to guide the molecule towards the final conformation $\pmb{\theta}^{\text{ref}}$ obtained from the first run.
\item \textbf{The underlying philosophy of the pointwise optimal control laws:} ODS-based control and its resulting QP-based synthesis~\cite{spong1986control} were originally developed to address some of the challenges arising from integral performance measures such as the one in~\eqref{eq:ArkunCost}. Indeed, the QP-based synthesis avoids computationally intensive schemes and bypasses the Pontryagin maximum principle and/or HJEs under the bound constraints on the allowable input torques.
Even ignoring the computational complexity of such schemes, one might prefer an instantaneous or pointwise optimization procedure that optimizes the present state of the system without regard to future events. In the context of our QP-based control framework, the reference vector field in~\eqref{eq:KCM_vecField} guides the protein towards its native conformation on the folding landscape.
\end{itemize}
\subsection{Proof of Proposition~\ref{prop:LipCont}}
\begin{proof} The solution $\omega$ to the linear programming problem in~\eqref{eq:unique} is the width of the feasible set associated with the QP in~\eqref{eq:ODS}. Furthermore, Lipschitz continuity of $\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})$ at conformation $\pmb{\theta}_0$ implies Lipschitz continuity of $\mathbf{G}(\pmb{\theta})$ in the QP. Following Theorem~1 in~[S5], which is based on the Mangasarian--Fromovitz regularity conditions~[S6], uniqueness and Lipschitz continuity of the control input follow. \end{proof}
\subsection{Proof of Proposition~\ref{prop:trajRel}}
\begin{proof} Employing the control input~\eqref{eq:KCM_control} in~\eqref{eq:KCM_ode} and noting that the iteration in~\eqref{eq:KCM_alg} is the forward Euler difference equation for the closed-loop dynamics with time step $h$, the rest of the proof follows a standard argument from numerical analysis (see, e.g., the proof of Theorem 1.1 in~[S7]). \end{proof}
\subsection{Details of Implementation and Computational Complexity of the Overall Algorithm}
\noindent\textbf{Details of implementation.} A flowchart of the program is depicted in Figure~\ref{fig:flowChart}. The convergence criterion, following PROTOFOLD~I~\cite{kazerounian2005protofold}, is that the magnitudes of all joint equivalent torques be less than or equal to 1 kcal/(mol$\cdot$\AA), in accordance with the experimentally determined values in folded proteins. Our QP-based control input generation block uses the MATLAB \texttt{quadprog} routine with \texttt{interior-point-convex} as the QP numerical solver. After performing the direct kinematics calculations on the chain, we use the \texttt{molecule3D} program [S9] for drawing the protein molecules. Interested readers are encouraged to contact the corresponding author (email:~\texttt{[email protected]}) for obtaining a copy of the program.

\noindent\textbf{Computational complexity.} The big-O notation for the complexity analysis of numerical algorithms can be defined as follows. Given two real-valued functions $f(\cdot)$ and $g(\cdot)$, we say that $f(x)=\mathcal{O}(g(x))$ if there exist a positive real number $M$ and a real number $x_0$ such that $|f(x)|\leq M |g(x)|$ for all $x\geq x_0$. The QP given by~\eqref{eq:ODS} belongs to the class of box-constrained convex quadratic programs that can be efficiently solved using interior-point algorithms (see, e.g., [S8]). The time needed for solving such a QP using interior-point methods is of order $\mathcal{O}(N^3)$, where there are $N-1$ peptide planes in the protein molecule. Furthermore, if there are $N_a$ atoms in the chain, the computational complexity associated with computing the intramolecular forces at each iteration is of order $\mathcal{O}(N_a^2)$. Finally, the computational complexity associated with the direct kinematics calculations is of order $\mathcal{O}(N)$.
\begin{figure}[t] \centering \includegraphics[width=0.52\textwidth]{./figures/flowChart.png} \caption{\small The flowchart of the computer program.
Contact the corresponding author (email:~\texttt{[email protected]}) for obtaining a copy of the program.} \vspace{-1.5ex} \label{fig:flowChart} \end{figure}
\noindent\textbf{Initial conformation of the protein molecule.} In the presented simulation results in the paper, the initial protein conformation vector (in degrees) is given by
\begin{align} \nonumber \pmb{\theta}_0 = \big[ & 28.3, 28.6, 26.1, 28.7, 27.7, 26.0, 26.6, 27.5, 28.8, \cdots \\ \nonumber & 28.8, 26.2, 28.8, 28.8, 27.2, 28.3, 26.2, 27.1, 28.7, \cdots \\ \nonumber & 28.3, 28.8, 27.8, 25.8\big]^{\circ\top}. \end{align}
\begin{figure*}[t] \centering \includegraphics[width=0.65\textwidth]{./figures/protBasic.png} \caption{\small The protein molecule kinematic structure under the $\text{C}_\alpha-\text{CO}-\text{NH}-\text{C}_\alpha$ coplanarity assumption.} \vspace{-1.5ex} \label{fig:protBasic1} \end{figure*}
\section*{Supplementary References}
\small{
\noindent [S1]\hspace{1pt} S. Hikiri, T. Yoshidome, and M. Ikeguchi, ``Computational methods for configurational entropy using internal and Cartesian coordinates,'' \emph{J. Chem. Theory Comput.}, vol. 12, no. 12, pp. 5990-6000, 2016.\\
\noindent [S2]\hspace{1pt} T. Ohtsuka, ``Solutions to the Hamilton-Jacobi equation with algebraic gradients,'' \emph{IEEE Trans. Automat. Contr.}, vol. 56, no. 8, pp. 1874-1885, 2010.\\
\noindent [S3]\hspace{1pt} C.-H. Won and S. Biswas, ``Optimal control using an algebraic method for control-affine non-linear systems,'' \emph{Int. J. Contr.}, vol. 80, no. 9, pp. 1491-1502, 2007.\\
\noindent [S4]\hspace{1pt} T. Cheng, F. L. Lewis, and M. Abu-Khalaf, ``Fixed-final-time-constrained optimal control of nonlinear systems using neural network HJB approach,'' \emph{IEEE Trans. Neural Netw.}, vol. 18, no. 6, pp. 1725-1737, 2007.\\
\noindent [S5]\hspace{1pt} B. Morris, M. J. Powell, and A. D. Ames, ``Sufficient conditions for the Lipschitz continuity of QP-based multi-objective control of humanoid robots,'' in \emph{Proc. 52nd IEEE Conf. Dec. Contr.}, Firenze, Italy, Dec. 2013, pp. 2920-2926.\\
\noindent [S6]\hspace{1pt} O. L. Mangasarian and S. Fromovitz, ``The Fritz John necessary optimality conditions in the presence of equality and inequality constraints,'' \emph{J. Math. Anal. Appl.}, vol. 17, no. 1, pp. 37-47, 1967.\\
\noindent [S7]\hspace{1pt} A. Iserles, \emph{A First Course in the Numerical Analysis of Differential Equations}. Cambridge University Press, 2009, no. 44.\\
\noindent [S8]\hspace{1pt} Y. Wang and S. Boyd, ``Fast evaluation of quadratic control-Lyapunov policy,'' \emph{IEEE Trans. Contr. Syst. Technol.}, vol. 19, no. 4, pp. 939-946, 2010.\\
\noindent [S9]\hspace{1pt} A. Ludwig, \emph{molecule3D} (2016). Accessed: Oct. 2020. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/55231-molecule3d, MATLAB Central File Exchange.
}
\section*{ACKNOWLEDGMENTS}
\vspace{-1ex}
\bibliographystyle{./styles/IEEEtran}
\section{Simulation Results}
\label{sec:sims}
In this section we present simulation results to validate our proposed approach to protein unfolding using the Chetaev instability framework. We consider a protein molecule chain consisting of $N-1=10$ peptide planes with a $22$-dimensional dihedral angle space in~\eqref{eq:KCM_ode}. Our implementation follows the PROTOFOLD~I guidelines~\cite{kazerounian2005protofold,Tavousi2016}.
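At its core, each simulation advances the closed-loop dihedral dynamics $\dot{\pmb{\theta}} = \mathcal{J}^\top(\pmb{\theta})\mathcal{F}(\pmb{\theta}) + \mathbf{u}_c$ with a forward Euler scheme. The following minimal sketch (in Python) shows this loop structure only; the torque field and the unfolding input below are hypothetical placeholders standing in for the PROTOFOLD~I force computations and the optical tweezer/CCF-based inputs used in our actual runs.
\begin{verbatim}
import numpy as np

n, h, n_steps = 22, 1.0e-3, 5000   # dihedral dimension, step size, iterations

def kcm_torque(theta):
    # Placeholder for J^T(theta) F(theta); the true field comes from the
    # intramolecular force computations of PROTOFOLD I.
    return -theta

def u_unfold(theta):
    # Placeholder destabilizing input; stands in for the optical tweezer or
    # CCF-based (Artstein-Sontag) unfolding inputs used in the simulations.
    return 0.05 * np.tanh(theta)

theta = 0.1 * np.ones(n)           # placeholder initial conformation
for _ in range(n_steps):
    # Forward Euler step: theta_{k+1} = theta_k + h * (torque + input).
    theta = theta + h * (kcm_torque(theta) + u_unfold(theta))
\end{verbatim}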
We have performed the unfolding simulations on the folded protein conformation $\pmb{\theta}^\ast$\footnote{$\pmb{\theta}^\ast=$ [1.34 1.37 1.26 1.29 1.42 1.73 1.65 1.65 1.46 1.62 1.49 2.01 1.31 0.99 1.98 1.9 1.59 1.56 1.57 0.93 1.22 1.29]$^\top$ in radians.} that has been obtained using the KCM-based iteration in~\cite{kazerounian2005protofold}. The candidate Chetaev function $C_\text{twz}:\mathbb{R}^{22} \to \mathbb{R}_{+}$ is chosen according to~\eqref{eq:candidChetaev} with $\alpha_C=\tfrac{\pi}{4}$. To visualize this function, we have plotted it using dihedral angle vectors of the form $\pmb{\theta}=\pmb{\theta}^\ast+[\delta \theta_1,\delta \theta_2,\delta \theta_3, 0, \cdots, 0]^\top$, where the triplet $(\delta \theta_1,\delta \theta_2,\delta \theta_3)$ belongs to the three-dimensional sphere of radius $0.1$ radians centered at the origin. Figure~\ref{fig:chetProj} depicts both the folded conformation at $\pmb{\theta}^\ast$ and the visualization of the candidate Chetaev function when translated to the origin.

Using the setup above, we conduct two numerical simulations. The first entails optical tweezer-based unfolding, while the second entails an unfolding simulation using the Artstein-Sontag universal formula in~\eqref{eq:sontagInput} with $p=q=2$ and $C_\text{twz}$ in Proposition~\ref{prop:CCF2} as the CCF. In the first simulation, where the numerical condition of Proposition~\ref{prop:hessUnfold} is verified, we let the optical tweezer displacement be $\mathbf{x}_{\text{twz}}=x_0 \tfrac{\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)}{\|\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)\|}$, where $x_0$ is chosen to be $51$ nm. In other words, we elongate the protein molecule along the vector that connects the $N$-terminus to the $C$-terminus. Furthermore, we utilize a modulated optical trap stiffness of $\kappa(\pmb{\theta})=\kappa_0|\mathbf{r}_\text{NC}(\pmb{\theta}^\ast) - \mathbf{r}_\text{NC}(\pmb{\theta})|^2$, where $\kappa_0=0.16 \text{ pN/nm}$. Figure~\ref{fig:unfoldSim} depicts the free energy of the molecule under the optical tweezer-based and the CCF-based unfolding control inputs, as well as several protein conformations along their respective unfolding pathways (see the Supplementary Material for further discussion).
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{./figures/chetaevProj.png} \caption{\small Visualization of the candidate Chetaev function $C_\text{twz}:\mathbb{R}^{22} \to \mathbb{R}$ based on the depicted folded conformation at $\pmb{\theta}^\ast$.} \vspace{-3.0ex} \label{fig:chetProj} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=0.37\textwidth]{./figures/freeEnergyV2.png} \caption{\small Energy profiles from unfolding numerical simulations under optical tweezer-based control inputs (red curve; molecule labeled by TWZ) and a theoretically-motivated input generated by the Artstein-Sontag universal formula (blue curve; molecule labeled by CCF).} \vspace{-3.0ex} \label{fig:unfoldSim} \end{figure}
\section{Dynamical Model}
\label{sec:dynmodel}
Figure~\ref{fig:micro} depicts the configuration of a generic helical microrobot. The robot geometry is completely determined by the number of turns of the helix $n_\text{h}$, the helix pitch angle $\theta_\text{h}$, the helix radius $r_\text{h}$, and the magnetic head radius $r_\text{m}$. The helix coordinate frame $x_\text{h} - z_\text{h}$ is attached to the helix center $O_\text{h}$. We also fix a right-handed inertial coordinate frame in Euclidean space and denote it by $W$.
We denote the unit vectors in the direction of $x$ and $z$ coordinates of the global frame $W$ by $\hat{e}_x$ and $\hat{e}_z$, respectively. We let the position of the center of mass and the velocity of the microswimmer in the inertial frame $W$ be given by $p=[p_\text{x},\,p_\text{z}]^{\top}$ and $v=[v_\text{x},\,v_\text{z}]^{\top}$, respectively. We denote the angle of the center of mass (COM) position vector in the inertial coordinate frame by $\theta$. Therefore, \begin{equation}\label{eq:theta} \theta := \text{atan2}(p_\text{z},p_\text{x}). \end{equation} \begin{remark} In this paper, we are only interested in controlling the position of the microrobot and not its orientation in the $x-z$ plane. Indeed, our model has \emph{only} two degrees of freedom, namely, the COM Cartesian coordinates given by $p_\text{x}$ and $p_\text{z}$. The angle $\theta$ in~\eqref{eq:theta} and the magnitude $\|p\|$ are the polar coordinates of the position vector $p=[p_\text{x},\,p_\text{z}]^{\top}$. \hfill$\triangle$ \end{remark} \begin{figure}[t] \centering \includegraphics[scale=0.425]{./figures/micro1.png} \caption{The configuration of a magnetic helical microrobot.} \label{fig:micro} \end{figure} The microrobot propulsion mechanism, which is based on transducing the external magnetic field energy to microrobot forward motion, can be described as follows. An external rotating uniform magnetic field, denoted by $H$ in Figure~\ref{fig:micro}, causes the magnetic helical microrobot to rotate about its axis. The resulting rotation about the helix axis in the ambient fluidic environment will then induce a screw-like motion and drive the microswimmer forward. Early analyses of helical propulsion of micro-organisms in fluids were carried out by~\cite{chwang1971note} and~\cite{purcell1977life}. In this paper, we use the control-oriented model developed in~\cite{mahoney2011velocity}. Using resistive force theory (RFT), Mahoney \emph{et al.}~\cite{mahoney2011velocity} have derived the dynamical model of 3D helical microswimmers operating in low-Reynolds-number regimes. In this modeling approach, the velocity of each infinitesimally small segment of the helix is mapped to parallel and perpendicular differential fluid drag forces acting on the segment. Integrating the differential forces along the length of the helix in three dimensions, the fluidic force and torque acting on the helical part of the robot are obtained. Adding the fluidic forces acting on the head, the dynamical equations of motion of the microswimmer are obtained. In this article we only consider the planar motion of the microrobot where the swimmer is rotated around its helical axis and the direction of motion is perpendicular to the plane of microrobot rotation~\cite{zhang2009artificial}. Assuming that the microswimmer motion is confined to the $x-z$ plane, the dynamics of the planar microswimmer are given by (see~\cite{mahoney2011velocity} for detailed derivations) \begin{eqnarray}\label{eq:2D_dyn} \dot{p} & =& A_{\theta}d_{\text{g}} + B_{\theta}u, \end{eqnarray} where \[ A_\theta = R_\theta A_h^{-1} R_{\theta}^{\top},\; B_\theta = -R_\theta A_h^{-1}B_h,\; d_{\text{g}}=-mg \hat{e}_z. \] In the above, $R_{\theta}$ is the rotation matrix from the robot body-fixed frame $x_\text{h}-z_\text{h}$ to the inertial frame $W$. 
Furthermore, the constant matrices
\begin{equation} \label{eq:ABstruct} A_h = \begin{bmatrix} a_1 & 0 \\ 0 & a_2 \end{bmatrix},\; B_h = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \end{equation}
depend on the geometry of the helical microswimmer and the helix drag coefficients in its fluid environment. Moreover, $d_{\text{g}}$ is the net weight force (gravity minus buoyancy) acting on the robot in the inertial coordinate frame. Finally, $u$ is the frequency of rotation of the microrobot about the helix axis, which is proportional to the frequency of rotation of the external actuating uniform magnetic field (see Figures~\ref{fig:micro} and~\ref{fig:los}).

As shown in~\cite{mahoney2011velocity}, the constant control input vector $B_h$ in~\eqref{eq:2D_dyn} can be written as
\begin{equation}\label{eq:Bh} B_h = ( \xi_{\parallel} - \xi_{\perp} ) B_h^{\prime}, \end{equation}
where $\xi_{\parallel}$ and $\xi_{\perp}$ are the helix tangential and normal drag coefficients, and $B_h^{\prime}=[b_1^{\prime}, b_2^{\prime}]^\top\in\mathbb{R}^{2\times 1}$ is a constant vector with $b_1^{\prime}\neq 0$ and $b_2^{\prime}\neq 0$. The difference between the tangential and normal drag coefficients, i.e., $\xi_{\parallel}$ and $\xi_{\perp}$, plays a key role in the control analysis, as discussed in Section~\ref{subsec:zerodyn}.

The microrobot dynamics in~\eqref{eq:2D_dyn} can be written in the nonlinear affine control form
\begin{eqnarray}\label{eq:2D_dynb} \dot{p} & =& f(p)+g(p)u, \end{eqnarray}
\noindent where $p \in \mathbb{R}^2$ is the state of the system and
\[ f(p):= A_{\theta}d_\text{g},\;\; g(p):= B_{\theta},\;\; \theta := \text{atan2}(p_z,p_x). \]
\section{Control Problem Formulation and Solution Strategy}
\label{sec:ContProb}
In this section we formulate the straight-line path following control problem for swimming magnetic helical microrobots and outline our solution strategy.

\textbf{Straight-Line Path Following Control (LFC) Problem.} Consider a planar magnetic microswimmer whose dynamics are given by~\eqref{eq:2D_dynb}. Given a step-out frequency $f_{\text{SO}}$ and a direction vector
\[ \hat{e}_{_{\theta_\text{r}}} = \begin{bmatrix} \cos(\theta_\text{r}) \\ \sin(\theta_\text{r}) \end{bmatrix}, \]
for some constant angle $\theta_\text{r}$, make the microrobot converge to the line
\begin{equation}\label{eq:line} \mathcal{P}:=\{ p \in \mathbb{R}^2 : p = \tau\hat{e}_{_{\theta_\text{r}}} ,\, \tau\in \mathbb{R} \}, \end{equation}
and traverse it with a bounded velocity such that $|u(t)|\leq f_{\text{SO}}$ for all $t\geq 0$.

Given the path $\mathcal{P}$ in~\eqref{eq:line}, we consider the output
\begin{equation}\label{eq:output} y = h(p) := \hat{e}_{_{\theta_\text{r}}}^{\perp^\top} p, \end{equation}
\noindent for the dynamical system in~\eqref{eq:2D_dynb}, where
\[ \hat{e}_{_{\theta_\text{r}}}^{\perp} = \begin{bmatrix} -\sin(\theta_\text{r}) \\ \cos(\theta_\text{r}) \end{bmatrix}, \]
is the unit vector perpendicular to the path $\mathcal{P}$. Indeed, the zero level set of the output in~\eqref{eq:output} is the path $\mathcal{P}$ in~\eqref{eq:line}.

\noindent\textbf{Solution Strategy.} Our solution to the microrobot LFC problem unfolds in the following steps.
\begin{itemize}
\item[] \textbf{Step 1:} In Section~\ref{subsec:zerodyn}, we find the necessary and sufficient condition under which the path $\mathcal{P}$ in~\eqref{eq:line} can be made invariant via the choice of a proper control input for the microrobot closed-loop dynamics.
This invariance property ensures that once the microrobot is initialized on $\mathcal{P}$, it will remain on it for all future time. Furthermore, we derive the constrained dynamics of the microswimmer on the path $\mathcal{P}$.
\item[] \textbf{Step 2:} In Section~\ref{subsec:los}, we consider the reference model
\begin{equation} \label{eq:refmodel} \dot{p} = \alpha_{\text{d}}v^{\text{d}}(p), \end{equation}
where $\alpha_{\text{d}}$ is a positive constant and $v^{\text{d}}(p)$ is the desired closed-loop vector field coming from a line-of-sight guidance law. The constant parameter in the guidance law~\eqref{eq:refmodel} is determined by the constrained dynamics of the microrobot on $\mathcal{P}$ obtained in Step 1. We prove that the LFC problem objective is achieved for the reference model in~\eqref{eq:refmodel}.
%
\item[] \textbf{Step 3:} Having obtained a desired closed-loop vector field that achieves the LFC objective in Step~2, we cast the control problem as an ODS-based quadratic program in Section~\ref{subsec:ods}. This ODS-based quadratic program minimizes, at each point $p$ of the state space, the difference between the open-loop and the reference model vector fields in~\eqref{eq:2D_dynb} and~\eqref{eq:refmodel}, while respecting the control input constraints.
%
\end{itemize}
\section{Concluding Remarks and Future Research Directions}
\label{sec:conc}
Using an ODS-based approach, we presented an optimization-based control solution for path following control of swimming helical magnetic microrobots subject to control input constraints. An LOS-based reference model was chosen for the ODS-based quadratic program. The proposed control methodology leads us to further research avenues for controlling helical magnetic microrobots, such as way-point tracking control in cluttered areas of the human body, three-dimensional maneuvering control, control in the presence of disturbances such as vessel blood flow, and control of a collection of microrobots moving in formation.
\section{CLF-based Quadratic Program Solution}
\begin{itemize}
\item This is the trivial solution after the zero dynamics analysis is done. However, it lacks proper geometric insight. Indeed, this control strategy makes the robot converge to the zero dynamics manifold in a direction perpendicular to the manifold.
\end{itemize}
\section{Introduction}
Swimming microrobots have attracted much attention in recent years due to their potential impact on medicine and micro-manipulation. In \emph{in vivo} medical applications, these robots can be used for performing safety-critical therapeutic and diagnostic medical operations, such as targeted drug delivery in hard-to-reach environments of the human body. In \emph{in vitro} or lab-on-a-chip applications, these robots can be used for cell manipulation and characterization as well as micro-fluid control. See, e.g.,~\cite{hong2017real, huang2016magnetic,nelson2010microrobots, abbott2009should,behkam2006design,behkam2007bacterial} for recent developments in this field.

Microrobots can be categorized based on their morphologies and actuation methods. Microrobot \emph{morphologies} such as cilia-like, eukaryotic-like, and helical morphologies are inspired by biological micro-organisms. The \emph{helical morphology}, which is inspired by bacterial flagella, is argued to provide the best overall choice for \emph{in vivo} applications~\cite{nelson2010microrobots,abbott2009should}. In addition to morphology, the type of actuator is another significant factor in microrobot design.
To date, two classes of actuation methods have been proposed for swimming microrobot propulsion in fluidic environments; namely, \emph{untethered magnetic actuation}~\cite{abbott2009should,honda1996micro,nelson2010microrobots} and \emph{molecular motors}~\cite{behkam2006design,behkam2007bacterial, kosa2007propulsion,kosa2008flagellar}. Actuators that use molecular motors pose numerous technological challenges in microfabrication as well as in wireless power transmission and control. Untethered magnetic actuation mechanisms, on the other hand, remove the need for replicating biological molecular motors and scale well in terms of microfabrication as well as wireless power and control.

Despite recent advances in microfabrication and actuation technologies for swimming magnetic microrobots, the systematic design of automatic motion control algorithms for these microswimmers remains an open problem to date. One of the major control limitations for magnetic helical microrobots is the existence of an upper limit on the rotational frequency of the robot body around its helical axis. The maximum rotation frequency at which the robot remains in synchrony with the external rotating magnetic field is known as the \emph{step-out frequency}, beyond which the microrobot's velocity rapidly declines~\cite{nelson2010microrobots,abbott2009should}.

The typical control approach in the microrobot motion control literature relies on asymptotic tracking of suitably designed reference signals~\cite{vartholomeos2006analysis, arcese2013adaptive,zhang2017adaptive}. However, trajectory tracking controllers are subject to performance limitations, as demonstrated by~\cite{aguiar2005path}. An alternative approach is based on path following control schemes, which rely on making the desired paths invariant and attractive for the microrobot closed-loop dynamics. A few researchers have proposed path following controllers for magnetic microrobots~\cite{xu2015planar,khalil2017rubbing,oulmas2016closed}. However, the proposed path following schemes do not directly address the control input saturation issue.

In this paper, we propose a path following control scheme that makes a given helical magnetic microrobot converge to a desired straight line without violating the input saturation limits. Our feedback control solution is based on the optimal decision strategy (ODS), which is a pointwise optimal control law that minimizes the deviation between the open-loop dynamics vector field and a reference model vector field. Optimal decision strategies have been used in several applications, such as controlling transients in power systems~\cite{thomas1976model} and robotic manipulators with bounded inputs~\cite{spong1986control,spong1984control}. ODS-based strategies belong to the larger family of optimization-based nonlinear controllers~\cite{galloway2015torque,morris2015continuity,powell2015model, Ames17a,bouyar2017}, whose applications in robotics and driverless cars are growing, thanks in part to recent advancements in mobile computation power. Our proposed ODS-based control law for magnetic helical microrobots, which is cast as a quadratic program, minimizes the difference between the microrobot velocity and a line-of-sight (LOS)-based reference vector field, subject to input saturation constraints, at each point of the state space.
The proposed reference vector field is inspired by the line-of-sight (LOS) path following laws that are widely used for underactuated marine craft control (see, e.g.,~\cite{fossen2003line, pettersen2001way,fossen2017direct}).

\noindent\textbf{Contributions of the paper.} This paper contributes to solving the path following control problem for swimming microrobots in \emph{two important ways}. First, the paper develops an LOS-based guidance law for swimming microrobots, inspired by the automatic ship steering literature~\cite{fossen2003line,pettersen2001way,fossen2017direct}. Second, using the LOS-based guidance law, this paper casts the control input computation as a quadratic program induced by the ODS framework~\cite{spong1986control}, which belongs to the wider class of real-time optimization-based controllers.

The rest of this paper is organized as follows. First, we present the dynamical model of planar swimming helical microrobots in Section~\ref{sec:dynmodel}. Next, we formulate the straight-line path following control problem for a single swimming microrobot subject to control input constraints and outline our solution strategy in Section~\ref{sec:ContProb}. Thereafter, we present our ODS-based control scheme for swimming helical microrobots in Section~\ref{sec:Sol}. After presenting the simulation results in Section~\ref{sec:sims}, we conclude the paper with final remarks and future research directions in Section~\ref{sec:conc}.
\section{Simulation Results}
\label{sec:sims}
In this section we present numerical simulation results to validate the performance of the ODS-based control method described in Section~\ref{sec:Sol}. We consider a helical magnetic microrobot with number of turns $n_{_\text{h}}=4$, net effective mass $m_{_\text{h}}=8.9 \,\mu\text{g}$, helix radius $r_{_\text{h}}=0.5 \,\text{mm}$, helix pitch angle $\theta_\text{h}=45^{\circ}$, and magnetic head diameter $d_{_\text{m}}=1\,\text{mm}$. Further, we assume that the microrobot is swimming in a fluid environment with viscosity $\eta_{\text{v}}=0.98\, \text{cP}$ and density $\rho = 970\, \text{kg}/\text{m}^3$ (close to that of water). The chosen parameters, which are taken from~\cite{abbott2009should} and~\cite{mahoney2011velocity}, satisfy the low-Reynolds-number regime condition, and thus the dynamical model in~\eqref{eq:2D_dyn} is valid. Using the above microrobot and fluid parameters, the tangential and normal drag coefficients are found to be (see, e.g.,~\cite{mahoney2011velocity} for the physical identities relating the drag coefficients to the properties of the microswimmer and its ambient fluid environment)
\[ \xi_{\parallel} = 1.21 ,\; \xi_{\perp} = 2.03. \]
We would like the microrobot to follow the straight line
\[ \mathcal{P}:=\{ p \in \mathbb{R}^2 : p = \tau\hat{e}_{_{\pi/4}} ,\, \tau\in \mathbb{R} \}, \]
subject to the control input saturation limit $f_{_\text{SO}}=24\,\text{Hz}$. Since $\xi_{\parallel}\neq \xi_{\perp}$, we conclude from Proposition~\ref{prop:rel_deg} that the straight line $\mathcal{P}$ can be made invariant via feedback. The microrobot speed of line traversal, once constrained to $\mathcal{P}$, is $v^{\star} = 727\; \mu\text{m}/\text{sec}$, which has been computed using~\eqref{eq:microvel}. When the parameters of the robot and its environment are accurately known, the look-ahead distance parameter $\Delta_\text{LOS}$ in the LOS guidance law in~\eqref{eq:los} should be set equal to $v^{\star}/\alpha_{\text{d}}$, where we have chosen $\alpha_{\text{d}}=1$; a minimal closed-loop sketch based on these ingredients is given below.
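The following sketch (in Python) assembles the planar model, the LOS reference field, and the saturated pointwise ODS input with the weight taken as the identity. The matrices $A_h$ and $B_h$ below are hypothetical placeholders rather than the identified values for the microrobot above, so the sketch illustrates the closed-loop structure, not the reported trajectories.
\begin{verbatim}
import numpy as np

theta_r, f_SO, dt, n_steps = np.pi / 4, 24.0, 1.0e-3, 20000
Delta_LOS = 7.27e-4                     # v*/alpha_d with alpha_d = 1 (meters)
A_h = np.diag([2.0e-6, 3.5e-6])         # placeholder A_h
B_h = np.array([4.0e-9, 7.0e-9])        # placeholder B_h
d_g = np.array([0.0, -8.9e-9 * 9.81])   # net weight force -mg e_z

def rot(th):
    # Planar rotation matrix R_theta.
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

p = np.array([1.0e-3, 1.0e-3])          # p(0) = [1 mm, 1 mm]^T
A_h_inv = np.linalg.inv(A_h)
for _ in range(n_steps):
    th = np.arctan2(p[1], p[0])
    f_p = rot(th) @ A_h_inv @ rot(th).T @ d_g          # A_theta d_g
    g_p = -(rot(th) @ A_h_inv @ B_h)                   # B_theta
    v_d = rot(theta_r) @ np.array(
        [Delta_LOS, -np.linalg.norm(p) * np.sin(th - theta_r)])
    u = np.clip(g_p @ (v_d - f_p) / (g_p @ g_p), -f_SO, f_SO)  # ODS input
    p = p + dt * (f_p + g_p * u)                       # forward Euler step
\end{verbatim}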
However, these parameters might not be known \emph{a priori}. In order to investigate the robustness of our proposed ODS-based scheme to parametric uncertainties, we design our control laws with the approximate drag coefficients
\[ \hat{\xi}_{\parallel} = 1.10 ,\; \hat{\xi}_{\perp} = 1.31. \]
We use the ODS-based quadratic program in~\eqref{eq:ODS} with the positive definite matrix
\[ Q = \begin{bmatrix} 1 & 0 \\ -1 & 5 \end{bmatrix}. \]
In our simulation studies, we have initialized the microrobot position at $p(0)=[1\,\text{mm},\; 1\,\text{mm}]^{\top}$. In addition to the ODS-based scheme, we have also employed an input-output feedback linearizing control law, where the control input has been designed with no explicit consideration of saturation limits. However, during the simulations, we saturate this control input at $\pm f_{\text{SO}}$ whenever the values of the input-output feedback linearizing control law go beyond the saturation limits.

Plots (a) and (b) in Figure~\ref{fig:path_nominal} depict the path of the microrobot and the control input time profile under the ODS-based scheme with $f_{\text{SO}}=24$ Hz. Plots (c) and (d) in Figure~\ref{fig:path_nominal} demonstrate the simulation results for the conventional input-output feedback linearizing control law that has been saturated at $f_{\text{SO}}=24$ Hz. As can be seen from these plots, the swimming microrobot achieves good path-following performance under the ODS-based scheme in comparison with the saturated input-output feedback linearizing scheme, under which the microswimmer deviates more from the nominal path.

In order to investigate the performance of our ODS-based scheme under smaller values of the saturation limit, we have run simulations for step-out frequencies $f_{\text{SO}}=12$ Hz and $f_{\text{SO}}=6$ Hz, respectively. Figure~\ref{fig:path_nominal2} depicts the path of the microswimmer and the control input time profiles under these two frequencies. As can be seen from the figure, good path following can still be achieved once the saturation limit is halved from $24$ Hz to $12$ Hz. However, the ODS-based scheme fails once the saturation limit is further halved from $12$ Hz to $6$ Hz.
\begin{remark}[Sensing and Actuation] In order to obtain the planar Cartesian coordinates of a given microrobot, off-the-shelf components such as microscopes and cameras combined with image-processing techniques can be employed for imaging and localizing the microrobot~\cite{nelson2010microrobots}. Three-axis Helmholtz coils, as described in~\cite{bell2007flagella}, can be used to generate rotating magnetic fields in arbitrary directions.
\hfill$\triangle$ \end{remark}
\begin{figure*} \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.9\textwidth]{./figures/ODS_nominal_path.png} \caption{} \end{subfigure} \quad\quad \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.9\textwidth]{./figures/ODS_nominal_u.png} \caption{} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.9\textwidth]{./figures/OUTYsat_nominal_path.png} \caption{} \end{subfigure} \quad\quad \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.9\textwidth]{./figures/OUTYsat_nominal_u.png} \caption{} \end{subfigure} \caption{Microrobot position and control input time profiles under: (a, b) the ODS-based scheme and (c, d) the input-output feedback linearizing control law with $f_\text{SO}=24$ [Hz].} \label{fig:path_nominal} \end{figure*}
\begin{figure*} \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.98\textwidth]{./figures/ODS_half_path.png} \caption{} \end{subfigure} \quad\quad \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.98\textwidth]{./figures/ODS_half_u.png} \caption{} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.98\textwidth]{./figures/ODS_fourth_path.png} \caption{} \end{subfigure} \quad\quad \begin{subfigure}{0.4\textwidth} \includegraphics[width=0.98\textwidth]{./figures/ODS_fourth_u.png} \caption{} \end{subfigure} \caption{Microrobot position and control input time profiles under the ODS-based scheme with: (a, b) $f_\text{SO}=12$ [Hz] and (c, d) $f_\text{SO}=6$ [Hz].} \label{fig:path_nominal2} \end{figure*}
\section{Conclusion}
\label{sec:conc}
This paper formulated the important biochemical process of KCM-based protein unfolding as a destabilizing control analysis/synthesis problem. In light of this formulation and using the Chetaev instability framework, numerical conditions on optical tweezer forces that lead to protein unfolding have been derived. Furthermore, after presenting two types of CCFs, a theoretically-motivated CCF-based Artstein-Sontag unfolding input has been obtained. These findings lead us to further research avenues such as encoding entropy-loss constraints for protein unfolding using the obtained CCFs, investigating folding pathways for two-state proteins, and studying protein folding/unfolding in solutions.
\section*{Supplementary Material To ``Chetaev Instability Framework for Kinetostatic Compliance-Based Protein Unfolding''}
\noindent{\textbf{Notation.}} In these supplementary notes, we will number the equations using (s\#), the references using [S\#], and the figures using SF\#. Therefore, (s1) refers to Equation~1 in the supplementary notes while (1) refers to Equation~1 in the original article. The same holds true for the references and the figures.
\subsection{Schematic of protein folding/unfolding against the free energy landscape}
Figure~\ref{fig:foldingSchem} depicts a schematic of folding/unfolding against the free energy landscape of the protein molecule.
\begin{figure}[h] \centering \includegraphics[width=0.42\textwidth]{./figures/protUnfoldSchematic2.png} \caption{\small Protein folding/unfolding against the free energy landscape of the molecule.} \vspace{-3ex} \label{fig:foldingSchem} \end{figure}
\subsection{Principles of operation of optical tweezers}
\label{subsec:optTweezer}
Optical tweezers, which were invented and developed by Ashkin~[S2], operate based on the principle of force exertion by light on matter due to the change in momentum that light carries.
Figure~\ref{fig:optTwz} depicts a schematic of optical tweezer-based protein unfolding. Optical tweezers can be utilized for unfolding protein molecules by chemically attaching a latex bead (microsphere) to the protein under study. The bead could subsequently be trapped and utilized both as a handle for manipulating the protein molecule and as a tool for measuring displacements at a very small scale. Under the assumptions of a nearly Gaussian beam and a locally harmonic potential energy for a given optical tweezer, the optical trap can be modeled as a Hookean spring~\cite{bustamante2020single}.
\begin{figure}[h] \centering \includegraphics[scale=0.45]{./figures/optTwz.png} \caption{\small Schematic of optical tweezer-based protein unfolding (recreated from~[S1]).} \vspace{-6ex} \label{fig:optTwz} \end{figure}
\subsection{On the choice of the candidate CF}
Figure~\ref{fig:coneChetaevGeom} represents the geometric interpretation associated with the three-dimensional vectors utilized for defining $C_\text{twz}(\cdot)$ in~\eqref{eq:candidChetaev}. \textcolor{black}{The vectors $\mathbf{r}_\text{NC}(\pmb{\theta})$ and $\mathbf{r}_\text{NC}(\pmb{\theta}^{\ast})$ represent the ``length'' of the molecule at a general conformation $\pmb{\theta}$ and the folded conformation $\pmb{\theta}^{\ast}$, respectively. The positive level sets of the candidate CF $C_\text{twz}(\cdot)$ in~\eqref{eq:candidChetaev}, which depends on $\mathbf{r}_\text{NC}(\pmb{\theta})$ and $\mathbf{r}_\text{NC}(\pmb{\theta}^{\ast})$, correspond to conformations at which the protein molecule has been elongated from the folded conformation along desired directions. Indeed, when $\dot{C}_\text{twz}>0$ along the trajectories of the unfolding dynamics on the positive level sets of $C_\text{twz}(\cdot)$, the molecule is being elongated away from its folded conformation.}
\begin{figure}[h] \centering \includegraphics[width=0.25\textwidth]{./figures/coneChetaev.png} \caption{\small When $C_\text{twz}(\pmb{\theta})=0$, $\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)$ belongs to a right circular cone with vertex at the origin, axis parallel to $\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)$, and aperture $2\alpha_C$.} \vspace{-2.5ex} \label{fig:coneChetaevGeom} \end{figure}
\subsection{Proof of Technical Results}
\noindent\textbf{Proof of Lemma~\ref{lem:lemChet}.} Since $\Delta \mathbf{r}_\text{NC}(\pmb{\theta}^\ast,\pmb{\theta}^\ast)=\mathbf{0}$, we conclude that $C_\text{twz}(\pmb{\theta}^\ast)=0$. Let us define the mapping
%
\begin{myequation} \begin{aligned} f: \mathbb{R}^{2N}\backslash \{\pmb{\theta}^\ast\} \to \mathbb{R},\, \pmb{\theta} \mapsto \cos^{-1}(\tfrac{\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)^\top \mathbf{r}_\text{NC}(\pmb{\theta}^\ast)}{|\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)| |\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)|}), \end{aligned} \label{eq:fDefn} \end{myequation}
%
\hspace{-1.1ex}\textcolor{black}{where $f(\pmb{\theta})$ is the angle between $\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)$ and $\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)$.} We claim
\begin{myequation} \mathcal{C}_\text{twz}^{+} = f^{-1}\big( (\alpha_C, \pi - \alpha_C) \big). \label{eq:VplusDefn} \end{myequation}
%
Consider an arbitrary $\pmb{\theta}_0 \in \mathbb{R}^{2N}$ such that $C_\text{twz}(\pmb{\theta}_0) > 0$.
From~\eqref{eq:candidChetaev} and~\eqref{eq:fDefn}, it can be seen that
\begin{myequation} C_\text{twz}(\pmb{\theta}_0) = |\Delta \mathbf{r}_{_\text{NC}}(\pmb{\theta}_0,\pmb{\theta}^\ast)|^2 |\mathbf{r}_{_\text{NC}}(\pmb{\theta}^\ast)|^2 \big( \cos^2(\alpha_C) - \cos^2(f(\pmb{\theta}_0))\big). \end{myequation}
%
Therefore, $C_\text{twz}(\pmb{\theta}_0) > 0$ if and only if $\pmb{\theta}_0 \in \mathcal{C}_\text{twz}^{+}$. Since $f(\cdot)$ is a continuous function, we conclude that $\mathcal{C}_\text{twz}^{+}$ in~\eqref{eq:VplusDefn} is an open set and $\pmb{\theta}^\ast$ belongs to the boundary of $\mathcal{C}_\text{twz}^{+}$. Therefore, $\mathcal{C}_\text{twz}^{+} \cap \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon) \neq \emptyset$ for all $\epsilon \in (0, \epsilon_0]$ for some $\epsilon_0>0$.\hfill$\blacksquare$\\
\noindent\textbf{Proof of Proposition~\ref{prop:propUnfold}.} Considering the smooth candidate Chetaev function in~\eqref{eq:candidChetaev}, the directional derivative $D C_\text{twz}(\pmb{\theta})\mathbf{f}^{\text{unfold}}(\pmb{\theta})$ in the direction of $\mathbf{f}^\text{unfold}(\cdot)$ is given by $\tfrac{\partial C_\text{twz}}{\partial \pmb{\theta}} \mathbf{f}^{\text{unfold}}(\pmb{\theta})$, where
\begin{myequation} \begin{aligned} \frac{\partial C_\text{twz}}{\partial \pmb{\theta}} = 2 \bigg\{ |\mathbf{r}_{_\text{NC}}(\pmb{\theta}^\ast)|^2 \cos^2(\alpha_C) \Delta \mathbf{r}_{_\text{NC}}(\pmb{\theta},\pmb{\theta}^\ast)^\top - \\ \Delta \mathbf{r}_{_\text{NC}}(\pmb{\theta},\pmb{\theta}^\ast)^\top \mathbf{r}_{_\text{NC}}(\pmb{\theta}^\ast)\mathbf{r}_{_\text{NC}}(\pmb{\theta}^\ast)^\top \bigg\} \frac{\partial \mathbf{r}_\text{NC}}{\partial \pmb{\theta}}. \end{aligned} \end{myequation}
Therefore, due to Lemma~\ref{lem:lemChet}, the statement of the proposition follows from Theorem~\ref{thm:cheta}.\hfill$\blacksquare$\\
\noindent\textbf{Proof of Proposition~\ref{prop:hessUnfold}.} Given the $C^\infty$-smoothness of $P(\cdot)$ in a neighborhood of $\pmb{\theta}^\ast$, we can utilize Taylor's theorem for multivariate functions to arrive at
\begin{myequation} P(\pmb{\theta}) = P(\pmb{\theta}^\ast) + \frac{\partial P}{\partial \pmb{\theta}}\bigg|_{\pmb{\theta}^\ast} \delta\pmb{\theta} + \frac{1}{2}\delta\pmb{\theta}^\top \text{Hess}(P)|_{\pmb{\theta}^\ast} \delta\pmb{\theta} + \text{H.O.T.}, \end{myequation}
where $\pmb{\theta} \in \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon) \cap \mathcal{V}^+$ for some $\epsilon > 0$ and $\delta \pmb{\theta} := \pmb{\theta} - \pmb{\theta}^\ast$. It can be seen that $P(\pmb{\theta}^\ast) = 0$ because $\pmb{\theta}^\ast$ is an equilibrium for the dynamics in~\eqref{eq:mainDyn}. Furthermore, $\tfrac{\partial P}{\partial \pmb{\theta}}|_{\pmb{\theta}^\ast}=\mathbf{0}$ because $\mathbf{f}^\text{unfold}(\pmb{\theta}^\ast)=\mathbf{0}$ and $\tfrac{\partial C_\text{twz}}{\partial \pmb{\theta}}|_{\pmb{\theta}^\ast} =\mathbf{0}$ according to~\eqref{eq:jacobChet}.
Therefore, $P(\pmb{\theta})>0$ on $\mathcal{B}_{\pmb{\theta}^\ast}(\epsilon) \cap \mathcal{C}_\text{twz}^{+}$ if and only if the stated condition in the proposition holds.\hfill$\blacksquare$ \\ \noindent{\textbf{Proof of Proposition~\ref{prop:CCF2}}.} The proof follows, almost verbatim, the proof of Proposition~\ref{prop:CCF}, with the difference that the relation in~\eqref{eq:borderEq2} should be replaced with \begin{myequation} \pmb{\theta} \in \partial\mathcal{C}^{+} \text{ if and only if } \Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)^\top \mathbf{M}(\pmb{\theta}^\ast) \tfrac{\partial\mathbf{r}_\text{NC}}{\partial \pmb{\theta}}=\mathbf{0}. \label{eq:borderEq2} \end{myequation} Since $\mathbf{M}(\pmb{\theta}^\ast)$ is a symmetric and non-singular matrix with $\det(\mathbf{M}(\pmb{\theta}^\ast))=-\sin^2(\alpha_C)\cos^4(\alpha_C)\, |\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)|^6$, the relation in~\eqref{eq:borderEq2} holds if and only if the nonzero vector $\mathbf{M}(\pmb{\theta}^\ast)\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)$ is perpendicular to the collection of $2N$ three-dimensional vectors $\frac{\partial\mathbf{r}_\text{NC}}{\partial \theta_j}$. The rest of the proof follows verbatim the proof of Proposition~\ref{prop:CCF}. \hfill$\blacksquare$ \\ \noindent{\textbf{The Analytical Expression for the Entries of the Hessian}.} The entries of the Hessian of the function $P(\cdot)$ given by~\eqref{eq:CtwzCond} at a folded conformation $\pmb{\theta}^{\ast}$ take the form \begin{myequation} \begin{aligned} \tfrac{\partial^2 P}{\partial \theta_j \partial \theta_k}\big|_{\pmb{\theta}^\ast} = & 2 \sum_{i=1}^{2N} \tfrac{\partial \mathbf{r}_\text{NC}}{\partial \theta_i}(\pmb{\theta}^\ast)^\top \mathbf{M}(\pmb{\theta}^\ast)\big\{ \tfrac{\partial \mathbf{r}_{\text{NC}}(\pmb{\theta}^\ast)}{\partial \theta_j} \tfrac{\partial f_{i,\text{unfold}}(\pmb{\theta}^\ast)}{\partial \theta_k} + \\ & \tfrac{\partial \mathbf{r}_{\text{NC}}(\pmb{\theta}^\ast)}{\partial \theta_k} \tfrac{\partial f_{i,\text{unfold}}(\pmb{\theta}^\ast)}{\partial \theta_j}\big\},\; 1 \leq j,\, k\leq 2N, \end{aligned} \end{myequation} \hspace{-1ex}where $\mathbf{M}(\pmb{\theta}^\ast)$ is defined in Proposition~\ref{prop:propUnfold}, the vector $\mathbf{r}_\text{NC}(\pmb{\theta})\in \mathbb{R}^3$ connects the nitrogen atom in the $\text{N}$-terminus to the carbon atom in the $\text{C}$-terminus, and $f_{i,\text{unfold}}(\pmb{\theta})$ is the $i$-th element of the vector $\mathbf{f}^\text{unfold}(\pmb{\theta})$ in~\eqref{eq:mainDyn}. \subsection{Control Chetaev Functions} In this section we provide some preliminaries on control Chetaev functions due to Efimov and collaborators~[S3]. The reader is referred to the line of work by Efimov, Kellett, and collaborators in~[S3] and [S4] for further details. Consider the nonlinear control-affine system \begin{myequation} \dot{\mathbf{x}}= \mathbf{F}(\mathbf{x}) + \mathbf{G}(\mathbf{x})\mathbf{u}, \label{eq:nonlinSys1} \end{myequation} \hspace{-1ex}where $\mathbf{x}\in\mathbb{R}^n$ is the state vector, $\mathbf{u}\in\mathbb{R}^m$ is the control input, $\mathbf{F}(\mathbf{x}^\ast)=\mathbf{0}$, and $\mathbf{F}:\mathbb{R}^n \to \mathbb{R}^n$ and $\mathbf{G}:\mathbb{R}^n \to \mathbb{R}^{n\times m}$ are locally Lipschitz continuous functions.
A smooth function $V:\mathcal{B}_{\mathbf{x}^\ast}(\epsilon)\to \mathbb{R}$, such that $V(\mathbf{x}^\ast)=0$ and $\mathcal{V}^+\cap \mathcal{B}_{\mathbf{x}^\ast}(\epsilon)\neq \emptyset$ for some $\epsilon_0>0$ and all $\epsilon\in (0,\epsilon_0]$, where $\mathcal{V}^+:=\{\mathbf{x}: V(\mathbf{x})> 0\}$, is called a CCF for the control system in~\eqref{eq:nonlinSys1} if \begin{myequation} \sup\limits_{\mathbf{u}\in \mathbb{R}^m} \{D^- V(\mathbf{x})\mathbf{F}(\mathbf{x}) + (D^- V(\mathbf{x})\mathbf{G}(\mathbf{x}))^\top \mathbf{u}\} > 0, \label{eq:CCF_V} \end{myequation} \hspace{-1ex}for all $\mathbf{x}\in \mathcal{V}^+$. As argued in [S3], if for all $\mathbf{x}\in \mathcal{V}^+$ at which $|D^- V(\mathbf{x})\mathbf{G}(\mathbf{x})|=0$, it holds that $|D^- V(\mathbf{x})\mathbf{F}(\mathbf{x})|>0$, then $V(\cdot)$ is a CCF for~\eqref{eq:nonlinSys1}. Furthermore, in the special case when $\mathbf{G}(\mathbf{x})=\mathbf{I}_{n\times n}$ and $\mathbf{u}\in\mathbb{R}^n$, if $|D^- V(\mathbf{x})|\neq 0$ for all $\mathbf{x}\in \mathcal{V}^+$, then $V(\cdot)$ is a CCF for~\eqref{eq:nonlinSys1}. \section*{Supplementary References} \small{ \noindent [S1]\hspace{1pt} J. L. Killian, F. Ye, and M. D. Wang, ``Optical tweezers: a force to be reckoned with,'' \emph{Cell}, vol. 175, no. 6, pp. 1445--1448, 2018.\\ \noindent [S2]\hspace{1pt} A. Ashkin, ``Acceleration and trapping of particles by radiation pressure,'' \emph{Phys. Rev. Lett.}, vol. 24, no. 4, p. 156, 1970.\\ \noindent [S3]\hspace{1pt} D. Efimov, W. Perruquetti, and M. Petreczky, ``On necessary conditions of instability and design of destabilizing controls,'' in \emph{53rd IEEE Conf. Dec. Contr. (CDC).} IEEE, 2014, pp. 3915--3917.\\ \noindent [S4]\hspace{1pt} P. Braun, L. Gr\"{u}ne, and C. M. Kellett, ``Complete instability of differential inclusions using Lyapunov methods,'' in \emph{57th IEEE Conf. Dec. Contr. (CDC).} IEEE, 2018, pp. 718--724. } \section{Destabilizing control analysis/synthesis for studying the KCM-based protein unfolding} \label{sec:ContProb} In this section we first formulate the KCM-based protein unfolding as a destabilizing control analysis/synthesis problem for~\eqref{eq:KCM_ode}. \textcolor{black}{Next, we present a CF for the analysis of a folded conformation's instability under optical tweezers. Thereafter, we introduce a class of CCFs for the synthesis of destabilizing inputs that elongate protein strands from their folded conformations.} \subsection{\textcolor{black}{Control problem formulation and a trivial solution}} \label{subsec:pucp} It is known that the KCM-based iteration, when initiated in the vicinity of a stable conformation $\pmb{\theta}^\ast$, converges to it~\cite{kazerounian2005protofold,tavousi2015protofold}. The conformation $\pmb{\theta}^\ast$ corresponds to a local minimum of the aggregated free energy function $\mathcal{G}(\cdot)$ in~\eqref{eq:freePot}, where the equivalent torque in~\eqref{eq:tau_KCM} becomes $\pmb{\tau}(\pmb{\theta}^\ast)=\mathcal{J}^\top(\pmb{\theta}^\ast) \mathcal{F}(\pmb{\theta}^\ast)=\mathbf{0}$. Indeed, the conformation $\pmb{\theta}^\ast$ is an isolated and locally asymptotically stable equilibrium point for $\dot{\pmb{\theta}}=\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})$ (see, e.g.,~\cite{tavousi2015protofold,kazerounian2005protofold,mohammadi2021quadratic}). In light of the properties of a folded conformation $\pmb{\theta}^\ast$, we can state our control objective for unfolding as follows.
\noindent\textbf{Protein Unfolding Control Problem (PUCP).} Consider the KCM-based protein conformation dynamics given by~\eqref{eq:KCM_ode} and a stable folded conformation given by $\pmb{\theta}^\ast$. Find a closed-loop feedback control input $\mathbf{u}_c=\mathbf{u}_{\text{unfold}}(\pmb{\theta})$ for~\eqref{eq:KCM_ode} such that the folded conformation $\pmb{\theta}^\ast$ becomes an unstable equilibrium for \begin{equation}\label{eq:KCM_odeUnfold} \dot{\pmb{\theta}} = \mathbf{f}^{\text{unfold}}(\pmb{\theta}):= \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta}) + \mathbf{u}_{\text{unfold}}(\pmb{\theta}). \end{equation} A \emph{trivial solution} to the PUCP is given as follows. \begin{proposition} Consider $\mathbf{u}_\text{unfold}(\pmb{\theta})=-2\mathcal{J}^\top(\pmb{\theta})\mathcal{F}(\pmb{\theta})$ for~\eqref{eq:KCM_odeUnfold}. Under this control input, the folded conformation $\pmb{\theta}^\ast$ becomes a purely repulsing equilibrium for the resulting closed-loop dynamics given by $\dot{\pmb{\theta}}=-\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})$. \end{proposition} \begin{proof} The folded conformation $\pmb{\theta}^\ast$ is an isolated and locally asymptotically stable equilibrium point for $\dot{\pmb{\theta}}=\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})$. Therefore, we can invoke a converse argument~\cite{lin1996smooth} to conclude that there exists a Lyapunov function \begin{equation} V^{\text{fold}}: \mathbb{R}^{2N} \to \mathbb{R}_+,\, \pmb{\theta} \mapsto V^{\text{fold}}(\pmb{\theta}) \end{equation} for these dynamics from which the local asymptotic stability of $\pmb{\theta}^\ast$ follows. It can be easily seen that $V^{\text{fold}}(\cdot)$ is indeed an anti-control Lyapunov function (ALF)~\cite{efimov2011oscillating} for $\dot{\pmb{\theta}}=-\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})$, where $V^{\text{fold}}(\pmb{\theta}^\ast)=0$, $V^{\text{fold}}(\pmb{\theta})>0$ on $\mathcal{N}_{\pmb{\theta}^\ast}\backslash \{\pmb{\theta}^\ast\}$, where $\mathcal{N}_{\pmb{\theta}^\ast}$ is an open neighborhood of $\pmb{\theta}^\ast$, and $-\tfrac{\partial V^{\text{fold}}}{\partial \pmb{\theta}} \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})>0$ for all $\pmb{\theta}\neq \pmb{\theta}^\ast$ on an open neighborhood of $\pmb{\theta}^\ast$ that is a subset of $\mathcal{N}_{\pmb{\theta}^\ast}$. \end{proof} \begin{remark} There are some drawbacks associated with considering $V^{\text{fold}}(\cdot)$ as a candidate Chetaev function for protein unfolding. First, no closed-form expression for $V^{\text{fold}}(\cdot)$ is readily available. Second, $V^{\text{fold}}(\cdot)$ does not inherently encode the direction of the unfolding forces. \end{remark} \subsection{\textcolor{black}{Instability of folded conformations under optical tweezers}} \label{subsec:analysispucp} \textcolor{black}{In this section, we present an analysis to guarantee that the forces due to optical tweezers can be considered as solutions to the PUCP formulated in Section~\ref{subsec:pucp}. Such an analysis provides numerical conditions for real-time control of optical tweezer-based protein unfolding.} Using the torque in~\eqref{eq:unfoldTorque}, the unfolding conformation dynamics in~\eqref{eq:KCM_odeUnfold} become \begin{equation} \dot{\pmb{\theta}} = \mathbf{f}^\text{unfold}(\pmb{\theta}), \label{eq:mainDyn} \end{equation} where $\mathbf{f}^\text{unfold}(\pmb{\theta}):=\mathcal{J}^\top(\pmb{\theta}) \big\{\mathcal{F}(\pmb{\theta}) + \mathcal{F}_\text{twz}(\pmb{\theta}) \big\}$.
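For illustration, the following minimal Python sketch evaluates the right-hand side of~\eqref{eq:mainDyn}; the callables \texttt{J}, \texttt{F}, and \texttt{F\_twz}, which return the chain Jacobian and the generalized force vectors at a given conformation, are hypothetical placeholders rather than part of the cited KCM implementations.
\begin{verbatim}
import numpy as np

def f_unfold(theta, J, F, F_twz):
    # f_unfold(theta) = J(theta)^T (F(theta) + F_twz(theta)), with
    # J(theta) of size 6(2N) x 2N and F, F_twz of size 6(2N).
    return J(theta).T @ (F(theta) + F_twz(theta))
\end{verbatim}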
\textcolor{black}{Accordingly, if the protein molecule is under the effect of the forces generated by an optical tweezer as in~\eqref{eq:optTwzForce}, then the unfolding dynamics are given by~\eqref{eq:KCM_odeUnfold}, where the unfolding control torque vector $\mathbf{u}_\text{unfold}(\pmb{\theta})$ is given by~\eqref{eq:unfoldTorque} and~\eqref{eq:genForce}.} Since the trap stiffness is modulated such that $\kappa(\pmb{\theta}^\ast)=0$ (see Remark~\ref{rem:modul}), the conformation $\pmb{\theta}^\ast$ is an equilibrium of~\eqref{eq:mainDyn}. \textcolor{black}{To analyze the instability of a folded conformation under the effect of optical tweezers,} we are interested in deriving \textcolor{black}{numerical} conditions guaranteeing that the PUCP objectives are met for the unfolding dynamics in~\eqref{eq:mainDyn}. It is natural to expect that these conditions should depend on the direction of the applied force in~\eqref{eq:optTwzForce}. Considering the fact that a protein molecule extends from its folded conformation $\pmb{\theta}^\ast$ under an optical tweezer-based force, we consider the candidate CF \begin{equation} \begin{aligned} C_\text{twz}(\pmb{\theta}) = |\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)|^2 |\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)|^2 \cos^2(\alpha_C) - \\ \big(\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)^\top \mathbf{r}_\text{NC}(\pmb{\theta}^\ast)\big)^2 , \label{eq:candidChetaev} \end{aligned} \end{equation} for the unfolding dynamics in~\eqref{eq:mainDyn}. At any conformation $\pmb{\theta}$, the vector $\mathbf{r}_\text{NC}(\pmb{\theta})\in \mathbb{R}^3$ in~\eqref{eq:candidChetaev} connects the nitrogen atom in the $\text{N}$-terminus to the carbon atom in the $\text{C}$-terminus. Furthermore, the difference vector is given by $\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast):=\mathbf{r}_\text{NC}(\pmb{\theta})-\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)$. Finally, the constant $\alpha_C$ is any angle strictly between $0$ and $\tfrac{\pi}{2}$. \textcolor{black}{Details on the geometric interpretation of this candidate CF are provided in the Supplementary Material. As will be seen in the next section (Propositions~\ref{prop:CCF} and \ref{prop:CCF2}), this candidate CF is also a CCF.}
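As a numerical sketch (assuming the end-to-end vectors $\mathbf{r}_\text{NC}$ have already been computed from the chain kinematics), the candidate CF in~\eqref{eq:candidChetaev} can be evaluated as follows; positivity of the returned value indicates that $\Delta\mathbf{r}_\text{NC}$ points outside the double cone of half-angle $\alpha_C$ around $\mathbf{r}_\text{NC}(\pmb{\theta}^\ast)$.
\begin{verbatim}
import numpy as np

def C_twz(r_nc, r_nc_star, alpha_c):
    # Candidate CF of eq. (candidChetaev); inputs are the 3-vectors
    # r_NC(theta), r_NC(theta*) and the cone half-angle alpha_C.
    dr = r_nc - r_nc_star            # Delta r_NC(theta, theta*)
    return ((dr @ dr) * (r_nc_star @ r_nc_star)
            * np.cos(alpha_c) ** 2 - (dr @ r_nc_star) ** 2)

# C_twz(...) > 0 exactly when theta lies in the set C_twz^+ of
# Lemma (lemChet), i.e., the molecule has been elongated along
# directions transversal to the cone axis.
\end{verbatim}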
The proof of the next lemma \textcolor{black}{is given in the Supplementary Material}. \begin{lemma} Consider the candidate CF $C_\text{twz}: \mathbb{R}^{2N}\to \mathbb{R}$ in~\eqref{eq:candidChetaev}. We have $C_\text{twz}(\pmb{\theta}^\ast)=0$. Moreover, there exists $\epsilon_0>0$ such that $\mathcal{C}_\text{twz}^{+} \cap \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon) \neq \emptyset$, where $\mathcal{C}_\text{twz}^{+}:= \big\{\pmb{\theta} \big|\, C_\text{twz}(\pmb{\theta}) > 0 \big\}$, for all $\epsilon \in (0, \epsilon_0]$. \label{lem:lemChet} \end{lemma} Lemma~\ref{lem:lemChet} is used in the following proposition, \textcolor{black}{whose proof is given in the Supplementary Material}. \begin{proposition} Consider the optical tweezer-based unfolding dynamics in~\eqref{eq:mainDyn}. The folded conformation $\pmb{\theta}^\ast$ is an unstable equilibrium for~\eqref{eq:mainDyn} if % % \begin{equation} P(\pmb{\theta}):=\frac{\partial C_\text{twz}}{\partial \pmb{\theta}} \mathbf{f}^\text{unfold}(\pmb{\theta}) > 0 \text{ for all } \pmb{\theta}\in \mathcal{C}_\text{twz}^{+}, \label{eq:CtwzCond} \end{equation} \noindent where $\mathcal{C}_\text{twz}^{+}$ is defined in Lemma~\ref{lem:lemChet}, and % \begin{equation} \begin{aligned} \frac{\partial C_\text{twz}}{\partial \pmb{\theta}} = 2 \Delta \mathbf{r}_{_\text{NC}}(\pmb{\theta},\pmb{\theta}^\ast)^\top \mathbf{M}(\pmb{\theta}^\ast) \frac{\partial \mathbf{r}_\text{NC}}{\partial \pmb{\theta}}, \end{aligned} \label{eq:jacobChet} \end{equation} % where $\mathbf{M}(\pmb{\theta}^\ast) := |\mathbf{r}_{_\text{NC}}(\pmb{\theta}^\ast)|^2 \cos^2(\alpha_C) \mathbf{I}_3 - \mathbf{r}_{_\text{NC}}(\pmb{\theta}^\ast) \mathbf{r}_{_\text{NC}}(\pmb{\theta}^\ast)^\top$. \label{prop:propUnfold} \end{proposition} \hspace{1ex} The condition given by~\eqref{eq:CtwzCond} can be \textcolor{black}{numerically} checked by investigating the Hessian of the function $P(\cdot)$ at the folded conformation under study as follows. \begin{proposition} Consider the folded protein molecule conformation $\pmb{\theta}^\ast$ for the closed-loop dynamics in~\eqref{eq:mainDyn} and the set $\mathcal{C}_\text{twz}^{+}$ in Lemma~\ref{lem:lemChet}. Considering the Hessian matrix $\text{Hess}(P)|_{\pmb{\theta}^\ast}$, the condition given by~\eqref{eq:CtwzCond} holds if and only if $\delta\pmb{\theta}^\top \text{Hess}(P)|_{\pmb{\theta}^\ast} \delta\pmb{\theta} > 0$ for all $\delta\pmb{\theta}$ such that $\pmb{\theta}^\ast + \delta\pmb{\theta} \in \mathcal{C}_\text{twz}^{+} \cap \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon)$ for some $\epsilon > 0$. \label{prop:hessUnfold} \end{proposition} \begin{remark} \textcolor{black}{The condition in Proposition~\ref{prop:hessUnfold}, which relies on numerical computation of $\text{Hess}(P)$ at the folded conformation $\pmb{\theta}^\ast$, can be used for real-time control of optical tweezers and AFMs in protein unfolding applications. In particular, to approximate each entry $\tfrac{\partial^2 P}{\partial \theta_i \partial \theta_j}$, one can use % \begin{equation} \tfrac{P\big( \pmb{\theta}^\ast + \varepsilon_i \mathbf{1}_i + \varepsilon_j \mathbf{1}_j \big) - P\big( \pmb{\theta}^\ast + \varepsilon_i \mathbf{1}_i \big)-P\big( \pmb{\theta}^\ast + \varepsilon_j \mathbf{1}_j \big) + P( \pmb{\theta}^\ast)}{\varepsilon_i \varepsilon_j}, \label{eq:HessianNum} \end{equation} where $P(\pmb{\theta}^\ast)=0$ because $\mathbf{f}^{\text{unfold}}(\pmb{\theta}^\ast)=\mathbf{0}$ (see the Supplementary Material for the analytical expression of these entries). Moreover, $\varepsilon_i$, $\varepsilon_j$ are small positive constants and the vectors $\mathbf{1}_i$, $\mathbf{1}_j$ are the $i$-th and $j$-th columns of the identity matrix $\mathbf{I}_{2N}$, respectively (see, e.g.,~\cite{dennis1996numerical}).} \textcolor{black}{For real-time implementation, which is beyond the scope of the current article, there is a need for high-speed scanning and high-bandwidth control at the nanoscale, where specialized DSP/FPGA-based embedded hardware has been developed (see, e.g.,~\cite{sun2010field,ragazzon2018model}).} \end{remark}
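As a rough sketch of how~\eqref{eq:HessianNum} could be realized (the callable \texttt{P}, assumed to evaluate $P(\cdot)$ in~\eqref{eq:CtwzCond} at a given conformation, is a placeholder), the Hessian can be approximated by forward differences:
\begin{verbatim}
import numpy as np

def hessian_fd(P, theta_star, eps=1e-5):
    # Forward-difference approximation of Hess(P) at theta*,
    # following eq. (HessianNum) with eps_i = eps_j = eps.
    n = theta_star.size
    e = np.eye(n)
    H = np.zeros((n, n))
    # P(theta*) = 0 at the equilibrium, but we evaluate it anyway.
    P0 = P(theta_star)
    Pd = np.array([P(theta_star + eps * e[i]) for i in range(n)])
    for i in range(n):
        for j in range(i, n):
            Pij = P(theta_star + eps * (e[i] + e[j]))
            H[i, j] = (Pij - Pd[i] - Pd[j] + P0) / eps ** 2
            H[j, i] = H[i, j]       # symmetrize
    return H
\end{verbatim}
The instability certificate of Proposition~\ref{prop:hessUnfold} then amounts to checking $\delta\pmb{\theta}^\top \mathtt{H}\, \delta\pmb{\theta} > 0$ over sampled directions $\delta\pmb{\theta}$ with $\pmb{\theta}^\ast + \delta\pmb{\theta} \in \mathcal{C}_\text{twz}^{+}$.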
\subsection{\textcolor{black}{Synthesis of destabilizing control inputs for unfolding}} \label{subsec:synthesispucp} \textcolor{black}{In this section we present a class of CCFs for the KCM-based molecular dynamics. We then utilize these CCFs in the Artstein-Sontag universal formula, which was first presented in~\cite{efimov2014necessary} in the context of CCFs, to synthesize destabilizing state feedback control torques for folded protein conformations. The following propositions provide two types of CCFs for the KCM-based dynamics of protein molecules, where the CCF in the second proposition is the candidate CF given by~\eqref{eq:candidChetaev}.} \begin{proposition} \textcolor{black}{Consider the KCM-based dynamics in~\eqref{eq:KCM_ode} and a folded conformation $\pmb{\theta}^\ast$. Consider the function $C: \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon_0) \to \mathbb{R}$ given by % \begin{equation} \begin{aligned} C(\pmb{\theta}) = g(|\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)|^2), \end{aligned} \label{eq:candCCF} \end{equation} % where $g: \mathbb{R}_{+} \to \mathbb{R}_{+}$ is any smooth strictly increasing function such that $g(0)=0$. The function $C(\cdot)$ is a CCF for the nonlinear control-affine system in~\eqref{eq:KCM_ode}.} \label{prop:CCF} \end{proposition} \begin{proof} \textcolor{black}{Consider the set $\mathcal{C}^+ :=\{ \pmb{\theta}\in \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon_0) : C(\pmb{\theta})>0 \}$. From~\eqref{eq:candCCF}, we have $C(\pmb{\theta}^\ast)=0$. Also, $\mathcal{C}^+ \cap \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon)\neq \emptyset$ for all $\epsilon\in (0,\epsilon_0]$. Furthermore, $ \tfrac{\partial C}{\partial \pmb{\theta}} = 2 g^\prime(|\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)|^2)\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)^\top \tfrac{\partial\mathbf{r}_\text{NC}}{\partial \pmb{\theta}}, $ where $g^{\prime}(\cdot)$ is the derivative of $g(\cdot)$. We claim that \begin{equation} \partial \mathcal{C}^{+} \cap \mathcal{C}^+ \cap \mathcal{B}_{\pmb{\theta}^\ast}(\epsilon) = \emptyset, \label{eq:CCFclaim} \end{equation} where $\partial \mathcal{C}^{+} := \{\pmb{\theta} : \big|\tfrac{\partial C}{\partial \pmb{\theta}}\big| = 0\}$. To see this, note that \begin{equation} \pmb{\theta} \in \partial\mathcal{C}^{+} \text{ if and only if } \Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)^\top \tfrac{\partial\mathbf{r}_\text{NC}}{\partial \pmb{\theta}}=\mathbf{0}. \label{eq:borderEq} \end{equation} On the set $\mathcal{C}^+$, $\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)\neq \mathbf{0}$. Therefore, the relation in~\eqref{eq:borderEq} holds if and only if $\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)^\top\tfrac{\partial\mathbf{r}_\text{NC}}{\partial \theta_j}=\mathbf{0}$ for all $1 \leq j \leq 2N$. Due to the kinematic structure of the protein molecule, which can be classified as a manipulator with hyper degrees of freedom (see~\cite{mochiyama1999shape}), we have \begin{equation} \begin{aligned} \frac{\partial\mathbf{r}_\text{NC}}{\partial \theta_j} = \big(\Xi(\pmb{\theta},\mathbf{u}_j^0) \mathbf{u}_j^0\big) \times (\mathbf{r}_\text{NC}(\pmb{\theta}) - \mathbf{r}_j(\pmb{\theta})),\, 1\leq j\leq 2N, \end{aligned} \label{eq:CCF_struct} \end{equation} where $\Xi$ is given by~\eqref{eq:kineProtMat} and $\mathbf{r}_j(\pmb{\theta})$ is the position vector of the backbone chain atom ($\text{C}_\alpha$ atom for even $j$ and $\text{N}$ atom for odd $j$) in the $j$-th peptide plane.
Therefore, the relation in~\eqref{eq:borderEq} holds if and only if the nonzero vector $\Delta \mathbf{r}_\text{NC}(\pmb{\theta},\pmb{\theta}^\ast)$ is perpendicular to the collection of $2N$ three-dimensional vectors $\frac{\partial\mathbf{r}_\text{NC}}{\partial \theta_j}$. This geometric relation holds only if the protein molecule conformation $\pmb{\theta}$ is such that all the amino-acid linkages are co-located on the same two-dimensional plane, corresponding to an unfolded structure, which is impossible in a neighborhood of a folded conformation $\pmb{\theta}^\ast$. Due to~\eqref{eq:CCFclaim}, we conclude that $\underset{\mathbf{u}_\text{c}\in \mathbb{R}^{2N}}{\sup}\{ \tfrac{\partial C}{\partial \pmb{\theta}} (\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})+\mathbf{u}_\text{c})\}>0$ for all $\pmb{\theta}\in \mathcal{C}^+$. Therefore, $C(\cdot)$ is a CCF for~\eqref{eq:KCM_ode}.} \end{proof} \begin{proposition} \textcolor{black}{Consider the KCM-based dynamics in~\eqref{eq:KCM_ode} and a folded conformation $\pmb{\theta}^\ast$. The function $C_{\text{twz}}(\cdot)$ in~\eqref{eq:candidChetaev} is a CCF for~\eqref{eq:KCM_ode}.} % \label{prop:CCF2} \end{proposition} \hspace{1ex} \textcolor{black}{Having a family of CCFs afforded by Propositions~\ref{prop:CCF} and~\ref{prop:CCF2}, we can use the following CCF-based Artstein-Sontag universal formula from~\cite{efimov2014necessary} \begin{equation} \mathbf{u}_{\text{c}}(\pmb{\theta}) = -\phi\big[a(\pmb{\theta}),|B(\pmb{\theta})|\big] B(\pmb{\theta}), \label{eq:sontagInput} \end{equation} to synthesize unfolding control inputs that solve the PUCP formulated in Section~\ref{subsec:pucp}, where $a(\pmb{\theta}):=\tfrac{\partial C}{\partial \pmb{\theta}} \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})$, $B^\top(\pmb{\theta}) := \tfrac{\partial C}{\partial \pmb{\theta}}$, and $\phi(a,b):=\tfrac{a-\sqrt[p]{|a|^p+b^{2q}}}{b^2}$ if $b \neq 0$, and $\phi(a,b):=0$ if $b=0$, with any $2q\geq p > 1$ and $q>1$.}
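As a minimal sketch of~\eqref{eq:sontagInput} (with the hypothetical choice $p=q=2$, and assuming callables for $a(\pmb{\theta})$ and $B(\pmb{\theta})$ are available), the destabilizing feedback can be computed as follows; along the closed-loop trajectories one then obtains $\dot{C} = \sqrt[p]{|a|^p + |B|^{2q}} > 0$ whenever $B \neq \mathbf{0}$.
\begin{verbatim}
import numpy as np

def u_c(theta, a_fn, B_fn, p=2.0, q=2.0):
    # u_c = -phi(a, |B|) B, with a = dC/dtheta * J(theta)^T F(theta)
    # and B = (dC/dtheta)^T; p = q = 2 satisfies 2q >= p > 1, q > 1.
    a, B = a_fn(theta), B_fn(theta)
    b = np.linalg.norm(B)
    if b == 0.0:                     # phi(a, 0) := 0
        return np.zeros_like(B)
    phi = (a - (abs(a) ** p + b ** (2.0 * q)) ** (1.0 / p)) / b ** 2
    return -phi * B
\end{verbatim}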
\section{Introduction} Protein molecules make conformational transitions between two or more native conformations to perform a plethora of crucial biological functions~\cite{finkelstein2016protein}. These transitions usually involve two distinct processes of folding and unfolding. During the folding/unfolding process, a polypeptide chain folds/unfolds into/from a biologically active molecule (\textcolor{black}{see the Supplementary Material for an energy landscape schematic of folding/unfolding}). In addition to its role in understanding protein structures, unfolding is used for engineering materials such as hydrogels and anti-viral drugs, as well as for designing protein-based nanomachines~\cite{bergasa2020interdiction,fang2013forced,fu2019harvesting}. Algorithmic prediction of the pathways through which a folded protein molecule unfolds to its denatured state has been among the main tools for investigating the biochemical process of unfolding. AI-based approaches to predicting the folded structure of proteins, such as Google AlphaFold~\cite{alquraishi2020watershed}, are rooted in pattern recognition and cannot address issues such as folding/unfolding pathway prediction. \textcolor{black}{Indeed, AlphaFold's main task has been to predict the most likely structures of folded proteins while ignoring the kinetics and stability of folding/unfolding processes (see, e.g.,~\cite{fersht2021alphafold} for further details).} Alternatively, dynamics-based approaches rooted in physical first principles provide a preferable way of predicting folding/unfolding pathways~\cite{heo2019driven}. To address the high computational burden of physics-based approaches, the framework of the kinetostatic compliance method (KCM), which is based on modeling protein molecules as mechanisms with a large number of rigid nano-linkages that fold under the nonlinear effect of electrostatic and van der Waals interatomic forces, has been developed in the literature~\cite{alvarado2003rotational,kazerounian2005nano,kazerounian2005protofold,tavousi2015protofold,Tavousi2016}. The KCM framework has been successful in investigating the effect of hydrogen bond formation on protein kinematic mobility~\cite{shahbazi2010hydrogen} and in synthesizing molecular nano-linkages~\cite{tavousi2016synthesizing,chorsi2021kinematic}. Furthermore, as demonstrated by~\cite{mohammadi2021quadratic}, this framework lends itself to a nonlinear control theoretic interpretation, where entropy-loss constraints can be encoded in KCM-based folding via proper quadratic optimization-based nonlinear control techniques. Despite its success in investigating the structure of proteins, the KCM framework, except for a simulation study in~\cite{su2007comparison}, has not been systematically utilized for studying unfolding. In this paper, we formulate KCM-based protein unfolding as a destabilizing control analysis/synthesis problem by utilizing the Chetaev instability framework. \textcolor{black}{A systematic development of Chetaev functions (CFs) and control Chetaev functions (CCFs), which are rooted in the seminal work of N. G. Chetaev~\cite{chetaev1961stability}, has recently been initiated by Efimov and collaborators~\cite{efimov2009oscillatority,efimov2011oscillating,efimov2014necessary} and Kellett and collaborators~\cite{braun2018complete}. CFs and CCFs are useful not only for studying the instability of nonlinear control systems, but also for synthesizing destabilizing control inputs that induce oscillations via static feedback, as well as for safety-critical control applications.} In this paper, CFs and CCFs are employed to investigate the instability properties of the folded protein conformations under study. \noindent{\textbf{Contributions of the paper.}} This paper has the following contributions. First, this paper bridges the two fields of protein unfolding in biochemistry and the Chetaev instability framework in nonlinear control theory. Second, this paper provides a nonlinear control theoretic foundation for single-molecule force-based protein denaturation. Single-molecule protein unfolding using devices such as optical tweezers~\cite{bustamante2020single} and atomic force microscopes (AFMs)~\cite{ragazzon2018model} has proven to be useful in nano-/molecular-robotics~\cite{thammawongsa2012nanorobot,wen2019nanorobotic}. Third, this paper adds to the body of knowledge on KCM-based modeling for further investigation of the structural properties of protein molecules. The rest of this paper is organized as follows. First, we review the KCM framework for modeling protein molecules, its control theoretic interpretation, and \textcolor{black}{some basics from Chetaev instability theory} in Section~\ref{sec:dynmodel}.
Thereafter, \textcolor{black}{we model the dynamics of protein unfolding under optical tweezers} in Section~\ref{sec:Sol}. Next, \textcolor{black}{a CF for the analysis of optical tweezer-based unfolding and a class of CCFs for synthesizing unfolding control inputs, which elongate protein molecules, are presented in Section~\ref{sec:ContProb}.} \textcolor{black}{After presenting the simulation results that compare optical tweezer-based unfolding against a synthesized Artstein-Sontag destabilizing feedback} in Section~\ref{sec:sims}, we conclude the paper in Section~\ref{sec:conc}. \noindent{\textbf{Notation.}} We let $\mathbb{R}_{+}$ denote the set of all non-negative real numbers. We let $(\cdot)^\top$ denote the vector/matrix transpose operator. Given $\mathbf{x}\in\mathbb{R}^M$, we let $|\mathbf{x}|:=\sqrt{\mathbf{x}^{\top}\mathbf{x}}$ denote the Euclidean norm of $\mathbf{x}$. Given $\varepsilon\in \mathbb{R}_{+}$, we let $\mathcal{B}_{\mathbf{x}}(\varepsilon)$ denote the open ball centered at $\mathbf{x}$ with radius $\varepsilon$. Given two vectors $\mathbf{v}_1,\, \mathbf{v}_2 \in \mathbb{R}^3$, we let $\mathbf{v}_1 \times \mathbf{v}_2$ denote their cross product. \section{KCM-Based Modeling of Protein Unfolding under Optical Tweezers} \label{sec:Sol} In this section we present the dynamics of protein unfolding under the effect of an optical tweezer. \textcolor{black}{We remark that the derivation of the KCM-based dynamical model of protein unfolding under optical tweezers has not been presented elsewhere in the literature.} Optical tweezers can be utilized for unfolding protein molecules by chemically attaching a latex bead (microsphere) to the protein under study, where the bead can be trapped and used as a handle for stretching the molecule. \textcolor{black}{The reader is referred to the Supplementary Material for further details.} The optical tweezer tension force on the molecule strand can be modeled by the Hookean force~\cite{bustamante2020single} \begin{equation} \textbf{F}_\text{twz}(\pmb{\theta}) = \kappa(\pmb{\theta}) \mathbf{x}_\text{twz}, \label{eq:optTwzForce} \end{equation} \noindent where $\mathbf{x}_\text{twz}$ represents the displacement of the microsphere from the center of the beam and $\kappa(\pmb{\theta})$ is the optical trap stiffness modulated by the protein conformation using a proper technique such as linear polarization~\cite{schonbrun2008spring,li2015tracking}. In particular, we assume that the trap stiffness can be modulated as a smooth function of $\pmb{\theta}$ in such a way that at a given folded conformation $\pmb{\theta}^\ast$, $\kappa(\pmb{\theta}^\ast)=0$. \begin{remark} For a given positive integer $m$ and a positive constant $\kappa_0$, a candidate optical trap modulated stiffness is % \begin{equation} \kappa(\pmb{\theta}) = \kappa_0 |\mathbf{r}_\text{NC}(\pmb{\theta}^\ast) - \mathbf{r}_\text{NC}(\pmb{\theta})|^m, \label{eq:optTrapStiff2} \end{equation} % \label{rem:modul} \hspace{-1ex}where at any conformation $\pmb{\theta}$, the vector $\mathbf{r}_\text{NC}(\pmb{\theta})\in \mathbb{R}^3$ connects the nitrogen atom in the $\text{N}$-terminus to the carbon atom in the $\text{C}$-terminus (see Figure~\ref{fig:protBasic}). \end{remark}
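As an illustrative sketch (with hypothetical values for $\kappa_0$ and $m$, and assuming $\mathbf{r}_\text{NC}$ has been computed from the chain kinematics), the modulated stiffness in~\eqref{eq:optTrapStiff2} and the resulting Hookean force in~\eqref{eq:optTwzForce} read:
\begin{verbatim}
import numpy as np

def trap_force(r_nc, r_nc_star, x_twz, kappa0=1.0, m=2):
    # kappa(theta) = kappa0 * |r_NC(theta*) - r_NC(theta)|^m, so
    # kappa vanishes (and the trap exerts no force) at theta*.
    kappa = kappa0 * np.linalg.norm(r_nc_star - r_nc) ** m
    return kappa * x_twz             # F_twz(theta), a 3-vector
\end{verbatim}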
\hspace{1ex} Let us consider the optical tweezer force in~\eqref{eq:optTwzForce} and a protein molecule chain with $N-1$ peptide planes. Without loss of generality, we assume that this force is exerted on the C-terminus of the protein molecule. Accordingly, the resultant torque applied to the last link $\text{AA}_{\text{last}}$ of the molecule at conformation $\pmb{\theta}$ is given by $ \mathbf{T}_\text{twz}(\pmb{\theta}) = \sum_{a_i \in \text{AA}_{\text{last}}} \mathbf{r}_i(\pmb{\theta}) \times \kappa(\pmb{\theta}) \mathbf{x}_\text{twz}, $ where $\mathbf{r}_i(\pmb{\theta})$ denotes the position of atom $a_i$, the $i$-th atom located in peptide plane $\text{AA}_{\text{last}}$. Therefore, the force-torque couple due to the effect of the optical tweezer is $ \mathbf{TF}_\text{twz}(\pmb{\theta}):= \begin{bmatrix} \mathbf{T}_\text{twz}(\pmb{\theta})^\top , \mathbf{F}_\text{twz}(\pmb{\theta})^\top \end{bmatrix}^\top \in \mathbb{R}^6. $ Consequently, the optical tweezer-based force-torque couple, which is exerted on the last peptide plane of the protein molecule, gives rise to the equivalent joint torque vector \begin{equation} \mathbf{u}_\text{unfold}(\pmb{\theta}) = \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}_\text{twz}(\pmb{\theta}), \label{eq:unfoldTorque} \end{equation} where $\mathcal{F}_\text{twz}(\pmb{\theta})$ is the generalized force vector \begin{equation} \mathcal{F}_\text{twz}(\pmb{\theta}) = \begin{bmatrix} \mathbf{0}^\top , \cdots , \mathbf{0}^\top , \mathbf{TF}_\text{twz}(\pmb{\theta})^\top\end{bmatrix}^\top \in\; \mathbb{R}^{6(2N)}, \label{eq:genForce} \end{equation} which gets mapped through~\eqref{eq:unfoldTorque} to the vector $\mathbf{u}_\text{unfold}(\pmb{\theta})\in \mathbb{R}^{2N}$ affecting the dihedral angles in the molecule. \section{Preliminaries} \label{sec:dynmodel} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{./figures/protBasic.png} \caption{\small The protein molecule kinematic structure.} \vspace{-4.5ex} \label{fig:protBasic} \end{figure} \subsection{Kinematic linkage representation of protein molecules} \label{subsec:kinLinkage} In this section, we review the KCM-based modeling of the main chain \emph{in vacuo} (see~\cite{tavousi2015protofold,Tavousi2016} for further details). Proteins essentially consist of peptide planes that are rigidly joined together in a long chain (see Figure~\ref{fig:protBasic}). Central carbon atoms, denoted by $\text{C}_\alpha$, play the role of hinges in-between the amino acids. The red line segments in Figure~\ref{fig:protBasic} represent covalent atomic bonds. The first $\text{C}_\alpha$ is connected to the N-terminus and to one other peptide plane. Similarly, the last $\text{C}_\alpha$ hinges to the C-terminus and to one other peptide plane. The backbone conformation of protein molecules can be described by using a collection of bond lengths and two sets of rotation angles around the $\text{N}-\text{C}_\alpha$ and $\text{C}_\alpha-\text{C}$ bonds, which are known as the dihedral angles. Hence, the vector of dihedral angles $\pmb{\theta}= \big[\theta_1,\cdots,\theta_{2N}\big]^\top \in \mathbb{R}^{2N}$ represents the configuration of a protein with $N-1$ peptide planes. Along the rotation axis corresponding to each degree of freedom (DOF) of the protein molecule, a unit vector denoted by $\mathbf{u}_j$, $1\leq j \leq 2N$, can be considered. In addition to the unit vectors $\mathbf{u}_j$, Kazerounian and collaborators~\cite{kazerounian2005nano,kazerounian2005protofold,tavousi2015protofold} utilize the so-called \textbf{body vectors} to represent the spatial orientation of the peptide planes. These body vectors, denoted by $\mathbf{b}_{j}$, $1\leq j \leq 2N$, describe the relative position of any two atoms in the chain.
Using the vectors $\mathbf{u}_j$ and $\mathbf{b}_{j}$, the conformation of a protein molecule can be completely described. In particular, after designating the zero-position conformation $\pmb{\theta}=\mathbf{0}$ corresponding to the biological reference position of the chain (see~\cite{kazerounian2005nano}), the transformations \begin{equation} \mathbf{u}_j(\pmb{\theta}) = \Xi(\pmb{\theta},\mathbf{u}_j^0) \mathbf{u}_j^0 ,\, \mathbf{b}_j(\pmb{\theta}) = \Xi(\pmb{\theta},\mathbf{u}_j^0) \mathbf{b}_j^0, \label{eq:kineProtu} \end{equation} where \begin{equation} \Xi(\pmb{\theta},\mathbf{u}_j^0)=\prod_{r=1}^{j} R(\theta_r,\mathbf{u}_r^0), \label{eq:kineProtMat} \end{equation} describe the molecule kinematic configuration. In~\eqref{eq:kineProtMat}, the matrix $R(\theta_r,\mathbf{u}_r^0)\in \text{SO}(3)$ represents the rotation about the unit vector $\mathbf{u}_r^0$ with angle $\theta_r$. Having obtained the body vectors $\mathbf{b}_j(\pmb{\theta})$ from~\eqref{eq:kineProtu} and under the assumption that the N-terminus nitrogen atom is fixed at the origin, the coordinates $\mathbf{r}_i(\pmb{\theta})$ of the backbone chain atoms are found from $\mathbf{r}_i(\pmb{\theta}) = \sum_{j=1}^{i} \mathbf{b}_j(\pmb{\theta})$.
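For illustration, a minimal sketch of the forward kinematics in~\eqref{eq:kineProtu}--\eqref{eq:kineProtMat} is given below; the zero-position unit vectors \texttt{u0} and body vectors \texttt{b0} are assumed to be available from the biological reference position of the chain.
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def backbone_coordinates(theta, u0, b0):
    # Xi(theta, u_j^0) = R(theta_1, u_1^0) ... R(theta_j, u_j^0);
    # b_j(theta) = Xi b_j^0 and r_i(theta) = sum_{j <= i} b_j(theta).
    Xi = np.eye(3)
    b = []
    for th_j, u_j, b_j in zip(theta, u0, b0):
        Xi = Xi @ Rotation.from_rotvec(th_j * np.asarray(u_j)).as_matrix()
        b.append(Xi @ np.asarray(b_j))
    return np.cumsum(b, axis=0)      # rows are the positions r_i(theta)
\end{verbatim}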
\subsection{Control theoretic interpretation of KCM-based dynamics} The KCM framework in~\cite{kazerounian2005protofold,tavousi2015protofold,Tavousi2016} builds upon the experimental fact that the inertial forces during the folding of a protein chain can be neglected in the presence of the interatomic forces (see, e.g.,~\cite{adolf1991brownian,kazerounian2005protofold}). Instead, the dihedral angles rotate under the kinetostatic influence of these forces. To present the KCM-based protein dynamics, let us consider a polypeptide chain with $N-1$ peptide planes and conformation vector $\pmb{\theta}\in \mathbb{R}^{2N}$. The molecule aggregated free energy is given by (see~\cite{Tavousi2016} for detailed derivations) % \begin{equation}\label{eq:freePot} \mathcal{G}(\pmb{\theta}) := \mathcal{G}^{\text{elec}}(\pmb{\theta}) + \mathcal{G}^{\text{vdw}}(\pmb{\theta}), \end{equation} % where $\mathcal{G}^{\text{elec}}(\pmb{\theta})$ and $\mathcal{G}^{\text{vdw}}(\pmb{\theta})$ represent the electrostatic and van der Waals potential energies, respectively. The resultant forces and torques on each of the $N-1$ rigid peptide links can be computed from the molecule free energy in~\eqref{eq:freePot} and the protein chain kinematics. These forces and torques can then be appended in the generalized force vector $\mathcal{F}(\pmb{\theta})\in \mathbb{R}^{6(2N)}$. Following the steps of derivation in~\cite{tavousi2015protofold}, one can utilize a proper mapping to relate $\mathcal{F}(\pmb{\theta})$ to the equivalent torque vector $\pmb{\tau}(\pmb{\theta})\in \mathbb{R}^{2N}$ acting on the dihedral angles at conformation $\pmb{\theta}$, which is given by \begin{equation}\label{eq:tau_KCM} \pmb{\tau}(\pmb{\theta}) = \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta}). \end{equation} In~\eqref{eq:tau_KCM}, the matrix $\mathcal{J}(\pmb{\theta})\in \mathbb{R}^{6(2N)\times 2N}$ represents the Jacobian of the protein chain at conformation $\pmb{\theta}$. The vector $\mathcal{F}(\pmb{\theta})$ is due to the torques and forces, which arise from the interatomic electrostatic and van der Waals effects, at conformation $\pmb{\theta}$. As demonstrated in~\cite{tavousi2015protofold,kazerounian2005protofold}, for a protein molecule with $N-1$ peptide planes, the matrix $\mathcal{J}^\top(\pmb{\theta})\in \, \mathbb{R}^{2N \times 6(2N)}$ has the structure \begin{equation} \mathcal{J}^\top(\pmb{\theta}) = \begin{bmatrix} \mathbf{J}_1^\top(\pmb{\theta}) & \mathbf{J}_1^\top(\pmb{\theta}) & \cdots & \mathbf{J}_1^\top(\pmb{\theta}) \\ \textbf{0} & \mathbf{J}_2^\top(\pmb{\theta}) & \cdots & \mathbf{J}_2^\top(\pmb{\theta}) \\ \vdots & \vdots & \vdots & \vdots \\ \textbf{0} & \textbf{0} & \cdots & \mathbf{J}^\top_{2N}(\pmb{\theta}) \end{bmatrix}, \label{eq:overallJacob} \end{equation} where \begin{equation} \mathbf{J}_k(\pmb{\theta}) = \begin{bmatrix} \mathbf{u}_k(\pmb{\theta}) \\ - \mathbf{u}_k(\pmb{\theta}) \times \mathbf{r}_k(\pmb{\theta}) \end{bmatrix} \in \, \mathbb{R}^{6},\, 1 \leq k \leq 2N, \label{eq:localJacob} \end{equation} in which $\mathbf{u}_k(\pmb{\theta})$ denotes the unit vector introduced in Section~\ref{subsec:kinLinkage} and $\mathbf{r}_k(\pmb{\theta})$ connects the N-terminus to either the $\text{C}_\alpha$ atom (even $k$) or the $\text{N}$ atom (odd $k$) on the $k$-th peptide plane. Using the torque vector in~\eqref{eq:tau_KCM} and starting from an initial conformation $\pmb{\theta}_0$, the KCM-based iteration is given by $\pmb{\theta}_{i+1} = \pmb{\theta}_{i} + h \textbf{f}_\text{KCM} \big(\pmb{\tau}(\pmb{\theta}_i)\big),$ \noindent where $i$ is a non-negative integer, $h$ is a positive real constant that tunes the maximum dihedral angle rotation magnitude in each step, and $\textbf{f}_\text{KCM}(\cdot)$ is a proper mapping, which can be chosen to be the identity map in its simplest form (see~\cite{tavousi2015protofold} for further details). The successive iterations of KCM-based folding are performed until all of the kinetostatic torques converge to a minimum corresponding to a local minimum of the free energy. Furthermore, $\mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta})$ is a locally Lipschitz continuous function in a neighborhood of a given folded conformation. In~\cite{mohammadi2021quadratic}, it has been demonstrated that the KCM-based iteration can be interpreted as a control synthesis problem. In particular, the work in~\cite{mohammadi2021quadratic} considered the following nonlinear control-affine system \begin{equation}\label{eq:KCM_ode} \dot{\pmb{\theta}} = \mathcal{J}^\top(\pmb{\theta}) \mathcal{F}(\pmb{\theta}) + \mathbf{u}_{c}, \end{equation} \hspace{-1ex} where the Jacobian matrix $\mathcal{J}(\pmb{\theta})$ and the generalized force vector $\mathcal{F}(\pmb{\theta})$ are the same as in~\eqref{eq:tau_KCM}. It was demonstrated that a forward Euler discretization of the closed-loop dynamics in~\eqref{eq:KCM_ode} under a proper closed-loop feedback control input $\mathbf{u}_{c}(\pmb{\theta})$ restores the KCM-based iteration for protein folding.
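A minimal sketch of this iteration (with $\mathbf{f}_\text{KCM}$ taken as the identity map, and with a hypothetical callable \texttt{torque} returning $\pmb{\tau}(\pmb{\theta})=\mathcal{J}^\top(\pmb{\theta})\mathcal{F}(\pmb{\theta})$) is:
\begin{verbatim}
import numpy as np

def kcm_fold(theta0, torque, h=1e-3, tol=1e-6, max_iter=10000):
    # theta_{i+1} = theta_i + h * f_KCM(tau(theta_i)), with f_KCM
    # the identity map; torque(theta) returns J(theta)^T F(theta).
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        tau = torque(theta)
        if np.linalg.norm(tau) < tol:   # kinetostatic torques vanish
            break                       # at a local free-energy minimum
        theta = theta + h * tau
    return theta
\end{verbatim}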
\vspace{-0ex} \textcolor{black}{ \subsection{Chetaev instability theory preliminaries}} \label{subsec:chetaevPrelim} \textcolor{black}{Consider the nonlinear dynamical system \begin{equation} \dot{\mathbf{x}}= \mathbf{F}(\mathbf{x}), \label{eq:nonlinSys} \end{equation} where $\mathbf{F}:\mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitz continuous function. We say that the equilibrium $\mathbf{x}^\ast \in \mathbb{R}^n$ is unstable for~\eqref{eq:nonlinSys} if for any $\delta>0$ there exists $\mathbf{x}_0 \in \mathcal{B}_{\mathbf{x}^\ast}(\delta)$ such that $\mathbf{x}(T_{\mathbf{x}_0},\mathbf{x}_0)\notin \mathcal{B}_{\mathbf{x}^\ast}(\delta)$ for some $T_{\mathbf{x}_0} \in \mathbb{R}_+$, where the solution of~\eqref{eq:nonlinSys} with $\mathbf{x}(0)=\mathbf{x}_0$ is given by $\mathbf{x}(\cdot,\mathbf{x}_0)$ on its maximal interval of existence. \begin{theorem}[\cite{chetaev1961stability}] Consider the system in~\eqref{eq:nonlinSys} and the equilibrium $\mathbf{x}^\ast$. Let $V: \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function such that $V(\mathbf{x}^\ast)=0$. Suppose that there exists $\epsilon_0>0$ such that $\mathcal{V}^+ \cap \mathcal{B}_{\mathbf{x}^\ast}(\epsilon)\neq \emptyset$ for any $\epsilon\in (0,\epsilon_0]$, where $\mathcal{V}^+:=\{\mathbf{x}\in\mathcal{B}_{\mathbf{x}^\ast}(\epsilon_0): V(\mathbf{x})>0\}$. If the lower Dini derivative of $V$ along $\mathbf{F}$ satisfies $D^{-}V(\mathbf{x})\mathbf{F}(\mathbf{x})>0$ for all $\mathbf{x}\in\mathcal{V}^+$, then the equilibrium $\mathbf{x}^\ast$ is unstable for~\eqref{eq:nonlinSys}. \label{thm:cheta} \vspace{-0.25ex} \end{theorem}} \textcolor{black}{A function $V(\cdot)$ that satisfies the conditions in Theorem~\ref{thm:cheta} is called a CF for system~\eqref{eq:nonlinSys}. While the classical theorem by Chetaev~\cite{chetaev1961stability} only provided sufficient conditions for instability, Efimov and collaborators provided the necessary part of Chetaev's theorem in~\cite{efimov2014necessary}. Furthermore, they developed the CCF concept as a counterpart of control Lyapunov function (CLF) theory~\cite{efimov2014necessary}. Kellett and collaborators~\cite{braun2018complete} extended the work in~\cite{efimov2014necessary} to differential inclusions. See the Supplementary Material for the required background on CCFs.}
\section{Introduction}\label{sec:introduction} Deep Neural Networks (DNNs) are incorporated in real-world applications used by a broad spectrum of industry sectors including healthcare \citep{Shorten2021DeepLA,Fink2020PotentialCA}, finance \citep{Huang2020DeepLI, Culkin2017MachineLI}, self-driving vehicles \citep{Swinney2021UnmannedAV}, and cybersecurity \citep{Ferrag2020DeepLF}. These applications utilize DNNs in various fields such as computer vision \citep{Hassaballah2020DeepLI,Swinney2021UnmannedAV}, audio signal processing \citep{Arakawa2019ImplementationOD,Tashev2017DNNbasedCV}, and natural language processing \citep{Otter2021ASO}. Many services in large companies such as Google and Amazon have DNN-based back-end software (e.g., Google Lens and Amazon Rekognition) with a tremendous volume of queries per second (QPS). For instance, Google processes over 99,000 searches every second \citep{mohsin_2022} and spends a substantial amount of computation power and time at their models' run-time \citep{Xiang2019PipelinedDC}. These services are often time-sensitive, resource-intensive, and require high availability and reliability. The question, then, is how fast the current state-of-the-art (SOTA) DNN models are at inference time and to what extent they can provide low-latency responses to queries. The SOTA model depends on the application domain and the problem at hand. However, the trend in DNN design is indeed toward pre-trained large-scale models due to their reduced training cost (only fine-tuning) while providing dominating results (since they are huge models trained on extensive datasets). One of the downsides of large-scale models (pre-trained or not) is their high inference latency. Although the inference latency is usually negligible per instance, as discussed, a relatively slow inference can jeopardize a service's performance in terms of throughput when the QPS is high. In general, in a DNN-based software development and deployment pipeline, the inference stage is part of the so-called ``model serving'' process, which enables the model to serve inference requests or jobs \citep{Xiang2019PipelinedDC} by directly loading the model in the process or by employing serving frameworks such as TensorFlow Serving \citep{Olston2017TensorFlowServingFH} or Clipper \citep{Crankshaw2017ClipperAL}. The inference phase is an expensive stage in a deep neural model's life cycle in terms of time and computation costs \citep{Desislavov2021ComputeAE}. Therefore, efforts towards decreasing the inference cost in production have increased rapidly throughout the past few years. From the software engineering perspective, caching is a standard practice to improve the performance of software systems by avoiding redundant computations. Caching is the process of storing recently observed information to be reused when needed in the future, instead of re-computing it \citep{Wessels-2001,caching-def}. Caching is usually orthogonal to the underlying procedure, meaning that it is applied by observing the inputs and outputs of the target procedure and does not engage with the internal computations of the cached function. Caching effectiveness is best observed when the cached procedure often receives duplicated inputs while in a similar internal state---for instance, accessing a particular memory block, loading a web page, or fetching the books listed in a specific category in a library database.
It is also possible to adopt a standard caching approach with DNNs (e.g., some works cache a DNN's output solely based on its input values \citep{Crankshaw2017ClipperAL}). However, it would most likely provide a meager improvement due to the high dimension and size of the data (such as images, audio, and text) and the low duplication among the requests. On the other hand, due to the feature-extracting nature of deep neural networks, we can expect inputs with similar outputs (e.g.,\ images of the same person or the same object) to have a pattern in the intermediate layers' activation values. Therefore, we exploit the opportunity to cache a DNN's output based on the intermediate layers' activation values. This way, \textbf{we can cache the results not by looking at the raw inputs but by looking at their extracted features in the intermediate layers within the model's forward-pass}. The intermediate layers often have even higher dimensions than the input data. Therefore, we use shallow classifiers \citep{Kaya2019ShallowDeepNU} to replace the classic cache storing and look-up procedures. A shallow classifier is a supplementary model attached to an intermediate layer in the base model that uses the intermediate layer's activation values to infer a prediction. In the caching method, training a shallow classifier on a set of samples mimics the procedure of storing those samples in cache storage, and inferring for a new sample using the shallow classifier mimics the look-up procedure. Caching is more problematic in regression models, where the outputs are continuous values. Specifically, it is less likely that two different samples have the same outcome in a regression model compared to a classification one. Therefore, the experiments in this research focus on classification models. Thus, here we propose caching the predictions made by off-the-shelf classification models using shallow classifiers trained on the samples and information collected at inference time. We first evaluate the viability of our method in our first research question by measuring how it affects the final accuracy of the given base models and assessing the effectiveness of the parameters we introduce (tolerance and confidence thresholds) as a knob to control the caching certainty. We further evaluate the method in terms of computational complexity and inference latency improvements in the second and third research questions. We measure these improvements by comparing the FLOPs count, memory consumption, and inference latency of the original model vs. the cache-enabled version that we build throughout this experiment. We observed up to a 58\% reduction in FLOPs and up to a 46\% acceleration in inference latency on CPU (up to 18\% on GPU), with less than a 2\% drop in accuracy. In the rest of the paper, we discuss our motivations in section \ref{sec:motivation}, the background and related works in section \ref{sec:bkg}, details of the method in section \ref{sec:method}, design and evaluation of the study in section \ref{sec:empirical-evaluation}, and lastly, we conclude in section \ref{sec:conclusion}. \section{Motivation}\label{sec:motivation} Many real-world software services utilize deep neural models and, simultaneously, require low response times to meet their service level objectives (SLOs). This requirement usually leads to allocating expensive infrastructure and hardware resources to the services \citep{VelascoMontero2019OnTC}.
The high computational cost of DNN models directly affects the service provider in terms of their delivery cost and the environment in terms of the carbon footprint of the data centers running such services 24/7. Countless high-traffic online platforms such as online stores, photo/video sharing platforms, digital advertising platforms, and trading platforms use neural networks within the process of serving their user requests. For instance, displaying an online advertisement involves an online ad-click rate prediction based on the user features \citep{Gharibshah2020DeepLF}. Furthermore, online stores also use deep learning classification models for various purposes, such as product categorization, recommendation, product review sentiment analysis, and customer churn rate prediction. In terms of traffic load, Google Lens, for instance, reached an average of 3 billion uses per month in 2021 \citep{maxham_diaz_2021}. Employing a variety of machine learning and deep learning models, Google spends billions of dollars on data centers and infrastructure to process such a volume of requests \citep{spadafora_2022}. Thus, extensive work has been done towards minimizing the energy footprint of large-scale services \citep{Lo2014TowardsEP,Buyya2018SustainableCC}. On the other hand, the trend of deploying DNNs on resource-constrained devices such as mobile and IoT devices has also been rising in the past few years \citep{Lin2020MCUNetTD,Yoo2020DeepLP}. Various scenarios involve DNNs performing on-device predictions where low inference latency and/or low compute consumption is required. For instance, traffic sign classification in autonomous vehicles \citep{Zhang2020LightweightDN} requires low latency, and on-device voice command recognition systems \citep{Lin2018EdgeSpeechNetsHE} and mobile visual assistants \citep{9179386} require low compute consumption. Moreover, using pre-trained off-the-shelf DNN models and adapting them to new tasks using transfer learning is playing a fundamental role in enabling practitioners in different areas to utilize DNNs \citep{Shrestha2019CrossFrequencyCO,Abed2020AlzheimersDP,Lee2020EvaluationOT}. However, the pre-trained models' original training data is not always available to the users. The absence of training data can be due to different reasons, such as the high volume or cost of the data, privacy requirements, or intellectual property regulations. Accounting for such common cases, we restrict our method to use only the data collected at inference time (test set). The inference data are unlabelled, meaning that their ground truth labels are not available to the user. Hence, our method relies only on the model's internal values and final outputs and does not require access to the ground truth labels. Improving the compute performance of DNNs has received a considerable amount of attention in terms of specialized hardware accelerators \citep{Wang2019BenchmarkingTG,Dally2020DomainspecificHA,Deng2020ModelCA} and framework-level optimizations \citep{Crankshaw2017ClipperAL,Shi2018PerformanceMA}. On the other hand, model compression methods propose modifications to the model's structure (i.e., weights and connections) to reduce its compute complexity. By applying one or more model compression methods, practitioners either replace the modified model and lose a fixed amount of accuracy, or manage multiple versions of the model with different accuracy and complexity.
Having multiple model versions, they select one for inference based on the current workload \citep{Taylor2018AdaptiveDL, Marco2020OptimizingDL} or the available resources \citep{Guan2018EnergyefficientAI}. In contrast, our method optimizes the model while preserving its original structure, allowing the user to enable/disable the optimization without the overhead of managing and loading/offloading multiple model versions. Considering the trends, requirements, and motivations discussed above, we design the caching method to add one or more alternative exit paths to the model, each requiring less computation than the remaining layers in the backbone and controlled by the shallow classifiers we train using only the inference data. \section{Background and related works}\label{sec:bkg} In this section, we briefly review the background topics to the model inference optimization problem. Following this background discussion, we introduce the techniques used to build the caching procedure. Figure \ref{fig:background} provides an overview of the discussed background and related techniques. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{background.eps} \caption{DNN inference optimization perspectives and solutions}\label{fig:background} \end{figure} \subsection{Inference optimization} There are two perspectives addressing the model inference optimization problem. The first perspective is interested in optimizing the model deployment platform and covers a broad range of optimization targets \citep{Yu2021ASO}. These studies often target the deployment environments in resource-constrained edge devices \citep{Liu2021SecDeepSA,Zhao2018DeepThingsDA} or resourceful cloud-based devices \citep{Li2020AutomatingCD}. Others focus on hardware-specific optimizations \citep{Zhu2018ResearchOP} and inference job scheduling \citep{Wu2020IrinaAD}. The second perspective is focused on minimizing the model's inference compute requirements by compressing the model. Among model compression techniques, model pruning \citep{han2015deep,Zhang2018ASD,Liu2019RethinkingTV}, model quantization \citep{Courbariaux2015BinaryConnectTD,Rastegari2016XNORNetIC,Nagel2019DataFreeQT}, and model distillation \citep{Bucila2006ModelC,Polino2018ModelCV,Hinton2015DistillingTK} are extensively used. These techniques reduce the model's computational complexity by pruning the weights, computing the floating-point calculations at lower precision, and distilling the knowledge from a teacher (more complex) model into a student (less complex) model, respectively. These techniques modify the original model and often cause a fixed amount of loss in the test accuracy. \subsection{Early Exits in DNNs}\label{subsec:early-exit} ``Early exit'' generally refers to an alternative path in a DNN model that a sample can take instead of proceeding through the next layers of the model. Many previous works have used the early exit concept for different purposes \citep{Xiao2021SelfCheckingDN,Scardapane2020WhySW,Matsubara2022SplitCA}. Among them, Shallow Deep Networks (SDN) \citep{Kaya2019ShallowDeepNU} points out the ``overthinking'' problem in deep neural networks. ``Overthinking'' refers to the models spending a fixed amount of computational resources on every query sample, regardless of its complexity (i.e., how deep the neural network must be to infer the correct prediction for the sample). Their research proposes attaching shallow classifiers to the intermediate layers in the model to form the early exits.
Each shallow classifier in SDN provides a prediction based on the values of the intermediate layer to which it is attached. On the other hand, \citet{Xiao2021SelfCheckingDN} incorporate the shallow classifiers to obtain multiple predictions for each sample. In their method, they use early exits as an ensemble of models to increase the base model's accuracy. The functionality of the shallow classifiers in our proposed method is similar to SDN. However, the SDN method trains the shallow classifiers using the ground truth data in the training set and overlooks the available knowledge in the original model. This constraint renders the method inapplicable when using a pre-trained model without access to the original training data, which is commonly the case for practitioners. \subsection{DNN Distillation and Self-distillation}\label{subsec:distillation} Among machine learning tasks, the classification category is one of the significant use cases where DNNs have been successful in recent years. Classification is applied to a broad range of data such as image \citep{Bharadi2017ImageCU}, text \citep{Varghese2020DeepLI}, audio \citep{Lee2009UnsupervisedFL}, and time-series \citep{Zheng2014TimeSC} classification. Knowledge distillation (KD) \citep{Bucila2006ModelC,Polino2018ModelCV,Hinton2015DistillingTK} is a model compression method that trains a relatively small (less complex) model, known as the student, to mimic the behavior of a larger (more complex) model, known as the teacher. Classification models usually provide a probability distribution (PD) representing the probability of the input belonging to each class. KD trains the student model to provide PDs similar to the teacher model's (i.e.,\ soft labels) rather than training it with just a class label for each sample (i.e.,\ hard labels). KD uses specialized loss functions in the training process, such as the Kullback--Leibler divergence \citep{Joyce2011}, to measure how one PD differs from another. KD is usually a two-step process consisting of training a large complex model to achieve high accuracy and then distilling its knowledge into a smaller model. An essential challenge in KD is choosing the right teacher and student models. Self-distillation \citep{self-distillation} addresses this challenge by introducing a single-step method to train the teacher model along with multiple shallow classifiers. Each shallow classifier in self-distillation is a candidate student model that is trained by distilling the knowledge from one or more of the deeper classifiers. In contrast to SDN, self-distillation utilizes knowledge distillation to train the shallow classifiers. However, it still trains the base model from scratch along with the shallow classifiers, using the original training set. This training procedure conflicts with both of our constraints: we use a pre-trained model that we keep unchanged throughout the experiment, and we only use inference data to train the shallow classifiers. Our work adapts the methods presented in SDN and self-distillation to the context of caching the final predictions of pre-trained DNN models. The method trains the shallow classifiers using only the unlabelled samples collected at run-time and measures the improvement in inference compute costs achieved by the early exits throughout the forward-passes. \subsection{DNN Prediction Caching} Clipper \citep{Crankshaw2017ClipperAL} is a serving framework that incorporates caching DNN predictions based on their inputs.
Freeze Inference \citep{Kumar2019AcceleratingDL} investigates the use of traditional ML models such as K-NN and K-Means to predict based on intermediate layers' values. The authors show that the size and computational complexity of those ML models grow proportionally with the number of available samples, and that their computational overheads far exceed any improvement. Learned Caches \citep{Balasubramanian2021AcceleratingDL} extends Freeze Inference by replacing the ML models with a pair of DNN models: a predictor model that predicts the outputs, and a binary classifier that predicts whether the output should be used as the final prediction. Their method uses the ground truth data in the process of training the predictor and selector models. In contrast, our method 1) only uses unlabelled inference data, 2) automates the process of cache-enabling, 3) uses confidence-based cache hit determination, and 4) handles batch processing by batch shrinking. \section{Methodology}\label{sec:method} In this section, we explain the method to convert a pre-trained deep neural model (which we call the backbone) to its extended version with our caching method (called the cache-enabled model). The caching method adds one or more early-exit paths to the backbone, controlled by the shallow classifiers (which we call the cache models), allowing the model to infer a decision faster at run-time for some test data samples (cache hits). Faster decisions for a portion of queries result in a reduced mean response time. A ``cache model'' is a supplementary model that we attach to an intermediate layer in the backbone; given the layer's values, it provides a prediction (along with a confidence value) for the backbone's output. As a reminder, following our principal motivation, we assume that the original training data is unavailable to the user, as is the case for most large-scale pre-trained models used in practice. Therefore, in the rest of the paper, unless we explicitly mention otherwise, the terms dataset, training set, validation set, and test set all refer to the whole available data at run-time or a respective subset. Our procedure for cache-enabling a pre-trained model is chiefly derived from the self-distillation method \citep{self-distillation}. However, we adapt the method to cache-enable pre-trained models using only their recorded outputs, without access to the ground truth (GT) labels. A step-by-step guide on cache-enabling an off-the-shelf pre-trained model from a user perspective contains the following steps: \begin{enumerate} \item Identify the candidate layers to be cached \item Build a cache model for each candidate \item Assign confidence thresholds to the built models for determining the cache hits \item Evaluate and optimize the cache-enabled model \item Updates and maintenance \end{enumerate} In the following subsections, we further discuss the procedure and design decisions in each step outlined above. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{procedure.eps} \caption{Cache-enabling procedure, candidate layers, and data paths. }\label{fig:procedure} \end{figure} \subsection{Identifying candidate layers}\label{subsec:candidates} Choosing which layers to cache is the first step toward cache-enabling a model. A candidate layer is a layer whose activation values we will examine for correlation with the final predictions by training a cache model on them. One can simply list all the layers in the backbone as candidates.
However, since we launch a search for a cache model per candidate layer in the next step, we suggest narrowing the list down by filtering out layers according to the following criteria: \begin{itemize} \item Some layers are disabled at inference time, such as dropouts and batch normalizations. These layers do not modify their input values at inference time. Therefore, we cross them off the candidates list. \item The last few layers in the model (close to the output layer, such as \texttt{L15} in Figure \ref{fig:procedure}) might not be valuable candidates for caching, since the remaining layers might not have heavy computations to reach the output. \item DNN models are usually composed of multiple components (i.e.,\ first-level modules) consisting of multiple layers, such as the residual blocks in ResNet models \citep{He2016DeepRL}. We narrow down the search space to the output layers of those components. \item We only consider layers for which, given their activation values, the backbone's output is uniquely determined without any other layer's state involved (i.e., the backbone's output is a function of the layer's output). In other words, a layer with other layers or connections in parallel (such as \texttt{L7-L11} and \texttt{L13} in Figure \ref{fig:procedure}) is not suitable for caching, since the backbone's output does not solely depend on that layer's output. \end{itemize} Having the initial set of candidate layers, we next build and associate a cache model with each one. \subsection{Building cache models}\label{subsec:cache-models} Building a cache model to be associated with an intermediate layer in the backbone consists of finding a suitable architecture for the cache model and training a model with that architecture. The details of the architecture search (search space, search method, and evaluation method) and the training procedure (training data extraction and the loss function) are discussed in the following two subsections. \subsubsection{Cache models architecture} A cache model can have an architecture of any depth and breadth, as long as it provides more computational improvement than its overhead. In other words, it must have substantially less complexity (i.e., fewer parameters and connections) than the layers of the backbone that come after the corresponding intermediate layer. The search space for such models contains architectures with different numbers and types of layers (e.g.,\ a stack of dense and/or convolution layers). Nevertheless, all the models in the search space must output a PD identical to the backbone's output in terms of size (i.e., the number of classes) and activation (e.g.,\ SoftMax or LogSoftMax). In our experiments, the search space consists of architectures with a stack of (up to 2) convolution layers followed by another stack of (up to 2) linear layers, with multiple choices of kernel and stride sizes for the convolutions and neuron counts for the linear layers. However, users can modify or expand the search space according to their specific needs and budget. The objective of the search is to find a minimal architecture that converges and predicts the backbone's output with acceptable accuracy. Note that any better-than-random accuracy from a cache model can be helpful, as we will have a proper selection mechanism later in the process to only use the cache predictions that are (most likely) correct, and also to discard the cache models yielding low computational improvement.
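For illustration, a representative member of this search space could look like the following PyTorch module; all layer sizes here are hypothetical examples rather than searched optima, and the LogSoftmax head assumes a backbone emitting log-probabilities.
\begin{verbatim}
import torch
import torch.nn as nn

class CacheModel(nn.Module):
    """A sketch of a search-space member: up to two
    convolutions followed by up to two linear layers,
    ending in a PD over the backbone's classes."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3,
                      stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3,
                      stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # input-size agnostic head
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(16 * 4 * 4, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
            nn.LogSoftmax(dim=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))
\end{verbatim}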
The user can conduct the search by empirically sampling the search space or by using an automated Neural Architecture Search (NAS) tool such as Auto-Keras \citep{Jin2019AutoKerasAE}, Auto-PyTorch \citep{zimmer2021auto}, Neural Network Intelligence (NNI) \citep{nni2021}, or NASLib \citep{naslib-2020}. We used NNI to conduct the search and customized the evaluation process to account for both the models' accuracy and their computational complexity. We used the floating-point operations (FLOPs) count as the estimate of the models' computational complexity at this stage. Several factors influence a cache model's architecture for a given intermediate layer. These factors include the target intermediate layer's dimensions, its position in the backbone, and the dataset specifications such as its number of target classes. For instance, the first cache models in the CIFAR100-Resnet50 and CIFAR10-Resnet18 experiments (shown as cache1 in Figure \ref{fig:extended-schemas}) have the same input size, but since CIFAR100 has more target classes, it reasonably requires a cache model with more learning capacity. Therefore, using NAS to design the cache models helps automate the process and reduces the need for deep learning expert supervision in designing the cache models. Regardless of the search method, evaluating a nominated architecture requires training a model with the given architecture, a procedure we discuss in the next section. Moreover, since the search space is limited in depth, it is possible that for some intermediate layers, none of the cache models converges (i.e., the models provide nearly random results). In such cases, we discard the candidate layer as non-suitable for caching. \subsubsection{Training a cache model} \label{subsec:training} Figure \ref{fig:procedure} illustrates a cache-enabled model's schema consisting of the backbone (the dashed box) and the associated cache models. A cache model's objective is to predict the output of the backbone model, given the corresponding intermediate layer's output, per input sample. Similar to the backbone, cache models are classification models. However, their inputs are the activation values of the intermediate layers. As suggested in self-distillation \citep{self-distillation}, training a cache model is similar to distilling the knowledge from the backbone (final classifier) into the cache model. Therefore, to distill the knowledge from the backbone into the cache models, we need a medial dataset (MD) based on the collected inference data (ID). The medial dataset for training a cache model associated with an intermediate layer \texttt{L} in the backbone \texttt{B} consists of the activation values of layer \texttt{L} paired with the PDs given by \texttt{B}, per sample in the given ID, formally annotated as below: \begin{equation} \label{eqn:InputLabelPairs1} MD_L = \{\langle B_L(i), B(i)\rangle : i \in ID\} \end{equation} where: \noindent \details{\texttt{MD\textsubscript{L}}}{Medial dataset for the cache model associated with the layer \texttt{L}} \details{\texttt{ID}}{The collected inference data consisting of unlabelled samples} \details{\texttt{B\textsubscript{L}(i)}}{Activation values in layer \texttt{L} given the sample \texttt{i} to the backbone \texttt{B}} \details{\texttt{B(i)}}{The backbone's PD output for the sample \texttt{i}} Note that the labels in MDs are the backbone's outputs and not the GT labels, as we assume the GT labels to be unavailable.
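A minimal sketch of materializing $MD_L$ per Equation \ref{eqn:InputLabelPairs1} follows, using a read-only forward hook to capture the layer's activations; \texttt{backbone}, \texttt{layer}, and \texttt{loader} are placeholders (our tool accesses the activations differently, as discussed in section \ref{subsec:implementation}).
\begin{verbatim}
import torch

@torch.no_grad()
def collect_medial_dataset(backbone, layer, loader,
                           device="cpu"):
    """Collect MD_L: pairs <B_L(i), B(i)> for each
    unlabelled inference sample i."""
    backbone.eval().to(device)
    activations, outputs = [], []

    def hook(_module, _inputs, output):
        activations.append(output.detach().cpu())

    handle = layer.register_forward_hook(hook)
    try:
        for batch in loader:  # unlabelled samples only
            preds = backbone(batch.to(device))
            outputs.append(preds.detach().cpu())
    finally:
        handle.remove()
    return torch.cat(activations), torch.cat(outputs)
\end{verbatim}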
We split MD\textsubscript{L} into three splits ($MD_L^{Train}$, $MD_L^{Val}$, $MD_L^{Test}$) and use them according to common deep learning training and testing practice. Similar to the distillation method \citep{Hinton2015DistillingTK}, we use the Kullback--Leibler divergence (KLDiv) \citep{Joyce2011} loss function in the training procedure. KLDiv measures how different two given PDs are. Thus, minimizing the KLDiv loss value over $MD_L^{Train}$ trains the cache model to estimate the prediction of the backbone ($B(i)$). Unlike self-distillation \citep{self-distillation}, which trains the backbone and the shallow classifiers simultaneously, in our method it is crucial, while training a cache model, to freeze the rest of the model, including the backbone and the other cache models (if any) in the collection, to ensure the training process does not modify any parameter not belonging to the current cache model. \subsection{Assigning confidence threshold} The probability value associated with the predicted class (the one with the highest probability) is known as the model's confidence in the prediction. The cache model's prediction confidence for a particular input will indicate whether we stick with that prediction (cache hit) or proceed with the rest of the backbone to the next --- or possibly final --- exit (cache miss). Confidence calibration means enhancing the model to provide an accurate confidence. In other words, a well-calibrated model's confidence accurately represents the likelihood of that prediction being correct \citep{pmlr-v70-guo17a}. An over-confident cache model will lead the model to prematurely exit for some samples based on incorrect predictions, while an under-confident cache model will yield a low cache hit rate. Therefore, after building a cache model, we also calibrate its confidence using $MD_L^{Val}$ to better distinguish the predictions more likely to be correct. Several confidence calibration methods are discussed in \citet{pmlr-v70-guo17a}, among which temperature scaling (in the output layer) has been shown to be practical and easy to implement. Having calibrated the model, we next assign a confidence threshold value to the model, which will be used at inference time to determine the cache hits and misses. When a cache model identifies a cache hit, its prediction is considered to be the final prediction. However, when needed for validation and test purposes, we obtain the predictions from both the cache model and the backbone. \begin{table}[h] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Cache prediction confusion matrix, C: Cache's predicted class, B: Backbone's predicted class, GT: Ground Truth label}\label{table:confusion}% \begin{tabular}{@{}c|ccc@{}} \toprule Category & B = C & B = GT & C = GT \\ \midrule \textbf{$BC$} & \checkmark & \checkmark & \checkmark \\ $\overline{BC}$ & \checkmark & X & X \\ $B\overline{C}$ & X & \checkmark & X \\ $\overline{B}C$ & X & X & \checkmark \\ $\overline{B}\ \overline{C}$ & X & X & X \\ \end{tabular} \end{minipage} \end{center} \end{table} A cache model's prediction (C) for an input to the backbone falls into one of the five correctness categories listed in Table \ref{table:confusion} with respect to the ground truth labels (GT) and the backbone's prediction (B) for the input. Among the cases where the cache model and the backbone disagree, the $B\overline{C}$ predictions negatively affect the final accuracy, while the $\overline{B}C$ predictions positively affect it.
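For evaluation purposes, where GT labels are available, these disagreement rates can be tallied directly from recorded predictions; a minimal sketch with hypothetical tensor inputs follows, and the equations below formalize the net effect.
\begin{verbatim}
import torch

def disagreement_rates(cache_pd, backbone_pd, gt, theta):
    """Tally, at confidence threshold theta, the hit rate
    and the fractions of samples falling in the B-not-C
    and not-B-C categories of the confusion table above
    (GT labels used for evaluation only). cache_pd and
    backbone_pd are [N, classes] probability tensors;
    gt has shape [N]."""
    conf, c = cache_pd.max(dim=1)   # confidence, class C
    b = backbone_pd.argmax(dim=1)   # backbone class B
    hits = conf >= theta
    n = len(gt)
    hit_rate = hits.float().mean().item()
    # Hits that hurt accuracy: backbone right, cache wrong.
    b_not_c = (hits & (b == gt) & (c != gt)).sum().item() / n
    # Hits that help accuracy: backbone wrong, cache right.
    not_b_c = (hits & (b != gt) & (c == gt)).sum().item() / n
    return hit_rate, b_not_c, not_b_c
\end{verbatim}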
Equation \ref{eqn:accuracy_change} formulates a cache model's actual effect on the final accuracy. \begin{equation} \label{eqn:accuracy_change} F_\Delta(\theta) = \overline{B}C_\Delta(\theta) - B\overline{C}_\Delta(\theta) \end{equation} where: \noindent \details{$\Delta$}{The cache model} \details{$F_\Delta$}{The actual accuracy effect $\Delta$ causes given $\theta$ as threshold} \details{$B\overline{C}_\Delta$}{Ratio of $B\overline{C}$ predictions by $\Delta$ given $\theta$ as threshold} \details{$\overline{B}C_\Delta$}{Ratio of $\overline{B}C$ predictions by $\Delta$ given $\theta$ as threshold} However, since we use the unlabelled inference data to form the MDs, we can only estimate an upper bound on the cache model's effect on the final accuracy. The estimation assumes the worst case, namely that every cache hit disagreeing with the backbone would otherwise have been classified correctly ($B\overline{C}$). We estimate the upper bound on the accuracy drop a cache model causes, given a certain confidence threshold, by its hit rate and cache accuracy: \begin{equation} \label{eqn:accuracy_change_ub} F_\Delta(\theta) \le HR_\Delta(\theta) \times (1- CA_\Delta(\theta)) \end{equation} where: \noindent \details{$\Delta$}{The cache model} \details{$F_\Delta$}{The expected accuracy drop $\Delta$ causes given $\theta$ as threshold} \details{$HR_\Delta$}{Hit rate provided by $\Delta$ given $\theta$ as threshold} \details{$CA_\Delta$}{Cache accuracy provided by $\Delta$ given $\theta$ as threshold} Given the tolerance $T$ for the drop in final accuracy, we assign to each cache model a confidence threshold that yields no more than $T/2^n$ expected accuracy drop on $MD_L^{Val}$ according to Equation \ref{eqn:accuracy_change_ub}, where \texttt{n} is the 1-based index of the cache model in the setup. It is important to note that there are alternative methods to distribute the accuracy drop budget among the cache models. For instance, one can distribute the budget equally. However, as we show in the evaluations later in section \ref{subsec:rq1}, we find it reasonable to assign more budget to the cache models at shallower positions in the backbone. \subsection{Evaluation and optimization of the cache-enabled model}\label{subsec:positions} So far, we have a set of cached layers and their corresponding cache models ready for deployment. Algorithm \ref{alg:prediction} demonstrates a Python-style pseudo-implementation of the cache-enabled model's inference process. When the cache-enabled model receives a batch of samples, it proceeds layer-by-layer, similar to the standard forward-pass. Once a cached layer's activation values are available, the model passes the values to the corresponding cache model and obtains an early prediction with a confidence value per sample in the batch. For each sample, if the corresponding confidence value exceeds the specified threshold, we consider it a cache hit. Hence, we have the final prediction for the sample without passing it through the rest of the backbone. At this point, the prediction can be sent to the procedure awaiting the results (e.g.,\ an API, a socket connection, a callback). We shrink the batch by discarding the cache-hit items at each exit and proceed with a smaller batch to the next (or the final) exit.
\begin{algorithm} \caption{Cache-enabled model inference}\label{alg:prediction} \begin{algorithmic}[1] \Require Backbone \Comment{The original model} \Require CachedLayers \Comment{List of cached layers} \Require Layer \Comment{As part of Backbone, including the associated cache model and threshold} \Procedure{ForwardPass}{X, callback} \Comment{X: Input batch} \For{\texttt{Layer in Backbone.Layers}} \Comment{In order of presence}\footnotemark \State X $\gets$ Layer(X) \If{Layer in CachedLayers} \State Cache $\gets$ Layer.CacheModel \State T $\gets$ Cache.Threshold \State cachedPDs $\gets$ Cache(X) \State confidences $\gets$ max(cachedPDs, axis=1) \State callback(cachedPDs[confidences$\geq$ T]) \Comment{Resolve cache hits} \State X $\gets$ X[confidences$<$T] \Comment{Shrink the batch} \EndIf \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \footnotetext{The loop is to show that each cache model receives the cached layer's activation values immediately when they become available, before proceeding to the next layer in the base model.} So far in the method, we have only evaluated the cache models individually, but to gain the highest improvement, we must also evaluate their collaborative performance within the cache-enabled model. Once the cache-enabled model is deployed, each cache model affects the following cache models' hit rates by narrowing the set of samples for which they will infer. More specifically, even if a cache model shows a promising hit rate and accuracy in its individual evaluation, its performance in deployment can be affected by the cache hits made by the earlier cache models (those connected to shallower layers in the backbone). Therefore, we need to choose the optimum subset of cache models to infer the predictions with the minimum computation. A brute-force approach to finding the optimum subset would require evaluating the cache-enabled model with each subset of the cache models. Instead, we implement a more efficient method that avoids multiple executions of the cache-enabled model. First, for each cache model, we record its prediction and confidence value per sample in $MD_L^{Val}$. We also record two FLOPs counts per cache model: one is the cache model's own FLOPs count (C\textsubscript{1}), and the other is the fallback FLOPs count, which denotes the FLOPs in the remaining layers of the backbone (C\textsubscript{2}). For example, for the layer \texttt{L12} in Figure \ref{fig:procedure}, C\textsubscript{1} is the corresponding cache model's FLOPs count, and C\textsubscript{2} is the FLOPs count of the layers \texttt{L13} through \texttt{L16}. For each subset $S$ of the cache models, we process the lists of predictions recorded for each model in $S$ to generate the lists of samples it would actually receive when deployed along with the other cache models in $S$. The processing consists of keeping only the samples in each list for which there has been no cache hit by the previous cache models in the subset. Further, we divide each list into two parts according to each cache model's confidence threshold: one consisting of the cache hits, and the other consisting of the cache misses.
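A sketch of this replay-based processing follows: it regenerates each cache model's hit and miss lists for a candidate subset purely from the recorded per-sample confidences, without re-running any model. All names are illustrative.
\begin{verbatim}
def simulate_subset(subset, confidences, thresholds,
                    num_samples):
    """For cache models in `subset` (ordered shallow to
    deep), replay recorded confidences to produce the
    hit/miss lists each model would see when deployed
    together. `confidences[d][i]` is model d's recorded
    confidence for validation sample i."""
    remaining = set(range(num_samples))
    hits, misses = {}, {}
    for d in subset:  # depth order matters
        h = {i for i in remaining
             if confidences[d][i] >= thresholds[d]}
        hits[d] = h
        misses[d] = remaining - h
        remaining -= h  # hit samples exit early
    return hits, misses
\end{verbatim}
These per-model hit and miss lists are then plugged into the score defined next.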
Finally, we score each subset using the processed lists and the recorded values for each cache model in $S$ as follows: \begin{equation} \label{eqn:cache_score} K(S) = \sum_{\Delta\in S} \piped{H_\Delta} \times (C_{2, \Delta} - C_{1, \Delta}) - \piped{M_\Delta} \times C_{1, \Delta} \end{equation} where: \noindent \details{$K$}{The caching score for subset $S$} \details{$\Delta$}{A cache model in $S$} \details{$H_\Delta$}{The generated list of cache hits for $\Delta$} \details{$M_\Delta$}{The generated list of cache misses for $\Delta$} \details{$C_{1, \Delta}$}{FLOPs count recorded for $\Delta$} \details{$C_{2, \Delta}$}{Fallback FLOPs count recorded for $\Delta$} The score equation accounts for both the improvement a cache model provides through its cache hits within the subset and the overhead it produces for its cache misses. The final schemas after applying the method to MobileFaceNet, EfficientNet, ResNet18, and ResNet50 are illustrated in Figure \ref{fig:extended-schemas}. The figure demonstrates the chosen subsets and their associated cache models per backbone and dataset. \begin{figure}[!htbp] \centering \includegraphics[width=\textwidth]{extended-schemas.eps} \caption{Final schema of the cache models, for the experiments CIFAR10-Resnet18, CIFAR100-Resnet50, LFW-EfficientNet, and LFW-MobileFaceNet }\label{fig:extended-schemas} \end{figure} \subsection{Updates and maintenance}\label{subsec:maintenance} Similar to conventional caching, layer caching also requires recurring updates to the cache space to adapt to the trend in inference data. However, unlike conventional caching, we cannot update the cache models in real time. Therefore, to update the cache models using the extended set of collected inference samples, we retrain them and re-adjust their confidence thresholds. The retraining adapts the cache models to the trend in the incoming queries and maintains their cache accuracy. We consider two triggers for the updates: \begin{inparaenum}[I)] \item when the size of the recently collected data reaches a threshold (e.g.,\ 20\% of the collected samples are new), and \item when the backbone is modified or retrained. \end{inparaenum} However, users should adapt the recommended triggers to their requirements and budget. \section{Empirical Evaluation}\label{sec:empirical-evaluation} In this section, we explain our experiment's objective, research questions, the tool implementation, and the experiment design, including the backbones and datasets, evaluation metrics, and the environment configuration. \subsection{Objectives and research questions} The high-level objective of this experiment is to assess the ability of the automated layer caching mechanism to improve the compute requirements and inference time of DNN-based services. To address the above objective, we designed the following research questions (RQ): \begin{itemize} \item [RQ1] To what extent can the cache models accurately predict the backbone's output and the ground truth labels?\\ This RQ investigates the core idea of caching as a mechanism to estimate the final outputs earlier in the model. The assessments in this RQ consider the cache models' accuracy in predicting the backbone's output (cache accuracy) and in predicting the correct labels (GT accuracy). \item [RQ2] To what extent can cache-enabling improve compute requirements?\\ In this RQ, we are interested in how cache-enabling affects the models' computation requirements.
We use the FLOPs counts and memory usage as the metrics for the models' compute consumption. \item [RQ3] How much acceleration does cache-enabling provide on CPU/GPU?\\ In this RQ, we are interested in the actual amount of end-to-end speed-up that a cache-enabled model can achieve. We break this result down into CPU and GPU accelerations, since they handle different types of computation during the inference phase and thus may be affected differently. \end{itemize} \subsection{Tasks and datasets}\label{subsec:datasets} Among the diverse set of real-world classification tasks implemented by solutions utilizing DNN models, we have selected two representatives: face recognition and object classification. Both tasks are quite commonly addressed by DNNs and are often used in large-scale services with non-functional requirements such as high throughput (due to the nature of the service and the large volume of input data) and low response time. The face recognition models are originally trained on larger datasets such as MS-Celeb-1M \citep{Guo2016MSCeleb1MAD} and are usually tested with different --- and smaller --- datasets such as LFW \citep{Huang2008LabeledFI}, CPLFW \citep{Zheng2017CrossAgeLA}, RFW \citep{Wang2019RacialFI}, AgeDB30 \citep{Moschoglou2017AgeDBTF}, and MegaFace \citep{KemelmacherShlizerman2016TheMB} to test the models against specific challenges, such as age/ethnicity biases and recognizing mask-covered faces. We used the Labeled Faces in the Wild (LFW) dataset for face recognition, which contains 13,233 images of 5,749 people. We used the images of the 127 identities who have at least 11 images in the set, so we could split them for training, validation, and testing. We also used the CIFAR10 and CIFAR100 test sets \citep{Krizhevsky2009LearningML} for object classification, each containing 10,000 images distributed equally among 10 and 100 classes, respectively. As a reminder, we do not use the training data; rather, we only use the test sets to simulate incoming queries at run-time. Specifically, we use only the test splits of the CIFAR datasets. However, we use the whole LFW dataset, as it has not been used to train the face recognition models. Moreover, we do not use the labels in these test sets in the training and optimization process; rather, we only use them in the evaluation step to provide GT accuracy statistics. Each dataset mentioned above represents an inference workload for the models. Thus, we split each one into training, validation, and test partitions with 50\%, 20\%, and 30\% proportions, respectively. Additionally, we augmented the test sets using flips and rotations to improve the statistical significance of our testing measurements. \subsection{Backbones}\label{subsec:backbones} The proposed cache-enabling method is applicable to any deep classifier model. However, the results will vary for different models based on their complexity. Among the available face recognition models, we have chosen the well-known MobileFaceNet and EfficientNet models to evaluate the method, and we experiment with ResNet18 and ResNet50 for object classification. The object classification models are typical classifier models out of the box. However, the face recognition models are feature extractors that provide an embedding vector for each image based on the face/landmark features. They can still be used to classify a face-identity dataset.
Therefore, we attached a classifier block to those models and trained them (with the feature extractor layers frozen) to classify the images of the 127 identities with the highest number of images in the LFW dataset (above 10). It is important to note that, since the added classifier block is a part of the pre-trained model under study, we discarded the data portion used to train the classifier block to ensure we still adhere to the constraint of working with pre-trained models without access to the original training dataset. \subsection{Metrics and measurements}\label{sub:measurement} Our evaluation metrics for RQ1 are ground truth (GT) accuracy and cache accuracy. Cache accuracy measures how accurately a cache model predicts the backbone's outputs (regardless of their correctness). The GT accuracy applies to both the cache-enabled model and each individual cache model; however, the cache accuracy only applies to the cache models. In RQ2, we compare the original models and their cache-enabled versions in terms of the average FLOPs count required for inference and their memory usage. We only measure the resources used in inference. Specifically, we exclude the training-specific layers (e.g., Batch Normalization and Dropout) and computations (e.g., gradient operations) from the analysis. The FLOPs count takes the model architecture and the input size into account and estimates the computations required by the model to infer for the input \citep{Desislavov2021ComputeAE}. In other words, the fewer FLOPs used for inference, the more efficient the model is in terms of compute and energy consumption. On the other hand, we report two aspects of memory usage for the models. The first is the total space used to load the models into memory (i.e.,\ the model size). This metric is agnostic to the performance of the cache models and only considers the memory cost of loading them along with the backbone. In addition to the memory required for their weights, DNNs also allocate a sizeable amount of temporary memory for buffers (also referred to as tensors) that correspond to intermediate results produced during the evaluation of the DNN's layers \citep{Levental2022MemoryPF}. Therefore, our second metric is the live tensor memory allocation (LTMA) during inference. LTMA measures the total memory allocated to load, move, and transform the input tensor through the model's layers to form the output tensor while executing the model. In RQ3, we compare the average inference latency of the original model and its cache-enabled counterpart. Inference latency measures the time spent from passing the input to the model until it exits the model (through either an early exit or the final classifier in the backbone). Various factors affect the inference latency, including hardware-specific optimizations (e.g., asynchronous computation), the framework, and the model implementation. In our measurements, the framework and model implementations are fixed, as discussed in section \ref{subsec:implementation}. However, to account for other factors, we repeat each measurement 100 times and report the average inference latency recorded for each experiment. Further, to also account for asynchronous computation effects in the GPU inference latency, we repeated the experiments with different batch sizes. \subsection{Implementation}\label{subsec:implementation} We developed the caching tool using PyTorch; it is accessible through the GitHub repository\footnote{https://github.com/aminabedi/Automated-Layer-Caching}.
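For instance, the FLOPs and memory figures reported for RQ2 can be gathered with standard profilers; the following is a minimal sketch using \texttt{torch.profiler} (the tool itself also relies on DeepSpeed's FLOPs profiler, as noted below), with \texttt{model} and \texttt{example\_batch} as placeholders.
\begin{verbatim}
import torch
from torch.profiler import profile, ProfilerActivity

@torch.no_grad()
def profile_inference(model, example_batch):
    """Sketch of per-inference profiling: operator-level
    FLOPs estimates and tensor memory allocations."""
    model.eval()
    with profile(activities=[ProfilerActivity.CPU],
                 profile_memory=True,  # tensor allocations
                 with_flops=True) as prof:
        model(example_batch)
    print(prof.key_averages().table(
        sort_by="self_cpu_memory_usage", row_limit=10))
\end{verbatim}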
Figure \ref{fig:framework} shows the overall system design. The tool provides a NAS module, an optimizer module, and a deployment module. The NAS module provides the architectures to be used per cache model. The optimizer assigns the confidence thresholds, finds the best subset of the cache models, and provides evaluation reports. Lastly, the deployment module launches a web server with the cache-enabled model ready to serve queries. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{framework.eps} \caption{Caching system overall framework}\label{fig:framework} \end{figure} \subsubsection{NAS Module} Existing NAS tools typically define different search spaces according to different tasks, which constrains their applicability to certain input types and sizes. Using such tools with input constraints defeats our method's generalization and automation purpose, since the cache models' inputs can have any dimension and size. For instance, ProxylessNAS \citep{Cai2019ProxylessNASDN} specializes in optimizing the neural architecture performance for target hardware. However, it is only applicable to image classification tasks and requires certain input specifications (e.g.,\ $3\times H\times W$ images normalized using given values). Similarly, Auto-PyTorch \citep{zimmer2021auto} and Auto-Keras are only applicable to tabular, text, and image datasets. We chose NNI by Microsoft \citep{nni2021} as it does not constrain the model inputs in terms of type, size, or dimensions. NNI also provides an extensible search space definition with support for a variable number of layers and nested choices (e.g., choosing among different layer types, each with different layer-specific parameters). Given the backbone implementation, the dataset, and the search space, the module launches an NNI experiment per candidate layer to search for an optimum cache model for the layer. Each experiment launches a web GUI for the progress reports and the results. We aim for end-to-end automation in the tool. However, currently, the user still needs to manually export the architecture specifications when using the NAS module and convert them to a proper Python implementation (i.e., a PyTorch module implementing the architecture). The specifications are available to the user through the experiments' web GUI and also in the trial output files. This shortcoming is due to the NNI implementation, which does not currently provide access to the model objects within the experiments. We have created an enhancement suggestion on the NNI repository to support model object access (issue \#4910). \subsubsection{Optimizer and deployment modules} Given the backbone's implementation and the cache models, the optimizer evaluates the cache models, assigns their confidence thresholds, finds the best subset of the cache models and disables the rest, and finally reports the relevant performance metrics for the cache-enabled model and each cache model. We used DeepSpeed by Microsoft and the PyTorch profiler to profile the FLOPs counts, memory usage, and latency values for the cache models and the backbones. The user can use each module independently. Specifically, the user can skip the architecture search via the NAS module and provide the architectures manually to the optimizer, and the module trains them before proceeding to the evaluation. The tool also offers an extensive set of configurations.
More specifically, the user can configure the tool to use one device (e.g., the GPU) for the training processes and another (e.g., the CPU) for evaluation and deployment. The deployment module launches a web server and exposes a WebSocket API to the cache-enabled model. The query batches passed to the socket receive one response per item, as soon as the prediction is available through either of the (early or final) exits. \subsubsection{Backbone Implementation} We used the backbone implementations and weights provided by the FaceX-Zoo \citep{Wang2021FaceXZooAP} repository to conduct the experiments with the LFW dataset on the MobileFaceNet and EfficientNet models. For the experiments with CIFAR10 and CIFAR100, we used the implementations provided by torchvision \citep{Marcel2010TorchvisionTM} and the weights provided by \citep{huy_phan_2021_4431043} and \citep{weiaicunzai2020}. All the backbone implementations were modified to implement an interface that handles the interactions with the cache models, controls the exits (cache hits and misses), and provides the relevant reports and metrics. We documented the interface usage in the repository, so users can experiment with new backbones and datasets. We refer interested readers to a blog post on how to extract intermediate activations in PyTorch \citep{nanbhas2020forwardhook}, which introduces three methods to access the activation values. The forward-hook method in PyTorch is very convenient for read-only purposes. However, our method requires performing actions based on the activation values: specifically, cache look-up, batch shrinking, and avoiding further computation through the next layers. Therefore, we used the so-called ``hacker'' method to access the activation values and perform these actions, and provided the interface for easy replication on different backbones. \subsection{Environment setup} The hardware used for inference substantially affects the results due to hardware-specific optimizations such as computation parallelism. In our experiments, we used an ``Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz'' to measure on-CPU inference times and an ``NVIDIA GeForce RTX 3070'' GPU to measure on-GPU inference times. \subsection{Experiment results}\label{sec:evaluation} In this subsection, we evaluate the results of applying the method to the baseline backbones and discuss the answers to the RQs. \subsubsection{RQ1. To what extent can the cache models accurately predict the backbone's output and the ground truth labels?}\label{subsec:rq1} In this RQ, we are interested in the built cache models' performance in terms of their hit rate, GT accuracy, and cache accuracy. We break down the measurements into two parts. The first part covers the cache models' individual performance over the whole test set, without any other cache model involved. The second part covers their collaborative performance within the cache-enabled model. \subsubsection{Cache models' individual performance}\label{subsubsec:individual-performance} \begin{figure}[!htbp] \centering \includegraphics[width=\columnwidth]{Cifar100-Resnet50} \caption{Individual accuracy and hit rate of the cache models vs. confidence threshold per cache model in the CIFAR100-Resnet50 experiment}\label{fig:acc-conf-cifar100-resnet50} \end{figure} Figure \ref{fig:acc-conf-cifar100-resnet50} portrays each cache model's individual performance across confidence threshold values in the CIFAR100-Resnet50 experiment.
The figures demonstrating the same measurements for the other experiments are available in Appendix \ref{appendice1}. We make three key observations here. First, deeper cache models are more confident and accurate in their predictions. For instance, cache 1 in Figure \ref{fig:acc-conf-cifar100-resnet50} has 33.36\% GT accuracy and 35.74\% cache accuracy, while these metrics increase to 78.60\% and 95.38\% for cache 3, respectively. This observation agrees with the generally acknowledged feature extraction pattern in DNNs --- deeper layers convey more detailed information. The second key observation is the inverse correlation between the cache models' accuracy (both GT and cache) and their hit rates. This observation highlights the reliability of confidence thresholds in distinguishing the predictions more likely to be correct. For instance, cache 1 in Figure \ref{fig:acc-conf-cifar100-resnet50}, with a 20\% confidence threshold, yields a 35.24\% hit rate but also an 8.99\% drop in the final accuracy. However, with a 60\% confidence threshold, it yields a 4\% hit rate and does not reduce the final accuracy by more than 0.1\%. The third observation is that the cache accuracy is higher than the GT accuracy in all cases. This difference arises because we have trained the cache models to mimic the backbone only by observing its activation values in the intermediate layers and its outputs. Since we have not assumed access to the GT labels (which is the case for inference data collected at run-time) while training the cache models, they have learned to make correct predictions only through predicting the backbone's output, which might have been incorrect in the first place. On the other hand, we observed that the cache models predict the correct labels for a portion of the samples that the backbone misclassifies. For instance, for 0.92\% of the samples, cache 3 (in Figure \ref{fig:acc-conf-cifar100-resnet50}) correctly predicted the GT labels while the backbone failed ($\overline{B}C$ predictions). This shows the cache models' potential to partially compensate for their incorrect caches ($B\overline{C}$ predictions) by correcting the backbone's predictions for some samples ($\overline{B}C$). This indeed agrees with the overthinking concept in SDN (as discussed in section \ref{subsec:early-exit}), since for this set of samples, the cache models have been able to predict correctly in the shallower layers of the backbone. \subsubsection{Cache models' collaborative performance} \begin{table}[!htbp] \begin{center} \begin{minipage}{\columnwidth} \caption{Cache models' collaborative performance in terms of hit rate (HR), cache accuracy (A\textsubscript{cache}), GT accuracy (A\textsubscript{GT}), and their effect on the final accuracy ($\downarrow$A\textsubscript{effect}).
LFW: Labeled Faces in the Wild, MFN: MobileFaceNet, EFN: EfficientNet}\label{tab:collaboration}% \begin{tabular}{@{}ccccc|cccc@{}} \toprule \multirow{2}{*}{Data}&\multirow{2}{*}{Model}&\multicolumn{2}{c}{Final accuracy}&\multirow{2}{*}{Exit\#} & \multirow{2}{*}{HR} & \multirow{2}{*}{A\textsubscript{cache}} & \multirow{2}{*}{A\textsubscript{GT}} & \multirow{2}{*}{$\downarrow$ A\textsubscript{effect}}\\ & & Base & Cache-enabled & & & & & \\ \midrule \multirow{8}{*}{\rotatebox[origin=c]{90}{CIFAR10}} & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet18}} & \multirow{4}{*}{88.71\%} & \multirow{4}{*}{86.49\%} & 1 & 67.21\% & 92.29\% & 88.91\% & 1.31\% \\ & & & & 2 & 10.33\% & 89.76\% & 76.63\% & 0.56\% \\ & & & & 3 & 11.24\% & 85.71\% & 51.43\% & 0.25\%\\ & & & & 4 & 8.32\% & 91.37\% & 35.71\% & 0.1\%\\ & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet50}} & \multirow{4}{*}{87.92\%} & \multirow{4}{*}{85.88\%} & 1 & 61.41\% & 89.12\% & 86.19\% & 1.12\%\\ & & & & 2 & 15.73\% & 93.01\% & 77.84\% & 0.58\%\\ & & & & 3 & 10.29\% & 82.22\% & 53.33\% & 0.3\%\\ & & & & 4 & 6.1\% & 97.47\% & 42.65\% & 0.04\%\\ \hline \multirow{8}{*}{\rotatebox[origin=c]{90}{CIFAR100}} & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet18}} & \multirow{4}{*}{75.92\%} & \multirow{4}{*}{74.47\%} & 1 & 11.96\% & 99.29\% & 82.11\% & 0.94\% \\ & & & & 2 & 58.26\% & 99.62\% & 85.41\% & 0.1\% \\ & & & & 3 & 7.26\% & 93.81\% & 59.29\% & 0.3\%\\ & & & & 4 & 5.36\% & 55.56\% & 38.89\% & 0.11\%\\ & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet50}} & \multirow{4}{*}{78.98\%} & \multirow{4}{*}{77.04\%} & 1 & 11.92\% & 76.34\% & 80.2\% & 1.32\%\\ & & & & 2 & 61.98\% & 98.56\% & 84.55\% & 0.34\%\\ & & & & 3 & 11.5\% & 97.85\% & 63.69\% & 0.27\%\\ & & & & 4 & 7.38\% & 73.68\% & 52.63\% & 0.1\%\\ \hline \multirow{5}{*}{\rotatebox[origin=c]{90}{LFW}} & \multirow{3}{*}{\rotatebox[origin=c]{90}{MFN}} & \multirow{3}{*}{97.78\%} & \multirow{3}{*}{96.91\%} & 1 & 37.35\% & 98.63\% & 97.88\% & 0.51\% \\ & & & & 2 & 41.02\% & 99.71\% & 99.71\% & 0\% \\ & & & & 3 & 55.95\% & 93.44\% & 96.18\% & 0.24\% \\ & \multirow{2}{*}{\rotatebox[origin=c]{90}{EFN}} & \multirow{2}{*}{97.29\%} & \multirow{2}{*}{95.35\%} & 1 & 63.73\% & 96.82\% & 96.24\% & 1.67\%\\ & & & & 2 & 14.52\% & 99.12\% & 98.76\% & 0.02\%\\ \hline \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{tab:collaboration} describes the cache models' collaborative performance within the cache-enabled model per experiment. In the table, we also report how each cache model's cache hits have affected the final accuracy. Here, we observe that when the cache models are evaluated on the subset of samples missed by the previous cache models (the relatively more complex ones), the measured hit rate and GT accuracy are substantially lower compared to the evaluation on the whole dataset. This is indeed due to the fact that the simpler samples (less detailed and easier to classify) are resolved earlier in the model. More specifically, the hit rate decreases since the cache models are less confident in their predictions for the more complex samples, and the GT accuracy also decreases since the backbone is less accurate for such samples. However, we observe that the cache models still have high cache accuracy with low impact on the overall accuracy. This observation shows how the confidence-based caching method has effectively enabled the cache models to provide early predictions and keep the overall accuracy drop within the given tolerance.
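As a reference for how such per-exit numbers can be obtained, the following is a minimal sketch of computing the three metrics on the samples reaching one exit, given the per-sample confidences and predictions recorded there. The array names are hypothetical and this is not our exact evaluation code:
\begin{verbatim}
import numpy as np

def exit_metrics(conf, cache_pred, backbone_pred, labels, threshold):
    # Hit rate, cache accuracy (agreement with the backbone on hits),
    # and GT accuracy (agreement with the labels on hits).
    hits = conf >= threshold
    hr = hits.mean()
    a_cache = (cache_pred[hits] == backbone_pred[hits]).mean()
    a_gt = (cache_pred[hits] == labels[hits]).mean()
    return hr, a_cache, a_gt
\end{verbatim}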
\subsubsection{RQ2. To what extent can cache-enabling improve compute requirements?}\label{subsec:rq2} In this RQ, we showcase the amount of computation caching can save in terms of FLOPs count and analyze the memory usage of the models. \begin{table}[!htbp] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Original and cache-enabled models' FLOPs (M: mega, $10^6$)}\label{table:flops} \begin{tabular}{@{}ll|ccc@{}} \toprule \multirow{2}{*}{Dataset (input size)} & \multirow{2}{*}{Model} & \multicolumn{2}{c}{FLOPs} & \multirow{2}{*}{$\downarrow$ Ratio} \\ & & Original & Cache-enabled \\ \midrule \multirow{2}{*}{CIFAR10($3\times32\times32$)} & Resnet18 & 765M & 414M & 45.88\%\\ & Resnet50 & 1303M & 601M & 53.87\%\\ \hline \multirow{2}{*}{CIFAR100($3\times32\times32$)} & Resnet18 & 766M & 374M & 51.17\%\\ & Resnet50 & 1304M & 547M & 58.05\%\\ \hline \multirow{2}{*}{LFW($3\times112\times112$)} & MobileFaceNet & 474M & 296M & 37.55\% \\ & EfficientNet & 272M & 182M & 33.08\% \\ \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{table:flops} reports the average number of FLOPs computed per inference sample. Here we observe that shrinking the batches proportionally decreases the FLOPs count required for inference. \begin{table}[!htbp] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Original and cache-enabled models' memory usage}\label{table:memory} \begin{tabular}{@{}ll|cccc@{}} \toprule \multirow{3}{*}{Dataset (input size)} & \multirow{3}{*}{Model} & \multicolumn{2}{c}{Original} & \multicolumn{2}{c}{Cache-enabled} \\ & & Model Size & LTMA & Model Size & LTMA \\ \midrule \multirow{2}{*}{CIFAR10($3\times32\times32$)} & Resnet18 & 43MB & 102MB & 97MB & 88MB\\ & Resnet50 & 91MB & 235MB & 243MB & 201MB\\ \hline \multirow{2}{*}{CIFAR100($3\times32\times32$)} & Resnet18 & 43MB & 104MB & 383MB & 93MB\\ & Resnet50 & 91MB & 235MB & 552MB & 189MB\\ \hline \multirow{2}{*}{LFW($3\times112\times112$)} & MobileFaceNet & 286MB & 567MB & 350MB & 515MB \\ & EfficientNet & 147MB & 371MB & 297MB & 349MB \\ \end{tabular} \end{minipage} \end{center} \end{table} Moreover, Table \ref{table:memory} shows the memory used to load the models (i.e.,\ the model size) and the total LTMA during inference on the test set. As expected, the cache-enabled models' size is larger than the original models' in all cases, since they include both the backbone and the additional cache models. However, the decreased LTMA in all cases shows the reduced amount of memory allocation during inference. Generally, lower LTMA indicates smaller tensor dimensions (e.g.,\ batch size, input and operators' dimensions) \citep{Ren2021SentinelET}. However, in our case, since we change neither of these dimensions, the lower LTMA is due to avoiding the computations in the remaining layers after cache hits, which would require further memory allocations. Although the FLOPs count and memory usage indicate the model's inference compute requirements, the decreased FLOPs and LTMA do not necessarily lead to a proportional reduction in the models' inference latency, which we further investigate in the next RQ. \subsubsection{RQ3. How much acceleration does cache-enabling provide on CPU/GPU?}\label{subsec:rq3} In this RQ, we investigate the end-to-end improvement that cache-enabling offers. The results of this measurement clearly depend on multiple deployment factors, such as the underlying hardware and framework and, as we discuss later in the section, their asynchronous computation capabilities.
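As a rough illustration of the measurement itself, the following sketch times per-batch inference, synchronizing on GPU so that asynchronous kernel launches are fully counted. It assumes a generic \texttt{model} and a list of pre-batched \texttt{inputs}, and is not our exact benchmarking harness:
\begin{verbatim}
import time
import torch

def mean_latency_ms(model, inputs, device):
    # Average per-batch inference latency in milliseconds.
    model.to(device).eval()
    total = 0.0
    with torch.no_grad():
        for x in inputs:
            x = x.to(device)
            if device.type == "cuda":
                torch.cuda.synchronize()
            start = time.perf_counter()
            model(x)
            if device.type == "cuda":
                torch.cuda.synchronize()
            total += time.perf_counter() - start
    return 1000.0 * total / len(inputs)
\end{verbatim}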
\begin{table}[h] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{End-to-end evaluation of the cache-enabled models' improvement in average inference latency (batch size = 32). MFN: MobileFaceNet, EFN: EfficientNet}\label{table:latencies}% \begin{tabular}{@{}ll|cccccc@{}} \toprule \multirow{2}{*}{Dataset} &\multirow{2}{*}{Model} & \multicolumn{2}{c}{Original latency} & \multicolumn{2}{c}{Cache-enabled latency} & \multicolumn{2}{c}{$\downarrow$ Ratio}\\ & & CPU & GPU & CPU & GPU & CPU & GPU \\ \midrule \multirow{2}{*}{CIFAR10} &Resnet18 & 13.4 ms& 1.08 ms& 10.11 ms& 0.98 ms& 24.55\%& 10.2\% \\ & Resnet50 & 18.73 ms & 1.81 ms & 14.62 ms & 1.51 ms & 31.08\% & 16.57\% \\ \hline \multirow{2}{*}{CIFAR100} & Resnet18 & 14.23 ms& 1.39 ms& 9.39 ms& 1.25 ms& 34.01\%& 10.08\%\\ & Resnet50 & 19.59 ms & 2.05 ms & 9.02 ms & 1.84 ms & \textbf{46.08\%} & 16.75\% \\ \hline \multirow{2}{*}{LFW} & MFN & 25.34 ms & 8.22 ms & 16.91 ms & 7.30 ms & 33.23\% & 11.19\% \\ & EFN & 39.41 ms & 17.63 ms & 27.98 ms & 14.38 ms & 29.01\% & \textbf{18.44\%} \\ \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{table:latencies} shows the average latency of the base models on CPU and GPU vs. their cache-enabled counterparts, evaluated on the test set. The first key observation is the consistent improvement on CPU, which is due to the low parallelism of the CPU architecture. Essentially, the computation volume on CPU is proportional to the number of samples. Therefore, when a sample takes an early exit, the remaining computation required to finish the batch proportionally decreases. The second observation is the relatively lower latency improvement on GPU. This observation shows that shrinking a batch does not proportionally reduce the inference time on GPU, which is due to the high parallelism of the hardware. Shrinking the batch on GPU introduces a certain overhead, since it interrupts the on-chip parallelism and hardware optimizations. This interruption forces the hardware to re-plan its computations, which can be time-consuming. Thus, batch-shrinking improvements can be insignificant on GPU. \begin{table}[h] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Inference latency improvement on GPU vs. batch size for Resnet18 and Resnet50 trained on CIFAR100}\label{table:gpu-batch}% \begin{tabular}{@{}lc|ccc@{}} \toprule Model & Batch Size & Original Latency & Cache-enabled Latency & $\downarrow$ Ratio\\ \midrule \multirow{4}{*}{Resnet18} & 16 & 1.34 ms & 1.18 ms & 11.83\% \\ & 32 & 1.39 ms & 1.25 ms & 10.08\% \\ & 64 & 1.43 ms & 1.77 ms & -24.28\% \\ & 128 & 1.61 ms & 2.11 ms & -31.05\% \\ \hline \multirow{4}{*}{Resnet50} & 16 & 1.98 ms & 1.71 ms & 13.68\% \\ & 32 & 2.05 ms & 1.84 ms & 16.75\% \\ & 64 & 2.19 ms & 1.98 ms & 9.21\% \\ & 128 & 2.7 ms & 3.22 ms & -19.43\% \\ \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{table:gpu-batch} further demonstrates how the batch size affects the improvement provided by caching. The key observation here is that increasing the batch size can negate the caching effect on the inference latency since, as discussed, fewer batches are then fully resolved through the cache models before reaching the last layers. In conclusion, the latency improvement highly depends on the hardware used for inference and must be analyzed specifically per hardware environment and per computation parameters such as batch size.
However, the method can still be useful when the model is not performing batch inference (batch size = 1). One can also disable batch shrinking and use the tool to obtain a best-prediction-so-far during the forward pass. Doing so will generate multiple predictions per input sample, one per exit (early and final). \subsection{Limitations and future directions}\label{subsec:discussion} The first limitation of this study is that the proposed method is limited to classification models, since it would be more complicated for the cache models to predict a regression model's output due to its continuous values. This limitation is strongly tied to the effectiveness of knowledge distillation in the case of regression models. The method also does not take the internal state of the backbone (if any) into account, such as the hidden states in recurrent neural networks. Therefore, the method's effectiveness for such models still needs to be assessed. Moreover, practitioners should take the underlying hardware and the backbone structure into account, as they directly affect the final performance. On this note, as shown in Section \ref{subsec:rq3}, different models provide different inference latencies in the first place; therefore, choosing the right model for the task comes first, and caching can then help improve its performance. \section{Conclusion}\label{sec:conclusion} In this paper, we have shown that our automated caching approach is able to extend a pre-trained classification DNN to a cache-enabled version using a relatively small and unlabelled dataset. The training dataset required for the cache models is collected simply by recording the input items and their corresponding backbone outputs at inference time. We have also shown that the caching method can introduce significant improvements in the model's compute requirements and inference latency, especially when the inference is performed on CPU. We discussed the parameters, design choices, and procedure of cache-enabling a pre-trained off-the-shelf model, as well as the required updates and maintenance. In conclusion, while traditional caching might not be beneficial for DNN models due to the diversity, size, and dimensions of the inputs, caching the features in the hidden layers of DNNs using the cache models can achieve significant improvements in the model's inference computational complexity and latency. As shown in Sections \ref{subsec:rq2} and \ref{subsec:rq3}, caching reduces the average inference FLOPs by up to 58\% and the latency by up to 46.08\% on CPU and 18.44\% on GPU. \backmatter \bmhead{Acknowledgments} The work of Pooyan Jamshidi has been partially supported by NSF (Awards 2007202, 2107463, and 2233873) and NASA (Award 80NSSC20K1720). \section*{Declarations} \textbf{Conflict of interest} There are no conflicts of interest. \begin{appendices} \section{Cache models' individual performance for all experiments}\label{appendice1} The following figures demonstrate the hit rate, GT accuracy, and cache accuracy of each cache model vs. the confidence threshold, per experiment dataset and backbone.
\begin{figure}[p] \centering \includegraphics[width=\columnwidth]{Cifar10-Resnet18} \caption{Experiment: CIFAR10-Resnet18} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{Cifar10-Resnet50} \caption{Experiment: CIFAR10-Resnet50} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{Cifar100-Resnet18} \caption{Experiment: CIFAR100-Resnet18} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{LFW-MobileFaceNet.eps} \caption{Experiment: LFW-MobileFaceNet} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{LFW-EfficientNet.eps} \caption{Experiment: LFW-EfficientNet} \end{figure} \end{appendices}
\section{Introduction} Pre-trained language models (PrLMs), including ELECTRA \cite{clark2020electra}, RoBERTa \cite{liu2019roberta}, and BERT \cite{devlin:bert}, have demonstrated strong performance in downstream tasks \cite{wang2018glue}. Leveraging self-supervised training on large text corpora, these models are able to provide contextualized representations in an efficient way. For instance, BERT uses Masked Language Modeling and Next Sentence Prediction as pre-training objectives and is trained on a corpus of 3.3 billion words. In order to be adaptive to a wider range of applications, PrLMs usually generate sub-token-level representations (words or subwords) as basic linguistic units. For downstream tasks such as natural language understanding (NLU), span-level representations, e.g., phrases and named entities, are also important. Previous works show that by changing pre-training objectives, PrLMs' ability to capture span-level information can be strengthened to some extent. For example, based on BERT, SpanBERT \cite{joshi2019spanbert} focuses on masking and predicting text spans, instead of sub-token-level information, for pre-training. Entity-level masking is used as a pre-training strategy by the ERNIE models \cite{sun2019ernie,zhang2019ernie}. These methods prove the introduction of span-level information in pre-training to be effective for different NLU tasks. However, the requirements for span-level information vary considerably across NLU tasks. The methods proposed by previous works for introducing span-level information in the pre-training phase do not fit all these requirements and cannot improve the performance on all NLU tasks. For instance, the ERNIE models \cite{sun2019ernie} perform remarkably well on Relation Classification, while underperforming BERT on language inference tasks such as MNLI \cite{nangia2017repeval}. Therefore, it is imperative to develop a strategy that incorporates span-level information into PrLMs in a more flexible and universally adaptive way. This paper proposes a novel approach, Span Fine-tuning (SF), that leverages span-level information in the fine-tuning phase and therefore formulates a task-specific strategy. Compared to existing works, our approach requires less time and fewer computing resources, and is more adaptive to various NLU tasks. In order to maximize the value and contribution of span-level information, in addition to the sub-token-level representations generated by BERT, Span Fine-tuning also applies a computationally motivated segmentation to further improve overall performance. Although various techniques, such as dependency parsing \cite{zhou2019limit} and semantic role labeling (SRL) \cite{zhang2019semantics}, have been used as auxiliary tools for sentence segmentation, these methods demand extra parsing procedures, which increases complexity in actual practice. Span Fine-tuning first leverages a pre-sampled $n$-gram dictionary to segment input sentences into spans. Then, the sub-token-level representations within the same span are combined to generate a span-level representation. Finally, the span-level representations are merged with the sub-token-level representations into a sentence-level representation. In this way, the sentence-level representation is able to contain and maximize the utilization of both sub-token-level and span-level information.
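To illustrate the segmentation step, the following is a minimal sketch of greedy longest-match $n$-gram segmentation from the head of a sentence. It assumes a whitespace-tokenized sentence and a set-like dictionary; the names are hypothetical and this is not the exact implementation used in our experiments:
\begin{verbatim}
def segment(tokens, ngram_dict, max_n=5):
    # Greedily match the longest dictionary n-gram at the current
    # position; unmatched tokens remain single-token spans.
    spans, i = [], 0
    while i < len(tokens):
        for n in range(min(max_n, len(tokens) - i), 0, -1):
            if n == 1 or " ".join(tokens[i:i + n]) in ngram_dict:
                spans.append(tokens[i:i + n])
                i += n
                break
    return spans
\end{verbatim}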
Experiments are conducted on the GLUE benchmark \cite{wang2018glue}, which includes many NLU tasks, such as text classification, semantic similarity, and natural language inference. Empirical results demonstrate that Span Fine-tuning is able to further improve the performance of different PrLMs, including BERT \cite{devlin:bert}, RoBERTa \cite{liu2019roberta} and SpanBERT \cite{joshi2019spanbert}. The results of the experiments with SpanBERT indicate that Span Fine-tuning leverages span-level information differently from PrLMs pre-trained with span-level information, which shows the distinctiveness of our approach. Ablation studies and analysis also verify that Span Fine-tuning is essential for the further performance improvement of PrLMs. \section{Related Work} \subsection{Pre-trained language models} Learning reliable and broadly applicable word representations has been an ongoing focus for the natural language processing community. Language modeling objectives have proved effective for generating distributed representations \cite{mnih2009scalable}. By generating deep contextualized word representations, ELMo \cite{Peters2018ELMO} advances the state of the art for several NLU tasks. Leveraging the Transformer \cite{vaswani2017attention}, BERT \cite{devlin:bert} further advances the field of transfer learning. Recent PrLMs are established as various extensions of BERT, including using a GAN-style architecture \cite{clark2020electra}, applying a parameter-sharing strategy \cite{lan2019albert}, and improving the pre-training procedure \cite{liu2019roberta}. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{figures/LaCo.png} \caption{\label{fig:framework} Overview of the framework of our proposed method} \end{figure*} \subsection{Span-level pre-training methods} Previous works show that the introduction of span-level information in the pre-training phase can improve PrLMs' performance. In the first place, BERT leverages the prediction of single masked tokens as one of its pre-training objectives. Due to the use of WordPiece embeddings \cite{wu2016google}, BERT segments sentences into sub-word-level tokens, so that the masked tokens are at the sub-token level, e.g., "\#\#ing". \cite{devlin:bert} shows that masking the whole word, rather than only single tokens, can further enhance the performance of BERT. Later, \cite{sun2019ernie,zhang2019ernie} proved that the masking of entities is also helpful for PrLMs. By randomly masking adjoining spans in pre-training, SpanBERT \cite{joshi2019spanbert} can generate better representations for given texts. AMBERT \cite{zhang2020ambert} achieves better performance than its precursors on NLU tasks by incorporating both sub-token-level and span-level tokenization in pre-training. The above studies all focus on introducing span-level information in pre-training. To the best of our knowledge, the introduction of span-level information in fine-tuning is still unexplored, which makes our approach a valuable attempt. \subsection{Integration of fine-grained representation} Different kinds of downstream tasks require sentence-level representations, such as natural language inference \citep{Bowman2015A,nangia2017repeval}, semantic textual similarity \citep{cer2017semeval} and sentiment classification \citep{socher2013recursive}.
Besides directly pre-training representations of coarser granularity \citep{le2014distributed,logeswaran2018efficient}, many methods have been explored to obtain a task-specific sentence-level representation by integrating fine-grained token-level representations \citep{conneau2017supervised}. \citet{kim2014convolutional} shows that by applying a convolutional neural network (CNN) on top of pre-trained word vectors, we can obtain a sentence-level representation that is well adapted to classification tasks. \citet{lin2017structured} leverage a self-attentive module over the hidden states of a BiLSTM to generate sentence-level representations. \citet{zhang2019semantics} use a CNN layer to extract word-level representations from sub-word representations and combine them with word-level semantic role representations. Inspired by these methods, after a series of preliminary attempts, we choose a hierarchical CNN architecture to recombine fine-grained representations into coarse-grained ones. \section{Methodology} Figure \ref{fig:framework} demonstrates the overall framework of Span Fine-tuning, which essentially uses BERT as a foundation and incorporates segmentation as an auxiliary tool. The figure does not exhaustively depict the details of BERT, given that the model is relatively popular and ubiquitous; further information on BERT is available in \cite{devlin:bert}. In Span Fine-tuning, an input sentence is divided into sub-word-level tokens and then sent to BERT to generate sub-token-level representations. At the same time, the input is segmented into spans based on $n$-gram statistics. By combining the segmentation information with the sub-token-level representations generated by BERT, we divide the representation into several spans. Then, the spans are sent through a hierarchical CNN module to obtain a span-level information enhanced representation. Finally, the sub-token-level representation of the \texttt{[CLS]} token generated by BERT and the span-level information enhanced representation are concatenated to form a final representation, which maximizes the value of both sub-token-level and span-level information for NLU tasks. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{figures/segmentation.pdf} \caption{\label{fig:segmentation} Segmentation Examples } \end{figure*} \subsection{Sentence Segmentation} Semantic role labeling (SRL) \cite{zhang2019semantics} and dependency parsing \cite{zhou2019limit} have been used as auxiliary tools for segmentation in previous works. Nonetheless, these techniques demand additional parsing procedures and therefore increase complexity in real applications. In order to obtain a simpler and more convenient segmentation, based on frequency, we select meaningful $n$-grams appearing in the wikitext-103 dataset\footnote{The PMI method has also been tried for adjusting our dictionary, but the results were not competitive.} to form a pre-sampled dictionary. We use the dictionary to match $n$-grams from the head of each input sentence. $n$-grams with greater lengths are prioritized, while unmatched tokens remain the same. In this way, we obtain a specific segmentation of the input sentence. Figure \ref{fig:segmentation} demonstrates some examples of sentence segmentation from the GLUE dataset. \subsection{Sentence Encoder Architecture} An input sentence $X=\{x_1, \dots, x_n\}$ is given with a length $n$.
The sentence is first divided into sub-word tokens (with a special token \texttt{[CLS]} at the beginning) and converted to sub-token-level representations $E=\{e_1, \dots, e_m\}$ (usually $m$ is larger than $n$) according to the embeddings proposed by \cite{wu2016google}. Then, the transformer encoder (BERT) captures the contextual information for each token by self-attention and generates a sequence of sub-token-level contextual embeddings $T=\{t_1, \dots, t_m\}$, in which $t_1$ is the contextual representation of the special token \texttt{[CLS]}. Based on the segmentation generated from the $n$-gram statistics, the sub-token-level contextual representations are combined into several spans $\{C_1, \dots, C_r\}$, with $r$ as a hyperparameter indicating the max number of spans over all processed sentences. Each $C_i$ contains several contextual sub-token-level representations extracted from $T$, denoted as $\{t^i_1, t^i_2, \dots, t^i_l\}$, where $l$ is another hyperparameter representing the max number of tokens over all spans. A CNN-Maxpooling module is applied to each $C_i$ to get a span-level representation $c_i$: \begin{equation} \begin{split} c^i_j &= ReLU(W_1\left[t^i_j, t^i_{j+1}, \dots,t^i_{j+k-1}\right] + b_1),\\ c_i &= MaxPooling(c^i_1, \dots, c^i_l), \end{split} \end{equation} where $W_1$ and $b_1$ are trainable parameters and $k$ is the kernel size. Based on the span-level representations $\{c_1, \dots, c_r\}$, another CNN-Maxpooling module is applied to obtain a sentence-level representation $s$ with enhanced span-level information: \begin{equation} \begin{split} s'_i &= ReLU(W_2\left[c_i, c_{i+1}, \dots,c_{i+k-1}\right] + b_2),\\ s &= MaxPooling(s'_1, \dots, s'_r), \end{split} \end{equation} where $W_2$ and $b_2$ are trainable parameters. Finally, we concatenate $s$ with the contextual sub-token-level representation $t_1$ of the special token \texttt{[CLS]} provided by BERT, and generate a sentence-level representation $s^*$ that maximizes the value of both sub-token-level and span-level information for NLU tasks: $s^* = s \diamond t_1$.
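The following PyTorch sketch mirrors this two-stage encoder. It is an illustration of the two equations above rather than our exact implementation; the padding convention and all names are our own assumptions:
\begin{verbatim}
import torch.nn as nn

class HierarchicalCNN(nn.Module):
    # Tokens -> span vectors (first equation) -> sentence vector
    # (second equation), each via Conv1d + ReLU + max pooling.
    def __init__(self, hidden, kernel=3):
        super().__init__()
        self.token_conv = nn.Conv1d(hidden, hidden, kernel,
                                    padding=kernel // 2)
        self.span_conv = nn.Conv1d(hidden, hidden, kernel,
                                   padding=kernel // 2)
        self.relu = nn.ReLU()

    def forward(self, spans):  # spans: (batch, r, l, hidden)
        b, r, l, h = spans.shape
        x = spans.view(b * r, l, h).transpose(1, 2)
        c = self.relu(self.token_conv(x)).max(dim=2).values
        c = c.view(b, r, h).transpose(1, 2)
        s = self.relu(self.span_conv(c)).max(dim=2).values
        return s  # concatenated with t_1 downstream
\end{verbatim}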
\begin{table*} \centering \setlength{\tabcolsep}{2pt} { \begin{tabular}{lcccccccccc} \hline \hline Method& CoLA & SST-2 & MNLI & QNLI & RTE & MRPC & QQP & STS-B & Avg & Gain\\ & (mc) & (acc) & m/mm(acc) & (acc) & (acc) & (F1) & (F1) & (pc) & - & -\\ \hline \multicolumn{10}{c}{\emph{In literature}} \\ BERT$_\text{BASE}$ & 52.1 & 93.5 & 84.6/83.4 & - & 66.4 & 88.9 & 71.2 & 87.1 & 78.3\\ BERT$_\text{LARGE}$ &60.5 & \textbf{94.9} & 86.7/85.9 & 92.7 & 70.1 & 89.3 & 72.1 & 87.6 & 80.5\\ \hdashline BERT-1seq\footnotemark[3]& 63.5 & 94.8 & 88.0/87.4 & 93.0 & 72.1 & 91.2 & 72.1 & 89.0 & 83.5 &\multirow{2}{*}{1.0}\\ SpanBERT & \textbf{64.3} & 94.8 & \textbf{88.1}/\textbf{87.7} & \textbf{94.3} & 79.0 & 90.9 & 71.9 & \textbf{89.9} & \textbf{84.5}\\ \hline \multicolumn{10}{c}{\emph{Our implementation}} \\ BERT$_\text{BASE}$ & 51.4 & 92.1 & 84.4/83.5 & 90.3 & 67.1 & 88.3 & 71.3 & 85.1 & 79.3 & \multirow{2}{*}{\textbf{1.1}}\\ BERT$_\text{BASE}$ + SF & 55.1 & 93.6 & 84.8/84.3 & 90.6 & 69.6 & 88.7 & 71.9 & 86.5 & 80.4\\ \hdashline BERT$_\text{WWM}$ & 61.1 &93.6 &87.1/86.5 & 93.9 & 77.3 & 90.0 & 71.9 & 88.1 & 83.3 & \multirow{2}{*}{\textbf{1.1}}\\ BERT$_\text{WWM}$ + SF & 62.9 &94.1 &87.6/87.0 & \textbf{94.3} & \textbf{81.4} & \textbf{91.1} & \textbf{72.4} & 89.1 & 84.4\\ \hline \hline \end{tabular} } \caption{\label{tab:glue} Test set performance on the GLUE benchmark. All the results are obtained from \cite{liu2019multi} and \cite{radford2018improving}. For a simple demonstration, the problematic WNLI set is excluded, and we do not show the accuracy of the datasets that have F1 scores. \emph{mc} and \emph{pc} denote the Matthews correlation and Pearson correlation, respectively. } \end{table*} \subsection{Tasks and Datasets} To evaluate Span Fine-tuning, experiments are conducted on nine NLU benchmark datasets, covering text classification, natural language inference, and semantic similarity. Eight of them are available from the GLUE benchmark \cite{wang2018glue}; the remaining one is SNLI \cite{Bowman2015A}, a widely accepted natural language inference dataset. \subsection{Pre-trained Language Model} We leverage the PyTorch implementations of BERT \cite{devlin:bert}, RoBERTa \cite{liu2019roberta} and SpanBERT \cite{joshi2019spanbert} based on HuggingFace's codebase\footnote{\url{https://github.com/huggingface}} \cite{Wolf2019HuggingFacesTS} as our PrLMs and baselines. \section{Experiments} \subsection{Set Up} We select all the $n$-grams with $n \le 5$ that occur more than ten times in the wikitext-103 dataset to form a dictionary. The pre-sampled dictionary, containing more than 400 thousand $n$-grams, is used to segment input sentences. During segmentation, two hyperparameters are involved: $r$, representing the largest number of spans in a sentence, and $l$, indicating the largest number of tokens included in a span. In order to maintain uniform feature dimensions across input sentences, padding and truncation are employed. We set $r$ to 16 and, depending on the NLU task, choose $l$ in \{64, 128\}. The fine-tuning procedure is the same as BERT's. Adam is used as the optimizer. The initial learning rate is in \{1e-5, 2e-5, 3e-5\}, the warm-up rate is 0.1, and the L2 weight decay is 0.01. The batch size is set in \{16, 32, 48\}. The maximum number of epochs is set in \{2,3,4,5\} depending on the NLU task. Input sentences are divided into subtokens and converted to WordPiece embeddings, with a maximum length in \{128, 256\}. The output size of the CNN layer is the same as the hidden size of the PrLM, and the kernel size is set to 3. \subsection{Results with BERT as PrLM} Two released BERT models \cite{devlin:bert}, BERT Large Whole Word Masking and BERT Base, are first used as pre-trained encoders and baselines for Span Fine-tuning. Compared with BERT Large, BERT Large Whole Word Masking reaches a better performance, since it uses whole-word masking in the pre-training phase; we therefore select BERT Large Whole Word Masking as a stronger baseline. The results indicate that Span Fine-tuning can maximize the contribution of span-level information, even when compared to a stronger baseline. Table \ref{tab:glue} exhibits the results on the GLUE datasets, showing that Span Fine-tuning can significantly improve the performance of PrLMs. Since our approach leverages BERT as a foundation and undergoes the same evaluation procedure, it is evident that the performance gain is fully attributable to the newly introduced Span Fine-tuning. In order to test the statistical significance of the results, we follow the procedure of \cite{zhang2020retrospective}. We use McNemar's test, which is designed for paired nominal observations and is appropriate for binary classification tasks. The p-value is defined as the probability of obtaining a result equal to or more extreme than what was observed under the null hypothesis. The smaller the p-value, the higher the significance. A commonly used level of reliability is 95\%, written as p = 0.05.
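For instance, with \texttt{statsmodels} the test can be run on the $2\times 2$ contingency table of the two models' paired correct/incorrect decisions; the counts below are purely illustrative, not our data:
\begin{verbatim}
from statsmodels.stats.contingency_tables import mcnemar

# Rows: baseline correct/incorrect; columns: +SF correct/incorrect.
table = [[8500, 120],
         [180, 1200]]
result = mcnemar(table, exact=False, correction=True)
print(result.pvalue)  # significant at the 95% level if < 0.05
\end{verbatim}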
As shown in Table \ref{tab:pvalue}, compared with the baseline, our method passes the significance test for all the binary classification tasks of the GLUE benchmark. \begin{table}[htb!] \centering\small \setlength{\tabcolsep}{3pt} { \begin{tabular}{lcccccc} \toprule & CoLA & SST-2 & QNLI& RTE& MRPC&QQP\\ \midrule p-value & 0.005 & 0.012 & 0.023 & 0.009& 0.008&0.031\\ \bottomrule \end{tabular} } \caption{\label{tab:pvalue} Results of McNemar's tests for the binary classification tasks of the GLUE benchmark; tests are conducted on the results of the best runs of BERT$_\text{BASE}$ and BERT$_\text{BASE}$ + SF. } \end{table} \footnotetext[3]{The baseline of SpanBERT, a BERT pre-trained without the next sentence prediction objective.} Span Fine-tuning reaches the same performance improvement as previous methods. As illustrated in Table \ref{tab:glue}, on average, SpanBERT improves the result by one percentage point over its baseline (BERT-1seq), while Span Fine-tuning achieves an improvement of 1.1 percentage points over our baseline. However, as shown in Table \ref{tab:compare}, Span Fine-tuning requires considerably less time and fewer computing resources than large-scale pre-training for incorporating span-level information. When Span Fine-tuning is adopted, the extra parameters amount to only 3 percent of the total parameters of the adopted PrLMs for every downstream task, and introduce little extra overhead. \begin{table}[htb!] \centering \begin{tabular}{lcc} \toprule Method & Time & Resource\\ \midrule Pre-train & 32 days& 32 Volta V100 \\ Span Fine-tune & 12 hours max & 2 Titan RTX\\ \bottomrule \end{tabular} \caption{The comparison between the incorporation of span-level information in pre-training and Span Fine-tuning.}\label{tab:compare} \end{table} \begin{table*} \centering \setlength{\tabcolsep}{5pt} { \begin{tabular}{lccccccccc} \toprule Method& CoLA & SST-2 & MNLI & QNLI & RTE & MRPC & QQP & STS-B & Avg.\\ & (mc) & (acc) & m/mm(acc) & (acc) & (acc) & (F1) & (acc) & (pc) & -\\ \midrule SpanBERT$_\text{LARGE}$ & 64.3 & 94.8 & 88.1/87.7 & 94.3 & 79.0 & 90.9 & 89.5 & 89.9 & 86.5\\ SpanBERT$_\text{LARGE}$ + SF & 65.9 & 95.1 & 88.4/88.1 & 94.3 & 83.3 & 92.1 & 90.9 & 90.1 & 87.6\\ RoBERTa$_\text{LARGE}$ & 68.0 & \textbf{96.4} & 90.2/\textbf{90.2} & \textbf{94.7} & 86.6 & 90.9 & \textbf{92.2} & \textbf{92.4} & 89.0\\ RoBERTa$_\text{LARGE}$ + SF & \textbf{68.9} &96.1 &\textbf{90.3}/\textbf{90.2} & 94.3 & \textbf{90.6} & \textbf{92.8} & \textbf{92.2} & \textbf{92.4} & \textbf{89.8}\\ \bottomrule \end{tabular} } \caption{\label{tab:strong} Results on the test sets of the GLUE benchmark with stronger baselines; we average results over three different random seeds. } \end{table*} Besides, Span Fine-tuning is more flexible and adaptive than previous methods. Table \ref{tab:glue} shows that Span Fine-tuning achieves stronger results on all NLU tasks compared to the baseline, whereas the results of SpanBERT on certain tasks, such as Quora Question Pairs and the Microsoft Research Paraphrase Corpus, are worse than its baseline. This is because, for SpanBERT, the utilization of span-level information is fixed for every downstream task, whereas in our method an extra module designed to incorporate span-level information is trained during fine-tuning, which can be adapted more dynamically to different downstream tasks. \begin{table}[htb!]
\centering \begin{tabular}{lrr} \toprule Method & Dev & Test \\ \midrule BERT$_\text{WWM}$ & 92.0& 91.4 \\ BERT$_\text{WWM}$ + SF&92.3& 91.7\\ SemBERT$_\text{WWM}$&92.2& 91.9\\ \bottomrule \end{tabular} \caption{\label{tab:snli} Accuracy on the dev and test sets of SNLI. SemBERT$_\text{WWM}$ \cite{zhang2019semantics} is the published SoTA on SNLI.} \end{table} Table \ref{tab:snli} indicates that Span Fine-tuning also enhances the results of PrLMs on the SNLI benchmark. The improvement achieved by Span Fine-tuning is similar to the published state of the art accomplished by SemBERT. However, compared to SemBERT, Span Fine-tuning saves considerably more time and computing resources: it merely leverages a pre-sampled dictionary to facilitate segmentation, whereas SemBERT leverages a pre-trained semantic role labeler, which brings extra complexity to the whole segmentation process. Furthermore, Span Fine-tuning differs from SemBERT in terms of motivation, method and contributing factors. The motivation of SemBERT is to enhance PrLMs by incorporating explicit contextual semantics, whereas the motivation of our work is to let PrLMs leverage span-level information in fine-tuning. When it comes to method, SemBERT concatenates the original representations given by BERT with representations of semantic role labels; in comparison, our work directly leverages a segmentation given by a pre-sampled dictionary to generate span-enhanced representations and requires no pre-trained semantic role labeler. The gain of SemBERT comes from semantic role labels, while the gain of our work comes from the specific segmentation, which is very different. It is worth noting that a semantic role labeler can also generate segmentations. However, a semantic role labeler will generate multiple segmentations for a sentence that has several predicate-argument structures. Furthermore, such segmentations are sometimes coarse-grained (with spans of more than ten words), which is impractical for our work. \subsection{Results with Stronger PrLMs} In addition to BERT, we also apply Span Fine-tuning to stronger PrLMs, such as RoBERTa \cite{liu2019roberta} and SpanBERT \cite{joshi2019spanbert}, which optimize BERT by enhancing the pre-training procedure and by predicting text spans rather than single tokens, respectively. Table \ref{tab:strong} shows that Span Fine-tuning can strengthen both RoBERTa and SpanBERT. RoBERTa is already a very strong baseline, yet we remarkably improve its performance on RTE by four percentage points. SpanBERT already incorporates span-level information during pre-training, but the results still support that Span Fine-tuning utilizes span-level information and improves the performance of PrLMs in a different dimension. \section{Analysis} \subsection{Ablation Study} In order to determine the key factors in Span Fine-tuning, a series of studies is conducted on the dev sets of eight NLU tasks. BERT$_\text{BASE}$ is chosen as the PrLM for the ablation studies. As shown in Table \ref{tab:ablation}, three sets of ablation studies are performed. For the experiment BERT$_\text{BASE}$ + CNN, only a hierarchical CNN structure is applied, to evaluate whether the improvement comes from the extra parameters. To illustrate, we first apply two layers of CNN over the token-level representations given by BERT. Then, a max pooling operation is applied to get the sentence-level representation.
Finally, the sentence-level representation and the \texttt{[CLS]} representation of BERT are concatenated and sent to the classifier. In this way, the parameters of BERT$_\text{BASE}$ + CNN are the same as in our method. For the experiment BERT$_\text{BASE}$ + Random SF, random sentence segmentation is applied, to test whether the proposed segmentation method of Span Fine-tuning really contributes to the span-level information incorporation. For the experiment BERT$_\text{BASE}$ + NLTK SF, we conduct the experiments using a pre-trained chunker from the Natural Language Toolkit (NLTK) to see whether the proposed segmentation method of Span Fine-tuning can achieve further improvements. \begin{table}[h!] \centering \begin{tabular}{lc} \toprule Method & Avg Score \\ \midrule BERT$_\text{BASE}$ & 82.6 \\ BERT$_\text{BASE}$ + CNN&82.5\\ BERT$_\text{BASE}$ + Random SF\footnotemark[4] & 83.0\\ BERT$_\text{BASE}$ + NLTK SF\footnotemark[5] & 83.7\\ BERT$_\text{BASE}$ + SF & \textbf{84.2}\\ \bottomrule \end{tabular} \caption{\label{tab:ablation} Ablation studies on the dev sets of the GLUE benchmark; we average results over three different random seeds.} \end{table} \footnotetext[4]{Random SF represents Span Fine-tuning with randomly segmented sentences.} \footnotetext[5]{NLTK SF represents Span Fine-tuning with segmentation generated by an NLTK pre-trained chunker.} The results of the experiment BERT$_\text{BASE}$ + CNN suggest that the improvement is unlikely to come from the extra parameters, since this variant reduces the overall performance by 0.1 percentage points. The experiments BERT$_\text{BASE}$ + Random SF and BERT$_\text{BASE}$ + NLTK SF indicate that the segmentation generated by a pre-trained chunker, or even random segmentation, can also achieve enhancements under the Span Fine-tuning structure. However, a pre-trained chunker demands an additional part-of-speech parsing process, while our segmentation method relies only on a pre-sampled dictionary, saves more time, and at the same time achieves greater improvement. Our Span Fine-tuning remarkably enhances the results on all NLU tasks, raising the average score by 1.6 percentage points. Overall, the results of the experiments indicate that the performance improvement is primarily a result of our unique segmentation method. \subsection{Encoder Architecture} \cite{conneau2017supervised} mentions that the influence of sentence encoder architectures on PrLM performance varies a lot from case to case. \cite{toshniwal-etal-2020-cross} also suggests that different span representations can affect NLP tasks greatly. \begin{table}[htb!] \centering \begin{tabular}{lrr} \toprule Method & Dev & Test \\ \midrule CNN-Max & 90.9& 90.9 \\ CNN-CNN&\textbf{91.3}& \textbf{91.1}\\ Attention\footnotemark[6]-Max &90.7& 90.5\\ Attention-Attention&90.8& 90.8\\ \bottomrule \end{tabular} \caption{\label{tab:structure} Accuracy of different encoder architectures on the dev and test sets of SNLI.} \end{table} \footnotetext[6]{Attention indicates the self-attentive module \cite{lin2017structured}.} To evaluate the effectiveness of our encoder architecture, we replace the component of the encoding layer and the overall structure, respectively. For the component of the encoding layer, CNN \cite{kim2014convolutional} and the self-attentive module \cite{lin2017structured} are compared. For the overall structure, two structures are considered: a single-layer structure with the max-pooling operation and a hierarchical structure.
By matching every component of the encoding layer with each overall structure, four different encoder architectures are generated: CNN-Maxpooling, CNN-CNN, Attention-Maxpooling, and Attention-Attention. Experiments are conducted on the SNLI dev and test sets. Table \ref{tab:structure} suggests that the hierarchical CNN (CNN-CNN) is the most suitable encoder architecture for us. \subsection{Size of $n$-gram Dictionary} Since our segmentation method is based on a pre-sampled dictionary, the size of the dictionary has a large impact on the segmentation results. Figure \ref{fig:span_number} depicts how the average number of spans in the sentences changes with the dictionary size in the CoLA and MRPC datasets. At the origin, where no segmentation is applied, every token is considered a span. The number of spans drops significantly as the dictionary size grows and more $n$-grams are matched and grouped together. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{figures/span_number.pdf} \caption{\label{fig:span_number} Influence of dictionary size on the average number of spans in the sentences} \end{figure} To evaluate the influence of the dictionary size on PrLM performance, experiments are conducted on the dev sets of two NLU tasks: CoLA and MRPC. To concentrate on the impact of segmentation and reduce the impact of the sub-token-level representations provided by the PrLM, the concatenation process is not applied in this experiment. Rather, the span-level information enhanced representations are directly sent to a dense layer to generate predictions. As demonstrated in Figure \ref{fig:case_study}, the incorporation of the pre-sampled $n$-gram dictionary generates a stronger performance than random segmentation. Moreover, dictionaries of medium sizes (20$k$ to 200$k$) commonly result in better performance. Such a trend matches intuition, given that small dictionaries are likely to omit meaningful $n$-grams, whereas large ones tend to over-combine meaningless $n$-grams. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{figures/ablation.jpg} \caption{\label{fig:case_study} The influence of the size of the $n$-gram dictionary} \end{figure} \subsection{Span Fine-tuning for Token-Level Tasks} The experiments above are conducted on the GLUE benchmark, whose tasks are all at the sentence level. Nevertheless, token-level representations are needed in many other NLU tasks, such as named-entity recognition (NER). Our approach can be applied to token-level tasks with a simple modification of the encoder architecture (e.g., removing the pooling layer of the CNN module). Table \ref{tab:ner} shows the results of our approach on the CoNLL-2003 Named Entity Recognition (NER) task \cite{tjong-kim-sang-de-meulder-2003-introduction} with BERT as our PrLM. \begin{table}[htb!] \centering\small \setlength{\tabcolsep}{1pt} { \begin{tabular}{lccccc} \toprule & BERT$_\text{BASE}$ & BERT$_\text{BASE}$+SF& BERT$_\text{LARGE}$ & BERT$_\text{LARGE}$+SF \\ \midrule Dev & 91.7 & 92.1 & 92.3 & 92.5 \\ Test & 95.7 & 96.2 & 96.5 & 96.8 \\ \bottomrule \end{tabular} } \caption{\label{tab:ner} F1 on the dev and test sets of named entity recognition from CoNLL-2003; we average results over three different random seeds. } \end{table} \section{Conclusion} This paper proposes Span Fine-tuning, which maximizes the advantages of flexible span-level information in fine-tuning with the sub-token-level representations generated by PrLMs.
Leveraging a reasonable segmentation provided by a pre-sampled $n$-gram dictionary, Span Fine-tuning can further enhance the performance of PrLMs on various downstream tasks. Compared with previous span pre-training methods, Span Fine-tuning remains competitive for the following reasons: \paragraph{Task-adaptive} For methods that incorporate span-level information in pre-training, the utilization of span-level information can hardly be adjusted for every downstream task, as the span pre-training is fixed after a tremendous computational cost. In our method, the extra module designed to incorporate span-level information is trained during fine-tuning, resulting in a more dynamic adaptation to different downstream tasks. \paragraph{Flexible to PrLMs} Our approach can be generally applied to various PrLMs, including RoBERTa and SpanBERT. \paragraph{Novelty} Our approach can further improve the performance of PrLMs pre-trained with span-level information (e.g., SpanBERT). Such a result implies that our method utilizes span-level information in a different manner than PrLMs pre-trained with span-level information, which distinguishes our method from previous works.
\section{Introduction} It is well known that every symplectic form on $X = S^2\times S^2$ is, after multiplication by a suitable constant, symplectomorphic to a product form ${\omega}^{\lambda} = (1+{\lambda}){\sigma}_1 + {\sigma}_2$ for some ${\lambda} \ge 0$, where the $2$-form ${\sigma}_i$ has total area $1$ on the $i$th factor. We are interested in the structure of the space ${\cal J}^{\lambda}$ of all $C^\infty$ ${\omega}^{\lambda}$-compatible almost complex structures on $X$. Observe that ${\cal J}^{\lambda}$ itself is always contractible. However it has a natural stratification that changes as ${\lambda}$ passes each integer. The reason for this is that as ${\lambda}$ grows the set of homology classes that can be represented by an ${\omega}^{\lambda}$-symplectically embedded $2$-sphere changes. Since each such $2$-sphere can be parametrized to be $J$-holomorphic for some $J\in {\cal J}^{\lambda}$, there is a corresponding change in the structure of ${\cal J}^{\lambda}$. To explain this in more detail, let $A\in H_2(X,{\bf Z})$ be the homology class $[S^2\times pt]$ and let $F = [pt\times S^2]$. (The reason for this notation is that we are thinking of $X$ as a fibered space over the first $S^2$-factor, so that the smaller sphere $F$ is the fiber.) When $\ell -1 < {\lambda} \le \ell$, $$ {\omega}^{\lambda}(A-kF) > 0, \quad\mbox{for}\;\; 0\le k \le \ell. $$ Moreover, it is not hard to see that for each such $k$ there is a map $\rho_k:S^2\to S^2$ of degree $-k$ whose graph $$ z\mapsto (z,\rho_k(z)) $$ is an ${\omega}^{\lambda}$-symplectically embedded sphere in $X$. It follows easily that the space $$ {\cal J}_k^{\lambda} = \{J\in {\cal J}^{\lambda}: \mbox{there is a $J$-hol curve in class \,} A-kF\} $$ is nonempty whenever $k< {\lambda} + 1$. Let $$ {\ov{\cal J}}_k^{\lambda} = \cup_{m\ge k} \;{\cal J}_m^{\lambda}. $$ Because $(A-kF)\cdot (A-mF) < 0$ when $k\ne m > 0$, positivity of intersections implies that there is exactly one $J$-holomorphic curve in class $A-kF$ for each $J\in {\cal J}_k$. We denote this curve by $\Delta_J$. \begin{lemma}\label{basic} The spaces ${\cal J}_k^{\lambda}, 0\le k \le \ell$, are disjoint and ${\ov{\cal J}}_k^{\lambda}$ is the closure of $ {\cal J}_k^{\lambda}$ in ${\cal J}^{\lambda}$. Further, ${\cal J}^{\lambda} = {\ov{\cal J}}_0^{\lambda}$. \end{lemma} \proof{} It is well known that for every $J\in {\cal J}$ the set of $J$-holomorphic curves in class $F$ form the fibers of a fibration $\pi_J: X\to S^2$. Moreover, the class $A$ is represented by either a curve or a cusp-curve (i.e. a stable map).\footnote { We will follow the convention of [LM] by defining a ``curve" to be the image of a single sphere, while a ``cusp-curve" is either multiply-covered or has domain equal to a union of two or more spheres.} Since the class $F$ is always represented and $(mA + pF)\cdot F = m$, it follows from positivity of intersections that $m\ge 0$ whenever $mA+pF$ is represented by a curve. Hence any cusp-curve in class $A$ has one component in some class $A-kF$ for $k\ge 0$, and all others represent a multiple of $F$. In particular, each $J\in {\cal J}^{\lambda}$ belongs to some set ${\cal J}_k^{\lambda}$. Moreover, because $(A-kF)\cdot (A-mF) < 0$ when $k\ne m$ and $k,m\ge 0$, the different ${\cal J}_k^{\lambda}$ are disjoint. 
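Explicitly, the relevant intersection numbers follow from $A\cdot A = F\cdot F = 0$ and $A\cdot F = 1$: $$ (A-kF)\cdot (A-mF) \;=\; -(k+m), $$ which is indeed negative whenever $k,m\ge 0$ are not both zero.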
The second statement holds because if $J_n$ is a sequence of elements in ${\cal J}_k^{\lambda}$, then the corresponding sequence of $J_n$-holomorphic curves in class $A-kF$ has a convergent subsequence whose limit is a cusp-curve in class $A-kF$. This limit has to have a component in some class $A-mF$, for $m\ge k$, and so $J\in {\cal J}_m^{\lambda}$ for some $m\ge k$. For further details see Lalonde--McDuff [LM], for example.\hfill$\Box$\medskip Here is our main result. Throughout we are working with $C^\infty$-maps and almost complex structures, and so by manifold we mean a Fr\'echet manifold. By a stratified space ${\cal X}$ we mean a topological space that is a union of a finite number of disjoint manifolds that are called strata. Each stratum ${\cal S}$ has a neighborhood ${\cal N}_{\cal S}$ that projects to ${\cal S}$ by a map ${\cal N}_{\cal S}\to {\cal S}$. When ${\cal N}_{\cal S}$ is given the induced stratification, this map is a locally trivial fiber bundle whose fiber has the form of a cone $C({\cal L})$ over a finite dimensional stratified space ${\cal L}$ that is called the {\em link} of ${\cal S}$ in ${\cal X}$. Moreover, ${\cal S}$ sits inside ${\cal N}_{\cal S}$ as the set of vertices of all these cones. \begin{thm}\label{main} (i) For each $1\le k\le \ell$, ${\cal J}_k^{\lambda}$ is a submanifold of ${\cal J}^{\lambda}$ of codimension $4k-2$. {\smallskip} {\noindent} (ii) For each $m > k \ge 1$ the normal link ${\cal L}_{m,k}^{\lambda}$ of ${\cal J}_{m}^{\lambda}$ in ${\ov{\cal J}}_k^{\lambda}$ is a stratified space of dimension $4(m-k) -1$. Thus, there is a neighborhood of ${\cal J}_{m}^{\lambda}$ in ${\ov{\cal J}}_k^{\lambda}$ that is fibered over ${\cal J}_{m}^{\lambda}$ with fiber equal to the cone on ${\cal L}_{m, k}^{\lambda}$. {\smallskip} {\noindent} (iii) The structure of the link ${\cal L}_{m,k}^{\lambda}$ is independent of ${\lambda}$ (provided that ${\lambda} >m -1$.) \end{thm} The first part of this theorem was proved by Abreu in~[A], at least in the $C^s$-case where $s<\infty$. (Details are given in \S4.1 below.) The second and third parts follow by globalising recent work by Fukaya--Ono [FO], Li--Tian~[LiT], Liu--Tian~[LiuT1], Ruan~[R] and others on the structure of the compactification of moduli spaces of $J$-holomorphic spheres via stable maps. We extend current gluing methods by showing that it is possible to deal with obstruction bundles whose elements do not vanish at the gluing point: see \S 4.2.3. Another essential point is that we use Fukaya and Ono's method of dealing with the ambiguity in the parametrization of a stable map since this involves the least number of choices and allows us to globalize by constructing a gluing map that is equivariant with respect to suitable local torus actions: see~\S 4.2.4 and \S 4.2.5. The above theorem is the main tool used in [AM] to calculate the rational cohomology ring of the group $G^{\lambda}$ of symplectomorphisms of $(X, {\omega}^{\lambda})$. Observe that part (iii) states that the normal structure of the stratum ${\cal J}_k^{\lambda}$ does not change with ${\lambda}$. On the other hand, it follows from the results of [AM] that the cohomology of ${\cal J}_k^{\lambda}$ definitely does change as ${\lambda}$ passes each integer. Obviously, it would be interesting to know if the topology of ${\cal J}_k^{\lambda}$ is otherwise fixed. 
For example, one could try to construct maps ${\cal J}^{\lambda}\to {\cal J}^{\mu}$ for ${\lambda} < \mu$ that preserve the stratification, and then try to prove that they induce homotopy equivalences ${\cal J}_k^{\lambda} \to {\cal J}_k^{\mu}$ whenever $\ell -1 < {\lambda} \le \mu \le \ell$. The most we have so far managed to do in this direction is to prove the following lemma that, in essence, constructs maps ${\cal J}^{\lambda}\to {\cal J}^{\mu}$ for ${\lambda} < \mu$. It is not clear whether these are homotopy equivalences for ${\lambda}, \mu \in (\ell-1, \ell]$. It is convenient to fix a fiber $F_0 = pt\times S^2$ and define $$ {\cal J}_k^{\lambda}({\cal N}(F_0)) = \{ J\in {\cal J}_k^{\lambda}: J = J_{split}\mbox{ near } F_0\}, $$ where $J_{split} $ is the standard product almost complex structure. \begin{lemma}\label{inc} (i) The inclusion ${\cal J}^{\lambda}({\cal N}(F_0))\to {\cal J}^{\lambda}$ induces a homotopy equivalence ${\cal J}_k^{\lambda}({\cal N}(F_0))\stackrel{\simeq}{\to} {\cal J}_k^{\lambda}$ for all $k < {\lambda}+1$. {\noindent} (ii) Given any compact subset $C\subset {\cal J}^{\lambda}({\cal N}(F_0))$ and any $ \mu> {\lambda}$ there is a map $$ {\iota}_{{\lambda},\mu}: C\to {\cal J}^{\mu}({\cal N}(F_0)) $$ that takes $C\cap {\cal J}_k^{\lambda}({\cal N}(F_0))$ into ${\cal J}_k^{\mu}({\cal N}(F_0))$ for all $k$. \end{lemma} This lemma is proved in \S 2. The next task is to calculate the links ${\cal L}_{m,k}^{\lambda}$. So far this has been done for the easiest case: \begin{prop}\label{pr:lens} For each $k\ge 1$ and ${\lambda}$ the link ${\cal L}_{k+1,k}^{\lambda}$ is the $3$-dimensional lens space $L(2k,1)$. \end{prop} Finally, we illustrate our methods by using the stable map approach to confirm that the link of ${\cal J}_2^{\lambda}$ in ${\cal J}^{\lambda}$ is $S^5$, as predicted by part (i) of Theorem~\ref{main}. Our method first calculates an auxiliary link ${\cal L}_{\cal Z}$ from which the desired link is obtained by collapsing certain strata. The $S^5$ appears in a surprisingly interesting way that can be briefly described as follows. Let ${\cal O}(k)$ denote the complex line bundle over $S^2$ with Euler number $k$, where we write ${\bf C}$ instead of ${\cal O}(0)$. Given a vector bundle $E\to B$ we write $S(E)\to B$ for its unit sphere bundle. Note that the unit $3$-sphere bundle $$ S({\cal O}(k) \oplus {\cal O}(m))\to S^2 $$ decomposes as the composite $$ S(L_{P(k,m)})\to {\cal P} ({\cal O}(k)\oplus {\cal O}(m)) \to S^2 $$ where $L_{P(k,m)}\to {\cal P} ({\cal O}(k)\oplus {\cal O}(m))$ is the canonical line bundle over the projectivization of ${\cal O}(k)\oplus {\cal O}(m)$. In particular, the space $S({\cal O}(-1) \oplus {\bf C})$ can be identified with $S(L_{P(-1,0)})$. But ${\cal P} ({\cal O}(-1)\oplus {\bf C})$ is simply the blow up ${\bf C} P^2\#{\overline{{{\bf C}}P}\,\!^{2}}$, and its canonical bundle is the pullback of the canonical bundle over ${\bf C} P^2$. We also consider the singular line bundle (or orbibundle) $L_Y\to Y$ whose associated unit sphere bundle has total space $S(L_Y) = S^5$ and fibers equal to the orbits of the following $S^1$-action on $S^5$: $$ {\theta}\cdot (x,y,z) = (e^{i{\theta}}x, e^{i{\theta}}y, e^{2i{\theta}}z),\quad x,y,z\in {\bf C}.
$$ \begin{thm}\label{LINK} (i) The space ${\cal L}_{\cal Z}$ obtained by plumbing the unit sphere bundle of ${\cal O}(-3)\oplus {\cal O}(-1)$ with the singular circle bundle $S(L_Y)\to Y$ may be identified with the unit circle bundle of the canonical bundle over ${\cal P}({\cal O}(-1)\oplus {\bf C}) = {\bf C} P^2\#{\overline{{{\bf C}}P}\,\!^{2}}$. {\noindent} (ii) The link ${\cal L}_{2,0}^{\lambda}$ is obtained from ${\cal L}_{\cal Z}$ by collapsing the fibers over the exceptional divisor to a single fiber, and hence may be identified with $S^5$. Under this identification, the link ${\cal L}_{2,1}^{\lambda} = {\bf R} P^3$ corresponds to the inverse image of a conic in ${\bf C} P^2$. \end{thm} In his recent paper~[K], Kronheimer shows that the universal deformation of the quotient singularity ${\bf C}^2/({\bf Z}/m{\bf Z})$ is transverse to all the submanifolds ${\cal J}_k$ and so is an explicit model for the normal slice of ${\cal J}_m$ in ${\cal J}$. Hence one can investigate the structure of the intermediate links ${\cal L}_{m,k}^{\lambda}$ using tools from algebraic geometry. It is very possible that it would be easier to calculate these links this way. However, it is still interesting to try to understand these links from the point of view of stable maps, since this is more closely connected to the symplectic geometry of the manifold $X$. Another point is that throughout we consider ${\omega}^{\lambda}$-compatible almost complex structures rather than ${\omega}^{\lambda}$-tame ones. However, it is easy to see that all our results hold in the latter case. {\medskip} \subsection*{Other ruled surfaces} All the above results have analogs for other ruled surfaces $Y\to {\Sigma}$. If $Y$ is diffeomorphic to the product ${\Sigma}\times S^2$, we can define ${\omega}^{\lambda}, {\cal J}_k^{\lambda}$ as above, though now we should allow ${\lambda}$ to be any number $> -1$ since there is no symmetry between the class $A = [{\Sigma}\times pt] $ and $F = [pt\times S^2]$. In this case Theorem~\ref{main} still holds. The reason for this is that if $u:{\Sigma}\to Y$ is an injective $J$-holomorphic map in class $A-kF$ where $k\ge 1$, then the normal bundle $E$ to the image $u({\Sigma})$ has negative first Chern class so that the linearization $Du$ of $u$ has kernel and cokernel of constant dimension. (In fact, the normal part of $Du$ with image in $E$ is injective in this case. See Theorem 1$^\prime$ in Hofer--Lizan--Sikorav~[HLS].) However, Lemma~\ref{basic} fails unless ${\Sigma}$ is a torus since there are tame almost complex structures on $Y$ with no curve in class $[A]$. One might think to remedy this by adding other strata ${\cal J}_{-k}^{\lambda}$ consisting of all $J$ such that the class $A+kF$ is represented by a $J$-holomorphic curve $u:({\Sigma}, j)\to Y$ for some complex structure $j$ on ${\Sigma}$. However, although the universal moduli space ${\cal M}(A+kF, {\cal J}^{\lambda})$ of all such pairs $(u,J)$ is a manifold, the map $(u,J)\to J$ is no longer injective: even if one cuts down the dimension by fixing a suitable number of points each $J$ will in general admit several curves through these points. Moreover, as $u$ varies over ${\cal M}(A+kF, {\cal J}^{\lambda})$ the dimension of the kernel and cokernel of $Du$ can jump. Hence the argument given in \S 4.1 below that the strata ${\cal J}_k^{\lambda}$ are submanifolds of ${\cal J}^{\lambda}$ fails on several counts. In the case of the torus, ${\cal J}_0^{\lambda}$ is open and so Lemma~\ref{basic} does hold. 
However, it is not clear whether this is enough for the main application, which is to further our understanding of the groups ${\rm G}^{\lambda}$ of symplectomorphisms of $(Y, {\omega}^{\lambda})$. One crucial ingredient of the argument in~[AM] is that the action of this group on each stratum ${\cal J}_k^{\lambda}$ is essentially transitive. More precisely, we show that the action of ${\rm G}^{\lambda}$ on ${\cal J}_k^{\lambda}$ induces a homotopy equivalence ${\rm G}^{\lambda}/{\rm Aut}(J_k) \to {\cal J}_k^{\lambda}$, where $J_k$ is an integrable element of ${\cal J}_k^{\lambda}$ and ${\rm Aut}(J_k)$ is its stabilizer. It is not clear whether this would hold for the stratum ${\cal J}_0^{\lambda}$ when ${\Sigma} = T^2$. One might have to take into account the finer stratification considered by Lorek in~[Lo]. He points out that the space ${\cal J}_0^{\lambda}$ of all $J$ that admit a curve in class $A$ is not homogeneous. A generic element admits a finite number of such curves that are regular (that is $Du$ is surjective), but since this number can vary the set of regular elements in ${\cal J}_0^{\lambda}$ has an infinite number of components. Lorek also characterises the other strata that occur. For example, the codimension $1$ stratum consists of $J$ such that all $J$-holomorphic $A$ curves are isolated but there is at least one where the kernel of $Du$ has dimension $3$ instead of $2$. (Note that these $2$ dimensions correspond to the reparametrization group, since $Du$ is the full linearization, not just the normal component.) Similar remarks can be made about the case when $Y\to {\Sigma}$ is a nontrivial bundle. In this case we can label the strata ${\cal J}_k^{\lambda}$ so that the $J\in {\cal J}_k^{\lambda}$ admit sections with self-intersection $-2k+1$. Again Theorem~\ref{main} holds, but Lemma~\ref{basic} may not. When ${\Sigma} = S^2$ the homology class of the exceptional divisor is always represented, so that ${\cal J}^{\lambda} = {\ov{\cal J}}_1^{\lambda}$. When ${\Sigma} = T^2$, the homology class of the section of self-intersection $+1$ is always represented. Thus ${\cal J}^{\lambda} = {\ov{\cal J}}_{-1}^{\lambda}$. Hence the analog of Lemma~\ref{basic} holds in these two cases. Moreover all embedded tori of self-intersection $+1$ are regular (by the same result in~[HLS]), which may help in the application to ${\rm Symp}(Y)$. We now state in detail the result for the nontrivial bundle $Y\to S^2$ since this is used in~[AM]. Here $Y= {\bf C} P^2\#{\overline{{{\bf C}}P}\,\!^{2}}$, and so every symplectic form on $Y$ can be obtained from an annulus $A_{r,s} =\{ z\in {\bf C}^2: r \le |z| \le s\}$ by collapsing the boundary spheres to $S^2$ along the characteristic orbits. This gives rise to a form ${\omega}_{r,s}$ that takes the value $\pi s^2$ on the class $L$ of a line and $\pi r^2$ on the exceptional divisor $E$. Let us write ${\omega}^{\lambda}$ for the form ${\omega}_{r,s}$ where $\pi s^2 = 1+{\lambda}, \pi r^2 = {\lambda} > 0$. Then the class $F = L-E$ of the fiber has size $1$ as before, and ${\cal J}_k^{\lambda}$, $k\ge 1$, is the set of ${\omega}^{\lambda}$-compatible $J$ for which the class $E - (k-1) F$ is represented. \begin{thm} When $Y = {\bf C} P^2\#{\overline{{{\bf C}}P}\,\!^{2}}$ the spaces ${\cal J}_k^{\lambda}$ are Fr\'echet submanifolds of ${\cal J}^{\lambda}$ of codimension $4k$, and form the strata of a stratification of ${\cal J}^{\lambda}$ whose normal structure is independent of ${\lambda}$. 
Moreover, the normal link of ${\cal J}_{k+1}^{\lambda}$ in ${\cal J}_k^{\lambda}$ is the lens space $L(4k+1, 1)$, $k\ge 1$. \end{thm} {\medskip} This paper is organised as follows. \S 2 describes the main ideas in the proof of Theorem~\ref{main}. This relies heavily on the theory of stable maps, and for the convenience of the reader we outline its main points. References for the basic theory are, for example, [FO], [LiT] and [LiuT1]. \S 3 contains a detailed calculation of the link of ${\cal J}_2^{\lambda}$ in ${\cal J}^{\lambda}$. In particular we discuss the topological structure of the space of degree $2$ holomorphic self-maps of $S^2$ with up to $2$ marked points, and of the canonical line bundle that it carries. Plumbing with the orbibundle $L_Y\to Y$ turns out to be a kind of orbifold blowing up process: see \S 3.1. Finally, in \S 4 we work out the technical details of gluing that are needed to establish that the submanifolds ${\cal J}_k^{\lambda}$ do have a good normal structure. The basic method here is taken from McDuff--Salamon [MS] and Fukaya--Ono [FO]. {\medskip} {\noindent} {\bf Acknowledgements}{\smallskip} I wish to thank Dan Freed, Eleni Ionel and particularly John Milnor for useful discussions on various aspects of the calculation in \S 3, and Fukaya and Ono for explaining to me various details of their arguments. \section{Main ideas} We begin by proving Lemma~\ref{inc} since this is elementary, and will then describe the main points in the proof of Theorem~\ref{main}. {\medskip} \subsection{The effect of increasing ${\lambda}$} {\noindent} {\bf Proof of Lemma~\ref{inc}} Recall that $F_0$ is a fixed fiber $pt\times S^2$ and that $$ {\cal J}_k^{\lambda}({\cal N}(F_0)) = \{ J\in {\cal J}_k^{\lambda}: J = J_{split}\mbox{ near } F_0\}. $$ We will also use the space $$ {\cal J}_k^{\lambda}(F_0) = \{ J\in {\cal J}_k^{\lambda}: J = J_{split}\mbox{ on } TF_0\}. $$ Let ${\cal F}^{\lambda}$ be the space of ${\omega}^{\lambda}$-symplectically embedded curves in the class $F$ through a fixed point $x_0$. Because there is a unique $J$-holomorphic $F$-curve through $x_0$ for each $J\in {\cal J}$ (see Lemma~\ref{basic}), there is a fibration $$ {\cal J}^{\lambda}(F_0)\to {\cal J}^{\lambda} \to {\cal F}^{\lambda}. $$ Since the elements of ${\cal J}^{\lambda}(F_0)$ are sections of a bundle with contractible fibers, ${\cal J}^{\lambda}(F_0)$ is contractible. Hence ${\cal F}^{\lambda}$ is also contractible. By using the methods of Abreu [A], it is not hard to show that the symplectomorphism group ${\cal G}^{\lambda} = {\rm Symp}_0(X,{\omega}^{\lambda})$ of $(X,{\omega}^{\lambda})$ acts transitively on ${\cal F}^{\lambda}$. Since the action of ${\cal G}^{\lambda}$ on ${\cal J}^{\lambda}$ preserves the strata ${\cal J}_k^{\lambda}$, it follows that the projection ${\cal J}_k^{\lambda}\to {\cal F}^{\lambda}$ is surjective. Hence there are induced fibrations $$ {\cal J}_k^{\lambda}(F_0)\to {\cal J}_k^{\lambda} \to {\cal F}^{\lambda}. $$ This implies that the inclusion ${\cal J}_k^{\lambda}(F_0)\to {\cal J}_k^{\lambda}$ is a weak homotopy equivalence. We now claim that the inclusion ${\cal J}_k^{\lambda}({\cal N}(F_0))\to {\cal J}_k^{\lambda}(F_0)$ is also a weak homotopy equivalence. To prove this, we need to show that the elements of any compact set ${\cal K}\subset {\cal J}_k^{\lambda}(F_0)$ can be homotoped near $F_0$ to make them coincide with $J_{split}$. 
Since the set of tame almost complex structures at a point is contractible, this is always possible in ${\cal J}^{\lambda}$: the difficulty here is to ensure that ${\cal K}$ remains in ${\cal J}_k^{\lambda}$ throughout the homotopy. Here is a sketch of one method. For each $J\in {\cal J}_k^{\lambda}$ let ${\Delta}_J$ denote the unique $J$-holomorphic curve in class $A-kF$. Then ${\Delta}_J$ meets $F_0$ transversally at one point, call it $q_J$. For each $J\in {\cal K}$, isotop the curve ${\Delta}_J$ fixing $q_J$ to make it coincide in a small neighborhood of $q_J$ with the flat section $S^2\times pt$ that contains $q_J$. (Details of a very similar construction can be found in [MP], Prop 4.1.C.) Now lift this isotopy to ${\cal J}_k^{\lambda}$. Finally adjust the family of almost complex structures near $F_0$, keeping ${\Delta}_J$ holomorphic throughout. This proves (i). Statement (ii) is now easy. For any compact subset $C$ of ${\cal J}^{\lambda}({\cal N}(F_0))$ there is ${\varepsilon}>0$ such that each $J\in C$ equals $J_{split}$ on the ${\varepsilon}$-neighborhood ${\cal N}_{\varepsilon}(F_0)$ of $F_0$. Let $\rho$ be a nonnegative $2$-form supported inside the $2$-disc of radius ${\varepsilon}$ that vanishes near $0$, and let $\pi^*(\rho)$ denote its pullback to ${\cal N}_{\varepsilon}(F_0)$ by the obvious projection. Then every $J$ that equals $J_{split}$ on ${\cal N}_{\varepsilon}(F_0)$ is compatible with the form ${\omega}^{\lambda} + {\kappa}\pi^*(\rho)$ for all ${\kappa} > 0$. Since ${\omega}^{\lambda} + {\kappa}\pi^*(\rho)$ is isotopic to ${\omega}^{\mu}$ for some $\mu > {\lambda}$ depending on ${\kappa}$, there is a diffeomorphism $\phi$ of $X$ that is isotopic to the identity and is such that $\phi^*({\omega}^{\lambda} + {\kappa}\pi^*(\rho)) = {\omega}^{\mu}$. Moreover, because, by construction, $\pi^*(\rho) = 0$ near $F_0$, we can choose $\phi = {\rm Id} $ near $F_0$. Hence the map $ J\mapsto \phi^*(J) $ takes ${\cal J}^{\lambda}({\cal N}(F_0))$ to ${\cal J}^{\mu}({\cal N}(F_0))$. Clearly it preserves the strata ${\cal J}_k$.\hfill$\Box$\medskip \subsection{Stable maps} From now on, we will drop ${\lambda}$ from the notation, assuming that $k<{\lambda} + 1$ as before. We study the spaces ${\cal J}_k$ and ${\ov{\cal J}}_k$ by exploiting their relation to the corresponding moduli spaces of $J$-holomorphic curves in $X$. \begin{defn} \rm When $k\ge 1$, ${\cal M}_k = {\cal M}(A-kF,{\cal J})$ is the universal moduli space of all unparametrized $J$-holomorphic curves in class $A-kF$. Thus its elements are equivalence classes $[h,J]$ of pairs $(h,J)$, where $J\in {\cal J} = {\cal J}^{\lambda}$, $h$ is a $J$-holomorphic map $S^2\to X$ in class $A - kF$, and where $(h,J) \equiv (h\circ{\gamma}, J)$ when ${\gamma}:S^2\to S^2$ is a holomorphic reparametrization of $S^2$. Similarly, we write ${\cal M}_0={\cal M}(A, x_0,{\cal J})$ for the universal moduli space of all unparametrized $J$-holomorphic curves in class $A$ that go through a fixed point $x_0\in X$. Thus its elements are equivalence classes of triples $[h,z,J]$ with $z\in S^2$, $(h,J)$ as before, $h(z) = x_0$ and where $(h,z,J) \sim (h\circ{\gamma}, {\gamma}^{-1}(z), J)$ when ${\gamma}:S^2\to S^2$ is a holomorphic reparametrization of $S^2$. \end{defn} The next lemma restates part (i) of Theorem~\ref{main}. The proof uses standard Fredholm theory for $J$-holomorphic curves and is given in \S 4.1. The only noteworthy point is that when $k > 0$ the almost complex structures in ${\cal J}_k$ are not regular. In fact, the index of the relevant Fredholm operator is $-(4k-2)$. 
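As a quick consistency check on this value, here is the dimension count (standard for genus zero curves in a $4$-manifold; we sketch it under the usual conventions, writing $c_1$ for $c_1(TX)$): on $X = S^2\times S^2$ we have $c_1\cdot A = c_1\cdot F = 2$, so that the index of the unparametrized problem in class $A-kF$ is $$ 2\bigl(c_1\cdot (A-kF) - 1\bigr) \;=\; 2\bigl((2-2k) - 1\bigr) \;=\; -(4k-2). $$ 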
However, because we are in dimension $4$ the Fredholm operator has no kernel, which is the basic reason why the space of $J$ for which the corresponding equation has a solution is a submanifold of codimension $4k-2$. \begin{lemma}\label{mfld} For all $k\ge 0$, the projection $$ \pi_k: {\cal M}_k\to {\cal J}_k: \quad [h,J]\mapsto J $$ is a diffeomorphism of the Fr\'echet manifold ${\cal M}_k$ onto the submanifold ${\cal J}_k$ of ${\cal J}$. This submanifold is an open subset of ${\cal J}$ when $k = 0$ and has codimension $4k-2$ otherwise. \end{lemma} Our tool for understanding the stratification of ${\cal J}$ by the ${\cal J}_k$ is the compactification ${\ov{{\cal M}}}(A-kF, {\cal J})$ of ${\cal M}(A-kF, {\cal J})$ that is formed by $J$-holomorphic stable maps. For the convenience of the reader we recall the definition of stable maps with $p$ marked points. We always assume the domain ${\Sigma}$ to have genus $0$. Therefore it is a connected union $\cup_{i = 0}^m{\Sigma}_i$ of Riemann surfaces each of which has a given identification with the standard sphere $(S^2,j_0)$. (Note that we consider ${\Sigma}$ to be a topological space: the labelling of its components is a convenience and not part of the data.) The intersection pattern of the components can be described by a tree graph with $m+1$ vertices, one for each component of ${\Sigma}$, that are connected by an edge if and only if the corresponding components intersect. No more than two components meet at any point. Also, there are $p$ marked points $z_1,\dots, z_p$ placed anywhere on ${\Sigma}$ except at an intersection point of two components. (Such pairs $({\Sigma}, z_1,\dots, z_p) = ({\Sigma},z)$ are called semi-stable curves.) Now consider a triple $({\Sigma}, h, z)$ where $h:{\Sigma}\to X$ is such that $h_*([{\Sigma}]) = B$ and where the following {\it stability condition} is satisfied: \begin{quote}{\it the restriction $h_i$ of the map $h$ to ${\Sigma}_i$ is nonconstant unless ${\Sigma}_i$ contains at least $3$ special points.} \end{quote} (By definition, special points are either points of intersection with other components or marked points.) A {\em stable map} ${\sigma} = [{\Sigma}, h,z]$ in class $B\in H_2(X,{\bf Z})$ is an equivalence class of such triples, where $({\Sigma}, h, z')\equiv({\Sigma}, h\circ {\gamma},z)$ if there is an element ${\gamma}$ of the group ${\rm Aut}({\Sigma})$ of all holomorphic automorphisms of ${\Sigma}$ such that $ {\gamma}(z_i) = z_i'$ for all $i$. For example, if ${\Sigma}$ has only one component and there are no marked points, then $({\Sigma}, h) \equiv ({\Sigma}, h\circ {\gamma})$ for all ${\gamma}\in {\rm Aut}(S^2) = {\rm PSL}(2,{\bf C})$. Thus stable maps are {\em unparametrized}. We may think of the triple $({\Sigma}, h, z)$ as a parametrized stable map. Almost always we will only consider stable maps that are $J$-holomorphic for some $J$. If necessary, we will include $J$ in the notation, writing elements as ${\sigma} = [{\Sigma}, h, z, J]$, but often $J$ will be understood. Note that some stable maps ${\sigma}= [{\Sigma}, h,z,J]$ have a nontrivial reparametrization group ${\Gamma}_{\sigma}$. Given a representative $({\Sigma}, h,z,J)$ of ${\sigma}$, this group may be defined as $$ {\Gamma}_{\sigma} = \{{\gamma}\in{\rm Aut}({\Sigma}) : h\circ {\gamma} = h, {\gamma}(z_i) = z_i, 1\le i\le p\}. $$ It is finite because of the stability condition. The points where this reparametrization group ${\Gamma}_{\sigma}$ is nontrivial are singular or orbifold points of the moduli space. Here is an example where it is nontrivial. 
\begin{example}\label{smap}\rm Let ${\Sigma}$ have three components, with ${\Sigma}_2$ and ${\Sigma}_3$ both intersecting ${\Sigma}_1$ and let $z_1$ be a marked point on ${\Sigma}_1$. Then we can allow $h_1$ to be constant without violating stability. If in addition $h_2, h_3$ have the same image curve, there is an automorphism that interchanges ${\Sigma}_2$ and ${\Sigma}_3$. Since nearby stable maps do not have this extra symmetry, $[{\Sigma}, h, z_1]$ is a singular point in its moduli space. However, because marked points are labelled, there is no such automorphism if we put one marked point $z_2$ on ${\Sigma}_2$ and another $z_3$ at the corresponding point on ${\Sigma}_3$, i.e. so that $h_2(z_2) = h_3(z_3)$. One can also destroy this automorphism by adding just one marked point $z_0$ to $[{\Sigma}, h, z_1]$ anywhere on ${\Sigma}_2$ or ${\Sigma}_3$. \end{example} \begin{defn} \rm For $k\ge 0$ we define ${\ov{{\cal M}}}(A-kF, J)$ to be the space of all $J$-holomorphic stable maps ${\sigma} = [{\Sigma}, h, J]$ in class $A-kF$. Further, given any subset ${\cal K}$ of ${\cal J}$ we write $$ {\ov{{\cal M}}}(A-kF, {\cal K}) = \cup_{J\in {\cal K}}\,{\ov{{\cal M}}}(A-kF, J). $$ It follows from the proof of Lemma~\ref{basic} that the domain ${\Sigma} = \cup_{i=0}^p {\Sigma}_i$ of ${\sigma}\in {\ov{{\cal M}}}(A-kF, {\cal J})$ contains a unique component that is mapped to a curve in some class $A-mF$, where $m \ge k$. We call this component the {\it stem} of ${\Sigma}$ and label it ${\Sigma}_0$. Thus ${\ov{{\cal M}}}(A-kF, {\cal J}_m)$ is the moduli space of all curves whose stem lies in class $A-mF$. Note that ${\Sigma} - {\Sigma}_0$ has a finite number of connected components called {\it branches}. If $h_0$ is parametrized as a section, a branch $B_w$ that is attached to ${\Sigma}_0 = S^2$ at the point $w$ is mapped into the fiber $\pi_J^{-1}(w)$. In particular, distinct branches are mapped to distinct fibers. \end{defn} The moduli spaces ${\ov{{\cal M}}}(A-kF, J)$ and ${\ov{{\cal M}}}(A-kF, {\cal J})$ have natural stratifications, in which each stratum is defined by fixing the topological type of the pair $({\Sigma},z)$ and the homology classes $[h_*({\Sigma}_i)]$ of the components. Observe that the class $A - mF$ of the stem is fixed on each stratum ${\cal S}$ in ${\ov{{\cal M}}}(A-kF, {\cal J})$. Hence there is a projection $$ {\cal S}\to {\cal J}_m, $$ whose fiber at $J\in {\cal J}_m$ is some stratum of ${\ov{{\cal M}}}(A-kF, J)$. Usually, in order to have a moduli space with a nice structure one needs to consider perturbed $J$-holomorphic curves. But, because we are working with genus $0$ curves in dimension $4$, the work of Hofer--Lizan--Sikorav~[HLS] shows that all $J$-holomorphic curves are essentially regular. In particular, all curves representing some multiple $mF$ of the fiber class are regular. Therefore each stratum of ${\ov{{\cal M}}}(A-kF, J)$ is a (finite-dimensional) manifold. The following result is an immediate consequence of Lemma~\ref{mfld}. \begin{lemma}\label{strat} Each stratum ${\cal S}$ of ${\ov{{\cal M}}}(A-kF, {\cal J})$ is a manifold and the projection ${\cal S}\to {\cal J}_m$ is a locally trivial fibration. \end{lemma} \begin{defn}\label{def:om} \rm When $k \ge 1$, we set ${\ov{{\cal M}}}_k = {\ov{{\cal M}}}(A-kF,{\cal J})$. Further, ${\ov{{\cal M}}}_0 = {\ov{{\cal M}}}(A, x_0, {\cal J})$ is the space of all stable maps $[{\Sigma}, h, z, J]$ where $[{\Sigma}, h,z]$ is a $J$-holomorphic stable map in class $A$ with one marked point $z$ such that $h(z) = x_0$. 
\end{defn} In the next section we show how to fit the strata of ${\ov{{\cal M}}}_k$ together by gluing to form an orbifold structure on ${\ov{{\cal M}}}_k$ itself. \subsection{Gluing} In this section we describe the structure of a neighborhood ${\cal N}({\sigma})\subset {\ov{{\cal M}}}_k$ of a single point ${\sigma} \in {\ov{{\cal M}}}(A-kF, {\cal J}_m)$. Suppose that ${\sigma} = [{\Sigma}, h, J]$, and order the components ${\Sigma}_i$ of ${\Sigma}$ so that ${\Sigma}_0$ is the stem and so that the union $\cup_{i\le \ell}{\Sigma}_i$ is connected for all $\ell$. Then each ${\Sigma}_i, i> 0$ is attached to a unique component ${\Sigma}_{j_i}, {j_i} < i$ by identifying some point $w_i\in {\Sigma}_i$ with a point $z_i\in {\Sigma}_{j_i}$. At each such intersection point consider the ``gluing parameter" $$ a_i\in T_{w_i}{\Sigma}_i\otimes_{{\bf C}} T_{z_i}{\Sigma}_{j_i}. $$ The basic process of gluing allows one to resolve the singularity of ${\Sigma} $ at the node $w_i = z_i$ by replacing the component ${\Sigma}_i$ by a disc attached to ${\Sigma}_{j_i}$ and suitably altering the map $h$. As we now explain, there is a $2$-dimensional family of ways of doing this that is parametrized by (small) $a_i$. \begin{prop}\label{nbhdpt} Each ${\sigma} \in {\ov{{\cal M}}}(A-kF, {\cal J}_m)$ has a neighborhood ${\cal N}({\sigma})$ in ${\ov{{\cal M}}}_k$ that is a product ${\cal U}_{\cal S}({\sigma})\times ({\cal N}(V_{\sigma})/{\Gamma}_{\sigma})$, where ${\cal U}_{\cal S}({\sigma})\subset {\ov{{\cal M}}}(A-kF, {\cal J}_m)$ is a small neighborhood of ${\sigma}$ in its stratum ${\cal S}$ and where ${\cal N}(V_{\sigma})$ is a small ${\Gamma}_{\sigma}$-invariant neighborhood of $0$ in the space of gluing parameters $$ V_{\sigma}= \bigoplus_{i>0}\, T_{w_i}{\Sigma}_i\otimes_{{\bf C}} T_{z_i}{\Sigma}_{j_i}. $$ \end{prop} \proof{} The proof is an adaptation of standard arguments in the theory of stable maps. The only new point is that the stem components are not regular so that when one does any gluing that involves this component one has to allow $J$ to vary in a normal slice ${\cal K}_J$ to the submanifold ${\cal J}_m$ at $J$. This analytic detail is explained in \S4.2. What we will do here is describe the topological aspect of the proof. First of all, let us describe the process of gluing. Given $a\in V_{\sigma}$, the idea is first to construct an approximately $J$-holomorphic stable map $({\Sigma}_a, h_a, J)$ on a glued domain ${\Sigma}_a$ and then to perturb $h_a$ and $J$ using a Newton process to a $J_a$-holomorphic map $h_a: {\Sigma}_a\to X$ in ${\ov{{\cal M}}}(A-kF, {\cal K}_J)$. We will describe the first step in some detail here since it will be used in \S 3. The analytic arguments needed for the second step are postponed to \S 4. The glued domain ${\Sigma}_a$ is constructed as follows. For each $i$ such that $a_i \ne 0$, cut out a small open disc ${{\rm Int}} D_{w_i}(r_i)$ in ${\Sigma}_i$ centered at $w_i$ and a similar disc ${{\rm Int}} D_{z_i}(r_i)$ in ${\Sigma}_{j_i}$ where $r_i^2=\|a_i\|$, and then glue the boundaries of these discs together with a twist prescribed by the argument of $a_i$. The Riemann surface ${\Sigma}_a$ is the result of performing this operation for each $i$ with $a_i\ne 0$. (When $a_i = 0$ one simply leaves the component ${\Sigma}_i$ alone.) To be more precise, consider gluing $z\in {\Sigma}_0$ to $w\in {\Sigma}_1$. 
Take a K\"ahler metric on ${\Sigma}_0$ that is flat near $z$ and identify the disc $D_z(r)$ isometrically with the disc of radius $r$ in the tangent space $T_z = T_z({\Sigma}_0)$ via the exponential map. Take a similar metric on $({\Sigma}_1,w)$. Then the gluing ${\partial} D_z(r)\to {\partial} D_w(r)$ may be considered as the restriction of the map $$ \Psi_a: T_z - \{0\} \;\longrightarrow\; T_w - \{0\} $$ that is defined for $x\in T_z$ by the requirement: $$ x\otimes \Psi_a(x) = a,\quad x\in T_z. $$ Thus, with respect to chosen identifications of $T_z$ and $T_w$ with ${\bf C}$, $\Psi_a$ is given by the formula: $x\mapsto a/x$ and so takes the circle of radius $r = \sqrt{\|a\|}$ into itself. This describes the glued domain ${\Sigma}_a$ as a point set. It remains to put a metric on ${\Sigma}_a$ in order to make it a Riemann surface. By hypothesis the original metrics on ${\Sigma}_0, {\Sigma}_1$ are flat near $z$ and $w$ and so may be identified with the flat metric $|dx|^2$ on ${\bf C}$. Since $$ \Psi_a^*(|dx|^2) = \left|\frac a{x} \right|^2|dx|^2, $$ $\Psi_a(|dx|^2) = |dx|^2$ on the circle $|x| = r$. Hence, we may choose a function $ \chi_r:(0,\infty)\to (0,\infty)$ so that the metric $\chi_r(|x|)|dx|^2$ is invariant by $\Psi_a$ and so that $\chi_r(s) = 1$ when $s > (1+{\varepsilon})r$, and then patch together the given metrics on ${\Sigma}_0 - D_z(2r)$ and ${\Sigma}_1 - D_z(2r)$ via $\chi_r(|x|)|dx|^2$. In \S 3 we need to understand what happens as $a$ rotates around the origin. It is not hard to check that if we write $a_{\theta} = e^{i{\theta}} a_z\otimes a_w$, where $a_z\in T_z, a_w\in T_w$ are fixed, and if $\Psi_{a_{\theta}}$ identifies the point $p_z$ on ${\partial} D_z(r)$ with $p_w$ on ${\partial} D_w(r)$ then $$ \Psi_{a_{\theta}} (e^{i{\theta}} p_z) = p_w. $$ The next step is to define the approximately holomorphic map (or pre-gluing) $h_a: {\Sigma}_a\to X$ for sufficiently small $\|a\|$. The map $h_a$ equals $h$ away from the discs $D_{z_i}(r_i), D_{w_i}(r_i)$, and elsewhere is defined by using cut-off functions that depend only on $\|a\|$. To describe the deformation of $h_a$ to a holomorphic map one needs to use analytical arguments. Hence further details are postponed until \S 4. We are now in a position to describe a neighborhood of ${\sigma}$. It is convenient to think of $V_{\sigma}$ as the direct sum $V_{\sigma}' \oplus V_{\sigma}''$ where $V_{\sigma}'$ consists of the summands $T_{w_i}{\Sigma}_i\otimes_{{\bf C}} T_{z_i}{\Sigma}_{j_i}$ with $j_i = 0$ and $V_{\sigma}''$ of the rest. Note that the obvious action of ${\Gamma}_{\sigma}$ on $V_{\sigma}$ preserves this splitting. (It is tempting to think that the induced action on $V_{\sigma}'$ is trivial since the elements of ${\Gamma}_{\sigma}$ act trivially on the stem. However, this need not be so since they may rotate branch components that are attached to the stem.) If we glue at points parametrized by $a''\in V_{\sigma}''$ then the corresponding curves lie in some branch and are regular. Hence the result of gluing is a $J$-holomorphic curve (i.e. there is no need to perturb $J$). Further, because the gluing map ${\Tilde {\Gg}}$ is ${\Gamma}_{\sigma}$-equivariant, there is a neighborhood of ${\sigma}$ in $ {\ov{{\cal M}}}(A-kF, {\cal J}_m)$ of the form $$ {\cal U}''({\sigma}) = {\cal U}_{\cal S}({\sigma})\times ({\cal N}(V_{\sigma}'')/{\Gamma}_{\sigma}), $$ where ${\cal N}(V)$ denotes a neighborhood of $0$ in the vector space $V$. 
When we glue with elements from $V_{\sigma}'$, the homology class of the stem changes and so the result cannot be $J$-holomorphic since $J\in {\cal J}_m$. We show in Proposition~\ref{exun} that if ${\cal K}_J$ is a normal slice to the submanifold ${\cal J}_m$ at $J$ then for sufficiently small $a\in V_{\sigma}'$ the approximately holomorphic map $h_a:{\Sigma}_a\to X$ deforms to a unique $J_a$-holomorphic map ${\Tilde {\Gg}}(h_{\sigma}, a)$ with $J_a\in {\cal K}_J$. Therefore, for each element ${\sigma}'' = [{\Sigma}, h'',J''] \in {\cal U}''({\sigma})$ there is a homeomorphism from some neighborhood ${\cal N}(V_{{\sigma}''}')$ onto a neighborhood of ${\sigma}''$ in ${\ov{{\cal M}}}(A-kF, {\cal K}_{J''})$. Moreover, if ${\cal U}''({\sigma})$ is sufficiently small, the spaces $V_{{\sigma}''}'$ can all be identified with $V_{\sigma}'$ and it follows from the proof of Proposition~\ref{exun} that the neighborhoods ${\cal N}(V_{{\sigma}''}')$ can be taken to have uniform size and so may all be identified. Hence the neighborhood ${\cal N}({\sigma})$ projects to ${\cal U}''({\sigma})$ with fiber at ${\sigma}''$ equal to ${\cal N}(V_{\sigma}')/{\Gamma}_{{\sigma}''}$. In general, the groups ${\Gamma}_{{\sigma}''}$ are subgroups of ${\Gamma}_{\sigma}$ that vary with ${\sigma}''$: in fact they equal the stabilizer of the corresponding gluing parameter $a''\in V_{\sigma}''$. However, since elements of ${\cal U}_{\cal S}({\sigma})$ lie in the same stratum they have isomorphic isotropy groups. It is now easy to check that the composite map $$ {\cal N}({\sigma})\to {\cal U}''({\sigma})\to {\cal U}_{\cal S}({\sigma}) $$ has fiber ${\cal N}(V_{\sigma})/{\Gamma}_{\sigma}$ as claimed. \hfill$\Box$\medskip \subsection{Moduli spaces and the stratification of ${\cal J}$} Since each stable $J$-curve in class $A - kF$ has exactly one component in some class $A- mF$ with $m \ge k$, the projection $\pi_k: {\ov{{\cal M}}}(A-kF, {\cal J})\to {\cal J}$ has image ${\ov{\cal J}}_k$. Consider the inverse image $$ {\ov{{\cal M}}}(A-kF, {\cal J}_m) = \pi_k^{-1}({\cal J}_m). $$ The next result shows that we can get a handle on the structure of ${\ov{\cal J}}_k$ by looking at the spaces ${\ov{{\cal M}}}(A-kF, {\cal J}_m)$. \begin{prop}\label{prop:fibk} When $k > 0$ the projection $$ \pi_k: {\ov{{\cal M}}}(A-kF, {\cal J}_m)\to {\cal J}_m $$ is a locally trivial fibration whose fiber ${\cal F}_J(m-k)$ at $J$ is the space of all stable $J$-curves $[{\Sigma},h]$ in class $A-kF$ that have as one component the unique $J$-holomorphic curve $\Delta_J$ in class $A-mF$. In particular, ${\cal F}_J(m-k)$ is a stratified space with strata that are manifolds of (real) dimension $\le 4(m-k)$. Its diffeomorphism type depends only on $m-k$. \end{prop} \proof{} Let us look at the structure of ${\cal F}_J(m-k) = \pi_k^{-1}(J)$. The stem of each element $[{\Sigma}, h,J]\in {\cal F}_J(m-k)$ is mapped to the unique $J$-curve $\Delta_J$ in class $A-mF$. Fix this component further by supposing that it is parametrized as a section of the fibration $\pi_J:X\to S^2$ (where $\pi_J$ is as in Lemma~\ref{basic}). We may divide the fiber ${\cal F}_J(m-k)$ into disjoint sets ${\cal Z}_{{\cal D},J}$, one for each decomposition ${\cal D}$ of $m-k$ into a sum $d_1 + \dots + d_p$ of unordered positive integers. The elements of ${\cal Z}_{{\cal D},J}$ are those with $p$ branches $B_{w_1},\dots, B_{w_p}$ where $h_*[B_{w_i}] = d_i [F]$. 
Thus ${\cal Z}_{{\cal D},J}$ maps onto the configuration space of $p$ distinct (unordered) points in $S^2$ labelled by the positive integers $d_1,\dots, d_p$ with sum $m-k$. Moreover this map is a fibration with fiber equal to the product $$ \prod_{i=1}^p {\ov{{\cal M}}}_{0,1}(S^2, q, d_i) $$ where ${\ov{{\cal M}}}_{0,1}(S^2, q, d)$ is the space of $J$-holomorphic stable maps into $ S^2$ of degree $d$ and with one marked point $z$ such that $h(z) = q$. (This point $q$ is where the branch is attached to ${\Delta}_J$.) According to the general theory, ${\ov{{\cal M}}}_{0,1}(S^2, q, d)$ is an orbifold of real dimension $4(d-1)$. It follows easily that ${\cal Z}_{{\cal D},J}$ is an orbifold of real dimension $4(m-k)-2p$. It remains to understand how the different sets ${\cal Z}_{{\cal D},J}$ fit together, i.e. what happens when two or more of the points $w_i$ come together. This may be described by suitable gluing parameters as in Proposition~\ref{nbhdpt}. The result follows. (For more details see any reference on stable maps, e.g. [FO], [LiT], [LiuT1]. An example is worked out in \S3.2.4 below.) \hfill$\Box$\medskip {\noindent} {\bf Note} For an analogous statement when $k=0$ see Proposition~\ref{prop:fib0}. {\medskip} Our next aim is to describe the structure of a neighborhood of ${\ov{{\cal M}}}(A-kF, {\cal J}_m)$ in ${\ov{{\cal M}}}_k={\ov{{\cal M}}}(A-kF, {\cal J})$. We will write ${\cal Z}_J$ for the fiber ${\cal F}_J(m-k)$ of $\pi_k$ that was considered above and set $$ {\cal Z}= \bigcup_{J\in {\cal J}_m}\, {\cal Z}_J,\qquad {\cal Z}_{\cal D} = \bigcup_{J\in {\cal J}_m}\, {\cal Z}_{{\cal D},J}. $$ (The letter ${\cal Z}$ is used here because ${\cal Z}$ is the ``zero-section" of the space of gluing parameters ${\cal V}$ constructed below.) Consider an element ${\sigma} = [{\Sigma}, h, J] $ that lies in a substratum ${\cal Z}_{\cal S}$ of $ {\cal Z}_{\cal D}$ where ${\cal D} = d_1+\dots + d_p$. Then ${\Sigma}$ has $p$ branches $B_1,\dots, B_p$ that are attached at the distinct points $w_1,\dots, w_p\in {\Sigma}_0$. Let $z_i$ be the point in $ B_i$ that is identified with $w_i\in {\Sigma}_0$ and define $$ V_{{\sigma}} = \bigoplus_{i=1}^p \; T_{z_i}B_i\otimes_{{\bf C}} T_{w_i}{\Sigma}_0. $$ As explained in Proposition~\ref{nbhdpt}, the gluing parameters $a\in V_{\sigma}$ (when quotiented out by ${\Gamma}_{\sigma}$) parametrize a normal slice to ${\cal Z}_{\cal D}$ at ${\sigma}$. (Note that previously $V_{\sigma}$ was called $V_{\sigma}''$.) We now want to show how to fit these vector spaces together to form the fibers of an orbibundle\footnote { A rank $k$ orbibundle $\pi: E\to Y$ over an orbifold $Y$ has the following local structure. Suppose that ${\sigma}\in Y$ has local chart $U\subset {\widetilde U}/{\Gamma}_{\sigma}$ where the uniformizer ${\widetilde U}$ is a subset of ${\bf R}^n$. Then $\pi^{-1}(U)$ has the form ${\widetilde U}\times {\bf R}^k/{\Gamma}_{\sigma}$ where the action of ${\Gamma}_{\sigma}$ on ${\bf R}^n\times {\bf R}^k$ lifts that on ${\bf R}^n$ and is linear on ${\bf R}^k$. There is an obvious compatibility condition between charts: see~[FO],\S2.} over ${\cal Z}_{\cal D}$. Here we must incorporate twisting that arises from the fact that gluing takes place on the space of {\it parametrized } stable maps. Since this is an important point, we dwell on it at some length. For the sake of clarity, we will in the next few paragraphs denote parametrized stable maps by ${\tilde{{\sigma}}} = ({\Sigma},h)$ and the usual (unparametrized) maps by ${\sigma} = [{\Sigma}, h]$. 
Further, ${\Gamma}_{\tilde{{\sigma}}}$ denotes the corresponding realization of the group ${\Gamma}_{\sigma}$ as a subgroup of ${\rm Aut}({\Sigma})$. Recall that $X$ is identified with $S^2\times S^2$ in such a way that the fibration $\pi_J: X\to S^2$ whose fibers are the $J$-holomorphic $F$-curves is simply given by projection onto the first factor. Hence each such fiber has a given identification with $S^2$. Further, we assume that the stem $h_{{\sigma},0}:{\Sigma}_{{\sigma},0}\to \Delta_J$ is parametrized as a section $z\mapsto (z, \rho(z))$. Hence we only have to choose parametrizations of each branch. Since each branch component has at least one special point, its automorphism group is either trivial or has the homotopy type of $S^1$. Let ${\rm Aut\,}'({\Sigma})$ be the subgroup of ${\rm Aut\,}({\Sigma})$ consisting of automorphisms that are the identity on the stem. Then the identity component of ${\rm Aut\,}'({\Sigma})$ is homotopy equivalent to a torus $T^{k({\cal S})}$. (Here ${\cal S}$ is the label for the stratum containing ${\sigma}$.) Let $g$ be a ${\Gamma}_{\tilde{{\sigma}}}$-invariant metric on the domain ${\Sigma}$ that is also invariant under some action of the torus $T^{k({\cal S})}$. \begin{defn}\label{def:gp}\rm The group ${\rm Aut}^K({\Sigma})$ is defined to be the subgroup of the isometry group of $({\Sigma}, g)$ generated by ${\Gamma}_{\tilde{{\sigma}}}$ and $T^{k({\cal S})}$. Note that ${\Gamma}_{\tilde{{\sigma}}}$ is the semidirect product of a subgroup ${\Gamma}_{\tilde{{\sigma}}}'$ of $T^{k({\cal S})}$ with a subgroup ${\Gamma}_{\tilde{{\sigma}}}''$ that permutes the components of each branch. Further ${\rm Aut}^K({\Sigma})$ is a deformation retract of the subgroup $p^{-1}({\Gamma}_{\tilde{{\sigma}}}'')$ of ${\rm Aut}({\Sigma})$, where we consider ${\Gamma}_{\tilde{{\sigma}}}''$ as a subgroup of $\pi_0({\rm Aut}({\Sigma}))$ and $$ p:{\rm Aut}({\Sigma})\to \pi_0({\rm Aut}({\Sigma})) $$ is the projection. For a further discussion, see \S4.2.4. \end{defn} Let us first consider a fixed $J\in {\cal J}_m$. It follows from the above discussion that on each stratum ${\cal Z}_{{\cal S},J}$ there is a principal bundle $$ {\cal Z}_{{\cal S},J}^{para} \to {\cal Z}_{{\cal S},J} $$ with fiber ${\rm Aut}^K({\Sigma})$ such that the elements of ${\cal Z}_{{\cal S},J}^{para}$ are parametrized stable maps ${\tilde{{\sigma}}} = ({\Sigma}, h)$. Since the space $V_{\tilde{{\sigma}}}$ of gluing parameters at ${\tilde{{\sigma}}}$ is made from tangent spaces to ${\Sigma}$ there is a well defined bundle $$ {\cal V}_{{\cal S},J}^{para} \to {\cal Z}_{{\cal S},J}^{para} $$ with fiber $V_{\tilde{{\sigma}}}$. Further the action of the reparametrization group ${\rm Aut}^K({\Sigma})$ lifts to ${\cal V}_{{\cal S},J}^{para}$ and we define ${\cal V}_{{\cal S},J}$ to be the quotient ${\cal V}_{{\cal S},J}^{para}/{\rm Aut}^K({\Sigma})$. Thus there is a commutative diagram $$ \begin{array}{ccc} {\cal V}_{{\cal S},J}^{para} & \to & {\cal V}_{{\cal S},J}\\ \downarrow & & \downarrow\\ {\cal Z}_{{\cal S},J}^{para} & \to & {\cal Z}_{{\cal S},J}. \end{array} $$ where the right hand vertical map is an orbibundle with fiber $V_{\tilde{{\sigma}}}/{\Gamma}_{\tilde{{\sigma}}}$. Now consider the space $ {\cal Z}_{{\cal D},J} = \cup_{{\cal S}\subset {\cal D}} {\cal Z}_{{\cal S},J}. $ The local topological structure of ${\cal Z}_{{\cal D},J}$ is given by gluing parameters as in Proposition~\ref{nbhdpt}. 
Observe that every $J$ is regular for the branch components so that the necessary gluing operations can be performed keeping $J$ fixed. The spaces ${\cal Z}_{{\cal D},J}^{para},{\cal V}_{{\cal D},J}^{para}$ are defined similarly and clearly there is a vector bundle ${\cal V}_{{\cal D},J}^{para}\to {\cal Z}_{{\cal D},J}^{para}$. We want to see that the union $$ {\cal V}_{{\cal D},J} = \bigcup _{{\cal S}\subset {\cal D}} {\cal V}_{{\cal S},J} $$ has the structure of an orbibundle over ${\cal Z}_{{\cal D},J}$. The point here is that the groups ${\rm Aut}^K({\Sigma})$ change dimension as ${\tilde{{\sigma}}}$ moves from stratum to stratum. Hence we need to see that the local gluing construction that fits the different strata in ${\cal V}_{{\cal D},J}$ together is compatible with the group actions. We show in \S4.2.4 below that the gluing map ${\Tilde {\Gg}}$ can be defined at the point ${\tilde{{\sigma}}}$ to be ${\rm Aut}^K({\Sigma})$-invariant, i.e. so that $$ {\Tilde {\Gg}}(h_{\sigma}, a) = {\Tilde {\Gg}}(h_{\sigma}\circ \theta^{-1}, \theta\cdot a),\quad \theta\in {\rm Aut}^K({\Sigma}), $$ where ${\Tilde {\Gg}}(h_{\sigma},a)$ is the result of gluing the map $h_{\sigma}$ with parameters $a$. In the situation considered here, we are dividing the set of gluing parameters at ${\tilde{{\sigma}}}$ into two parts, and will write $a = (a_b, a_s)$ where $a_b$ are the gluing parameters at intersections of branch components and $a_s$ are those involving the stem component. As $h_{\sigma}$ moves within ${\cal Z}_{{\cal D},J}^{para}$ we glue along $a_b$, considering $a_s$ to be part of the fiber $V_{\sigma}$. Moreover, if ${\tilde{{\sigma}}}' = ({\Sigma}_{a_b}, {\Tilde {\Gg}}(h_{\sigma}, a_b))$, Lemma~\ref{le:repr} (ii) shows that ${\Tilde {\Gg}}$ can be constructed to be compatible with the actions of the groups ${\rm Aut}^K({\Sigma})$ and ${\rm Aut}^K({\Sigma}')$ on the fibers $V_{\sigma}$ and $V_{{\sigma}'}$ of ${\cal V}_{{\cal D},J}^{para}$ at ${\tilde{{\sigma}}},{\tilde{{\sigma}}}'$. It follows without difficulty that the quotient $$ {\cal V}_{{\cal D},J}\to {\cal Z}_{{\cal D},J} $$ is an orbibundle. Finally, one forms spaces $$ {\cal V}_J = \bigcup_{\cal D}\;{\cal V}_{{\cal D},J},\qquad {\cal V} = \bigcup_{J\in {\cal J}_m}\;{\cal V}_J $$ whose local structure is also described by appropriate gluing parameters as above. Forgetting the gluing parameters gives projections $$ {\cal V}_J\to {\cal Z}_J = {\cal F}_J(m-k);\qquad {\cal V}\to {\cal Z} = {\cal F}(m-k), $$ and ${\cal Z}_J, {\cal Z}$ embed in ${\cal V}_J$ and ${\cal V}$ as the ``zero sections". The map ${\cal V}_J\to {\cal Z}_J$ preserves the stratifications of both spaces. However it is no longer an orbibundle since the dimension of the fiber $V_{\sigma}$ depends on ${\cal D}$. In fact, the way that the different sets ${\cal V}_{{\cal D},J}$ are fitted together is best thought of as a kind of plumbing: see \S3.2.4. \begin{example}\label{ex:lens}\rm Everything is greatly simplified when $m - k = 1$. Here there is only one decomposition ${\cal D}$ and the space ${\cal Z}_{{\cal D},J}$ consists of just one stratum diffeomorphic to $S^2$. Moreover the bundle ${\cal Z}_{{\cal D},J}^{para}\to {\cal Z}_{{\cal D},J}$ has a section with the following description. Choose $J\in {\cal J}_{k+1}$ so that $\pi_J:X\to S^2$ is the standard projection onto the first factor and so that the graph $h_0$ of a map $\rho:S^2\to S^2$ of degree $-(k+1)$ is $J$-holomorphic. 
Let ${\Sigma}_0, {\Sigma}_1$ be two copies of $S^2$ and for each $w\in S^2$ define $({\Sigma}_w, h_w)\in {\cal Z}_{{\cal D},J}^{para}$ by \begin{eqnarray*} {\Sigma}_w & = & {\Sigma}_0\, \cup_{w = \rho(w)}\, {\Sigma}_1,\\ h_w|_{{\Sigma}_0} = h_0,& & h_w|_{{\Sigma}_1}: z\mapsto (w,z). \end{eqnarray*} Hence in this case ${\cal V}_J$ is a complex line bundle over ${\cal Z}_J = S^2$. To calculate its Chern class, observe that ${\cal V}_J$ can be identified with the space $$ \bigcup_{w\in S^2} T_{\rho(w)}{\Sigma}_1\otimes T_w({\Sigma}_0) = \rho^*(TS^2)\otimes TS^2, $$ and so has Chern class $2\deg(\rho) + 2 = -2k$. \end{example} The following result is proved in \S4. \begin{prop}\label{glue} There is a neighborhood ${\cal N}_{\cal V}({\cal Z})$ of ${\cal Z}$ in ${\cal V}$ and a gluing map $$ {\cal G}: {\cal N}_{\cal V}({\cal Z}) \longrightarrow {\ov{{\cal M}}}(A-kF, {\cal J}) $$ that maps ${\cal N}_{\cal V}({\cal Z})$ homeomorphically onto a neighborhood of ${\ov{{\cal M}}}(A-kF,{\cal J}_m)$ in ${\ov{{\cal M}}}(A-kF, {\cal J})$. \end{prop} It follows from the construction of ${\cal G}:{\cal N}_{\cal V}({\cal Z})\to {\ov{{\cal M}}}(A-kF,{\cal J})$ outlined in Proposition~\ref{nbhdpt} that the stem of the glued map ${\cal G}({\sigma}, a)$ lies in the class $A - (m - \sum_i n_i)F$ where the indices $i$ label the branches $B_i$ of ${\sigma}$ and $n_i$ is defined as follows. If $a_i = 0$ then $n_i = 0$. Otherwise, if ${\Sigma}_{j_i}$ is the component of $B_i$ that meets ${\Sigma}_0$ then $n_i$ is the multiplicity of $h_{j_i}$, that is, $h_*[{\Sigma}_{j_i}] = n_i F$. Let ${\cal N}_p$ denote the set of all elements $({\sigma},a)\in {\cal N}_{\cal V}({\cal Z})$ such that the stem of the glued map lies in class $A-pF$. In other words, $$ {\cal N}_p = (\pi_k\circ{\cal G})^{-1} {\cal J}_p. $$ Clearly, ${\cal N}_p$ is a union of strata in the stratified space ${\cal N}_{\cal V}({\cal Z})$. Further, when $k > 0$ the map $\pi_k: {\cal G}({\cal N}_p)\to {\cal J}_p$ is a fibration with fiber ${\cal F}(p-k)$. The next proposition follows immediately from Proposition~\ref{prop:fibk}. \begin{prop}\label{link1} The link ${\cal L}_{m,k}$ is the finite-dimensional stratified space obtained from the link of ${\cal Z}_J$ in ${\cal V}_J$ by collapsing the fibers of the projections ${\cal V}_J\cap {\cal N}_p \to {\cal J}_p$ to single points. \end{prop} {\noindent} {\bf Proof of Proposition~\ref{pr:lens}} We have to show that the link ${\cal L}_{k+1,k}$ is the lens space $L(2k,1)$. We saw in Example~\ref{ex:lens} that ${\cal V}_J$ is a line bundle with Chern class $-2k$. In this case there is only one nontrivial stratum in ${\cal N}_{\cal V}({\cal Z})$, namely ${\cal N}_k$, which is the complement of the zero section. Moreover, the map $\pi_k\circ{\cal G}$ is clearly injective. Hence by Proposition~\ref{link1} the link ${\cal L}_{k+1,k}$ is simply the unit sphere bundle of ${\cal V}_J$ and so is a lens space as claimed. \hfill$\Box$\medskip \section{The link ${\cal L}_{2,0}$ of ${\cal J}_2$ in ${\ov{\cal J}}_0$} In this section we illustrate Proposition~\ref{link1} by calculating the link ${\cal L}_{2,0}$. We know from Lemma~\ref{mfld} that ${\cal J}_2$ has codimension $6$, so that ${\cal L}_{2,0} = S^5$. The general theory of \S2 implies that ${\cal L}_{2,0}$ can be obtained from the link ${\cal L}_{{\cal Z}}$ of the zero section ${\cal Z}_J$ in the stratified space ${\cal V}_J$ of gluing data by collapsing certain strata. When looked at from this point of view, the $S^5$ appears in quite a complicated way that was described in Theorem~\ref{LINK}. 
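Before beginning, we record the elementary fact used at the end of the proof of Proposition~\ref{pr:lens} above (a standard computation, sketched here for the reader's convenience): if $L\to S^2$ is a complex line bundle with Chern class $-n$, $n > 0$, then $S(L)$ is obtained from the Hopf fibration $S^3\to S^2$, that is from $S({\cal O}(-1))$, by taking the fiberwise quotient by the subgroup ${\bf Z}/n{\bf Z}\subset S^1$, so that $$ S(L) \;=\; S^3/({\bf Z}/n{\bf Z}) \;=\; L(n,1). $$ In particular the unit circle bundle of ${\cal V}_J$, which has Chern class $-2k$, is the lens space $L(2k,1)$. 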
We begin here by explaining the plumbing construction, and then discuss how this relates to ${\cal L}_{\cal Z}$. \subsection{Some topology} Recall that $S(L_P)\to {\cal P} ({\cal O}(k)\oplus {\cal O}(m))$ is the unit circle bundle of the canonical line bundle $L_P$ over the projectivization ${\cal P} ({\cal O}(k)\oplus {\cal O}(m))$. \begin{lemma}\label{le:str} The bundle $S(L_P) \to {\cal P}({\cal O}(-1)\oplus {\bf C})$ can be identified with the pullback of the canonical circle bundle $S(L_{can})\to {\bf C} P^2$ by the blowdown map ${\bf C} P^2\# {\overline{{{\bf C}}P}\,\!^{2}}\to {\bf C} P^2$. \end{lemma} \proof{} It is well known that ${\cal P}({\cal O}(-1)\oplus {\bf C}) $ can be identified with ${\bf C} P^2\# {\overline{{{\bf C}}P}\,\!^{2}}$. Indeed the section $S_- = {\cal P}(\{0\}\oplus{\bf C})$ has self-intersection $-1$, while $S_+ = {\cal P}({\cal O}(-1)\oplus \{0\})$ has self-intersection $1$. Further, the line bundle $L_P$ is trivial over $S_-$ and has Euler class $-1$ over $S_+$ and over the fiber class. The result follows.\hfill$\Box$\medskip The space we are interested in is formed by plumbing a rank $2$ bundle $E\to S^2$ to a line bundle $L\to Y$, where $\dim (Y) = 4$. This plumbing $E{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} L$ is the space obtained from the unit disc bundles $D(E)\to S^2$ and $D(L)\to Y$ by identifying the inverse images of discs $D^2, D^4$ on the two bases in the obvious way: the disc fibers of $D(E)\to S^2$ are identified with flat sections of $D(L)$ over $D^4$ and flat sections of $D(E)$ over $D^2$ are identified with fibers of $D(L)$. There is a corresponding plumbing $S(E){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L)$ of the two sphere bundles $S^3\to S(E)\to S^2$ and $S^1\to S(L)\to Y$, obtained by cutting out the inverse images of open discs in the two bases and appropriately gluing the boundaries. The resulting space $S(E){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L)$ is the link of the core $S^2\cup Y$ in the plumbed bundle $E{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} L$. \begin{lemma} Let $L_{can}\to {\bf C} P^2$ be the canonical line bundle and $E = {\cal O}(k)\oplus {\cal O}(m)$. Then $E{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} L_{can}$ may be identified with the blow-up of ${\cal O}(k+1)\oplus {\cal O}(m+1)$ at a point on its zero section. Hence $$ S(E){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L_{can}) = S({\cal O}(k+1)\oplus {\cal O}(m+1)). $$ \end{lemma} \proof{} First consider the structure of the blow up $\widetilde{{\bf C}}\,\!^3$ of ${\bf C}^3 = {\bf C}\times {\bf C}^2$ at the origin. The fibration $\pi: {\bf C} \times {\bf C}^2\to {\bf C}$ induces a fibration $$ \widetilde{\pi}: \widetilde{{\bf C}}\,\!^3\to {\bf C}. $$ Clearly, the inverse image $\widetilde{\pi} \,\!^{-1}(z)$ of each point $z\ne 0$ is a copy of ${\bf C}^2$ while $\widetilde{\pi} \,\!^{-1}(0)$ is the union of the exceptional divisor together with the set of lines in the original fiber $\pi^{-1}(0)$. Let ${\lambda}$ be the line in ${\bf C}\times {\bf C}^2$ through the origin and the point $(1,a,b)$. Lift ${\lambda}$ to the blow-up and consider its intersection with $$ \widetilde{\pi} \,\!^{-1}(S^1) = {\pi}^{-1}(S^1) \subset {\bf C}\times {\bf C}^2, $$ where $S^1$ is the unit circle in ${\bf C}$. This intersection consists of the points $ (e^{it}, e^{it}a,e^{it}b)$, hence it is these circles (rather than the circles $(e^{it}, a,b)$) that bound discs in the blowup. 
Therefore, if we think of the blowup $\widetilde{{\bf C}}\,\!^3$ as the plumbing of the bundle $\pi:{\bf C}\times {\bf C}^2\to {\bf C}$ with $L_{can}$, the original trivialization of $\pi$ differs from the trivialization (or product structure) near $\pi^{-1}(0)$ that is used to construct the plumbing. Now recall that $$ {\cal O}(k) = D^+\times {\bf C} \cup_\alpha D^-\times {\bf C}, $$ where $D^+, D^-$ are $2$-discs, with $D^+$ oriented positively and $D^-$ negatively, and where the gluing map $\alpha$ is given by $$ \alpha: {\partial} D^+\times {\bf C}\to {\partial} D^-\times {\bf C}:\quad (e^{it}, w)\mapsto (e^{it},e^{-ikt}w). $$ It follows easily that the blowup of $D({\cal O}(k+1)\oplus {\cal O}(m+1))$ at a point on its zero-section is obtained by plumbing the disc bundle $D({\cal O}(k)\oplus {\cal O}(m))$ with $D(L_{can})$. This proves the first statement. The second statement is then immediate. \hfill$\Box$\medskip We are interested in plumbing not with $L_{can}\to {\bf C} P^2$ but with a particular singular line bundle (or orbibundle) $L_Y\to Y$. This means that the unit circle bundle $S(L_Y)\to Y$ is a Seifert fibration with a finite number of singular (or multiple) fibers. In our case, there is an $S^1$ action on $S(L_Y)$ such that the fibers of the map $S(L_Y)\to Y$ are the $S^1$-orbits. In fact, we can identify $S(L_Y)$ with $S^5$ in such a way that the $S^1$ action is $$ {\theta}\cdot (x,y,z) = (e^{i{\theta}}x, e^{i{\theta}}y, e^{2i{\theta}}z),\quad x,y,z\in {\bf C}. $$ Thus there is one singular fiber that goes through the point $(0,0,1)$. All other fibers $F$ are regular. For each such $F$ there is a diffeomorphism of $S^5$ that takes $F$ to the circle $ {\gamma}_0=(e^{i{\theta}}, e^{i{\theta}}, 0)$. Identify a neighborhood of ${\gamma}_0$ with $S^1\times D^4$ in such a way that $$ S^5 = S^1\times D^4\cup D^2\times S^3, $$ with the identity map of $S^1\times S^3$ as gluing map. Then, in these coordinates near ${\gamma}_0$ the fibers of $S(L_Y)$ are (diffeomorphic to) the circles $$ {\gamma}_x = \left\{({\theta}, A_{{\theta}}(x))\in S^1\times D^4: A_{\theta} = \left(\begin{array}{cc}e^{i{\theta}} &0\\0&e^{2i{\theta}}\end{array} \right)\right\}. $$ By way of contrast, the fibers of $S^5$ with the Hopf fibration have neighborhoods fibered by the circles $$ {\gamma}_x' = \left\{({\theta}, A_{{\theta}}'(x))\in S^1\times D^4: A_{\theta}' = \left(\begin{array}{cc}e^{i{\theta}} &0\\0&e^{i{\theta}} \end{array} \right)\right\}. $$ The next result shows that plumbing with $S(L_Y)$ is a kind of twisted blowup. \begin{prop}\label{str} Let $L_Y\to Y$ be the orbibundle described in the previous paragraph. Then the manifold obtained by plumbing $S({\cal O}(k)\oplus {\cal O}(m))$ with $S(L_Y)$ along a regular fiber is diffeomorphic to $S({\cal O}(k+2) \oplus {\cal O}(m+1))$. \end{prop} \proof{} We may think of plumbing as the result of a surgery that matches the flat circles $S^1\times pt$ in the copy of $S^1\times S^3$ in $S({\cal O}(k)\oplus {\cal O}(m))$ with the circles ${\gamma}_x$ in the neighborhood of a regular fiber ${\gamma}_0$ of $S(L_Y)$. We would get the same result if we matched the circles $$ {\delta}_x = \left\{({\theta}, A_{{\theta}}''(x))\in S^1\times S^3: A_{\theta}'' = \left(\begin{array}{cc}e^{-i{\theta}} &0\\0& 1 \end{array} \right)\right\} $$ in $S^1\times S^3\subset S({\cal O}(k)\oplus {\cal O}(m))$ with the circles ${\gamma}_x'$ in the standard (Hopf) $S^5$. 
But if we trivialize the boundary of $S({\cal O}(k)\oplus {\cal O}(m)) - D^2\times S^3$ by the circles ${\delta}_x$ we get the same as if we trivialized the boundary of $S({\cal O}(k+1)\oplus {\cal O}(m)) - D^2\times S^3$ in the usual way by flat circles. Thus \begin{eqnarray*} S({\cal O}(k)\oplus {\cal O}(m)){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L_Y) & = & S({\cal O}(k+1)\oplus {\cal O}(m)){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L_{can}) \\ &= & S({\cal O}(k+2)\oplus {\cal O}(m+1)). \end{eqnarray*} There is a question of orientations here: do we have to add or subtract $1$ from $k$ to compensate for the extra twisting in $S(L_Y)$? One can check that it is correct to add $1$ by using the present approach to give an alternate proof of the previous lemma. For, if we completely untwisted the circles in the neighborhood of ${\gamma}_0$ (thereby increasing the twisting of the other side by an additional $2$), we would be doing the trivial surgery in which the attaching map is the identity. Note also that because the sum ${\cal O}(k)\oplus {\cal O}(m)$ depends only on $k+m$ we could equally well have put the extra twist on the other factor. \hfill$\Box$\medskip \subsection{Structure of the pair $({\cal V}_J, {\cal Z}_J)$} Our aim is to prove the following proposition, where ${\cal V}_J$ is the space of gluing parameters for a fixed $J\in {\cal J}_2$ that describes the link of the space of $(A-2F)$-curves in the space of (pointed) $A$-curves. \begin{prop}\label{prop:link} The link ${\cal L}_{{\cal Z}}$ of the zero section ${\cal Z}_J$ in the stratified space ${\cal V}_J$ is constructed by plumbing $S({\cal O}(-3)\oplus {\cal O}(-1))$ to $S(L_Y)$. Hence $$ {\cal L}_{{\cal Z}} = S({\cal O}(-1)\oplus {\bf C}). $$ \end{prop} We are now not quite in the situation described in Proposition~\ref{prop:fibk} because we are including the open stratum ${\cal J}_0$ of ${\cal J}$. This means that we have to replace the space ${\ov{{\cal M}}}_k = {\ov{{\cal M}}}(A-kF, {\cal J})$ by a space ${\ov{{\cal M}}}_0$ of curves of class $A$ that go through the fixed point $x_0$. Since we are interested in working out the structure of the fiber of the projection ${\ov{{\cal M}}}_0\to{\cal J}$ at a point $J\in {\cal J}_2$, we will choose $x_0$ so that it does not lie on the unique $J$-holomorphic $(A-2F)$-curve $\Delta_J$ and then define ${\ov{{\cal M}}}_0$ to be the space ${\ov{{\cal M}}}(A,x_0, {\cal J})$ in Definition~\ref{def:om}. Let $\pi_0$ denote the projection $$ \pi_0:{\ov{{\cal M}}}_0\to{\cal J} $$ and set ${\ov{{\cal M}}}_0({\cal J}_m) = \pi_0^{-1}({\cal J}_m)$ as before. It is not hard to see that the following analog of Proposition~\ref{prop:fibk} holds. \begin{prop}\label{prop:fib0} (i) Let $J\in {\cal J}_m$ be any almost complex structure such that the unique $J$-holomor\-phic $(A-mF)$-curve $\Delta_J$ does not go through $x_0$. Then the projection $$ \pi_0: {\ov{{\cal M}}}_0({\cal J}_m)\to {\cal J}_m $$ is a locally trivial fibration near $J$, whose fiber ${\cal F}_J(0,m)$ is the space of all stable $J$-curves $[{\Sigma},h]$ in class $A$ that have $\Delta_J$ as one component and go through $x_0$. In particular, ${\cal F}_J(0,m)$ is a stratified space whose strata are orbifolds of (real) dimension $\le 4m -2$. {\smallskip} {\noindent} (ii) The singular fibers of $\pi_0: {\ov{{\cal M}}}_0({\cal J}_m)\to {\cal J}_m$ occur at points $J$ for which $x_0\in {\Delta}_J$. For such $J$, $\pi_0^{-1}(J)$ can be identified with the space ${\cal F}_J(m)$ described in Proposition~\ref{prop:fibk}. 
\end{prop} As before, we now construct a pair $({\cal V}_J, {\cal Z}_J)$ that describes a neighborhood of ${\ov{{\cal M}}}_0({\cal J}_m)$ in ${\ov{{\cal M}}}_0$. We will concentrate on the case $m = 2$ and will suppose that $x_0\notin {\Delta}_J$. We further normalize $J$ by requiring that the projection $\pi_J$ along the $J$-holomorphic $F$-curves is simply the projection onto the first factor $S^2$. We write $q_0 = \pi_J(x_0)$. {\medskip} {\noindent} {\bf \S 3.2.1 The bundle ${\cal V}_{2,J}\to {\cal Z}_{2,J}$.}{\smallskip} Observe first that ${\cal Z}_J$ is the union of two subsets: ${\cal Z}_{1,J}$ consisting of all stable $A$-maps $[{\Sigma}, z_0, h]$ that are the union of the $(A-2F)$-curve $\Delta_J$ with a double covering of the fiber $F_0$ through $x_0$, and ${\cal Z}_{2,J}$ consisting of all stable $A$-maps $[{\Sigma}, z_0, h]$ that are the union of $\Delta_J$ with two distinct fibers. We will call these sets ${\cal Z}_{i,J}$ strata. This is accurate as far as ${\cal Z}_{2,J}$ is concerned, but strictly speaking ${\cal Z}_{1,J}$ is a union of strata. (Recall that the strata are determined by the topological type of the marked domain $[{\Sigma}, z_0]$, and the homology classes of the images of its components under $h$.) Let us first consider ${\cal Z}_{2,J}$. Since $h(z_0) = x_0$ always, one of the two fibers has to be $F_0$ and the other moves. Therefore, the stratum ${\cal Z}_{2,J}$ maps onto $S^2 - \{q_0\}$. It is convenient to compactify ${\cal Z}_{2,J}$ by adding a point ${\sigma}_*$ that projects to $q_0$. The domain ${\Sigma}$ of ${\sigma}_*$ has $4$ components with ${\Sigma}_0,{\Sigma}_2,{\Sigma}_3$ all meeting ${\Sigma}_1$ and a marked point $z_0\in {\Sigma}_3$. The map $h_0:{\Sigma}_0\to {\Delta}_J$ parametrizes ${\Delta}_J$ as a section, $h_1$ takes ${\Sigma}_1$ onto the point $F_0\cap {\Delta}_J$, and $h_2,h_3$ have image $F_0$ with $h_3(z_0) = x_0$. The argument of Example~\ref{ex:lens} gives the following result. \begin{lemma} The space ${\cal V}_{2,J}$ of gluing parameters over ${\cal Z}_{2,J}\cup\{{\sigma}_*\} = S^2$ is the bundle $ {\cal O}(-2)\oplus {\bf C}$. \end{lemma} {\medskip} {\noindent} {\bf \S 3.2.2 The stratum ${\cal Z}_{1,J}$}{\smallskip} The bundle just described gives half of ${\cal L}_{{\cal Z}}$. The other half comes from the link of the orbifold ${\cal Z}_{1,J}$ in ${\cal V}_{1,J}$. Thus the next step is to look at ${\cal Z}_{1,J}$. Let $p_0, p_1$ be two distinct points on $F_0 \equiv S^2$, with $p_0=x_0$ and $p_1 = \Delta_J\cap F_0$. Then ${\cal Z}_{1,J}$ is the orbifold $$ {\cal Z}_{1,J} = Y = {\ov{{\cal M}}}_{0,2}(S^2, p_0, p_1, 2) $$ of all stable maps to $S^2$ with two marked points $z_0, z_1$ that are in the class $2[S^2]$ and are such that $h(z_0) = p_0, h(z_1) = p_1$. We will also need to consider the space ${\ov{{\cal M}}}_{0,0}(S^2,2)$ of genus $0$ stable maps of degree $2$ into $S^2$ that have no marked points and the space $$ {\widetilde Y}= {\ov{{\cal M}}}_{0,3}(S^2, p_0, p_1, p_2, 2) $$ of all degree $2$ stable maps to $S^2$ with three marked points $z_0, z_1, z_2$ such that $h(z_0) = p_0, h(z_1) = p_1, h(z_2) = p_2 $. \begin{lemma}\label{ss} (i) ${\ov{{\cal M}}}_{0,0}(S^2,2)$ is a smooth manifold diffeomorphic to ${\bf C} P^2$. {\smallskip} {\noindent} (ii) ${\widetilde Y}= {\ov{{\cal M}}}_{0,3}(S^2, p_0, p_1, p_2, 2)$ is a smooth manifold diffeomorphic to ${\bf C} P^2$. 
{\smallskip} {\noindent} (iii) The forgetful map $f:{\widetilde Y}\to Y$ may be identified with the $2$-fold (branched) cover obtained by quotienting ${\bf C} P^2$ by the involution $\tau:[x:y:z]\mapsto [x:y:-z].$ In particular, $Y$ is smooth except at the point ${\sigma}_{01} = f([0:0:1])$ that has the local chart ${\bf C}^2/\bigl((x,y)\sim(-x,-y)\bigr)$. This point ${\sigma}_{01}$ is the stable map $[S^2, h, z_0, z_1]$ where the critical values of $h$ are at $p_0$ and $p_1$. \end{lemma} \proof{} (i) The space ${\ov{{\cal M}}}_{0,0}(S^2,2)$ has two strata. The first, ${\cal S}_1$, consists of self-maps of $S^2$ of degree $2$, and the second, ${\cal S}_2$, consists of maps whose domain has two components, each taken into $S^2$ by a map of degree $1$. The equivalence relation on each stratum is given by precomposition with a holomorphic automorphism of the domain. It is not hard to check that each equivalence class of maps in ${\cal S}_1$ is uniquely determined by its two critical values (or branch points). Since these can be any pair of distinct points, ${\cal S}_1$ is diffeomorphic to the set of unordered pairs of distinct points in $S^2$. On the other hand there is one element ${\sigma}_w$ of ${\cal S}_2$ for each point $w \in S^2$, the correspondence being given by taking $w$ to be the image under $h$ of the point of intersection of the two components. If ${\sigma}_{\{x,y\}}$ denotes the element of ${\cal S}_1$ with critical values $\{x,y\}$, we claim that ${\sigma}_{\{x,y\}} \to {\sigma}_w$ when $x,y$ both converge to $w$. To see this, let $h_{\{x,y\}}:S^2\to S^2$ be a representative of ${\sigma}_{\{x,y\}}$ and let ${\alpha}_{\{x,y\}}$ be the shortest geodesic from $x$ to $y$. (We assume that $x,y$ are close to $w$.) Then $h_{\{x,y\}}^{-1}({\alpha}_{\{x,y\}})$ is a circle ${\gamma}_{\{x,y\}}$ through the critical points of $h_{\{x,y\}}$. This is obvious if $h_{\{x,y\}}$ is chosen to have critical points at $0,\infty$ and if $x= 0,y=\infty$ since $h_{\{x,y\}}$ is then a map of the form $z\mapsto az^2$. It follows in the general case because M\"obius transformations take circles to circles. Hence $h_{\{x,y\}}$ takes each component of $S^2 - {\gamma}_{\{x,y\}}$ onto $S^2 - {\alpha}_{\{x,y\}}$. If we now let $x,y$ converge to $w$, we see that ${\sigma}_{\{x,y\}}$ converges to ${\sigma}_w$. The above argument shows that ${\ov{{\cal M}}}_{0,0}(S^2,2)$ is the quotient of $S^2\times S^2$ by the involution $(x,y)\mapsto (y,x)$. This is well known to be ${\bf C} P^2$. In fact, it is easy to check that the map $$ H: ([x_0:x_1],[y_0:y_1])\;\mapsto \; [x_0y_0: x_1y_1: x_0y_1 + x_1y_0 - x_0y_0- x_1y_1] $$ induces a diffeomorphism from the quotient to ${\bf C} P^2$. Under this identification the stratum ${\cal S}_2 = H(diag)$ is the quadric $(u+v+ w)^2 = 4uv$ (where we use coordinates $[u:v:w]$ on ${\bf C} P^2$). Further, if we put $$ p_0 = [0:1], \quad p_1 = [1:0],\quad p_2 = [1:1], $$ the set of points in ${\ov{{\cal M}}}_{0,0}(S^2,2) = {\bf C} P^2$ consisting of maps that branch over $p_i$ is a line $\ell_i$, the image by $H$ of $(S^2\times \{p_i\})\cup (\{p_i\}\times S^2)$. Thus $$ \ell_0 = \{u=0\},\quad \ell_1 = \{v=0\},\quad \ell_2 = \{w=0\}. $$ Note finally that all stable maps in ${\ov{{\cal M}}}_{0,0}(S^2,2)$ are invariant under an involution: for example the map $z\mapsto z^2$ is invariant under the reparametrization $z\mapsto -z$. Since all elements have the same reparametrization group, ${\ov{{\cal M}}}_{0,0}(S^2,2)$ is smooth. 
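As a quick algebraic check on this identification (elementary, and included only for the reader's convenience): the formula for $H$ is symmetric in its two arguments, so $H$ does descend to the quotient; and on the diagonal $$ H([x_0:x_1],[x_0:x_1]) \;=\; [x_0^2: x_1^2: 2x_0x_1 - x_0^2 - x_1^2], $$ so that $u+v+w = 2x_0x_1$ and hence $(u+v+w)^2 = 4uv$ on $H(diag)$, in agreement with the description of ${\cal S}_2$ given above. 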
However, this smoothness will no longer hold when we add two marked points.\hfill$\Box$\medskip {\noindent} (ii) Now consider the forgetful map $$ \phi_{30}: {\ov{{\cal M}}}_{0,3}(S^2,p_0,p_1,p_2, 2) \to {\ov{{\cal M}}}_{0,0}(S^2,2). $$ For a general point of ${\ov{{\cal M}}}_{0,0}(S^2,2)$, that is, a point where neither branch point lies at $p_0$, $p_1$ or $p_2$, $\phi_{30}$ is $4$ to $1$. To see this, note that for $i = 0,1,2$, $z_i$ can be either of the points that get mapped to $p_i$, which seems to give an $8$-fold cover. However, because $h$ has degree $2$, $h$ is invariant under an involution ${\gamma}_h$ of $S^2$ that interchanges the two inverse images of a generic point. Hence the cover is $4$ to $1$, and the covering group is ${\bf Z}/2{\bf Z} \oplus {\bf Z}/2{\bf Z}$. When just one branch point is at some $p_i$, $\phi_{30}$ is $2$ to $1$, and when both branch points are at some $p_i$, it is $1$ to $1$. This determines $\phi_{30}$. In fact, with the above identification ${\ov{{\cal M}}}_{0,0}(S^2,2) = {\bf C} P^2$, $\phi_{30}$ is the map $$ \phi_{30}: {\bf C} P^2\to{\bf C} P^2: \quad [x:y:z]\mapsto [x^2:y^2:z^2]. $$ Note that the inverse image of ${\cal S}_2 = \{4uv=(u+v+w)^2\}$ consists of the $4$ lines $$ x\pm y \pm iz = 0. $$ These components correspond to the $4$ different ways of arranging the $3$ marked points on the two components of the stable maps in ${\cal S}_2$. Note further that none of the points in ${\ov{{\cal M}}}_{0,3}(S^2,p_0,p_1,p_2, 2) $ is invariant under any reparametrization of its domain. Hence this moduli space is smooth at every point.\hfill$\Box$\medskip {\noindent} (iii) Similar reasoning shows that the forgetful map $$ \phi_{20}: Y={\ov{{\cal M}}}_{0,2}(S^2,p_0,p_1, 2) \to {\ov{{\cal M}}}_{0,0}(S^2,2) $$ is a $2$-fold cover branched over $\ell_0\cup \ell_1$. Hence we may identify $Y$ as $$ Y = \{[u:v:w:t]\in {\bf C} P^3: t^2 = uv\} $$ where the cover $\phi_{20}:Y\to {\bf C} P^2$ forgets $t$. There is one point in $Y$ that is invariant under a reparametrization of its domain, namely the point ${\sigma}_{01}$ corresponding to the map $h:S^2\to S^2$ that branches at $p_0$ and $p_1$. In the above coordinates on $Y$, $$ {\sigma}_{01} = \phi_{20}^{-1}(\ell_0\cap \ell_1) = [0:0:1:0]. $$ It is also easy to check that $$ \phi_{32}: {\ov{{\cal M}}}_{0,3}(S^2,p_0,p_1,p_2, 2) = {\bf C} P^2 \to Y $$ has the formula $$ \phi_{32}([x:y:z])= [x^2:y^2:z^2: xy]. $$ Since $\phi_{32}\circ \tau( [x:y:z]) = \phi_{32}( [x:y:-z]) = \phi_{32}([x:y:z])$, $\phi_{32}$ is equivalent to quotienting out by $\tau$ as claimed.\hfill$\Box$\medskip {\medskip} {\noindent} {\bf \S 3.2.3 The bundle ${\cal V}_{1,J}\to {\cal Z}_{1,J}$.} {\smallskip} Now we consider the structure of the orbibundles of gluing parameters over ${\widetilde Y} = {\ov{{\cal M}}}_{0,3}(S^2,p_0,p_1,p_2, 2)$ and ${\cal Z}_{1,J}= Y= {\ov{{\cal M}}}_{0,2}(S^2,p_0,p_1, 2)$. We will call the first ${\widetilde L}\to {\widetilde Y}$ and the second $L_Y\to Y$. In both cases the fiber at the stable map $[{\Sigma}, h, z_i]$ is the tangent space $T_{z_1}{\Sigma}$. \begin{lemma} (i) The orbibundle ${\widetilde L}\to {\widetilde Y}$ is smooth and may be identified with the canonical line bundle $L_{can}$ over ${\widetilde Y} = {\bf C} P^2$. {\smallskip} {\noindent} (ii) The orbibundle $L_Y\to Y$ is smooth except at the point ${\sigma}_{01}$. It can be identified with the quotient of $L_{can}$ by the obvious lift $\tilde \tau$ of $\tau$.
{\smallskip} {\noindent}(iii) The set $S(L_Y)$ of unit vectors in $L_Y$ is smooth and diffeomorphic to $S^5$. The orbibundle $S(L_Y)\to Y$ can be identified with the quotient of $S^5$ by the circle action $$ \theta\cdot(x,y,z) = (e^{i{\theta}} x, e^{i{\theta}} y, e^{2i{\theta}} z). $$ \end{lemma} \proof{} Since ${\widetilde Y}$ is smooth, the general theory implies that ${\widetilde L}$ is smooth. Therefore, it is a line bundle over ${\bf C} P^2$ and to understand its structure we just have to figure out its restriction to one line. It is easiest to consider one of the lines $x\pm y\pm iz=0$ that lie over ${\cal S}_2$. Recall that ${\sigma}_w\in {\cal S}_2$ is the stable map $[{\Sigma}_w,h_w]$ with domain ${\Sigma}_w = S^2\cup_{w=w}S^2$ and where $h$ is the identity map on each component. Suppose we look at the line in ${\widetilde Y}$ whose generic point has $z_1$ on one component of ${\Sigma}_w$ and $z_0, z_2$ on the other. Then the bundle ${\widetilde L}$ has a natural trivialization over the set $\{w\in S^2: w\ne z_0,z_1,z_2\}$. It is not hard to check that this trivialization extends over the points $z_0, z_2$ but that one negative twist is introduced when $z_1$ is added. The argument is very similar to the proof of Lemma~\ref{glu} below, and is left to the reader.\hfill$\Box$\medskip {\medskip} {\noindent} (ii) It follows from the general theory that $L_Y\to Y$ is smooth over the smooth points of $Y$. Moreover, at ${\sigma}_{01} = [S^2, h]$ the automorphism ${\gamma}:S^2\to S^2$ such that $h\circ {\gamma} = h, {\gamma}(z_i) = z_i$ acts on $T_{z_1}S^2$ by the map $v\mapsto -v$. (To see this, note that we can identify $S^2$ with ${\bf C}\cup \{\infty\}$ in such a way that $z_0 =p_0 = 0, z_1 =p_1 = \infty$. Then $h(z) = z^2$, and ${\gamma}(z) = -z$.) Hence the local structure of $L_Y$ at ${\sigma}_{01}$ is given by quotienting the trivial bundle $D^4\times {\bf C}$ by the map $(x,y)\times v\mapsto (-x,-y)\times -v$. This is precisely the structure of the quotient of ${\widetilde L}$ by $\tilde\tau$ at the singular point. Moreover, we can identify $S(L_Y)$ with $ S^5/\tau$ globally since $L_Y\to Y$ pulls back to ${\widetilde L}\to {\widetilde Y}$ under the map ${\widetilde Y}\to Y$.\hfill$\Box$\medskip {\medskip} {\noindent} (iii) The quotient $S^5/\tau$ is smooth except possibly at the fixed points $(x,y,0)$ of $\tau$. Since $S(L_Y)$ is smooth at these points, $S(L_Y)$ is smooth everywhere. It may be identified with $S^5$ by the map $$ S(L_Y) \equiv S^5/\tau \to S^5:\quad (x,y,z)\mapsto (x\,\sqrt{1+|z|^2}, y\,\sqrt{1+|z|^2}, z^2). $$ (This map is well defined on $S^5/\tau$ since it depends on $z$ only through $z^2$, and it takes values in $S^5$ because $|x|^2+|y|^2+|z|^2 = 1$ implies $(|x|^2+|y|^2)(1+|z|^2) + |z|^4 = (1-|z|^2)(1+|z|^2)+|z|^4 = 1$.) The last statement may be proved by noting that the formula $$ (x,y,z)\mapsto [x^2:y^2:z:xy]\in {\bf C} P^3 $$ defines a diffeomorphism from the orbit space of the given circle action to ${\bf C} P^2/\tau = Y$. \hfill$\Box$\medskip {\medskip} {\noindent} {\bf \S 3.2.4 Attaching the strata.} {\smallskip} The next step is to understand how the two strata ${\cal V}_{1,J}$ and ${\cal V}_{2,J}$ fit together. The two zero sections ${\cal Z}_{1,J}$ and ${\cal Z}_{2,J}$ intersect at the point ${\sigma}_*$. Recall that the domain ${\Sigma}$ of ${\sigma}_*$ has $4$ components with ${\Sigma}_0,{\Sigma}_2,{\Sigma}_3$ all meeting ${\Sigma}_1$ and a marked point $z_0\in {\Sigma}_3$. The map $h_0:{\Sigma}_0\to {\Delta}_J$ parametrizes ${\Delta}_J$ as a section, $h_1$ takes ${\Sigma}_1$ onto the point $x_1=F_0\cap {\Delta}_J$, and $h_2,h_3$ have image $F_0$ with $h_3(z_0) = x_0$. The stratum of ${\cal Z}_{1,J}$ containing ${\sigma}_*$ consists just of this one point.
Hence the local coordinates of ${\sigma}_*$ in ${\cal Z}_{1,J}$ are given by two gluing parameters $(a_0,a_1)$. If we write $z_{ij}$ for the point ${\Sigma}_i\cap {\Sigma}_j$, these are $$ (a_0,a_1)\;\; \mbox{where} \;\;a_0\in T_{z_{12}}{\Sigma}_1\otimes T_{z_{12}}{\Sigma}_2,\;\; a_1\in T_{z_{13}}{\Sigma}_1\otimes T_{z_{13}}{\Sigma}_3. $$ Similarly, the local coordinates for a neighborhood of ${\sigma}_*$ in ${\cal V}_{1,J}$ are $$ (b,a_0,a_1) $$ where $(a_0,a_1)$ are as before and $b \in T_{z_{01}}{\Sigma}_0\otimes T_{z_{01}}{\Sigma}_1$ is a gluing parameter at the point $z_{01}$ where the component ${\Sigma}_0$ mapping to ${\Delta}_J$ is attached. On the other hand the natural coordinates for a neighborhood of ${\sigma}_*$ in ${\cal V}_{2,J}$ are triples $(w,b,a)$ where $b$ is a gluing parameter at the point $z_{03}$ where the component ${\Sigma}_0$ that maps to ${\Delta}_J$ is attached to the fixed fiber ${\Sigma}_3$, $w$ is the point where the moving fiber ${\Sigma}_2$ (the one not containing $z_0$) is attached to ${\Sigma}_0$ and $a$ is a gluing parameter at $w$. \begin{lemma}\label{glu} The attaching map ${\alpha}$ at ${\sigma}_*$ has the form $(b,a_0,a_1)\mapsto (w_b, ba_0, ba_1)$, where $b\ne 0$ and $\|b\|$ is small. Here the map $b\mapsto w_b$ identifies a small neighborhood of $0$ in $T_{z_{01}}{\Sigma}_0\otimes T_{z_{01}}{\Sigma}_1$ with a neighborhood of $x_1$ in $\Delta_J$ in the obvious way. \end{lemma} \proof{} The attaching of ${\cal Z}_{1,J}$ to ${\cal Z}_{2,J}$ comes from gluing at the point $z_{01}$ via the parameter $b$. Thus we are gluing the ``ghost component'' ${\Sigma}_1$ to the component ${\Sigma}_0$ that maps to ${\Delta}_J$ in the space of stable $A$-curves that are holomorphic for a {\em fixed} $J$. (It is only when one glues at $a_0$ or $a_1$ that one changes the homology class of the curve ${\Delta}_J$ and hence has to change $J$.) In particular, we can forget the components ${\Sigma}_2, {\Sigma}_3$ of the domain ${\Sigma}$ of ${\sigma}_*$, retaining only the points $z_{12}, z_{13}$ on ${\Sigma}_1$ where they are attached. Therefore we can consider the domain of the attaching map ${\alpha}$ to be the $2$-dimensional space $$ \{[{\Sigma}_0\cup_{z_{01}} {\Sigma}_1, z_{12}, z_{13}, h_{\Delta}; b] :\quad b\in {\bf C} = T_{z_{01}}{\Sigma}_0\otimes T_{z_{01}}{\Sigma}_1\}, $$ and its range to be the space of all elements $[{\Sigma}_0, q_0, w, h_{\Delta}, J]$ where ${\Sigma}_0 = S^2$ and $w$ moves in a small disc about $x_1$. Here, the map $h_{\Delta}: {\Sigma}_0\to {\Delta}_J$ is fixed and parametrizes ${\Delta}_J$ as a section. We can encode this by picking two points $q_1, q_2$ in ${\Sigma}_0$ that are different from $q_0 = \pi_J(x_0)$ and then considering $h_{\Delta}$ to be the map that takes these two marked points to two other fixed points on ${\Delta}_J$. Thus the attaching map ${\alpha}$ is equivalent to the following map ${\alpha}'$ that attaches different strata in the moduli space ${\ov{{\cal M}}}_{0,4}(S^2)$ of $4$ marked points on $S^2$: $$ {\alpha}': \{({\Sigma}_0\cup{\Sigma}_1, q_1,q_2,z_{12}, z_{13}; b): b\in {\bf C}\} \to \{({\Sigma}_0, q_1,q_2,z_{12}, z_{13}) \in {\ov{{\cal M}}}_{0,4}(S^2)\}. $$ Here, each ${\Sigma}_i$ is a copy of $S^2$ as before. On the left $q_1, q_2$ are two marked points on ${\Sigma}_0$ and $z_{12}, z_{13}$ are two marked points on ${\Sigma}_1$.
On the right, we should consider the three points $q_1, q_2, z_{13}$ to be fixed, while $z_{12} = w$ moves, since this corresponds to our previous trivialization of the neighborhood of ${\sigma}_*$ in ${\cal V}_{2,J}$. Thus ${\alpha}'$ may be considered as a map taking $b$ to $w_b = z_{12}\in {\Delta}_J$. It remains to check that as $b$ moves once (positively) around $0$, $w_b$ moves once positively around $z_{13}$. This follows by examining the identification of the glued domain $$ {\Sigma}_b = \left({\Sigma}_0 - D(z_{01})\right)\cup_{gl_b} \left({\Sigma}_1 - D(z_{01})\right) $$ with ${\Sigma}_0 = S^2$. Observe that the two points $q_1,q_2 $ in ${\Sigma}_0 - D(z_{01})$ and the single point $z_{12}$ in ${\Sigma}_1 - D(z_{01})$ must be taken to the corresponding three fixed points on $S^2={\Sigma}_1$. Hence the identification on ${\Sigma}_0 - D(z_{01})$ is fixed, while that on ${\Sigma}_1 - D(z_{01})$ can rotate about $z_{13}$ as $b$ moves. Hence, when $b$ moves round a complete circle, so does $w_b$. It remains to check the direction of the rotation. Now, as we saw in Proposition~\ref{nbhdpt}, as $b$ moves once round this circle positively as seen from $z_{01}$, the point $p_b$ on ${\partial} D(z_{01})\subset {\Sigma}_1$ that is matched with a fixed point $p$ on ${\Sigma}_0 - D(z_{01})$ moves once positively round ${\partial} D(z_{01})$. In order to line up $p_b$ with $p$, ${\Sigma}_1$ must be rotated in the {\em opposite} direction, i.e. positively as seen from the fixed point $z_{13}$. Hence $w_b$ rotates positively round $z_{13}$. To complete the proof of the lemma, we must understand how the gluing parameters $a_0, a_1$ fit into this picture. Since nothing is happening in the vertical (i.e. fiberwise) direction, we may consider the $a_i$ to be elements of the following tangent spaces: $$ a_0\in T_{z_{12}} {\Sigma}_1,\quad a_1\in T_{z_{13}} {\Sigma}_1. $$ As $b$ rotates positively, the image of $a_0$ in the glued curve rotates once positively in the tangent space of $z_{12}$, and $a_1\in T_{w_b}{\Sigma}_1$ also rotates once with respect to the standard trivialization of the tangent spaces $T_{w_b}{\Sigma}_1 \subset T({\Sigma}_1)|_{D(z_{12})}.$ Hence the result follows.\hfill$\Box$\medskip {\noindent} {\bf Proof of Proposition~\ref{prop:link}}. We have identified the orbibundle ${\cal V}_{1,J}\to {\cal Z}_{1,J}$ with $L_Y\to Y$ and the bundle ${\cal V}_{2,J}\to {\cal Z}_{2,J}$ with ${\cal O}(-2)\oplus {\bf C}\to S^2$. The previous lemma shows that these are attached by first twisting ${\cal V}_{2,J}$ to ${\cal O}(-3)\oplus {\cal O}(-1)$ and then plumbing it to $L_Y$. Hence $$ {\cal L}_{\cal Z} = S({\cal O}(-3)\oplus {\cal O}(-1)){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L_Y) $$ as claimed. The identification of the latter space with $S({\cal O}(-1)\oplus{\bf C})$ follows from Proposition~\ref{str}. \hfill$\Box$\medskip \subsection{The projection ${\cal V}_J\to {\cal J}$} In order to complete the calculation of the link ${\cal L}_{2,0}$ of ${\cal J}_2$ in ${\cal J}$ it remains to understand the projection ${\cal V}_J\to {\cal J}$. This is $1$-to-$1$ except over the points of ${\cal J}_1$. In ${\cal V}_{2,J}$ it is clearly the points with zero gluing parameter at the moving fiber that get collapsed. Thus the subbundle $R_-$ of the circle bundle $S(L_P) \to {\cal P}({\cal O}(-2)\oplus {\bf C})$ that lies over the (rigid) section $S_-= {\cal P}(\{0\}\oplus {\bf C})$ must be collapsed to a single circle.
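{\smallskip} {\noindent} Heuristically (we record this only as a plausibility check, not as part of the proof), the collapse can be seen as follows. When the gluing parameter at the moving fiber vanishes, the glued object still contains a fiber component, so the almost complex structure it produces lies in ${\cal J}_1$; and since every fiber is holomorphic for such a structure, that structure does not remember the position $w\in S^2$ of the moving fiber. If, as the notation suggests, $L_P$ denotes the tautological line bundle over the projectivization, then its restriction to $S_- = {\cal P}(\{0\}\oplus{\bf C})$ is trivial, and the collapse takes the form $$ R_-\;\cong\; S^2\times S^1\;\longrightarrow\; S^1, $$ with only the phase of the remaining gluing parameter surviving.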
The subbundle $R_+$ lying over the other section $S_+={\cal P}({\cal O}(-2)\oplus \{0\})$ maps to a family of distinct elements in ${\cal J}_1$. The story on ${\cal V}_{1,J}$ is, of course, more complicated. Here the points that concern us are the maps in ${\cal S}_2$ where the branch points coincide. Thus, if we identify ${\ov{{\cal M}}}_{0,0}(S^2, 2)$ with ${\bf C} P^2$ as in Lemma~\ref{ss}, these are the points of the quadric $Q = \{(u+v+w)^2 = 4uv\}$. Note that the attaching point ${\sigma}_*\in {\ov{{\cal M}}}_{0,2}(S^2, p_0,p_1, 2)$ sits over $$ [1:0:-1]=\ell_1 \cap Q\;\in\;{\bf C} P^2 = {\ov{{\cal M}}}_{0,0}(S^2, 2). $$ The lift of $Q$ to $Y = {\ov{{\cal M}}}_{0,2}(S^2, p_0,p_1, 2)$ has two components $Q_\pm$, given by the intersection $Y\cap H_\pm$ where $H_\pm$ is the hyperplane $2t = \pm (u+v+w)$. Since we can assign these at will, we will say that $Q_-$ corresponds to elements with the two marked points $z_0, z_1$ on the same component of ${\Sigma}_w = {\Sigma}_0\cup_{w=w} {\Sigma}_1$ and that $Q_+$ corresponds to elements with $z_0, z_1$ on different components. Then, when one glues at $z_1$ the resulting $A$-curve is the union of an $(A-F)$-curve with an $F$-curve. It is not hard to check that the points on $Q_-$ give rise to an $(A-F)$-curve through $x_0$, which is independent of $w$, while those on $Q_+$ give rise to a varying $(A-F)$-curve that meets the $J_w$-holomorphic fiber through $x_0$ at the point $w$. Note that the intersection $Q_+\cap Q_-$ consists of two points, $p_* = [1:0:-1:0]$ (corresponding to ${\sigma}_*$) and $q_*=[0:1:-1:0]$. Moreover, in the coordinates $(a_0, a_1)$ of a neighborhood of ${\sigma}_*$ used in Lemma~\ref{glu} above, $$ \{(a_0, a_1): a_0=0\}\subset Q_-, \quad \{(a_0,a_1): a_1=0\}\subset Q_+. $$ This confirms that when ${\cal V}_{2,J}\otimes {\cal O}(-1) = {\cal O}(-3)\oplus {\cal O}(-1)$ is plumbed to ${\cal V}_{1,J}$, $Q_-$ is plumbed to the subbundle $\{0\} \oplus {\cal O}(-1)$ corresponding to $R_-$ and $Q_+$ is plumbed to the subbundle ${\cal O}(-3)\oplus\{0\}$ corresponding to $R_+$. Let $S(Q_\pm)\to Q_\pm$ denote the restriction of $S(L_Y)\to Y$ to $Q_\pm.$ Then the plumbing ${\cal O}(-3)\oplus{\cal O}(-1) {\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L_Y)$ contains the plumbings $R_-{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_-)$ and $ R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+)$. \begin{lemma} (i) $R_-{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_-) = S({\bf C}) =S^2\times S^1$ and $ R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+) = S({\cal O}(-2))$. {\smallskip} {\noindent}(ii) The subsets $R_-{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_-)$ and $R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+)$ of ${\cal O}(-3)\oplus{\cal O}(-1) {\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L_Y)$ intersect in a circle. \end{lemma} \proof{} Since $Q_-$ and $Q_+$ do not meet the singular point of $Y$, both bundles $S(Q_\pm)\to Q_\pm$ have Euler number $-1$. Hence $$ R_-{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_-) = S({\cal O}(-1)){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S({\cal O}(-1)) = S({\bf C}) = S^2\times S^1, $$ and $$ R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+) = S({\cal O}(-3)){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S({\cal O}(-1)) = S({\cal O}(-2)). $$ This proves (i). To prove (ii) note that the inverse image (in $S(Q_\pm)$) of the intersection point $p_* = [1:0:-1:0]$ of $Q_-$ with $Q_+$ disappears under the plumbing.
But the other point, $q_*$, remains.\hfill$\Box$\medskip {\noindent} {\bf Proof of Theorem~\ref{LINK}}. It follows from part (i) of the preceding lemma that it is possible to collapse the subset $R_-{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_-)$ of ${\cal L}_{\cal Z}$ to a single circle. Moreover, it is not hard to see that under the identification of ${\cal L}_{\cal Z}=S({\cal O}(-3)\oplus {\cal O}(-1)){\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(L_Y)$ with $S({\cal O}(-1)\oplus{\bf C})$, this collapsing corresponds to collapsing the circle bundle over the exceptional divisor. Since the intersection of $R_-{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_-)$ with $R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+)$ is a single circle, this collapsing does not affect $R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+)$. Note that $R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+)$ is the inverse image of some $2$-dimensional submanifold of ${\bf C} P^2$. Because $ R_+{\mbox{$\,\equiv \hspace{-.92em} \| \,\,\,$}} S(Q_+) = S({\cal O}(-2))$, this submanifold must be a quadric. \hfill$\Box$\medskip \section{Analytic arguments} In \S 4.1 we prove the (easy) Lemma~\ref{mfld}. \S 4.2 contains a detailed analysis of gluing. The exposition here is fairly self-contained, though some results are quoted from [MS] and [FO]. \subsection{Regularity in dimension $4$} The theory of $J$-holomorphic spheres in dimension $4$ is much simplified by the fact that any $J$-holomorphic map $h:S^2\to X$ that represents a class $A$ with $A\cdot A\ge -1$ is regular, i.e. the linearized delbar operator $$ Dh: W^{1,p}(h^*(TX))\to L^p\left({\Lambda}_J^{0,1}(S^2)\otimes h^*(TX)\right) $$ is surjective. This remains true even if $h$ is a multiple covering. (For a proof see Hofer--Lizan--Sikorav~[HLS]. The notation is explained below.) Therefore, regularity is automatic: one does not have to perturb the equation in order to achieve it. The analogous statement when $A\cdot A < -1$ is that ${\rm Coker}\, Dh$ always has rank equal to $-(2 +2A\cdot A)$. As is shown below, this almost immediately implies that the ${\cal J}_k$ are submanifolds of ${\cal J}$. {\medskip} {\noindent} {\bf Proof of Lemma~\ref{mfld}} We begin by proving that ${\cal J}_k$ is a Fr\'echet manifold. This is obvious when $k = 0$, since ${\cal J}_0$ is an open subset of ${\cal J}$. For $k > 0$, let ${\cal C}_k$ denote the space of all symplectically embedded spheres in the class $A-kF$, and let ${\cal C}_k({\cal J})$ be the bundle over ${\cal C}_k$ whose fiber at $C$ is the space of all smooth almost complex structures on $C$ that are compatible with ${\omega}|_C$. Then ${\cal C}_k({\cal J})$ fibers over ${\cal C}_k$ and it is easy to check that both spaces are Fr\'echet manifolds. (Note that ${\cal C}_k$ is an open submanifold in the space of all embedded spheres in the class $A-kF$. Because these spheres are not parametrized, the tangent space to ${\cal C}_k$ at $C$ is the space of all sections of the normal bundle to $C$.) Further ${\cal J}_k$ fibers over ${\cal C}_k({\cal J})$ with fiber at $(C,J|_C)$ equal to all ${\omega}$-compatible almost complex structures that restrict to $J$ on $TC$. This proves the claim. To see that $\pi_k$ is bijective when $k > 0$, note that each $J\in {\cal J}_k$ admits a holomorphic curve in class $A - kF$ by definition, and that this curve is unique by positivity of intersections. A similar argument works when $k = 0$ since the curves in ${\cal M}(A,{\cal J})$ are constrained to go through $x_0$.
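{\smallskip} {\noindent} The uniqueness is a one-line computation, which we record as a hedged check under the intersection-form assumptions $A\cdot A = F\cdot F = 0$ and $A\cdot F = 1$ (these are consistent with the codimension count $4k-2$ used below): if $J$ admitted two distinct irreducible $J$-holomorphic curves in the class $A-kF$, positivity of intersections would force $$ (A-kF)\cdot (A-kF) \;=\; -2k\;\ge\; 0, $$ which is impossible when $k > 0$.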
Hence ${\cal M}_k$ inherits a Fr\'echet manifold structure from ${\cal J}_k$. To show that ${\cal J}_k$ is a submanifold of ${\cal J}$ when $k > 0$ we must use the theory of $J$-holomorphic curves, as explained in Chapter~3 of [MS] for example. Let ${\cal M}_k^s, {\cal J}_k^s, {\cal J}^s$ denote the similar spaces in the $C^s$-category for some large $s$. These are all Banach manifolds. It is easy to check that the tangent space $T_J{\cal J}^s$ is the space $End(TX,{\omega},J)$ of all $C^s$-sections $Y$ of the endomorphism bundle of $TX$ such that $$ JY + YJ = 0,\quad {\omega}(Yx,y) = {\omega}(x, Yy). $$ These conditions imply that ${\omega}(Yx,x) = {\omega}(Yx,Jx) = 0$ for all $x$. It follows easily that $Y$ is determined by its value on a single nonzero vector $x$, which it must take into the ${\omega}$-orthogonal complement of the $J$-complex line through $x$. Observe further that there is an exponential map $$ exp: T_J{\cal J}^s\to {\cal J}^s $$ that preserves smoothness and is a local diffeomorphism near the zero section. Next, note that the tangent space $T_{[h,J]}{\cal M}_k^s$ is the quotient of the space of all pairs $(\xi, Y)$ such that $$ Dh (\xi) + \frac 12 Y\circ dh\circ j = 0\qquad\qquad (*) $$ by the $6$-dimensional tangent space to the reparametrization group ${\rm PSL}(2,{\bf C})$. Here $j$ is the standard almost complex structure on $S^2$ and $Dh$ is the linearization of the delbar operator that maps the Sobolev space of $W^{1,p}$-smooth sections of $h^*(TX)$ to anti-$J$-holomorphic $1$-forms, viz: \begin{eqnarray}\label{eq:Dh} Dh: W^{1,p}(S^2, h^*(TX)) \to L^p({\Lambda}_J^{0,1}(S^2)\otimes h^*(TX)), \end{eqnarray} where the norms are defined using the standard metric on $S^2$ and a metric on $TX$. In~[HLS], Hofer--Lizan--Sikorav show how to interpret elements of ${\rm Ker}\, Dh$ and of ${\rm Ker}\, Dh^*$ (where $Dh^*$ is the formal adjoint) as $J$-holomorphic curves in their own right. Using the fact that the domain is a sphere and that $X$ has dimension $4$, they then use positivity of intersections to show that ${\rm Ker}\, Dh$ is trivial when $k > 0$, i.e. it consists only of vectors that generate the action of ${\rm PSL}(2,{\bf C})$. Hence ${\rm Ker}\, Dh^*$ is a bundle over ${\cal M}_k^s \cong {\cal J}_k^s$ of rank $4k - 2 = -{\rm index\,}Dh$, and it is not hard to see that it is isomorphic to the normal bundle of ${\cal J}_k^s$ in ${\cal J}^s$. In other words $$ T_J{\cal J}^s = T_J {\cal J}_k^s \oplus {\rm Ker}\, Dh^*. $$ To see this, observe that the map \begin{eqnarray}\label{eq:io} {\iota}: Y\mapsto \frac 12 Y\circ dh\circ j \end{eqnarray} maps $T_J{\cal J}^s$ onto the space of $C^s$-sections of ${\Lambda}_J^{0,1}(S^2)\otimes h^*(TX)$, and that the kernel of this projection consists of elements $Y$ that vanish on the tangent bundle to the image of $h$ and so lie in $T_J {\cal J}_k^s$ whenever $[h,J] \in {\cal M}_k^s$. It follows from equation~$(*)$ above that the image of $T_J {\cal J}_k^s$ under this projection is precisely equal to the image of $Dh$, and so has complement isomorphic to ${\rm Ker}\, Dh^*$. (For more details on all this, see the Appendix to~[A].) It now remains to show that ${\cal J}_k$ is a submanifold of ${\cal J}$ whose normal bundle has fibers ${\rm Ker}\, Dh^*$. This means in particular that the codimension of ${\cal J}_k$ is $- {\rm ind\,}\,Dh = 4k-2.$ We therefore have to check that each point in ${\cal J}_k$ has a neighborhood $U$ in ${\cal J}$ that is diffeomorphic to the product $(U\cap {\cal J}_k)\times {\bf R}^{4k-2}$.
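{\smallskip} {\noindent} For orientation, here is the index bookkeeping behind the number $4k-2$, again as a consistency check under the same intersection-form assumptions as above. By adjunction an embedded sphere in class $A' = A-kF$ has $c_1(A') = 2 + A'\cdot A' = 2-2k$, so that the index of $Dh$, after dividing by the $6$-dimensional reparametrization group, is $$ 2c_1(A') + 4 - 6 \;=\; 2(2-2k) - 2 \;=\; 2 - 4k. $$ Since ${\rm Ker}\, Dh$ is trivial in the above sense, ${\rm Ker}\, Dh^*$ has rank $4k-2$, which is exactly the asserted codimension; it remains to produce the product charts themselves.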
It is here that we use the exponential map {\it exp}. Clearly, one can use {\it exp} to define such local charts for ${\cal J}_k^s$ in ${\cal J}^s$. The point here is that the derivative of the putative chart will be the identity along $(U\cap {\cal J}_k^s)\times \{0\}$ and so by the implicit function theorem for Banach manifolds will be a diffeomorphism on a neighborhood. Then, because ${\rm Ker}\, Dh^*$ consists of $C^\infty$ sections when $J$ is $C^\infty$ and because {\it exp} respects smoothness, this local diffeomorphism will take $(U\cap {\cal J}_k)\times {\bf R}^{4k-2}$ onto a neighborhood of $J$ in ${\cal J}$. \hfill$\Box$\medskip \subsection{Gluing} The next task is to complete the proof of Propositions~\ref{nbhdpt} and~\ref{glue}. The standard gluing methods are local and work in the neighborhood of one stable map, and so our main problem is to globalize the construction. The first step in doing this is to show that one can still glue even when the elements of the obstruction bundle are nonzero at the gluing point. We will use the gluing method of McDuff--Salamon~[MS] and Fukaya--Ono~[FO]. Much of the needed analysis appears in~[MS] but the conceptual framework of that work has to be enlarged to include the idea of stable maps as in Hofer--Salamon~[HS]. No doubt the other gluing methods can be adapted to give the same results. Our aim is to construct a gluing map $$ {\cal G}:{\cal N}_{\cal V}({\cal Z})\to {\ov{{\cal M}}}(A-kF,{\cal J}) $$ where ${\cal Z} = {\ov{{\cal M}}}(A-kF, {\cal J}_m)$ is the space of stable maps in class $A-kF$ with one component in class $A-mF$, and ${\cal N}_{\cal V}({\cal Z})$ is a neighborhood of ${\cal Z}$ in the space ${\cal V}$ of gluing parameters. Choose once and for all a $(4m-2)$-dimensional subbundle $K$ of $T{\cal J}|_{{\cal J}_m}$ that is transverse to ${\cal J}_m$. As explained in \S4.1 above, the exponential map $exp$ maps a neighborhood of the zero section in $K$ diffeomorphically onto a neighborhood of ${\cal J}_m$ in ${\cal J}$. For each $J\in {\cal J}_m$ let $$ {\cal K}_J\subset{\cal J} $$ be the slice through $J$ (i.e. the image under $exp$ of a small neighborhood ${\cal N}_J(K)$ of $0$ in the fiber of $K$ at $J$). We shall prove the following sharper version of Proposition~\ref{glue}. \begin{prop}\label{glue2} Fix $J\in {\cal J}_m$ and let ${\cal N}_{\cal V}({\cal Z}_J)$ be the fiber of the map ${\cal N}_{\cal V}({\cal Z})\to {\cal J}_m$ at $J$. Then, if the neighborhood ${\cal N}_{\cal V}({\cal Z})$ is sufficiently small, there is a homeomorphism $$ {\cal G}_J:{\cal N}_{\cal V}({\cal Z}_J)\;\longrightarrow\; {\ov{{\cal M}}}(A-kF,{\cal K}_J) $$ onto a neighborhood of ${\ov{{\cal M}}}(A-kF, J)$ in ${\ov{{\cal M}}}(A-kF,{\cal K}_J)$. Moreover, the union of all the sets ${\rm Im\,} {\cal G}_J, J\in {\cal J}_m$, is a neighborhood of ${\ov{{\cal M}}}(A-kF,{\cal J}_m)$ in ${\ov{{\cal M}}}(A-kF,{\cal J})$. \end{prop} Let $\pi_J:{\cal N}_{\cal V}({\cal Z}_J)\to{\cal Z}_J$ denote the projection. We will first construct the map ${\cal G}_J$ in the fiber at one point ${\sigma} = [{\Sigma}_{\sigma}, h_{\sigma}, J]$ of ${\cal Z}_J$ and then show how to fit these maps together to get a global map over ${\cal N}_{\cal V}({\cal Z}_J)$ with the stated properties. For the next paragraphs (until \S4.2.4) we will fix a particular representative $h_{\sigma}:{\Sigma}_{\sigma}\to X$ of ${\sigma}$, and we will define ${\Tilde {\Gg}}$ as a map into the space of parametrized stable maps.
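{\smallskip} {\noindent} Note, as a sanity check, that the rank of $K$ matches the codimension of ${\cal J}_m$ computed in \S4.1 (taking $k = m$ there): $$ T_J{\cal J} \;=\; T_J{\cal J}_m\,\oplus\, K_J,\qquad {\rm rank}\, K \;=\; 4m-2, $$ so that, heuristically, the slices ${\cal K}_J$ sweep out a full neighborhood of ${\cal J}_m$ in ${\cal J}$ as $J$ varies; this is what makes the second statement of Proposition~\ref{glue2} plausible.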
In order to understand a full neighborhood of ${\sigma}$ we will have to glue not only at points where the branches meet the stem ${\Sigma}_0$ but also at points internal to the branches. Therefore, for the moment we will forget the stem-branch structure of our stable maps and consider the general problem of gluing, at the points $z_i\in{\Sigma}_{i0}\cap{\Sigma}_{i1}$ with parameter $$ a\; = \;\oplus_i a_i\;\in \; \bigoplus_i T_{z_i}{\Sigma}_{i0}\otimes T_{z_i}{\Sigma}_{i1}. $$ {\medskip} {\noindent} {\bf \S 4.2.1:\, Construction of the pregluing $h_a$}{\smallskip} We showed in Proposition~\ref{glue} above how to construct the glued domain ${\Sigma}_a$. Since this construction depends on a choice of metric on ${\Sigma}$, we must assume that the domain ${\Sigma}$ of each stable map is equipped with a K\"ahler metric that is flat near all double points and is invariant under the action of the isotropy group ${\Gamma}_{{\sigma}}$. Fukaya--Ono point out in [FO] \S 9 that it is possible to choose such a metric continuously over the whole moduli space: one just has to start at the strata containing elements ${\sigma}$ with the largest number of components, extend the choice of metric near these strata by using the gluing construction (which is invariant under ${\Gamma}_{\sigma}$) and then continue inductively, stratum by stratum. In what follows we will assume this has been done. We will also suppose that the cutoff functions $\chi_r$ used to define ${\Sigma}_a$ have been chosen once and for all. The approximately holomorphic map $h_a:{\Sigma}_a\to X$ is defined from $h_{\sigma}$ by using cutoff functions. As before, we write $r_i$ or simply $r$ instead of $\sqrt{\|a_i\|}$. Hence if $R$ is as in [FO] or [MS], $r= 1/R$. We choose a small ${\delta}> 0$ once and for all so that $r/{\delta} $ is still small.\footnote { The logic is that one chooses ${\delta}>0$ small enough for certain inequalities to hold, and then chooses $r\le r({\delta})$. See Lemma~\ref{le:de} below.} Set $x_i = h_{\sigma}(z_i)$. Then, for ${\alpha} = 0,1$ define \begin{eqnarray*} h_a (z) &=& h_{\sigma}(z) \mbox{ for }\; z\in {\Sigma}_{i{\alpha}} - D_{z_i}^{\alpha}(2r/{\delta})\\ & = & x_i \mbox{ for }\; z\in D_{z_i}^{\alpha}({r}/{{\delta} } ) - D_{z_i}^{\alpha}(r) \end{eqnarray*} and interpolate on the annulus $D_{z_i}^{\alpha}(2{r}/{{\delta}} ) - D_{z_i}^{\alpha}(r/{\delta})$ in ${\Sigma}_{i{\alpha}}$ by setting $$ h_a(z) = exp_{x_i}(\rho( {{\delta} |z|}/ r) \xi_{i{\alpha}}(z)), $$ where $\rho$ is a smooth cut-off function that equals $1$ on $[2,\infty)$ and $0$ on $[0,1]$, and the vectors $\xi_{i{\alpha}}(z)\in T_{x_i} X$ exponentiate to give $h_{\sigma}(z)$ on ${\Sigma}_{i{\alpha}}$: $$ h_{\sigma}(z) = exp_{x_i}(\xi_{i{\alpha}}(z)),\mbox{ for }\; z\in D_{z_i}^{\alpha}( 2r/{{\delta}}). $$ The whole expression is defined provided that $2r/{\delta}$ is small enough for the exponential maps to be injective. Later it will be useful to consider the corresponding map $h_{{\sigma},r}$ with domain ${\Sigma}$. This map equals $h_a$ on ${\Sigma} - \cup_{i,{\alpha}} D_{z_i}^{\alpha} (r_i)$ and is set equal to $x_i$ on each disc $D_{z_i}^{\alpha}(r_i)$. Note that $h_{{\sigma},r}:{\Sigma}\to X $ converges in the $W^{1,p}$-norm to $h_{\sigma}$ as $r\to 0$. (Indeed, the two maps differ only on the discs $D_{z_i}^{\alpha}(2r/{\delta})$, where $|\xi_{i{\alpha}}(z)|\le C|z|$; hence both the difference and its first derivative have $L^p$-norms of order $(r/{\delta})^{2/p}$.) {\medskip} {\noindent}{\bf \S 4.2.2\, Construction of the gluing ${\Tilde {\Gg}}(h_{\sigma},a)$.}{\smallskip} Let $$ {\cal N}_0(W_a) = {\cal N}_0(W^{1,p}({\Sigma}_a, h_a^*(TX))) $$ be a small neighborhood of $0$ in $W^{1,p}({\Sigma}_a, h_a^*(TX))$.
Note that, if ${\Sigma}_a$ has several components ${\Sigma}_{a,j}$, the elements $\xi$ of $W_a$ can be considered as collections $\xi_j$ of sections in $W^{1,p}({\Sigma}_{a,j}, (h_{a,j})^*(TX))$ that agree pairwise at the points $z_i$. (This makes sense since the $\xi_j$ are continuous.) Further, we may identify ${\cal N}_0(W_a)$ via the exponential map with a neighborhood of $h_a$ in the space of $W^{1,p}$-maps ${\Sigma}_a\to X$. We will write $h_{a,\xi}$ for the map ${\Sigma}_a\to X$ given by: $$ h_{a,\xi}(z) = exp_{h_a(z)}(\xi(z)), \quad z\in {\Sigma}_a. $$ Recall that ${\cal N}_J(K)$ is a neighborhood of $0$ in the fiber of $K$. Given $Y\in {\cal N}_J(K)$ we will write $J_Y$ for the almost complex structure $exp(Y)$ in the slice $ {\cal K}_J$. Now consider the locally trivial bundle ${\cal E} = {\cal E}_a \to {\cal N}_0(W_a)\times {\cal N}_J(K)$ whose fiber at $(\xi, Y)$ is $$ {\cal E}_{(\xi,Y)} = L^p({\Lambda}^{0,1}({\Sigma}_a)\otimes_{J_Y} h_{a,\xi}^*(TX)). $$ We wish to convert the pregluing $h_a$ to a map that is $J_Y$-holomorphic for some $Y$ by using the implicit function theorem for the section ${\cal F}_a $ of ${\cal E}_a$ defined by $$ {\cal F}_a(\xi, Y) = {\overline{\partial}}_{J_Y}(h_{a,\xi}). $$ Note that ${\cal F}_a(\xi,Y) = 0$ exactly when the map $h_{a,\xi}$ is $J_Y$-holomorphic. The linearization ${\cal L}({\cal F}_a)$ of ${\cal F}_a$ at $(0,0)$ equals $$ {\cal L}({\cal F}_a) = D(h_a)\oplus {\iota}_a: W^{1,p}(h_a^*(TX)) \oplus K \;\to \; L^p\left({\Lambda}^{0,1}({\Sigma}_a)\otimes_{J} h_a^*(TX)\right), $$ where ${\iota}_a$ is defined by ${\iota}_a(Y) = \frac 12 Y\circ dh_a\circ j$ as in equation~(\ref{eq:io}) in \S4.1. \begin{lemma}\label{bound} Suppose that there is a continuous family of right inverses $Q_a$ to ${\cal L}({\cal F}_a)$ that are uniformly bounded for $\|a\|\le r_0$. Then, there is $r_1 > 0$ such that for all $a$ satisfying $\|a\|\le r_1$ there is a unique element $ (\xi_a,Y_a)\in {\rm Im\,} Q_a$ such that $$ {\cal F}_a(\xi_a, Y_a)=0. $$ Moreover, $(\xi_a,Y_a)$ depends continuously on the initial data. \end{lemma} \proof{} This follows from the implicit function theorem as stated in 3.3.4 of [MS]. It also uses Lemma A.4.3 of [MS]. See also [FO] \S11.\hfill$\Box$\medskip We will construct the required family $Q_a$ in \S 4.2.3. By the above lemma, this allows us to define the gluing map. \begin{defn}\label{defgl1}\rm We set ${\Tilde {\Gg}}(h_{\sigma}, a) = ({\Sigma}_a, h_{a,\xi_a}, J_{Y_a})$ where $(\xi_a,Y_a)$ is the unique element in the above lemma. Further ${\cal G}(h_{\sigma},a) = [{\Sigma}_a, h_{a,\xi_a}, J_{Y_a}]$. \end{defn} The next proposition states the main local properties of the gluing map ${\cal G}$. \begin{prop}\label{exun} Each ${\sigma}\in {\cal Z}_J$ has a neighborhood ${\cal N}_{\cal V}({\sigma})$ in ${\cal V}_J$ such that the map $$ {\cal N}_{\cal V}({\sigma})\to {\ov{{\cal M}}}(A-kF,{\cal K}_J): \quad ({\sigma}',a') \mapsto {\cal G}(h_{{\sigma}'}, a') $$ takes ${\cal N}_{\cal V}({\sigma})$ bijectively onto an open subset of $ {\ov{{\cal M}}}(A-kF,{\cal K}_J)$. Moreover this map depends continuously on $J \in {\cal J}_m$. \end{prop} \proof{} This is a restatement of Theorem 12.9 in [FO]. Note that the stable map ${\cal G}(h_{{\sigma}'}, a')$ depends on the choice of representative $({\Sigma}', h_{{\sigma}'})$ of the equivalence class ${\sigma}' = [{\Sigma}', h_{{\sigma}'}]$. However, it is always possible to choose a smooth family of such representatives in a small enough neighborhood of ${\sigma}$ in ${\cal Z}_J$.
(This point is discussed further in \S 4.2.4.) Moreover, if ${\sigma}$ is an orbifold point (i.e. if ${\Gamma}_{\sigma}$ is nontrivial), then $h_{\sigma}$ is ${\Gamma}_{\sigma}$-invariant and one can define ${\Tilde {\Gg}}$ so that it is equivariant with respect to the natural action of ${\Gamma}_{\sigma}$ on the space of gluing parameters $a$ and its action on a neighborhood of ${\sigma}$ in the space of parametrized maps. The composite ${\cal G}$ of ${\Tilde {\Gg}}$ with the forgetful map is therefore ${\Gamma}_{\sigma}$-invariant. (Cf. the discussion before Lemma~\ref{le:repr}.) This shows that ${\cal G}$ is well defined. One proves that it is a local homeomorphism as in [FO] \S 13, 14, and we will say no more about this except to observe that our adding of $K$ to the domain of $Dh_{\sigma}$ is equivalent to their replacement of the range of $Dh_{\sigma}$ by the quotient $L^p/ {\iota}_a(K)$. \hfill$\Box$\medskip {\noindent} {\bf \S 4.2.3\, Construction of the right inverses $Q_a$}{\smallskip} This is done essentially as in A.4 of [MS] and \S12 of [FO]. However, there are one or two extra points to take care of, firstly because the stem of the map $h_{\sigma}$ is not regular, so that the restriction of $D{h_{\sigma}}$ to ${\Sigma}_0$ is not surjective, and secondly because the elements of the normal bundle $K\to {\cal J}_m$ do not necessarily vanish near the points $x_i$ in $X$ where gluing takes place. For simplicity, let us first consider the case when ${\Sigma}$ has just two components ${\Sigma}_0, {\Sigma}_1$ intersecting at the point $w$, and when $h_{{\sigma}}$ maps ${\Sigma}_0$ onto the $(A-mF)$-curve ${\Delta}_J$ and ${\Sigma}_1$ onto a fiber. (For the general case see Remark~\ref{rmk:gen}.) Then the linearization of ${\overline{\partial}}_J$ at $h_{\sigma}$ has the form $$ D{h_{\sigma}}: W^{1,p}({\Sigma}, h_{\sigma}^*(TX)) \to L^p({\Lambda}_J^{0,1}({\Sigma})\otimes h_{\sigma}^*(TX)). $$ Here the domain consists of pairs $(\xi_0,\xi_1)$, where $\xi_j$ is a $W^{1,p}$-smooth section of the bundle $h_{{\sigma}_j}^*(TX)\to{\Sigma}_j$, subject to the condition $$ \xi_0(w) = \xi_1(w), $$ and the range consists of pairs of $L^p$-smooth $(0,1)$-forms over ${\Sigma}_j$ with values in $h_{{\sigma}_j}^*(TX)$ and with no condition at $w$. For short we denote this map by $$ Dh_{\sigma}: W_{\sigma} \to L_{{\sigma}_0}\oplus L_{{\sigma}_1}. $$ Recall from the discussion before Proposition~\ref{glue2} that we chose $K$ so that $$ Dh_{{\sigma}_0}\oplus {\iota}_0: W_{{\sigma}_0}\oplus K \to L_{{\sigma}_0} $$ is surjective and $ {\iota}_0: K\to L_{{\sigma}_0} $ is injective. (All maps ${\iota}$ are defined as in equation~(\ref{eq:io}): it should be clear from the context what the subscripts mean.) \begin{lemma}\label{le:NJ} There are constants $c,r_0 > 0$ so that the following conditions hold for all $r < r_0$: {\smallskip} {\noindent} (i) ${\iota}_a$ is injective for all $\|a\| \le r$; {\noindent} (ii) the projection $pr_K: L_{{\sigma}_0} \to K$ that has kernel ${\rm Im\,} D_{{\sigma}_0}$ and satisfies $pr_K\circ {\iota}_0 = {\rm id}_K$ has norm $\le c$; {\noindent} (iii) for all $Y\in K$ and $j = 0,1$ $$ \left(\int_{D_{w}^j(r)} |{\iota}_{j} (Y)|^p\right)^{1/p} \le \frac 1{12 c} \left( \int_{{\Sigma}_j} |{\iota}_j (Y)|^p\right)^{1/p}, $$ where ${D_{w}^j(r)}$ is the disc in ${\Sigma}_j$ on which gluing takes place and integration is with respect to the area form defined by the chosen K\"ahler metric on ${\Sigma}_{\sigma}$.
\end{lemma} \proof{} There is $c$ so that (ii) holds because ${\rm Im\,} D_{{\sigma}_0}$ is closed and ${\rm Im\,}{\iota}_0$ is finite-dimensional. Then there is $r_0 = r_0(c)$ satisfying (i) and (iii) since the elements of $K$ are $C^\infty$-smooth (as are the elements of ${\cal J}$). \hfill$\Box$\medskip \begin{lemma}\label{dh} The operator $$ Dh_{\sigma}\oplus ({\iota}_0, {\iota}_1): W_{\sigma}\oplus K \to L_{{\sigma}_0}\oplus L_{{\sigma}_1}, $$ is surjective and has kernel ${\rm ker\,}Dh_{\sigma}$. \end{lemma} \proof{} We know from the proof of Lemma~\ref{mfld} that $$ Dh_{{\sigma}_0}\oplus {\iota}_0: W^{1,p}({\Sigma}_0, h_{{\sigma}_0}^*(TX))\oplus K \to L^p({\Lambda}^{0,1}({\Sigma}_0)\otimes_{J} h_{{\sigma}_0}^*(TX)) =L_{{\sigma}_0}, $$ is surjective. Similarly, $Dh_{{\sigma}_1}$ is surjective. Therefore, to prove surjectivity we just need to check that the compatibility condition $\xi_0(w) = \xi_1(w)$ for the elements of $W_{\sigma}$ causes no problem. However, the pullback bundle $h_{{\sigma}_1}^*TX$ splits naturally into the sum of a line bundle with Chern class $2d$ (where $d\ge 0$ is the multiplicity of $h_{{\sigma}_1}$) and a trivial line bundle, the pullback of the normal bundle to the fiber ${\rm Im\,} h_{{\sigma}_1}$. Hence there is an element $\xi_1$ of $\ker Dh_{{\sigma}_1}$ with any given value $\xi_1(w)$ at $w$. The result follows. Note that an appropriate version of this argument applies for all ${\sigma}$, not just those with two components, since there is just one condition to satisfy at each double point $z$ of ${\Sigma}$ and the maps $\ker Dh_{{\sigma}_j}\to {\bf C}^2: \xi\mapsto \xi(z)$ are surjective for $j > 0$. The second statement holds because ${\iota}_0$ is injective. \hfill$\Box$\medskip Note that the right inverse $Q_{\sigma}$ to $Dh_{{\sigma}}\oplus ({\iota}_0, {\iota}_1)$ is completely determined by choosing a complement to the finite-dimensional subspace $\ker Dh_{\sigma}$ in $W_{\sigma}$. Consider the composite $$ pr_{{\sigma}_0}: L_{{\sigma}_0} \oplus L_{{\sigma}_1}\longrightarrow L_{{\sigma}_0} {\longrightarrow} K $$ where the second projection is as in Lemma~\ref{le:NJ} (ii). The fiber $(pr_{{\sigma}_0})^{-1}(Y)$ at $Y$ has the form $({\rm Im\,} Dh_{{\sigma}_0} + {\iota}_0(Y))\oplus L_{{\sigma}_1}$, and we write $$ Q_{\sigma}^Y: {\rm Im\,} Dh_{{\sigma}_0}\oplus L_{{\sigma}_1} + ({\iota}_0(Y), {\iota}_1(Y)) \to W_{\sigma} $$ for the restriction of $Q_{\sigma}$ to this fiber. We now use the method of [MS] A.4 to construct an approximate right inverse $Q_{a,app}$ to $$ {\cal L}({\cal F}_a) = Dh_a\oplus {\iota}_a: W_a \oplus K \;\to \; L_a, $$ where $$ W_a = W^{1,p}(h_a^*(TX)),\qquad L_a = L^p({\Lambda}^{0,1}({\Sigma}_a)\otimes_{J} h_a^*(TX)). $$ It will be convenient to use the approximations $ h_{{\sigma},r}:{\Sigma}_{\sigma}\to X $ to $h_{\sigma}$ that were defined at the end of \S 4.2.1, where $r^2 = \|a\|$. We write $h_{{\sigma}_j, r}$ for the restriction of $h_{{\sigma}, r}$ to the component ${\Sigma}_j$. Since $h_{{\sigma},r}$ converges in $W^{1,p}$ to $h_{\sigma}$ as $r \to 0$, $Dh_{{\sigma},r}$ has a uniformly bounded right inverse $$ Q_{{\sigma},r}:\quad L_{{\sigma}_0,r}\oplus L_{{\sigma}_1,r}\to W_{{\sigma},r} \oplus K. $$ (In the notation of [MS], $Q_{{\sigma},r} = Q_{u_R,v_R}$.) As above there is a projection $pr_{{\sigma}_0,r}: L_{{\sigma}_0,r} \oplus L_{{\sigma}_1,r} \to K$ and we write $Q_{{\sigma},r}^Y$ for the restriction of $Q_{{\sigma},r}$ to the fiber over $Y$.
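{\smallskip} {\noindent} Before giving the construction, let us record the standard mechanism (cf. [MS] A.4) by which an approximate inverse yields a true one; this also explains the factor $\frac 12$ in Lemma~\ref{le:de} below. If $T = (Dh_a\oplus{\iota}_a)Q_{a,app}$ satisfies $\|T\eta - \eta\|\le \frac 12\|\eta\|$ for all $\eta$, then $T$ is invertible with $$ T^{-1} \;=\; \sum_{n\ge 0}\,({\rm id} - T)^n,\qquad \|T^{-1}\|\;\le\; 2, $$ so that $Q_a = Q_{a,app}\,T^{-1}$ is a genuine right inverse with $\|Q_a\|\le 2\,\|Q_{a,app}\|$. In particular, the uniform bound required in Lemma~\ref{bound} follows from a uniform bound on the $Q_{a,app}$.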
As a guide to defining $Q_{a,app}$ consider the following diagram of spaces $$ \begin{array}{ccc} W_{{\sigma},r}\oplus K&{\longleftarrow} & L_{{\sigma}_0,r}\oplus L_{{\sigma}_1,r}\\ \downarrow & & \uparrow\\ W_a\oplus K &\stackrel{Q_{a,app}}{\longleftarrow} & L_a.\end{array} $$ where the maps are given by: $$ \begin{array}{ccc} (\xi_0,\xi_1,Y)&\stackrel{Q_{{\sigma},r}}{\longleftarrow} &(\eta_0,\eta_1) \\ \downarrow & & \uparrow\\ (\xi, Y)\in W_a\oplus K &\stackrel{Q_{a,app}}{\longleftarrow} &\eta\in L_a.\end{array} $$ We define the horizontal arrow $Q_{a,app}$ by following the other three arrows. Here $\eta_{\alpha}$, for ${\alpha} = 0,1$, is the restriction of $\eta$ to ${\Sigma}_{{\sigma},{\alpha}} - D_{w}^{\alpha}(r)$ extended by $0$ as in [MS]. Note that the $\eta_{\alpha}$ are in $L^p$ even though they are not continuous. Next, decompose $$ \begin{array}{ccl} \eta_0 & = & \eta_0' + {\iota}_{{\sigma}_0,r}(Y)\in \left({\rm Im\,} D_{{\sigma}_0,r}\right) + {\iota}_{{\sigma}_0,r}(K) = L_{{\sigma}_0, r}\\ \eta_1 & = & \eta_1' + {\iota}_{{\sigma}_1,r}(Y) \in L_{{\sigma}_1,r}. \end{array} $$ Then $(\xi_0,\xi_1) = Q_{{\sigma},r}^Y(\eta_0',\eta_1')$. Note that $\xi_0(w) = \xi_1(w) = v$, say. We then define the section $\xi$ by putting it equal to $ \xi_{\alpha}$ on ${\Sigma}_{\alpha} - D_{w}^{\alpha}(r/{\delta})$ for ${\alpha} = 0,1$ and then extending it over the neck using cutoff functions so that it equals $\xi_0 + \xi_1 - v$ on the circle ${\partial} D_w^0(r) = {\partial} D_w^1(r)\subset {\Sigma}_a$. In the formula below we think of the gluing map $\Psi_a$ of Proposition~\ref{glue} as inducing identifications $$ \begin{array}{cccccc} \Psi_a: \quad & A_0 & = & D_{w}^0( r/{\delta}) - D_{w}^0(r) &\to& D_{w}^1( r) - D_{w}^1(r{\delta}),\\ \Psi_a : \quad & A_1 & = & D_{w}^0( r) - D_{w}^0(r{\delta}) &\to & D_{w}^1( r/{\delta}) - D_{w}^1(r). \end{array} $$ Then $\xi$ is given by $$ \xi(z) =\left\{ \begin{array}{ll} \xi_0(z) + (1 - {\beta}(z{\delta}/r))(\xi_1(\Psi_a(z)) - v) & \mbox{if } z\in A_0,\\ \xi_1(\Psi_a(z)) + (1 - {\beta}(z/r))(\xi_0(z) - v) & \mbox{if } z\in A_1\end{array}\right. $$ where ${\beta}:{\bf C}\to [0,1]$ is a cutoff function that equals $1$ if $|z|\le {\delta}$ and $0$ for $|z|\ge 1$. The next lemma is the analog of Lemma A.4.2 in [MS]. It shows that $Q_{a,app}$ is the approximate inverse that we are seeking. The norms used are the usual $L^p$-norms with respect to the chosen metric on ${\Sigma}_{\sigma}$ and the glued metrics on ${\Sigma}_a$. Note that we suppose that the metrics on ${\Sigma}_a$ agree with the standard model $\chi_r(|x|)|dx|^2$ on the annuli $D_{i}^{\alpha}(r/{\delta}) - D_{i}^{\alpha}(r{\delta})$ (where $r^2= \|a\|$) so that $\Psi_a$ is an isometry. \begin{lemma}\label{le:de} For all sufficiently small ${\delta}$ there is $r({\delta})> 0$ and a cutoff function ${\beta}$ such that for all $\eta \in L_a$, $\|a\| \le r({\delta})^2$, we have $$ \| (Dh_a\oplus {\iota}_a) Q_{a,app} \eta - \eta\| \le \frac 12 \|\eta\|. $$ \end{lemma} \proof{} It follows from the definitions that $$ (Dh_a\oplus {\iota}_a) Q_{a,app} \eta = \eta $$ on each set ${\Sigma}_j - D_w^j(r/{\delta})$. (Observe that $h_{{\sigma}_j,a} = h_a$ on this domain so that ${\iota}_{{\sigma}_j,a} = {\iota}_a$ here.) Therefore, we just have to consider what happens on the subannuli $$ A_0 = D_w^0(r/{\delta}) - D_w^0(r),\qquad \Psi_a(A_1) = \Psi_a\left(D_w^0(r) - D_w^0(r{\delta})\right) $$ of ${\Sigma}_a$.
In this region the maps $h_{{\sigma}_j,a}$ as well as the glued map $h_a$ are constant so that the maps ${\iota}_{{\sigma}_j,a}, {\iota}_a$ are constant. Further, the linearizations $Dh_{{\sigma}_j,a}$ and $Dh_a$ are all equal and on functions coincide with the usual ${\overline{\partial}}$-operator. We will consider what happens in $\Psi_a(A_1)$, leaving the similar case of $A_0$ to the reader. It is not hard to check that for $z\in A_1$ $$ \begin{array}{lclcc} Dh_a(\xi_0(z)) & = & \eta_0'(z) & = & -{\iota}_a(Y),\\ Dh_a(\xi_1(\Psi_az)) & = & \eta_1'(\Psi_a z). & & \end{array} $$ Let us write ${\beta}_r$ for the function $ {\beta}_r(z) = {\beta}(z/r). $ Then, if $r^2 = \|a\|$ and $(\xi,Y) = Q_{a,app}\eta$, we have for $z\in A_1$ \begin{eqnarray*} (Dh_a \xi + {\iota}_a Y- \eta)(\Psi_a z) & = & \eta_1' (\Psi_a z) + (1 - {\beta}_r)(-{\iota}_a Y - Dh_a(v))(z) -\\& & \qquad\quad {\overline{\partial}}({\beta}_r)\otimes(\xi_0 - v)(z) + ({\iota}_a Y -\eta)(\Psi_a z) \\ & = & ({\beta}_r - 1) ({\iota}_aY + Dh_a(v))(z) - {\overline{\partial}}({\beta}_r)\otimes(\xi_0 - v)(z). \end{eqnarray*} Therefore, taking the $L^p$-norm \begin{eqnarray*} \|\left(Dh_a \xi + {\iota}_a Y- \eta\right)\circ \Psi_a\|_{L^p, A_1} & \le & \|{\iota}_a(Y)\|_{L^p,A_1} + \|Dh_a(v)\|_{L^p,A_1} +\\& & \qquad\quad \|{\overline{\partial}}({\beta}_r)\otimes(\xi_0 - v)\|_{L^p,A_1}. \end{eqnarray*} If $r$ is sufficiently small we can, by Lemma~\ref{le:NJ}, suppose that $\|{\iota}_a(Y)\|_{L^p,A_1} \le \|\eta\|/12.$ Moreover, because $v$ is a constant section, $Dh_a$ acts on $v$ just by its zeroth order part and so there are constants $c_1,c_2$ such that $$ \|Dh_a(v)\|_{L^p,A_1} \le c_1\|v\| ({\rm area\,}A_1)^{1/p}\le c_2 \|v\| r^{2/p}. $$ Furthermore, by [MS] Lemma A.1.2, given any ${\varepsilon} > 0$ we can choose ${\delta}_{\varepsilon}> 0$ and ${\beta}$ so that $$ \|{\overline{\partial}}({\beta}_r)\otimes(\xi_0 - v)\|_{L^p,A_1} \le {\varepsilon} \|\xi_0 -v\|_{W^{1,p}}, $$ for all ${\delta} \le {\delta}_{\varepsilon}$. Hence \begin{eqnarray*} \|Dh_a \xi + {\iota}_a Y- \eta\|_{L^p, A_1} & \le & (c_2 r^{2/p} + {\varepsilon})(\|v\| + \|\xi_0 -v\|_{W^{1,p}}) + \|\eta\|/12\\ & \le & c_3(r^{2/p} + {\varepsilon})\|(\eta_0', \eta_1')\|_{L^p} + \|\eta\|/12\\ & \le & c_4(r^{2/p} + {\varepsilon})\|(\eta_0, \eta_1)\|_{L^p}+ \|\eta\|/12\\ & = & \left(c_4(r^{2/p} + {\varepsilon}) + 1/12\right)\|\eta\|, \end{eqnarray*} where the second inequality holds because of the uniform estimate for the right inverse $Q_{{\sigma},r}^Y$ and the third inequality holds because the projection of $L_{{\sigma}_0,r}\oplus L_{{\sigma}_1,r} $ onto the subspace $ {{\rm Im\,}} Dh_{{\sigma},r} \oplus L_{{\sigma}_1,r}$ is continuous. Then if we choose ${\delta}_{\varepsilon}$ so small that $c_4 {\varepsilon} < 1/12$ and $r\ll {\delta}_{\varepsilon}$ so small that $ c_4 r^{2/p} < 1/12$ we find $$ \|\left(Dh_a \xi + {\iota}_a Y- \eta\right)\circ\Psi_a\|_{L^p, A_1} \le \frac 14 \|\eta\|. $$ Repeating this for $A_0$ gives the desired result.\hfill$\Box$\medskip Finally, we define the right inverse $Q_a$ by setting $$ Q_a = Q_{a,app}\left((Dh_a\oplus {\iota}_a) Q_{a,app}\right)^{-1}. $$ It follows easily from the fact that the inverses $Q_{{\sigma},r}$ are uniformly bounded for $0 < r \le r_0$ that the $Q_a$ are too. It remains to remark that the above construction can be carried out in such a way as to be ${\Gamma}_{\sigma}$-equivariant. The only choice left unspecified above was that of the right inverse $Q_{{\sigma},r}$.
This in turn is determined by the choice of a subspace $R_{{\sigma},r}$ of $$ W^{1,p} ({\Sigma}, h_{{\sigma},r}^*(TX)) $$ complementary to the kernel of $Dh_{{\sigma},r}$. But since ${\Gamma}_{\sigma}$ is finite, we can arrange that $R_{{\sigma},r}$ is ${\Gamma}_{\sigma}$-equivariant. For example, since $\ker Dh_{{\sigma},r}$ is a finite-dimensional space consisting of $C^\infty$ sections, we can take $R_{{\sigma},r}$ to be the $L^2$-orthogonal complement of $\ker Dh_{{\sigma},r}$ defined with respect to a ${\Gamma}_{\sigma}$-invariant norm on $h_{{\sigma},r}^*(TX)$.\footnote{ As pointed out in [FO], the map $$ \xi\mapsto \xi - \sum_j\langle \xi, e_j\rangle e_j, \quad \xi\in W_{{\sigma},r} $$ is well defined whenever $e_1,\dots, e_p$ is a finite set of $C^\infty$-smooth sections.} Note that because $h_{{\sigma},r} = h_{{\sigma},r}\circ {\gamma}$ for ${\gamma} \in {\Gamma}_{\sigma},$ we can obtain a ${\Gamma}_{\sigma}$-invariant norm on $h_{{\sigma},r}^*(TX)$ by integrating the pull-back by $h_{{\sigma},r}$ of any norm on the tangent bundle $TX$ with respect to a ${\Gamma}_{\sigma}$-invariant area form on the domain ${\Sigma}_{\sigma}$. We can achieve this uniformly over ${\cal Z}_J$ by choosing a suitable metric on each domain ${\Sigma}_{\sigma}$ as described at the beginning of \S4.2.1. {\medskip} \begin{remark}\label{rmk:gen}\rm If one is gluing two branch components ${\Sigma}_{\ell_j}, j = 0,1$ of ${\Sigma}$ then both linearizations $Dh_{\ell_j}$ are surjective and one can construct the inverse $Q_a$ to have image in $W_a$, thus forgetting about the summand $K$. The general gluing argument combines both these cases. If one is gluing at $N$ different points then one needs to choose $r$ so small that one has an inequality of the form $$ \|Dh_a \xi + {\iota}_a Y- \eta\|_{L^p, A} \le \frac 1{4N} \|\eta\| $$ on each of the $2N$ annuli $A$. Note that the number of components of ${\Sigma}$ is bounded above by some number that depends on $m$ (where $J\in {\cal J}_m$). Hence there is always $r_0 > 0$ such that gluing at ${\sigma}$ is possible for all $r < r_0$, provided that one is looking at a family of parametrized stable maps ${\sigma} = ({\Sigma}, h_{\sigma}, J)$ that is compact for each $J$ and where $J\in{\cal J}_m$ is bounded in $C^\infty$-norm. \end{remark} This completes the proof of Lemma~\ref{bound} and hence of Proposition~\ref{exun}. {\medskip} {\noindent} {\bf \S 4.2.4\, ${\rm Aut}^K({\Sigma})$-equivariance of ${\Tilde {\Gg}}$}{\medskip} Note that there is an action of $S^1$ on the pair $(h_{\sigma}, a)$ that rotates one of the components (say ${\Sigma}_1$) of ${\Sigma} = {\Sigma}_0\cup {\Sigma}_1$, fixing the intersection point $w = {\Sigma}_0\cap {\Sigma}_1$. We claim that by choosing an invariant metric on ${\Sigma}$ we can make the whole construction invariant with respect to the action of this compact group, i.e. so that as {\it unparametrized} stable maps $$ [{\Sigma}_a, \,{\Tilde {\Gg}}(h_{\sigma},a)] = [{\Sigma}_{{\theta}\cdot a},\, {\Tilde {\Gg}}( h_{\sigma}\cdot {\theta}^{-1}, {\theta}\cdot a)]. $$ For then there is an isometry $\psi$ from the glued domain ${\Sigma}_a$ to ${\Sigma}_{{\theta}\cdot a}$ such that $$ h_{a} = h^{{\theta}}_{{\theta}\cdot a}\circ\psi, $$ where $h^{{\theta}}_b $ denotes the pregluing of $h^{\theta}_{\sigma} = h_{\sigma}\cdot {\theta}^{-1}$ with parameter $b$. There is a similar formula for the maps $h_{{\sigma}, r}$. It is not hard to check that the rest of the construction can be made compatible with this $S^1$ action.
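{\smallskip} {\noindent} Here is a sketch of the displayed identity at the pregluing level, using the local model for the gluing and assuming, as we may, that the invariant metric is flat near $w$ so that ${\theta}$ acts there by $y\mapsto e^{i{\theta}}y$ in a local coordinate $y$ on ${\Sigma}_1$. Take $\psi$ to be the identity on the ${\Sigma}_0$-part of ${\Sigma}_a$ and equal to ${\theta}$ on the ${\Sigma}_1$-part. Since the gluing identifies a coordinate $x$ on ${\Sigma}_0$ with $y$ via $xy = a$, and $$ x\,({\theta} y)\;=\; e^{i{\theta}}xy \;=\; {\theta}\cdot a, $$ the map $\psi$ does take ${\Sigma}_a$ to ${\Sigma}_{{\theta}\cdot a}$. Away from the neck, $h^{{\theta}}_{{\theta}\cdot a}\circ\psi$ equals $h_{\sigma}$ on the ${\Sigma}_0$-part and $(h_{\sigma}\cdot{\theta}^{-1})\circ{\theta} = h_{\sigma}$ on the ${\Sigma}_1$-part, i.e. it agrees with $h_a$ there; on the neck both maps are built from the same ${\theta}$-invariant metric and cutoff data, whence $h_a = h^{{\theta}}_{{\theta}\cdot a}\circ\psi$.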
It is important here to use the Fukaya--Ono choice of $R_{{\sigma}, r}$ as described above, instead of cutting down the domain of $Dh$ by fixing the images of certain points as in~[LiT], [LiuT1], [S]. More generally, consider a parametrization ${\tilde{{\sigma}}} = ({\Sigma}, h)$ and an arbitrary element ${\sigma} \in {\cal Z}_J$. Recall that a component of ${\Sigma}$ is said to be unstable if it contains fewer than three special points, i.e. points where two components of ${\Sigma}$ meet. Each unstable branch component has at least one special point where it attaches to the rest of ${\Sigma}$ and so the identity component of its automorphism group has the homotopy type of a circle. Therefore, if there are $k$ such unstable components, the torus group $T^k$ is a subgroup of ${\rm Aut}({\Sigma})$. It is not hard to see that if the automorphism group ${\Gamma}_{\tilde{{\sigma}}}$ of ${\tilde{{\sigma}}}$ is nontrivial, we can choose the action of $T^k$ to be ${\Gamma}_{\tilde{{\sigma}}}$-equivariant so that the groups fit together to form the compact group ${\rm Aut}^K({\Sigma})$ of Definition~\ref{def:gp}. Note further that if $({\Sigma}', h')$ is obtained from ${\tilde{{\sigma}}} = ({\Sigma}, h)$ by gluing, then ${\rm Aut}^K({\Sigma}')$ can be considered as a subgroup of ${\rm Aut}^K({\Sigma})$. To see this, suppose for example that ${\Sigma}'$ is obtained by gluing ${\Sigma}_i$ to ${\Sigma}_{j_i}$ with parameter $a$, and that both these components have at most one other special point. Then we can choose metrics on ${\Sigma}_i\cup{\Sigma}_{j_i}$ that are invariant under an $S^1$-action on each component and so that the glued metric on ${\Sigma}_a$ is invariant under the action of an $S^1$ in ${\rm Aut}^K({\Sigma}')$. Note that the diagonal subgroup of $S^1\times S^1\subset {\rm Aut}^K({\Sigma})$ acts trivially on the gluing parameters at the double point ${\Sigma}_i\cap{\Sigma}_{j_i}$ since it rotates in opposite directions in the two tangent spaces. It is now easy to check that if we write $\hat{\theta}$ for the image of $\theta\in S^1$ in the diagonal subgroup of $S^1\times S^1$ then $$ ({\Sigma}_a, h'\circ\theta) = ({\Sigma}_a,{\Tilde {\Gg}}(h,a)\circ\theta) = ({\Sigma}_a, {\Tilde {\Gg}}(h\circ\hat{\theta}, a)). $$ Observe also that if $b$ is a gluing parameter at the intersection of ${\Sigma}_i$ with some other component ${\Sigma}_k$ of ${\tilde{{\sigma}}}$, then it can also be considered as a gluing parameter for ${\tilde{{\sigma}}}'$. Moreover under this correspondence $\hat{\theta}\cdot b$ corresponds to $\theta\cdot b$. These arguments prove the following result. \begin{lemma}\label{le:repr} Let ${\tilde{{\sigma}}} = ({\Sigma}, h)$ and suppose that the metric on ${\Sigma}$ is ${\rm Aut}^K({\Sigma})$-invariant. Then the following statements hold.{\smallskip} {\noindent} (i) The composite ${\cal G}$ of ${\Tilde {\Gg}}$ with the forgetful map into the space of unparametrized stable maps is ${\rm Aut}^K({\Sigma})$-invariant. {\smallskip} {\noindent} (ii) Divide the set $P$ of double points of ${\Sigma}$ into two sets $P_b, P_s$ and correspondingly write the gluing parameter $a$ as $ a_b + a_s$. Suppose that $({\Sigma}', h') = ({\Sigma}_{a_b}, {\Tilde {\Gg}}(h,a_b))$, and consider $a_s$ as a gluing parameter at ${\tilde{{\sigma}}}'$.
Then one can choose metrics and choose the groups ${\rm Aut}^K({\Sigma}), {\rm Aut}^K({\Sigma}')$ so that there is an inclusion $$ {\rm Aut}^K({\Sigma}')\to {\rm Aut}^K({\Sigma}): \quad\theta\mapsto \hat{\theta} $$ such that $$ ({\Sigma}_{a_b}, h'\circ\theta^{-1}; \theta \cdot a_s) = ({\Sigma}_{a_b}, {\Tilde {\Gg}}(h\circ\hat{\theta}^{-1}, a_b); \hat{\theta}\cdot a_s). $$ Further, this can be done continuously as $a_b$ (and hence $h'$) varies, and smoothly if ${\Sigma}_{a_b}$ varies in a fixed stratum. \end{lemma} {\smallskip} \begin{remark}\rm In their new paper~[LiuT2] \S5, Liu--Tian also develop a version of gluing that is invariant with respect to a partially defined torus action. \end{remark} {\noindent} {\bf \S 4.2.5\, Globalization}{\medskip} The preceding paragraphs construct the gluing map ${\Tilde {\Gg}}(h_{\sigma},a)$ over a neighborhood ${\cal N}({\sigma})$ of one point ${\sigma}\in {\cal Z}_J$. We now show how to define a gluing map ${\cal G}_J:{\cal N}_{\cal V}({\cal Z}_J)\to {\ov{{\cal M}}}(A-kF, {\cal K}_J)$ on a whole neighborhood ${\cal N}_{\cal V}({\cal Z}_J)$ of ${\cal Z}_J$ in the space of gluing parameters ${\cal V}_J$. The only difficulty in doing this lies in choosing a suitable parametrized representative $s({\sigma}) = ({\Sigma}, h_{\sigma})$ of the equivalence class ${\sigma} = [{\Sigma}, h]$ as ${\sigma}$ varies over ${\cal Z}_J$. In other words, in order to define ${\Tilde {\Gg}}(h_{\sigma},a)$ we need to choose a parametrization $h_{\sigma}: {\Sigma} \to X$ of the stable map ${\sigma}$, and now we have to choose this consistently as ${\sigma}$ varies. We now show that although we may not be able to make a single-valued choice $s({\sigma}) = h_{\sigma}$ continuously over ${\cal Z}_J$ we can find a section that at each point is well defined modulo the action of a suitable subgroup of ${\rm Aut}^K({\Sigma})$. More precisely, we claim the following. \begin{lemma}\label{choice} We may choose a continuous family of metrics $g_{\sigma}$ on ${\Sigma}_{\sigma}$ for ${\sigma}\in {\cal Z}_J$ and a family of parametrizations $s({\sigma})$ for each ${\sigma}\in {\cal Z}_J$ such that {\noindent} (i) $s({\sigma})$ consists of a $G_{\sigma}$-orbit of maps $h_{\sigma}:{\Sigma}_{\sigma} \to X$ and $g_{\sigma}$ is $G_{\sigma}$-invariant, where $G_{\sigma}\subset {\rm Aut}^K({\Sigma})$; {\noindent} (ii) the assignment ${\sigma} \to s({\sigma})$ is continuous in the sense that near each ${\sigma}$ there is a (single-valued) continuous map ${\sigma}\to h_{\sigma}\in s({\sigma})$ whose restriction to each stratum is smooth. Moreover, $g_{\sigma}$ varies smoothly on each stratum. \end{lemma} \proof{} The strata in ${\cal Z}_J$ can be partially ordered with ${\cal S}' \le {\cal S}$ if there is a gluing that takes an element in the stratum ${\cal S}$ to one in ${\cal S}'$, i.e. if the stratum ${\cal S}$ is contained in the closure of ${\cal S}'$. If ${\cal S}$ is maximal under this ordering and ${\sigma} \in {\cal S}$, then each branch component in ${\Sigma}$ is mapped to a fiber by a map of degree $\le 1$. It is easy to check that in this case there is a unique identification of the domain ${\Sigma}_{\sigma}$ with a union of spheres such that the map $h_{\sigma}$ is either constant or is the identity map on each branch component and a section on the stem: cf. Example~\ref{ex:lens}. We assume this done, and then extend the choice of parametrization to a neighborhood of each of these maximal strata by gluing.
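{\smallskip} {\noindent} As an illustration of the ordering (included only for orientation), consider the case $m=2$ treated in \S3.2: the point stratum $\{{\sigma}_*\}$, whose domain has the maximal number of components, is maximal, since gluing at $z_{01}$ moves into ${\cal Z}_{2,J}$ while gluing at $z_{12}$ and $z_{13}$ moves into ${\cal Z}_{1,J}$; schematically, $$ {\cal Z}_{2,J}\;\le\;\{{\sigma}_*\}, \qquad \mbox{(strata of ${\cal Z}_{1,J}$)}\;\le\;\{{\sigma}_*\}. $$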
We now start extending our choice $s({\sigma}) = h_{\sigma}$ of parametrization to the whole of ${\cal Z}_J$ by downwards induction over the partially ordered strata. Clearly we can always choose a parametrization modulo the action of ${\rm Aut}^K({\Sigma})$. In order for the image of the fiber $\pi_J^{-1}({\sigma}) = \{({\sigma},a): |a|<{\varepsilon}\}$ under the gluing map to be independent of this choice, we need the metric $g_{\sigma}$ on ${\Sigma}_{\sigma}$ to be ${\rm Aut}^K({\Sigma}_{\sigma})$-invariant. This choice of metric can be assumed to be smooth as ${\sigma}$ varies in a stratum. However, it cannot always be chosen continuously as ${\sigma}$ goes from one stratum to another. For example, if ${\sigma}$ has one component ${\Sigma}_i$ with $3$ special points at $0,1,\infty$ that is glued to some component ${\Sigma}_{j_i}$ at $1$ with gluing parameter $a$, then the resulting component ${\Sigma}_a$ is unstable if ${\Sigma}_{j_i}$ has no other special points. But for small $|a|$ the metric on ${\Sigma}_a$ is determined by the metrics on ${\Sigma} = {\Sigma}_i\cup{\Sigma}_{j_i}$ by the gluing construction and cannot be chosen to be $S^1$-invariant. On the other hand, if both ${\Sigma}_i$ and ${\Sigma}_{j_i}$ have at most one other special point, then the glued metric on ${\Sigma}_a$ will be $S^1$-invariant provided that the original metrics on ${\Sigma}_i, {\Sigma}_{j_i}$ were also $S^1$-invariant. The above remarks show that suitable $g_{\sigma}^{\cal S}, s({\sigma})^{\cal S}$ and $G_{\sigma}^{\cal S}$ can be defined over each stratum ${\cal S}$, and in particular over maximal strata. If $g_{\sigma}, s({\sigma}) $ and $G_{\sigma}$ are already suitably defined over some union $Y$ of strata, then the above remarks about gluing show that they can be extended to a neighborhood ${\cal U}(Y)$ of $Y$. Let us write $g_{\sigma}^{gl}, s({\sigma})^{gl}$ and $G_{\sigma}^{gl}$ for the objects obtained by gluing when ${\sigma}\in {\cal U}(Y)$. Then, if ${\beta}: {\cal U}(Y)\cup {\cal S} \to [0,1]$ is a smooth cutoff function that equals $0$ near $Y$ and $1$ near the boundary of ${\cal U}(Y)$, set \begin{eqnarray*} g_{\sigma} & = & (1-{\beta}({\sigma}))g_{\sigma}^{gl} + {\beta}({\sigma}) g_{\sigma}^{\cal S}, \quad {\sigma} \in {\cal S}\\ s({\sigma}) & =& s({\sigma})^{\cal S},\quad \mbox{if }\;{\beta}({\sigma})=1,\\ & = & s({\sigma})^{gl}\quad\mbox{otherwise},\\ G_{\sigma} & =& G_{\sigma}^{\cal S},\quad \mbox{if }\;{\beta}({\sigma})=1,\\ & = & G_{\sigma}^{gl}\quad\mbox{otherwise}. \end{eqnarray*} It is easy to check that the required conditions are satisfied.\hfill$\Box$\medskip {\medskip} {\noindent}{\bf Proof of Proposition~\ref{glue2}} {\medskip} By Lemmas~\ref{le:repr} and~\ref{choice} there is a well defined continuous gluing map $$ {\cal G}_J: {\cal N}_{\cal V}({\cal Z}_J) \to {\ov{{\cal M}}}(A-kF, {\cal K}_J) $$ that restricts on ${\cal Z}_J$ to the inclusion. Therefore, because ${\cal Z}_J$ is compact, the injectivity of ${\cal G}_J$ on a small neighborhood ${\cal N}_{\cal V}({\cal Z}_J)$ follows from the local injectivity statement in Proposition~\ref{exun}. Similarly, the local surjectivity of Proposition~\ref{exun} implies that the image of ${\cal G}_J$ is open in ${\ov{{\cal M}}}(A-kF,{\cal K}_J)$. Note that all the restrictions made on the size of ${\cal N}_{\cal V}({\cal Z}_J)$ vary smoothly with $J$ (and involve no more than the $C^2$ norm of $J$). Hence $\cup_J{\rm Im\,} {\cal G}_J$ is an open subset of ${\ov{{\cal M}}}(A-kF,{\cal J})$.
\hfill$\Box$\medskip {\medskip} {\noindent}{\bf References}{\medskip} {\noindent} [A] M. Abreu, Topology of symplectomorphism groups of $S^2\times S^2$, {\it Invent. Math.} {\bf 131} (1998), 1--23. {\medskip} {\noindent} [AM] M. Abreu and D. McDuff, Topology of symplectomorphism groups of rational ruled surfaces, in preparation. {\medskip} {\noindent} [FO] K. Fukaya and K. Ono, Arnold conjecture and Gromov--Witten invariants, to appear in {\it Topology}. {\medskip} {\noindent} [HLS] H.~Hofer, V.~Lizan and J.-C.~Sikorav, On genericity for complex curves in $4$-dimensional, almost complex manifolds. {\medskip} {\noindent} [HS] H. Hofer and D. Salamon, Gromov compactness and stable maps, preprint (1997). {\medskip} {\noindent} [K] P. Kronheimer, Some nontrivial families of symplectic structures, preprint (1998). {\medskip} {\noindent} [LM] F. Lalonde and D. McDuff, $J$\/-curves and the classification of rational and ruled symplectic $4$\/-manifolds, in {\it Contact and Symplectic Geometry}, ed. C. Thomas, Camb. Univ. Press (1996). {\medskip} {\noindent} [LiT] Jun Li and Gang Tian, Virtual moduli cycles and Gromov--Witten invariants of general symplectic manifolds, preprint (1996). {\medskip} {\noindent} [LiuT1] Gang Liu and Gang Tian, Floer homology and Arnold conjecture, to appear in {\it Journ. Differential Geometry}. {\medskip} {\noindent} [LiuT2] Gang Liu and Gang Tian, Weinstein conjecture and GW invariants, preprint (1997). {\medskip} {\noindent} [Lo] W. Lorek, Generalized Cauchy--Riemann operators in symplectic geometry, Ph.D. thesis, Stony Brook (1996). {\medskip} {\noindent} [MP] D. McDuff and L. Polterovich, Symplectic packings and algebraic geometry, {\it Inventiones Mathematicae} {\bf 115} (1994), 405--429. {\medskip} {\noindent} [MS] D. McDuff and D.A. Salamon, {\it $J$-holomorphic curves and quantum cohomology}, Amer. Math. Soc. Lecture Notes \#6, Amer. Math. Soc., Providence (1995). {\medskip} {\noindent} [R] Yongbin Ruan, Virtual neighborhoods and pseudoholomorphic curves, Preprint alg-geom/9611021. {\medskip} {\noindent} [Sieb] B. Siebert, Gromov--Witten invariants for general symplectic manifolds, preprint, Bochum (1996). \end{document}
\section{Introduction} Mobility is a key factor in determining autonomy and independence for people with severe disabilities (PsD). Electric powered wheelchairs (EPW) are an important means of providing mobility to PsD across the age span. Tips and falls are the most frequent reason for which PsD who use EPW report to emergency rooms \cite{chen2011wheelchair}. There is a need for more capable EPW that can reduce the risk of tips/falls or loss of control, which occur in 30\%-65\% of users each year \cite{xiang2006wheelchair}. Tips and falls related to EPW crashes have a significant effect on PsD. Improvements have been made to EPWs in the past 20 years \cite{ding2005electric,wang2010relationship}, including better reliability, better suspension to minimize vibration exposure, and expanded user interfaces. Despite these improvements, current EPW designs limit most users to driving indoors and in mostly flat, barrier-free outdoor environments. Furthermore, PsD using EPW have difficulties with, and thus often avoid, driving over uneven terrain or overcoming architectural barriers such as curbs and terrain non-compliant with accessibility standards \cite{board2004americans}. \par Many studies have addressed enabling manual or powered wheelchairs to climb stairs. One solution introduced for the stair climbing problem is equipping wheelchairs with tracks \cite{topchair}. Such an approach needs no prior knowledge about the stairs, but irregular stair edges can easily cause slipping, since the entire weight of the wheelchair rests on the stair edges. Another, safer, edge-independent solution is leg-based wheelchairs, which climb stairs either step by step or even in a single high step, depending on the legs' elevation capability \cite{candiotti2017kinematics}. For this approach, prior knowledge of stair dimensions such as depth and height is crucial for safety. In \cite{grewal2017lidar,nguyen2007real}, more general LIDAR- and camera-based surroundings detection techniques were used to identify objects for autonomously driving wheelchairs. However, both sensors suffer from many limitations under different lighting, surface materials, and harsh outdoor environmental conditions. \vspace{-0.3cm} \section{Stair Detection Setup}\label{section:sds} \subsection{FMCW-Radar} Radar is introduced in this paper as an efficient tool for stair detection, owing to its capability to handle outdoor measurement conditions such as sunlight, dust, and unstructured terrain. The radar used in this paper is a compact \unit[94]{GHz} FMCW radar with adjustable parameters and an aperture of about 11$^\circ$ \cite{zech2015compact}. The signal modulation used on the FMCW radar is a continuous sequence of chirps, where each chirp is a signal whose frequency increases linearly over time. As shown in Fig. \ref{fig:fmcwRadar}, the received echo signal has a frequency shift ($f_D$) corresponding to the velocity information and a time shift ($\Delta t$) corresponding to the range information. \par The main adjustable modulation parameters relevant to our application are the range resolution and the maximum detectable range. The range resolution ($r_{res}$) is defined as the minimum distance between targets at which the radar is still able to distinguish them as multiple targets. Based on Eq. \ref{eq_rres}, the range resolution is only a function of the radar bandwidth ($B$) and is set to \unit[1.5]{cm} (lower than the minimum possible distance between two consecutive steps) based on a bandwidth of \unit[10]{GHz}.
The maximum range ($r_{max}$) is defined as the maximum distance from the radar at which a target is still detected. From Eq. \ref{eq_rmax}, the maximum range is just the range resolution relation scaled by the number of samples per chirp ($N_s$). With the bandwidth set to \unit[10]{GHz}, $N_s$ is set to 200 samples per chirp to obtain a maximum range of \unit[3]{m}, which is suitable for our application. \begin{equation} r_{res}=\frac{c}{2B} \label{eq_rres} \end{equation} \begin{equation} r_{max}=\frac{cN_s}{2B} \label{eq_rmax} \end{equation} \vspace{0.01cm} \begin{figure}[H] \vspace{-0.45cm} \centering \centerline{\includegraphics[scale=.25]{fmcwRadar3.eps}} \caption{FMCW Radar transmitted and received chirps.} \label{fig:fmcwRadar} \end{figure} To extract the range information of the targets in the radar scene, a Fast Fourier Transform (FFT) is applied to each ramp in the down-converted received signal. The resulting frequency spectrum has a strong DC component, which can be eliminated by a high pass filter, in addition to peaks at the frequency shifts corresponding to the time delays ($\Delta t$). As shown in Fig. \ref{fig:rangeProfiles}, the frequency of each peak represents the range of a certain target, and the power is correlated with the Radar Cross Section (RCS) of the corresponding target. This RCS is strongly affected by the angle of radar beam incidence and the size, material, and shape of the target \cite{knott2012radar}. Targets with sharp edges are known to show significantly high power reflections; thus radar is a convenient sensor for stair detection and dimensioning. \begin{figure}[H] \vspace{-0.45cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{rangeProfiles3.eps}} \caption{Range profiles with peaks at the corresponding targets.} \label{fig:rangeProfiles} \vspace{-0.56cm} \end{figure}
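As a quick numerical sanity check of the modulation parameters chosen above, the following minimal Python sketch (our own illustration, not part of the radar processing chain) evaluates Eqs. \ref{eq_rres} and \ref{eq_rmax} for the stated settings:
\begin{verbatim}
C = 3.0e8  # speed of light [m/s]

def range_resolution(bandwidth_hz):
    # Eq. (eq_rres): minimum separable distance between targets
    return C / (2.0 * bandwidth_hz)

def max_range(bandwidth_hz, samples_per_chirp):
    # Eq. (eq_rmax): range resolution scaled by samples per chirp
    return range_resolution(bandwidth_hz) * samples_per_chirp

B, N_s = 10e9, 200
print(range_resolution(B))  # 0.015 m = 1.5 cm
print(max_range(B, N_s))    # 3.0 m
\end{verbatim}
Both values reproduce the \unit[1.5]{cm} resolution and \unit[3]{m} maximum range quoted above.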
\subsection{Rotating Mirror Based Scanner} \par A Single Input Single Output (SISO) radar with one transmitting and one receiving antenna can only detect the range of objects within its beam aperture, without angular information. Thus, to obtain a precise scan of the scene, a radar with a suitable aperture and an angular resolution capability is needed. Scanning is based on mechanical motion of the radar, acquiring range profiles at each position and then mapping the range data to the corresponding radar positions. Different scanning techniques are often used, such as applying a translational motion to the radar along the height dimension or rotating the radar in the depth plane \cite{fitch2012synthetic}. These scanning techniques are rather simple solutions, but they are not compact enough for the wheelchair application. \par In this paper, we introduce a compact scanning solution that keeps both the radar height and orientation constant and rotates only the radar beam by means of a rotating mirror. The designed mirror is concave and its surface is made from aluminum, such that at any instant the radar beam is reflected by 90$^{\circ}$ to allow scanning in the sagittal plane (height and depth), as shown in Fig. \ref{fig:mapping}. The center of the mirror is horizontally aligned with the radar lens and placed at a distance of \unit[22]{cm}. The rotating scanner structure is placed horizontally at a suitable wheelchair height of \unit[40]{cm}. The mirror is designed with a 5$^{\circ}$ aperture, and this reduction from the original radar beam aperture (11$^{\circ}$) increases the vertical resolution capability. The mirror is mounted on a small shaft and can move freely over a 340$^{\circ}$ angular range. The mirror rotation is controlled by a stepper motor such that at each measurement instant the mirror is static. \par In our setup shown in Fig. \ref{fig:mapping}, the mirror is rotated by an angular resolution of $\theta_{res}$ for each range spectrum measurement. This angular resolution strongly affects the height resolution (the translational distance covered in the height plane by this angular rotation). Based on Eq. \ref{eq_hres}, the height resolution ($h_{res}$) at a distance ($d$) is directly proportional to the angular resolution ($\theta_{res}$). Considering a stair detected at the chosen radar maximum range of \unit[3]{m}, an angular resolution ($\theta_{res}$) of 0.25$^{\circ}$ is chosen to satisfy a height resolution ($h_{res}$) of \unit[1]{cm}, which is sufficient for our application. \begin{equation} h_{res}=d\tan\theta_{res} \label{eq_hres} \end{equation} \par Based on experiments on different staircases, the angular dynamic range is chosen from 20$^{\circ}$ above the horizontal depth plane to 50$^{\circ}$ below it. Within the specified angular range, each range spectrum is mapped to a world coordinate system. Finally, a 2D intensity map in the sagittal plane is generated, representing the reflected signals from different objects, as shown in Fig. \ref{fig:stairsScan}. At a distance of \unit[1]{m}, three steps of a wooden staircase are detected as three vertical high-intensity planes at different heights and depths. \begin{figure}[H] \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{Experiment_Setup_Wheelchair.eps}} \caption{2D scanner (left) and the experimental setup (right).} \label{fig:mapping} \vspace{-0.4cm} \end{figure} \begin{figure}[H] \vspace{-0.45cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{stairs2.eps}} \caption{2D Radar scanned image of a 3-step staircase.} \label{fig:stairsScan} \vspace{-0.56cm} \end{figure} \section{Stair Detection Algorithm} In this part, we present a particle filter based plane detection algorithm to identify the high intensity parts of the staircase scan shown in Fig. \ref{fig:stairsScan}. The idea behind the particle filter is to represent a probability density function (pdf) by random samples with corresponding weights; multiple objects are thus represented by a pdf. The algorithm works by initializing particles over the whole image and applying iterative resampling until the particles converge to the high power areas of the intensity map. Based on the resampled particle distribution, clustering is applied to separate the detected steps. Furthermore, the particle distribution in each cluster is used to estimate the height and depth of each detected step. \subsection{Particles Initialization and Resampling} The first part of the plane detection algorithm is the particle filter initialization phase. In this phase, $N$ particles are initialized with a pre-defined distribution. As shown in Fig. \ref{fig:initUniform}, each particle $s_i$ (blue point) is represented as: \begin{equation} s_i=(x_i,y_i,p_i) \label{eq_spar} \end{equation} where $x_i,y_i$ represent the position of the $i^{th}$ particle (distance, height) in the sagittal plane, $p_i$ represents the power at this particular position, and $i$ is the particle index running from $1$ to $N$.
After initialization, all the powers are normalized to weights, and each particle gets a weight $w_i$ such that the sum of all weights ($\sum_{i=1}^{N}w_i$) is equal to 1. \par In the second phase, resampling with replacement based on the weights $w_i$ is applied to the initial particle distribution. During resampling, particles with high weights are more likely to be selected multiple times and to replace particles with low weights \cite{efron1994introduction}. Resampling is repeated recursively for several iterations to ensure convergence to the high intensity planes (red points), as shown in Fig. \ref{fig:initUniform}. \par The particle distribution in the initialization phase strongly affects how fast the particles converge to the high intensity areas during the resampling phase. One common approach for particle initialization is to distribute all particles uniformly over the areas of the image with defined power values, as shown in Fig. \ref{fig:initUniform}. This approach is rather simple, but it can take about 15 iterations to converge to the correct high intensities. \begin{figure}[H] \vspace{-.36cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{init_uniform.eps}} \caption{Uniform particle distribution over the scanned image.} \label{fig:initUniform} \vspace{-0.8cm} \end{figure} \par In this paper, we use a Gaussian Multi-Modal (GMM) initialization technique, where $M$ high intensity locations are randomly chosen within the image. A subset of $N/M$ particles (a mode) is then initialized with a suitable variance and a mean equal to the corresponding high intensity position. The same approach is applied until all required $N$ particles are initialized, as shown in Fig. \ref{fig:initstate}. After initialization, weight normalization and resampling are applied to each subset separately, which allows faster convergence to the high intensity planes \cite{murray2016parallel}. Furthermore, this distributed weight normalization and resampling approach ensures convergence to all high intensity areas in the scan, even if some areas are lower in power than others. The multi-modal Gaussian initialization was tested on different staircase scans and converges to the high intensity areas within 5 resampling iterations. To limit the complexity, the number of particles $N$ is chosen as 1000 and the number of modes $M$ as 10. Finally, to reduce the overall complexity before applying clustering to the resampled particles, redundant particles with exactly the same positions and powers, arising from resampling with replacement, are removed. \begin{figure}[H] \vspace{-.3cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{init_multi.eps}} \caption{GMM particle distributions over the scanned image.} \label{fig:initstate} \vspace{-0.9cm} \end{figure}
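To make the initialization and resampling steps concrete, the following minimal Python sketch (our own illustration, not our actual implementation; for brevity it resamples all particles jointly rather than per mode, and the "high intensity location" selection is a crude top-percentile proxy) mimics the GMM initialization and weight-based resampling with replacement on a 2D intensity map:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def init_particles_gmm(intensity, n_particles=1000, n_modes=10, spread=5.0):
    # pick random bright pixels as mode centers (top 1% of intensities)
    ys, xs = np.where(intensity > np.quantile(intensity, 0.99))
    picks = rng.choice(len(xs), size=n_modes, replace=True)
    per_mode = n_particles // n_modes
    parts = []
    for c in picks:
        x = rng.normal(xs[c], spread, per_mode)
        y = rng.normal(ys[c], spread, per_mode)
        parts.append(np.stack([x, y], axis=1))
    p = np.concatenate(parts)
    h, w = intensity.shape
    p[:, 0] = np.clip(p[:, 0], 0, w - 1)
    p[:, 1] = np.clip(p[:, 1], 0, h - 1)
    return p

def resample(particles, intensity):
    # weights: normalized image power at each particle position
    w = intensity[particles[:, 1].astype(int), particles[:, 0].astype(int)]
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), replace=True, p=w)
    return particles[idx]

scan = rng.random((200, 300))   # stand-in for the 2D radar intensity map
pts = init_particles_gmm(scan)
for _ in range(5):              # ~5 iterations suffice (see text)
    pts = resample(pts, scan)
\end{verbatim}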
\subsection{Clustering and Rejection} In order to separate the steps detected in an image, clustering is applied to the resampled particles based on their ($x,y$) positions and corresponding power intensities. Cluster bounds are generated based on an adaptable sensitivity parameter ($\rho$) to assign each particle to a cluster. The clustering sensitivity $\rho$ can take values between 0 and 1; increasing the sensitivity results in more bounds and separations, and thus more detected clusters. In the proposed setup, the sensitivity parameter $\rho$ was tuned over different scans and finally set to 0.5 to ensure fair clustering. \par As shown in Fig. \ref{fig:unclustered}, fair clustering can sometimes result in erroneous clusters, which can be redundant (belonging to the same object) or outlier clusters. Outlier clusters consist mostly of scattered particles which did not converge to a high intensity area. Such scatters typically contain a low number of particles, so they can be identified and removed. A threshold on the number of particles in each cluster is introduced to decide whether a cluster is an outlier. In our case, a rather simple threshold based on 10\% of the mean number of particles over all clusters is used. A cluster whose particle count does not satisfy this threshold is removed from the detected clusters. To ensure a uniform distribution of particles in each cluster, outlier detection and removal is applied to the particles of each cluster. \par Moreover, redundant clusters which belong to the same object must either be fused, or the cluster with the lower number of particles rejected, as marked in Fig. \ref{fig:unclustered}. Since a staircase is known to be detected, multiple clusters sharing relatively close position information in either depth or height are identified as redundant. A distance threshold of \unit[10]{cm} is chosen to decide whether multiple clusters are redundant; the cluster at the lower relative position is then rejected to obtain the final detected clusters shown in Fig. \ref{fig:clustered}. \par Finally, the number of detected clusters after rejection corresponds to the number of steps the wheelchair has to climb, and the particle distribution in each cluster is used for dimensioning. In the next part, enhancements to our stair detection algorithm are presented, in terms of faster particle convergence and additional use cases. \begin{figure}[H] \vspace{-.25cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{unclust3.eps}} \caption{Clustered particles over the scanned image.} \label{fig:unclustered} \vspace{-0.4cm} \end{figure} \begin{figure}[H] \vspace{-.55cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{clustered2.eps}} \caption{Final detected clustered particles over the scanned image.} \label{fig:clustered} \vspace{-0.9cm} \end{figure} \subsection{Clutter Filtering} During the radar scan, and especially at long distances where the beam area is wider, reflections from unwanted objects, often referred to as clutter, are likely. Due to the variability of the mixed clutter and noise, an adaptive thresholding technique is needed to suppress spectrum noise while keeping only the wanted target peaks. Constant False Alarm Rate (CFAR) is a widely used adaptive thresholding method in radar systems to separate target peaks from neighboring noise. CFAR detection is based on gain control to preserve a constant rate of false target detections under varying clutter and noise levels. \par There are different techniques for applying CFAR in the radar receiver to prevent high false alarm rates in the presence of interference such as jamming or clutter residue. In this paper, Cell-Averaging (CA-) CFAR \cite{richards2005fundamentals} is applied for its simplicity. In CA-CFAR, a sliding window over the range profile is used to compute the average power within the window cells. The sliding window averaging assumes a potential peak to be in the middle; thus, the average is computed without considering the middle cells. As shown in Fig.
\ref{fig:cfarSpec}, the required target peaks exceed the CFAR threshold, while everything below it is considered noise. \par To ensure that only the target stairs are present in the radar scans, any range power below the CFAR threshold is assigned a value equal to the minimum received power, while values exceeding the threshold are kept as they are. After coordinate correction at each mirror rotation, a 2D CFAR intensity map in the sagittal plane can be produced. As shown in Fig. \ref{fig:cfarScan}, the high intensity peaks representing the steps are clearly seen as high (yellow) values, while noise and clutter appear as a uniform (blue) background. The CFAR technique was tested on different staircases and shows a clear improvement in clutter and noise rejection. Moreover, this CFAR technique has a major influence on the particle filter complexity, as the particles converge much faster to the high intensity areas: as mentioned above, the stair detection particle filter can take up to 5 iterations to converge to the correct steps, whereas with CFAR filtering, tested on several measurements, the particles converge within only 2 resampling iterations. A minimal sketch of the CA-CFAR thresholding step is given below. \begin{figure}[H] \vspace{-0.25cm} \centering \centerline{\includegraphics[width=1.02\columnwidth,height=\textheight,keepaspectratio]{cfarSpectrum.eps}} \caption{CFAR threshold over detected range profiles.} \label{fig:cfarSpec} \vspace{-0.56cm} \end{figure} \begin{figure}[H] \vspace{-.25cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{cfarScan.eps}} \caption{2D Radar scanned CFAR image of a 3-step staircase.} \label{fig:cfarScan} \vspace{-0.7cm} \end{figure}
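The sketch below is our own illustration of CA-CFAR thresholding over one range profile; the window sizes and scale factor are illustrative values, not the parameters used in our system:
\begin{verbatim}
import numpy as np

def ca_cfar(profile, n_train=8, n_guard=2, scale=3.0):
    # For each cell, average the training cells on both sides
    # (skipping the guard cells around the cell under test) and
    # flag the cell if it exceeds scale * local average.
    n = len(profile)
    thresh = np.full(n, np.inf)
    half = n_train + n_guard
    for i in range(half, n - half):
        left = profile[i - half : i - n_guard]
        right = profile[i + n_guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate([left, right]))
        thresh[i] = scale * noise
    detected = profile > thresh
    # keep detected peaks, floor everything else to the minimum power
    return np.where(detected, profile, profile.min()), thresh

# toy usage: a noisy profile with one strong target peak
profile = np.abs(np.random.default_rng(1).normal(size=256))
profile[100] += 20.0
filtered, thr = ca_cfar(profile)
\end{verbatim}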
\section{Stairs Dimensioning} After applying our proposed particle filter based stair detection algorithm and clutter filtering, the particles are correctly distributed over each step and can be used for dimensioning. For successful stair climbing, the number of steps to climb (represented by the number of detected clusters) and the depth and height of each step are needed. \par \textbf{Depth} can be estimated by first subtracting the distance between the radar and the mirror (\unit[22]{cm}), mentioned in Section \ref{section:sds}, from all the range measurements. The radar can then be considered to be at the origin, and the distance between the radar and each step can be estimated from the particle distribution along the $x$-axis. This is achieved by computing the weighted average of the particle positions over the $x$-plane in each cluster. Let the number of particles in the $i^{th}$ cluster be $N_i$, with particle depth positions $x_j$ and weights $w_j$, where the index $j$ varies from 1 to $N_i$. The particle weights in each cluster are normalized to sum to 1, and the depth of the cluster ($d_i$) is estimated as: \begin{equation} d_{i}=\sum_{j=1}^{N_i} w_j x_j \label{eq_depth} \end{equation} \textbf{Height} can be estimated from the first appearance of each step in the height plane. In comparison to a laser, the origin of reflection in our experimental setup can only be measured directly with an uncertainty of \unit[1.8]{cm} in the $y$-position at a range of \unit[2]{m}. To overcome this limitation, we introduce the use of a \unit[3]{dB} beam model of the radar unit. As shown in Fig. \ref{fig:objectheight}, the aperture is defined as the angle between the upper and lower \unit[3]{dB} ranges, and in our method the height error is estimated as the distance between the beam center and the lower \unit[3]{dB} beam (half aperture). Accordingly, the correct height can be estimated by taking the position of the top particle in each cluster. The height error to be subtracted from the top particle's $y$-position is then computed based on Eq. \ref{eq_hres}, where the distance is taken as the depth $d_i$ and $\theta_{res}$ as half the mirror aperture (2.5$^\circ$). \begin{figure}[H] \vspace{-.25cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{3db_primilary_result.eps}} \caption{Height estimation using a \unit[3]{dB} beam model.} \label{fig:objectheight} \vspace{-0.66cm} \end{figure} \section{Results} The introduced scanner was placed on a mobile platform and used to scan several staircases. In this part, scans of staircases with the same number of steps (3 steps) are used for a fair comparison. As shown in Fig. \ref{fig:comparison}, \textit{Stair A} is a wooden staircase constructed for initial tests, and \textit{Stair B} was used to test reflections from dark ceramic material. Finally, \textit{Stair C} was used to test the algorithm's capability to detect floating steps. The scanner collected the scans over these staircases from the same initial position (\unit[0.5]{m}) and height (\unit[0.4]{m}). The particle filter was then used for step detection and dimensioning, after which the staircases were reconstructed and compared to the real dimensions, as shown in Fig. \ref{fig:reconstructed}. The depth and height estimation errors for the mentioned staircases are given in Table \ref{Tab:hd}, with detected steps ordered by ascending depth. The depth estimation error is in all cases less than \unit[0.5]{cm}; the height estimation is somewhat worse, as it is additionally influenced by the mechanical model and the aperture correction. Moreover, the height error scales with the depth, as expected from Eq. \ref{eq_hres}. \vspace{-0.4cm} \def\tablename{Table} \begin{table}[H] \centering \caption{Distance errors over the tested staircases.} \label{Tab:hd} \scalebox{1.1}{\begin{tabular}{|c|c|c|c|c|c|c|} \hline Stairs / Accuracy &\multicolumn{3}{|c|}{Depth [cm]} & \multicolumn{3}{|c|}{Height [cm]} \\ \cline{2-7} & \scalebox{0.9}{Step 1} & \scalebox{0.9}{Step 2} & \scalebox{0.9}{Step 3} & \scalebox{0.9}{Step 1} & \scalebox{0.9}{Step 2} & \scalebox{0.9}{Step 3} \\ \hline Stair A & 0.2 & 0.4 & 0.3 & 0.5 & 0.7 & 1 \\ \hline Stair B & 0.1 & 0.3 & 0.5 & 0.3 & 1 & 1.2 \\ \hline Stair C & 0.2 & 0.1 & 0.4 & 0.6 & 0.6 & 1.5 \\ \hline \end{tabular}} \end{table} \vspace{-0.4cm} \begin{figure}[H] \centering \centerline{\subfloat[Stair A]{\includegraphics[scale=.17]{stairAComp.jpg}\label{fig:stairA}} \hspace{3mm} \subfloat[Stair B]{\includegraphics[scale=.17]{stairBComp.jpg}\label{fig:stairB}}} \centerline{\subfloat[Stair C]{\includegraphics[scale=.17]{stairCComp.JPG}\label{fig:stairC}}} \caption{Different stair types used for testing.} \label{fig:comparison} \vspace{-0.4cm} \end{figure} \vspace{-0.4cm} \begin{figure}[H] \vspace{-.25cm} \centering \centerline{\includegraphics[width=\columnwidth,height=\textheight,keepaspectratio]{reconstructed.eps}} \caption{Reconstructed staircase against real dimensions.} \label{fig:reconstructed} \vspace{-0.66cm} \end{figure} \section{Conclusion and Further Scenarios} In this paper we introduced a mirror based stair scanner for wheelchair stair climbing applications, together with a particle filter based detection and dimensioning technique. The scanner was tested and showed good results for the intended application.
We mainly addressed the scenario of a wheelchair user climbing a staircase (upstairs detection); thus, the radar scanner was used to scan over the depth plane to identify and dimension the steps of the staircase. Another possible scenario, which would yield similar results, is to apply our radar scanner to downstairs detection. In this case, the radar scanner works by scanning over the height plane, as shown in Fig. \ref{fig:downDetection}. This scenario is particularly feasible thanks to the introduced compact rotating mirror scanner. The proposed structure can provide the wheelchair user with the additional capability of scanning in different plane directions, e.g. by mounting the structure on a movable pole which positions the scanner over the required staircase. \begin{figure}[H] \centering \centerline{\includegraphics[width=0.6\columnwidth,height=\textheight,keepaspectratio]{Experiment_Setup_downstairs.eps}} \caption{Scanner illustration for downstairs detection scenario.} \label{fig:downDetection} \vspace{-0.4cm} \end{figure} \vspace{-0.6cm} \bibliographystyle{IEEEtran}
\section{Introduction} Recent cosmological observations give evidence for the presence of a cosmological matter density, which represents about 27\% of the total density of the Universe, and for the presence of the so-called dark energy. Using the total matter density observed by WMAP \cite{Komatsu:2008hk} and the baryon density indicated by Big-Bang nucleosynthesis (BBN) \cite{Burles:1997ez,Burles:1997fa}, and including the theoretical uncertainties, the dark matter density range at 95\% C.L. is deduced \cite{Arbey:2008kv}: \begin{equation} 0.094 < \Omega_{DM} h^2 < 0.135 \;, \label{WMAPnew} \end{equation} where $h$ is the reduced Hubble constant. In the following, we also refer to the older range \cite{Ellis:1997wva} \begin{equation} 0.1 < \Omega_{DM} h^2 < 0.3 \;. \label{WMAPold} \end{equation} In supersymmetric models, the lightest stable supersymmetric particle (LSP) constitutes the favored candidate for dark matter. If the relic density can be calculated precisely, the accuracy of the latest WMAP data can be used to constrain the supersymmetric parameters. The computation of the relic density is well known within the standard model of cosmology \cite{Gondolo:1990dk,Edsjo:1997bg} and is implemented in automatic codes such as MicrOMEGAs \cite{Belanger:2006is}, DarkSUSY \cite{Gondolo:2004sc} or SuperIso Relic \cite{Arbey:2009gu}. Nevertheless, the nature of the dark energy and the properties of the Universe in the pre-BBN epoch are still unknown, and the BBN era, at temperatures of about 1 MeV, is the oldest period of the cosmological evolution from which reliable constraints are derived. The cosmology of the primordial Universe could therefore be much more complex, and the pre-BBN era could, for example, have experienced a slower or faster expansion. Such a modified expansion, even though still compatible with the BBN and WMAP results, would change the LSP freeze-out time and the relic density (see for example \cite{Arbey:2008kv,Kamionkowski:1990ni,Salati:2002md,Chung:2007cn}). A similar question exists concerning the entropy content of the Universe at that period, and its possible consequences for energy conservation in the primordial epochs (see for example \cite{Moroi:1999zb,Giudice:2000ex,Gelmini:2006pw,Arbey:2009gt}). In the following, we first describe how the relic density is calculated in the standard cosmology. We then present how the calculations can be modified in altered cosmological scenarios, and analyze the consequences of the cosmological uncertainties for supersymmetric parameter searches. We also invert the problem and show that the determination of a beyond-the-Standard-Model particle physics scenario can give hints about the cosmological properties of the early Universe. Finally, we present the SuperIso Relic package and conclude. \section{Relic density in Standard Cosmology} The cosmological standard model is based on a Friedmann-Lema{\^\i}tre Universe, approximately flat, incorporating a cosmological constant accelerating its expansion, and filled with radiation, baryonic matter and cold dark matter.
Before BBN, the expansion of the Universe is dominated by the radiation density, and therefore the expansion rate $H$ of the Universe is determined by the Friedmann equation \begin{equation} H^2=\frac{8 \pi G}{3} \rho_{rad}\;,\label{friedmann_stand} \end{equation} where \begin{equation} \rho_{rad}(T)=g_{\mbox{eff}}(T) \frac{\pi^2}{30} T^4 \end{equation} is the radiation density and $g_{\mbox{eff}}$ is the effective number of degrees of freedom of radiation. The computation of the relic density is based on the solution of the Boltzmann evolution equation \cite{Gondolo:1990dk,Edsjo:1997bg} \begin{equation} \frac{dn}{dt}=-3Hn-\langle \sigma_{\mbox{eff}} v\rangle (n^2 - n_{\mbox{eq}}^2)\;, \label{evol_eq} \end{equation} where $n$ is the number density of all supersymmetric particles, $n_{\mbox{eq}}$ their equilibrium density, and $\langle \sigma_{\mbox{eff}} v\rangle$ the thermal average of the annihilation rate of the supersymmetric particles into Standard Model particles. By solving this equation, the number density of supersymmetric particles in the present Universe, and consequently the relic density, can be determined. The computation of the thermally averaged annihilation cross section $\langle \sigma_{\mbox{eff}} v \rangle$ requires the computation of many annihilation and co-annihilation amplitudes. The annihilation rate of supersymmetric particles $i$ and $j$ into SM particles $k$ and $l$ is defined as \cite{Gondolo:1990dk,Edsjo:1997bg}: \begin{equation} W_{ij\to kl} = \frac{p_{kl}}{16\pi^2 g_i g_j S_{kl} \sqrt{s}} \sum_{\rm{internal~d.o.f.}} \int \left| \mathcal{M}(ij\to kl) \right|^2 d\Omega \;, \label{Weff} \end{equation} where $\mathcal{M}$ is the transition amplitude, $s$ the center-of-mass energy, $g_i$ the number of degrees of freedom of particle $i$, and $p_{kl}$ the final center-of-mass momentum, given by \begin{equation} p_{kl} = \frac{\left[s-(m_k+m_l)^2\right]^{1/2} \left[s-(m_k-m_l)^2\right]^{1/2}}{2\sqrt{s}}\;. \end{equation} $S_{kl}$ in Eq. (\ref{Weff}) is a symmetry factor, equal to 2 for identical final particles and to 1 otherwise, and the integration is over the outgoing directions of one of the final particles. Moreover, an average over the initial internal degrees of freedom is performed. The effective annihilation rate $W_{\rm eff}$ can be defined by \begin{equation} g_{LSP}^2 p_{\rm{eff}} W_{\rm{eff}} \equiv \sum_{ij} g_i g_j p_{ij} W_{ij} \end{equation} with \begin{equation} p_{\rm{eff}}(\sqrt{s}) = \frac{1}{2} \sqrt{s -4 m_{LSP}^2} \;. \end{equation} The following relation can therefore be deduced: \begin{equation} \frac{d W_{\rm eff}}{d \cos\theta} = \sum_{ijkl} \frac{p_{ij} p_{kl}}{ 8 \pi g_{LSP}^2 p_{\rm eff} S_{kl} \sqrt{s} } \sum_{\rm helicities} \left| \sum_{\rm diagrams} \mathcal{M}(ij \to kl) \right|^2 \;, \end{equation} where $\theta$ is the angle between particles $i$ and $k$. The thermal average of the effective cross section is then obtained as: \begin{equation} \langle \sigma_{\rm{eff}}v \rangle = \displaystyle\frac{\displaystyle\int_0^\infty dp_{\rm{eff}} p_{\rm{eff}}^2 W_{\rm{eff}}(\sqrt{s}) K_1 \left(\displaystyle\frac{\sqrt{s}}{T} \right) } { m_{LSP}^4 T \left[ \displaystyle\sum_i \displaystyle\frac{g_i}{g_{LSP}} \displaystyle\frac{m_i^2}{m_{LSP}^2} K_2 \left(\displaystyle\frac{m_i}{T}\right) \right]^2}\;, \end{equation} where $K_1$ and $K_2$ are the modified Bessel functions of the second kind of order 1 and 2, respectively.
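As an illustration of how this thermal average can be evaluated in practice, the following minimal Python sketch (our own toy implementation, not code from SuperIso Relic or the other packages cited above; the effective rate $W_{\rm eff}$ is passed in as a function, and all names are ours) integrates the numerator numerically, using $\sqrt{s}=2\sqrt{p_{\rm eff}^2+m_{LSP}^2}$, and builds the denominator from the modified Bessel functions:
\begin{verbatim}
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

def sigma_v_thermal(W_eff, m_lsp, T, masses, dofs, g_lsp):
    # numerator: integral over p_eff of p^2 W_eff(sqrt(s)) K_1(sqrt(s)/T)
    def integrand(p):
        rs = 2.0 * np.sqrt(p * p + m_lsp * m_lsp)
        return p * p * W_eff(rs) * kn(1, rs / T)
    p_max = 30.0 * np.sqrt(m_lsp * T)   # tail is Boltzmann suppressed
    num, _ = quad(integrand, 0.0, p_max, limit=200)
    # denominator: [sum_i (g_i/g_lsp)(m_i/m_lsp)^2 K_2(m_i/T)]^2
    den = sum(g / g_lsp * (m / m_lsp) ** 2 * kn(2, m / T)
              for g, m in zip(dofs, masses)) ** 2
    return num / (m_lsp ** 4 * T * den)

# toy usage: a constant effective rate and a single 100 GeV relic
sv = sigma_v_thermal(lambda rs: 1e-9, m_lsp=100.0, T=5.0,
                     masses=[100.0], dofs=[2], g_lsp=2)
\end{verbatim}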
We define the ratio of the number density to the radiation entropy density, $Y(T)=n(T)/s(T)$, where \begin{equation} s(T)=h_{\mbox{eff}}(T) \frac{2 \pi^2}{45} T^3 \;, \end{equation} and $h_{\mbox{eff}}$ is the effective number of entropic degrees of freedom of radiation. Combining Eqs. (\ref{friedmann_stand}) and (\ref{evol_eq}) and defining the ratio of the LSP mass over the temperature, $x=m_{\mbox{\small LSP}}/T$, yields \begin{equation} \frac{dY}{dx}=-\sqrt{\frac{\pi}{45 G}}\frac{g_*^{1/2} m_{\mbox{\small LSP}}}{x^2} \langle \sigma_{\mbox{eff}} v\rangle (Y^2 - Y^2_{\mbox{eq}}) \;, \label{main} \end{equation} with \begin{equation} g_*^{1/2}=\frac{h_{\mbox{eff}}}{\sqrt{g_{\mbox{eff}}}}\left(1+\frac{T}{3 h_{\mbox{eff}}}\frac{dh_{\mbox{eff}}}{dT}\right) \;, \end{equation} and \begin{equation} Y_{eq} = \frac{45}{4 \pi^4 T^2} h_{\rm{eff}} \sum_i g_i m_i^2 K_2\left(\frac{m_i}{T}\right) \;, \end{equation} where $i$ runs over all supersymmetric particles of mass $m_i$ and with $g_i$ degrees of freedom. The freeze-out temperature $T_f$ is the temperature at which the LSP leaves the initial thermal equilibrium, defined by $Y (T_f) = (1 + \delta) Y_{\mbox{eq}}(T_f)$, with $\delta \simeq 1.5$. The relic density is obtained by integrating Eq. (\ref{main}) from $x=0$ to $m_{\mbox{\small LSP}}/T_0$, where $T_0=2.726$ K is the temperature of the Universe today \cite{Gondolo:1990dk,Edsjo:1997bg}: \begin{equation} \Omega_{\mbox{\small LSP}} h^2 = \frac{m_{\mbox{\small LSP}} s(T_0) Y(T_0) h^2}{\rho_c^0} \approx 2.755\times 10^8 \frac{m_{\mbox{\small LSP}}}{1 \mbox{ GeV}} Y(T_0)\;, \end{equation} where $\rho_c^0$ is the critical density of the Universe, defined by \begin{equation} H^2_0 = \frac{8 \pi G}{3} \rho_c^0 \;, \end{equation} $H_0$ being the Hubble constant. \section{Relic density in Alternative Cosmological Scenarios} In the presence of non-thermal production of SUSY particles, the Boltzmann equation becomes \begin{equation} \frac{dn}{dt} = - 3 H n - \langle \sigma v \rangle (n^2 - n^2_{eq}) + N_D \;.\label{boltzmann} \end{equation} The term $N_D$ is added to parametrize the non-thermal production of SUSY particles. The expansion rate $H$ can also be modified: following \cite{Arbey:2008kv,Arbey:2009gt}, $\rho_D$ is introduced as an effective dark density which parametrizes the modification of the expansion rate. The Friedmann equation then becomes \begin{equation} H^2=\frac{8 \pi G}{3} (\rho_{rad} + \rho_D) \;,\label{friedmann} \end{equation} where $\rho_{rad}$ is the radiation energy density, which is considered dominant before BBN in the standard cosmological model. In the case of additional entropy fluctuations, the entropy evolution reads \begin{equation} \frac{ds}{dt} = - 3 H s + \Sigma_D \label{entropy_evolution} \;, \end{equation} where $s$ is the total entropy density and $\Sigma_D$ parametrizes effective entropy fluctuations due to unknown properties of the early Universe. Separating the radiation entropy density from the total entropy density, {\it i.e.} setting $s \equiv s_{rad} + s_D$, where $s_{rad}$ is the radiation entropy density and $s_D$ an effective entropy density, the following relation between $s_D$ and $\Sigma_D$ can be derived: \begin{equation} \Sigma_D = \sqrt{\frac{4 \pi^3 G}{5}} \sqrt{1 + \tilde{\rho}_D} T^2 \left[\sqrt{g_{\rm{eff}}} s_D - \frac13 \frac{h_{\rm{eff}}}{g_*^{1/2}} T \frac{ds_D}{dT}\right] \;. \end{equation} Following the standard relic density calculation method \cite{Gondolo:1990dk,Edsjo:1997bg}, we introduce $Y \equiv n/s$, and Eq.
(\ref{boltzmann}) becomes \begin{equation} \frac{dY}{dx}= - \frac{m_{LSP}}{x^2} \sqrt{\frac{\pi}{45 G}} g_*^{1/2} \left( \frac{1 + \tilde{s}_D}{\sqrt{1+\tilde{\rho}_D}} \right) \left[\langle \sigma v \rangle (Y^2 - Y^2_{eq}) + \frac{Y \Sigma_D - N_D}{\left(h_{\rm{eff}}(T) \frac{2\pi^2}{45} T^3\right)^2 (1+\tilde{s}_D)^2} \right] \;, \label{final} \end{equation} where $x=m_{LSP}/T$, $m_{LSP}$ being the mass of the relic particle, \begin{equation} \tilde{s}_D \equiv \frac{s_D}{h_{\rm{eff}}(T) \frac{2\pi^2}{45} T^3}\;, \qquad\qquad \tilde{\rho}_D \equiv \frac{\rho_D}{g_{\rm{eff}} \frac{\pi^2}{30} T^4}\;, \end{equation} and \begin{equation} Y_{eq} = \frac{45}{4 \pi^4 T^2} h_{\rm{eff}} \frac{1}{(1+\tilde{s}_D)} \sum_i g_i m_i^2 K_2\left(\frac{m_i}{T}\right) \;. \end{equation} The relic density can then be calculated in the standard way: \begin{equation} \Omega h^2 = 2.755 \times 10^8 Y_0 m_{LSP}/\mbox{GeV} \;, \end{equation} where $Y_0$ is the present value of $Y$. In the limit where $\rho_D = s_D = \Sigma_D = N_D = 0$, the usual relations are recovered. We note that $s_D$ and $\Sigma_D$ are not independent variables. In the following, we neglect $N_D$. We use the parametrizations described in \cite{Arbey:2008kv,Arbey:2009gt} for $\rho_D$ and $s_D$: \begin{equation} \rho_D = \kappa_\rho \rho_{rad}(T_{BBN}) \left(\frac{T}{T_{BBN}}\right)^{n_\rho} \;,\label{rhoD} \end{equation} and \begin{equation} s_D = \kappa_s s_{rad}(T_{BBN}) \left(\frac{T}{T_{BBN}}\right)^{n_s} \;,\label{sD} \end{equation} where $T_{BBN}$ is the BBN temperature, $\kappa_\rho$ ($\kappa_s$) is the ratio of the effective dark energy (entropy) density to the radiation energy (entropy) density at BBN time, and $n_\rho$ and $n_s$ are parameters describing the behavior of the densities. We refer to \cite{Arbey:2008kv,Arbey:2009gt} for detailed descriptions and discussions of these parametrizations. \section{Supersymmetric consequences} The relic density is often used to constrain the SUSY parameter space (see for example \cite{Battaglia:2003ab}). In particular, the non-universal Higgs model (NUHM) provides attractive candidates for dark matter \cite{Ellis:2007ka}. For our analysis, we consider the NUHM parameter plane $(\mu,m_A)$, fixing the other parameters ($m_0=1$ TeV, $m_{1/2}=500$ GeV, $\tan\beta=35$, $A_0=0$). About 250,000 random SUSY points in the NUHM parameter plane ($\mu$,$m_A$) are generated using SOFTSUSY v2.0.18 \cite{softsusy}, and for each point we compute flavor physics observables, direct limits, and the relic density with SuperIso Relic v2.7 \cite{Arbey:2009gu}. In Fig.~\ref{NUHMenergy}, the zones excluded by the different observables are displayed: the red area is excluded by the isospin asymmetry of $B \to K^* \gamma$, the green one by the inclusive branching ratio of $b \to s \gamma$, the yellow area leads to tachyonic particles, and the gray zone is excluded by collider searches. All these exclusions are related to particle physics and are subject to uncertainties which are under control. The dark (light) blue zones are \emph{favored} by the WMAP (old) dark matter constraints. Hence, in the top left plot for example, which corresponds to the standard cosmological model, only tiny strips remain compatible with all the displayed constraints, and the relic density observable happens to be extremely constraining. Such figures are often used to determine the favored zones of the SUSY parameter space.\\ \begin{figure}[t!]
$\begin{array}{cc} \includegraphics[width=6.05cm]{plot1.eps}&\includegraphics[width=6.05cm]{plot5.eps}\\ \includegraphics[width=6.05cm]{plot4.eps}&\includegraphics[width=6.05cm]{plot3.eps}\\ \includegraphics[width=6.05cm]{plot2.eps}&\includegraphics[width=6.05cm]{plot6.eps} \end{array}$ \caption{Constraints in the NUHM parameter plane $(\mu,m_A)$ in the presence of additional dark energy, for several values of $\kappa_\rho$ and $n_\rho$, and for $m_0=1$ TeV, $m_{1/2}=500$ GeV, $\tan\beta=35$, $A_0=0$. The color code is given in the text.\label{NUHMenergy}} \end{figure}% In the next plots of Fig.~\ref{NUHMenergy}, however, we show that the presence of an additional energy density in the early Universe, even if negligible at BBN time, can completely change the results. These plots show the influence of a quintessence-like dark energy ($n_\rho=6$) whose proportion relative to the radiation density at BBN time is respectively $\kappa_\rho=10^{-5}$, $10^{-4}$, $10^{-3}$ and $10^{-2}$. The last plot shows the influence of the density of a decaying scalar field ($n_\rho=8$) with an extremely low $\kappa_\rho=10^{-5}$. In these plots the relic density favored zones are moved towards the center, completely modifying the favored SUSY parameters. \begin{figure}[t!] $\begin{array}{cc} \includegraphics[width=6.05cm]{plot1.eps}&\includegraphics[width=6.05cm]{plot7.eps}\\ \includegraphics[width=6.05cm]{plot14.eps}&\includegraphics[width=6.05cm]{plot13.eps}\\ \includegraphics[width=6.05cm]{plot12.eps}&\includegraphics[width=6.05cm]{plot11.eps}\\ \end{array}$ \caption{Constraints in the NUHM parameter plane $(\mu,m_A)$ in the presence of additional dark entropy, for several values of $\kappa_s$ and $n_s$, and for $m_0=1$ TeV, $m_{1/2}=500$ GeV, $\tan\beta=35$, $A_0=0$. The color code is given in the text.\label{NUHMentropy}} \end{figure}% The result is similar when considering the influence of additional entropy. In Fig.~\ref{NUHMentropy}, the first plot shows for reference the constraints in the standard model of cosmology. The second plot presents the influence of a dark entropy density with $n_s=4$ and $\kappa_s=10^{-3}$, which can occur in the case of reheating. The next plots show the influence of a dark entropy density with $n_s=5$ and $\kappa_s=10^{-4}$, $\kappa_s=10^{-3}$ and $\kappa_s=10^{-2}$. The relic density favored areas are this time moved outwards, with different shapes. From these two analyses, it is clear that the relic density calculations can be strongly altered by the presence of entropy or energy densities, even ones that are very small or negligible at BBN time ({\it i.e.} completely unobservable in the current cosmological data). Thus, unknown cosmological properties of the early Universe can completely change the favored SUSY parameter space, and could lead to erroneous claims about the properties of the SUSY particles. \section{Inverse problem} We have seen in the previous section that using the relic density to constrain SUSY is critically dependent on the cosmological assumptions. However, the problem can be inverted: future colliders will make it possible to determine the new physics and relic particle properties \cite{Baltz:2006fm}, and by calculating the relic density in different cosmological scenarios it would be possible to determine or verify some of the physical properties of the early Universe. In such cases, combining the relic density constraints with the BBN limits would remove some of the degeneracies. \begin{figure}[t!]
\includegraphics[width=6.05cm]{plot16.eps}\includegraphics[width=6.05cm]{plot20.eps}\\ \caption{Constraints in the NUHM parameter plane $(\mu,m_A)$ in the presence of additional dark entropy and energy, for $\kappa_\rho=10^{-2}$, $n_\rho=6$, $\kappa_s=10^{-2}$, $n_s=5$ (left) and $\kappa_\rho=10^{-11}$, $n_\rho=8$, $\kappa_s=10^{-2}$, $n_s=4$ (right). The color code is given in the text.\label{degeneracy}} \end{figure}% In Fig.~\ref{degeneracy}, for example, the constraints are presented for two different cosmological scenarios: $\kappa_\rho=10^{-2}$, $n_\rho=6$, $\kappa_s=10^{-2}$, $n_s=5$ on the left, and $\kappa_\rho=10^{-11}$, $n_\rho=8$, $\kappa_s=10^{-2}$, $n_s=4$ on the right. The cosmological properties are different, but the obtained zones are rather similar, which means that there is a degeneracy between dark energy and dark entropy effects. Let us assume that the underlying SUSY model leads to a relic density inside the WMAP favored zone of these plots, which is disfavored in the standard cosmological model. \begin{figure}[t!] \includegraphics[width=6.05cm]{Yp.eps}\includegraphics[width=6.05cm]{H2_H.eps} \caption{Constraints from $Y_p$ (left) and $^2H/H$ (right) on the dark energy parameters $(n_\rho,\kappa_\rho)$. The parameter regions excluded by BBN are located between the black lines for $Y_p$, and in the upper left corner for $^2H/H$. The colors correspond to different values of $Y_p$ and $^2H/H$.\label{BBNenergy}} \end{figure}% \begin{figure}[t!] \includegraphics[width=6.05cm]{Yp2.eps}\includegraphics[width=6.05cm]{H2_H2.eps} \caption{Constraints from $Y_p$ (left) and $^2H/H$ (right) on the dark entropy parameters $(n_s,\kappa_s)$. The parameter regions excluded by BBN are located in the upper left corner for $Y_p$, and in both upper corners and between the right black lines for $^2H/H$. The colors correspond to different values of $Y_p$ and $^2H/H$.\label{BBNentropy}} \end{figure}% In Figs.~\ref{BBNenergy} and \ref{BBNentropy}, the current limits on the dark energy and dark entropy properties from the $Y_p$ and $^2H/H$ BBN constraints are presented; the regions beyond the black lines lead to disfavored element abundances. Comparing Fig. \ref{degeneracy} with Figs. \ref{BBNenergy} and \ref{BBNentropy}, we notice that the left plot ($\kappa_\rho=10^{-2}$, $n_\rho=6$, $\kappa_s=10^{-2}$, $n_s=5$) is disfavored by BBN, while the right one is still compatible. Thus, in such cases, knowing the particle physics properties will enable us to recognize whether the favored cosmological model is the standard one, and by also considering the BBN data, we can already distinguish between different cosmological scenarios. \section{SuperIso Relic} The results presented here were obtained using the public code SuperIso Relic v2.7 \cite{Arbey:2009gu}, available at \verb?http://superiso.in2p3.fr/relic? . SuperIso Relic is an extension of the SuperIso program \cite{superiso}, which is devoted to the calculation of flavor physics observables in the Two-Higgs Doublet Model, the Minimal Supersymmetric Standard Model and the Next-to-Minimal Supersymmetric Standard Model. The main purpose of SuperIso Relic is to compute the relic density in the cosmological standard model as well as in alternative scenarios. All the models described here are already implemented, and other models will soon be included. For more information on SuperIso Relic, we refer to its website or to the manual \cite{Arbey:2009gu}.
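To make the role of the parametrization (\ref{rhoD}) more tangible, the following toy Python sketch (our own illustration, not SuperIso Relic code; for simplicity it assumes $g_{\rm eff}$ is constant between $T$ and $T_{BBN}$, so that $\rho_{rad}\propto T^4$) evaluates the enhancement of the expansion rate implied by Eq. (\ref{friedmann}):
\begin{verbatim}
import numpy as np

T_BBN = 1e-3   # BBN temperature, ~1 MeV in GeV units

def rho_tilde(T, kappa_rho, n_rho):
    # rho_D / rho_rad from Eq. (rhoD), with rho_rad ~ T^4
    # (g_eff taken constant between T and T_BBN for simplicity)
    return kappa_rho * (T / T_BBN) ** (n_rho - 4)

def hubble_ratio(T, kappa_rho, n_rho):
    # H / H_standard = sqrt(1 + rho_tilde), from Eq. (friedmann)
    return np.sqrt(1.0 + rho_tilde(T, kappa_rho, n_rho))

# quintessence-like case kappa_rho = 1e-3, n_rho = 6, near a typical
# freeze-out temperature T_f ~ m_LSP/25 ~ 10 GeV:
print(hubble_ratio(10.0, 1e-3, 6))    # ~ 316: much faster expansion
print(hubble_ratio(T_BBN, 1e-3, 6))   # ~ 1.0005: negligible at BBN
\end{verbatim}
This illustrates why a dark density that is tiny at BBN time can nevertheless dominate the expansion at freeze-out and strongly modify the relic density.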
\section{Conclusion} We showed that using the relic density to constrain the supersymmetric parameter space depends strongly on the cosmological assumptions, and that care is therefore needed when relic density constraints are used in the scientific preparation of future colliders. We also noticed that inverting the problem, {\it i.e.} using the particle physics results to determine the properties of the early Universe, can be of great interest, as it will give access to an unknown part of the history of the Universe. \bibliographystyle{aipproc} \hyphenation{Post-Script Sprin-ger}
\section{Background}\label{sec:background} \begin{table*}[t] \captionsetup{font=scriptsize} \vspace{-0.1in} \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Database & \thead{\scriptsize{\# Unique} \\ \scriptsize{contents}} & \# Distortions & \thead{\scriptsize{\# Picture} \scriptsize{contents}} & \thead{\scriptsize{\# Patch} \scriptsize{contents}} & Distortion type & \thead{\scriptsize{Subjective study} \\ \scriptsize{framework}} & \# Annotators & \# Annotations \\ \hline LIVE IQA (2003)~\cite{hamidLiveDB} & 29 & 5 & 780 & 0 & single, synthetic & in-lab & & \\ TID-2008~\cite{ponomarenko2009tid2008} & 25 & 17 & 1700 & 0 & single, synthetic & in-lab & & \\ TID-2013~\cite{tid2013} & 25 & 24 & 3000 & 0 & single, synthetic & in-lab & &\\ \hline CLIVE (2016)~\cite{clive} & 1200 & - & 1200 & 0 & in-the-wild & crowdsourced & $8000$ &$350$K \\ KonIQ (2018)~\cite{koniq} & $10$K & - & $10$K & 0 & in-the-wild & crowdsourced & $1400$ &$1.2$M \\ \hline \textbf{Proposed database} & $39,810$ & - & $39,810$ & $119,430$ & in-the-wild & crowdsourced & $7865$ & $3,931,710$ \\ \hline \end{tabular} \caption{\scriptsize{\textbf{Summary of popular IQA datasets.} In the legacy datasets, pictures were synthetically distorted with different types of single distortions. ``In-the-wild'' databases contain pictures impaired by complex mixtures of highly diverse distortions, each as unique as the pictures they afflict.}} \vspace{-1.5em} \label{tbl:datasets} \end{table*} \noindent\textbf{Image Quality Datasets:} Most picture quality models have been designed and evaluated on three ``legacy'' databases: LIVE IQA~\cite{hamidLiveDB}, TID-2008~\cite{ponomarenko2009tid2008}, and TID-2013~\cite{tid2013}. These datasets contain small numbers of unique, pristine images ($\sim30$) synthetically distorted by diverse types and amounts of single distortions (JPEG, Gaussian blur, etc.). They offer limited content and distortion diversity, and do not capture the complex mixtures of distortions that often occur in real-world images. Recently, ``in-the-wild'' datasets such as CLIVE~\cite{clive} and KonIQ-$10$K~\cite{koniq} have been introduced to address these shortcomings (Table \ref{tbl:datasets}). \noindent\textbf{Full-Reference models:} Many \textit{full-reference} (FR) perceptual picture quality predictors, which make comparisons against high-quality \textit{reference} pictures, are available~\cite{ssim, vif},~\cite{msssim, mad, fsim, vsnr, gmsd, haariqa, vsi}. Although some FR algorithms (e.g. SSIM~\cite{ssim},~\cite{emmyAward}, VIF~\cite{vif},~\cite{vmafEncodeNetflixBlog, shotEncodeNetflixBlog}) have achieved remarkable commercial success (e.g. for monitoring streaming content), they are limited by their requirement of pristine reference pictures. \noindent\textbf{Current NR models aren't general enough:} \textit{No-reference} or blind algorithms predict picture quality without the benefit of a reference signal. Popular blind picture quality algorithms usually measure distortion-induced deviations from perceptually relevant, highly regular bandpass models of picture statistics \cite{autoBovik}, \cite{field87, ruderman1994, simoncelli2001natural, textureBovik}. Examples include BRISQUE \cite{mittal2012no}, NIQE \cite{niqe}, CORNIA \cite{cornia}, and FRIQUEE \cite{deeptiBag}, which use ``handcrafted'' statistical features to drive shallow learners (SVM, etc.).
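To give a flavor of such handcrafted natural scene statistics (NSS) features, the following minimal Python sketch (our own illustration, not the exact BRISQUE/NIQE feature set) computes mean-subtracted contrast-normalized (MSCN) coefficients, the bandpass-style front end these models share, plus a few toy summary statistics that could feed a shallow regressor:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(gray, sigma=7.0 / 6.0, c=1.0):
    # divisive normalization: (I - mu) / (sigma_local + c)
    gray = gray.astype(np.float64)
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.maximum(var, 0.0)) + c)

def nss_features(gray):
    # toy summary statistics of the MSCN map; real models fit
    # GGD/AGGD parameters at several scales and orientations
    m = mscn(gray)
    return np.array([m.var(), np.mean(np.abs(m)), np.mean(m ** 4)])

# features = nss_features(image); then train e.g. an SVR on human scores
\end{verbatim}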
These NSS-based models produce accurate quality predictions on legacy datasets having single, synthetic distortions~\cite{hamidLiveDB, ponomarenko2009tid2008, tid2013, csiq}, but struggle on the recent ``in-the-wild''~\cite{clive, koniq} databases. Several deep NR models \cite{ghadiyaram2014blind, deepConvIQA, bosseDeepIQA, fullyDeepIQA, nima} have also been created that yield state-of-the-art performance on legacy synthetic distortion databases \cite{hamidLiveDB, ponomarenko2009tid2008, tid2013, csiq}, e.g., by pretraining deep nets \cite{simonyan2014deep, rankIQA, kedeMaIQA} on ImageNet \cite{imageNet} and then fine-tuning, or by training on proxy labels generated by an FR model \cite{fullyDeepIQA}. However, most deep models also struggle on CLIVE~\cite{clive}, because it is too difficult, yet too small to sufficiently span the perceptual space of picture quality to allow very deep models to map it. The authors of \cite{Bianco2018}, whose code is not publicly available, reported strong results, but we have been unable to reproduce their numbers, even with more efficient networks. The authors of \cite{Varga2018DeeprnAC} use a pre-trained ResNet-101 and report high performance on~\cite{clive, koniq}, but later disclosed \cite{saupePage} that they were unable to reproduce their own results in \cite{Varga2018DeeprnAC}. \section{Concluding Remarks}\label{sec:conclusion} Problems involving perceptual picture quality prediction are long-standing and fundamental to perception, optics, image processing, and computational vision. Once viewed as a basic vision science modelling problem to improve on weak Mean Squared Error (MSE) based ways of assessing television systems and cameras, the picture quality problem has evolved into one that demands the large-scale tools of data science and computational vision. Towards this end we have created a database that is not only substantially larger and harder than previous ones, but contains data that enables global-to-local and local-to-global quality inferences. We also developed a model that produces local quality inferences, uses them to compute picture quality maps, and predicts global image quality. We believe that the proposed new dataset and models have the potential to enable quality-based monitoring, ingestion, and control of billions of social-media pictures and videos. Finally, the examples in Fig.~\ref{fig:failure_eg} of competing local vs. global quality percepts highlight the fundamental difficulties of the problem of no-reference perceptual picture quality assessment: its subjective nature, the complicated interactions between content and myriad possible combinations of distortions, and the effects of perceptual phenomena like masking. More complex architectures might mitigate some of these issues. Additionally, mid-level semantic side-information about objects in a picture (e.g., faces, animals, babies) or scenes (e.g., outdoor vs. indoor) may also help capture the role of higher-level processes in picture quality assessment. \section{Large-Scale Dataset and Human Study}\label{sec:dataset} \input{dataset_collage.tex} Next we explain the details of the new picture quality dataset we constructed, and the crowd-sourced subjective quality study we conducted on it. The database has about $40,000$ pictures and $120,000$ patches, on which we collected $4$M human judgments from nearly $8,000$ unique subjects (after subject rejection).
It is significantly larger than commonly used ``legacy'' databases~\cite{hamidLiveDB, ponomarenko2009tid2008, tid2013, csiq} and more recent ``in-the-wild'' crowd-sourced datasets~\cite{clive, koniq}.
\subsection{UGC-like picture sampling} \label{sec:picture_sampling}
Data collection began by sampling about $40$K highly diverse contents of various sizes and aspect ratios from hundreds of thousands of pictures drawn from public databases, including AVA \cite{ava}, VOC \cite{voc}, EMOTIC \cite{emotic}, and CERTH Blur \cite{certh}. Because we were interested in the role of local quality perception as it relates to global quality, we also cropped \underline{three} patches from each picture, yielding about $120$K patches. While internally debating the concept of ``representative,'' we settled on a method of sampling a large image collection so that it would be substantially ``UGC-like.'' We did this because billions of pictures are uploaded, shared, displayed, and viewed on social media, far more than anywhere else.
\begin{figure}[h] \vspace{-1em} \begin{center} \includegraphics[width=0.7\linewidth]{figures/pic_dim.png} \vspace{-1em} \caption{\scriptsize{\textbf{Scatter plot of picture width versus picture height} with marker size indicating the number of pictures for a given dimension in the new database.}} \vspace{-2em} \label{fig:pixel_aspect} \end{center} \end{figure}
We sampled picture contents using a mixed integer programming method~\cite{mixedInteger} similar to~\cite{koniq}, to match a specific set of UGC feature histograms. Our sampling strategy differed in several ways. First, unlike KonIQ~\cite{koniq}, no pictures were downsampled, since this intervention can substantially modify picture quality; moreover, including pictures of diverse sizes better reflects actual practice. Second, instead of uniformly sampling feature values, we designed a picture collection whose feature histograms match those of $15$M randomly selected pictures from a social media website. This in turn resulted in a much more realistic database, on which quality prediction is more difficult, as we describe later. Finally, we did not use a pre-trained IQA algorithm to aid the picture sampling, as that could introduce \textit{algorithmic bias} into the data collection process. To sample and match feature histograms, we computed the following diverse, objective features on both our picture collection and the $15$M UGC pictures:
\begin{packed_enum} \vspace{-0.05in}
\item \textit{absolute brightness} $L = R + G + B$.
\item \textit{colorfulness}, using the popular model in \cite{measureColor}.
\item \textit{RMS brightness contrast} \cite{peli}.
\item \textit{Spatial Information (SI)}, the global standard deviation of Sobel gradients \cite{yu2013image}, a measure of complexity.
\item \textit{pixel count}, a measure of picture size.
\item number of \textit{detected faces}, using~\cite{faceDetection}. \vspace{-0.05in}
\end{packed_enum}
In the end, we arrived at about $40$K pictures. Fig. \ref{fig:exemplarFLIVE} shows $16$ randomly selected pictures and Fig.~\ref{fig:pixel_aspect} highlights the diverse sizes and aspect ratios of pictures in the new database.
\subsection{Patch cropping} \label{sec:patch_cropping}
We applied the following criteria when randomly cropping out patches: (a) \textbf{aspect ratio:} patches have the same aspect ratios as the pictures they were drawn from. (b) \textbf{dimension:} the linear dimensions of the patches are $40\%$, $30\%$, and $20\%$ of the picture dimensions.
(c) \textbf{location:} every patch is entirely contained within the picture, but no patch overlaps the area of another patch cropped from the same image by more than $25\%$. Fig. \ref{fig:egImagesPatches} shows two exemplar pictures, and three patches obtained from each (a code sketch of this sampling procedure appears at the end of Sec.~\ref{sec:crowdsourcing}).
\begin{figure}[h] \vspace{-0.1in} \begin{center}$ \begin{array}{cc} \includegraphics[width=0.48\linewidth]{figures/out_of_focus0161.jpg} & \hspace{-0.7em} \includegraphics[width=0.48\linewidth]{figures/motion0250.jpg} \\ \end{array}$ \vspace{-0.15in} \caption{\scriptsize{Sample pictures and $3$ randomly positioned crops ($20\%$, $30\%$, $40\%$).}} \vspace{-0.3in} \label{fig:egImagesPatches} \end{center} \end{figure}
\begin{figure}[t] \begin{center} \includegraphics[width=0.8\linewidth, trim={0em 0em 1.5em 0em},clip]{figures/AMT_Flow.pdf} \vspace{-2em} \caption{\scriptsize{\textbf{AMT task:} Workflow experienced by crowd-sourced workers when rating either pictures or patches.}} \vspace{-2em} \label{fig:AMTdesign} \end{center} \end{figure}
\subsection{Crowdsourcing pipeline for subjective study} \label{sec:crowdsourcing}
Subjective picture quality ratings are true psychometric measurements on human subjects, requiring $10$-$20$ times as much time for scrutiny (per photo) as, for example, object labeling \cite{imageNet}. We used the Amazon Mechanical Turk (AMT) crowdsourcing system, well-documented for this purpose \cite{clive, koniq, amt, cliveVideo}, to gather human picture quality labels. We divided the study into two separate tasks: picture quality evaluation and patch quality evaluation. Most subjects ($7141$ out of $7865$ workers) participated in only one of these, to avoid biases incurred by viewing both, even on different dates. Either way, the crowdsourcing workflow was the same, as depicted in Fig. \ref{fig:AMTdesign}. Each worker was given instructions, followed by a training phase, where they were shown several contents to learn the rating task. They then viewed and quality-rated $N$ contents to complete their human intelligence task (HIT), concluding with a survey regarding their experience. At first, we set $N = 60$, but as the study accelerated and we found the workers to be delivering consistent scores, we set $N = 210$. We found that the workers performed just as well when viewing the increased number of pictures.
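To make the patch-sampling constraints of Sec.~\ref{sec:patch_cropping} concrete, the following minimal Python sketch (our illustration, not the exact script used to build the database) rejection-samples the three crop boxes. We interpret the $25\%$ overlap limit relative to the smaller of the two boxes being compared, which is one plausible reading of criterion (c):
\begin{verbatim}
import random

def crop_boxes(img_w, img_h, scales=(0.4, 0.3, 0.2),
               max_overlap=0.25, max_tries=1000, seed=0):
    """Sample one box per scale: same aspect ratio as the picture,
    fully inside it, pairwise overlap <= max_overlap. A scale is
    skipped if no valid position is found within max_tries."""
    rng = random.Random(seed)

    def overlap_frac(a, b):
        # intersection area as a fraction of the smaller box's area
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return ix * iy / min(area(a), area(b))

    boxes = []
    for s in scales:
        w, h = int(s * img_w), int(s * img_h)  # preserves aspect ratio
        for _ in range(max_tries):
            left = rng.randint(0, img_w - w)   # entirely inside picture
            top = rng.randint(0, img_h - h)
            box = (left, top, left + w, top + h)
            if all(overlap_frac(box, b) <= max_overlap for b in boxes):
                boxes.append(box)
                break
    return boxes  # [(left, top, right, bottom), ...]
\end{verbatim}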
\subsection{Processing subjective scores}\label{subsec:processScores}
\noindent \textbf{Subject rejection:} We took the recommended steps~\cite{clive, cliveVideo} to ensure the quality of the collected human data. \vspace{-0.08in}
\begin{packed_enum}
\item We only accepted workers with \textbf{acceptance rates} $>75\%$.
\item \textbf{Repeated images:} $5$ of the $N$ contents were repeated randomly per session, to determine whether the subjects were giving consistent ratings.
\item \textbf{``Gold'' images:} $5$ out of the $N$ contents were ``gold'' ones, sampled from a collection of $15$ pictures and $76$ patches that were separately rated in a controlled lab study by $18$ reliable subjects. The ``gold'' images are not part of the new database.
\end{packed_enum} \vspace{-0.08in}
We accepted or rejected each rater's scores within a HIT based on two factors: the difference between the repeated-content scores relative to the overall standard deviation, and whether more than $50\%$ of their scores were identical. Since we desired to capture many ratings, workers could participate in multiple HITs. Each content received at least $35$ quality ratings, with some receiving as many as $50$. The labels supplied by each subject were converted into normalized Z-scores \cite{hamidLiveDB}, \cite{clive}, averaged (by content), then scaled to $[0, 100]$, yielding \textbf{Mean Opinion Scores (MOS)}. The total number of human subjective labels collected after subject rejection was $3,931,710$ ($950,574$ on images, and $2,981,136$ on patches).
\noindent\textbf{Inter-subject consistency:} A standard way to test the consistency of subjective data \cite{hamidLiveDB}, \cite{clive} is to randomly divide the subjects into two disjoint equal sets, compute two MOS on each picture (one from each group), then compute the Pearson linear correlation (LCC) between the MOS values of the two groups. When repeated over $25$ random splits, the average LCC between the two groups' MOS was $\mathbf{0.48}$, indicating the difficulty of the quality prediction problem on this realistic picture dataset. Fig. \ref{fig:patchCorrel} (left) shows a scatter plot of the two halves of the human labels for one split, revealing a linear relationship with a fairly broad spread. We applied the same process to the patch scores, obtaining a higher LCC of $\mathbf{0.65}$. This is understandable: smaller patches contain less spatial diversity; hence they receive more consistent scores. We also found that nearly all the non-rejected subjects had a positive Spearman rank ordered correlation (SRCC) with the gold pictures, validating the data collection process.
\noindent\textbf{Relationships between picture and patch quality:} Fig. \ref{fig:patchCorrel} (right) is a scatter plot of the entire database of picture MOS against the MOS of the largest patches cropped from them. The linear correlation coefficient (LCC) between them is $0.43$, which is strong, given that each patch represents only $16\%$ of the picture area. The scatter plots of the picture MOS against those of the smaller ($30\%$ and $20\%$) patches are quite similar, with somewhat reduced LCCs of $0.36$ and $0.28$, respectively (supplementary material). An outcome of creating highly realistic ``in the wild'' data is that it is much more difficult to train successful models on. Most pictures uploaded to social media are of reasonably good quality, largely owing to improved mobile cameras.
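The conversion of raw ratings into MOS described above amounts to a few lines of code. The following NumPy sketch is our illustration: the per-subject z-scoring follows \cite{hamidLiveDB}, while the final linear mapping of the averaged z-scores onto $[0, 100]$ is an assumption, since the exact rescaling is implementation-dependent:
\begin{verbatim}
import numpy as np

def compute_mos(ratings):
    """ratings: {subject_id: {content_id: raw_score}}.
    Returns {content_id: MOS on [0, 100]}."""
    # 1) z-score each subject's ratings (removes per-subject bias/scale)
    z = {}
    for subj, scores in ratings.items():
        vals = np.array(list(scores.values()), dtype=float)
        mu, sigma = vals.mean(), vals.std() + 1e-8
        for content, s in scores.items():
            z.setdefault(content, []).append((s - mu) / sigma)
    # 2) average the z-scores of each content
    mean_z = {c: float(np.mean(v)) for c, v in z.items()}
    # 3) map the averages linearly onto [0, 100] (assumed rescaling)
    lo, hi = min(mean_z.values()), max(mean_z.values())
    return {c: 100.0 * (m - lo) / (hi - lo + 1e-12)
            for c, m in mean_z.items()}
\end{verbatim}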
\begin{figure}[t] \begin{center} $\begin{array}{cc} \includegraphics[height= 0.35\linewidth,width=0.46\linewidth]{figures/fig5b_hn.png} & \hspace{-1em} \includegraphics[height= 0.35\linewidth,width=0.46\linewidth]{figures/fig5a.png} \\ \end{array}$ \vspace{-1em} \caption{\scriptsize{\textbf{Scatter plots descriptive of the new subjective quality database}. Left: Inter-subject scatter plot of a random $50\%$ division of the human labels of all $40$K$+$ pictures into disjoint subject sets. Right: Scatter plot of picture MOS vs. MOS of the largest patch ($40\%$ of linear dimension) cropped from each same picture.}} \vspace{-2em} \label{fig:patchCorrel} \end{center} \end{figure}
Since most pictures are of reasonably good quality, the distribution of MOS in the new database is narrower and peakier than those of the two previous ``in the wild'' picture quality databases \cite{clive}, \cite{koniq}. This is important, since it is desirable to be able to predict small changes in MOS, which can be significant regarding, for example, compression parameter selection \cite{compressedClive}. As we show in Sec.~\ref{sec:modeling}, the new database is very challenging, even for deep models.
\begin{figure}[h] \begin{center} \includegraphics[width=\linewidth]{figures/figure_6_new.png} \caption{\scriptsize{\textbf{MOS (Z-score) histograms of three ``in-the-wild'' databases}. Left: CLIVE \cite{clive}. Middle: KonIQ-$10$K \cite{koniq}. Right: The new database introduced here.}} \vspace{-0.3in} \label{fig:MOShists} \end{center} \end{figure}
\section{Introduction}
Digital pictures, often of questionable quality, have become ubiquitous. Several hundred billion photos are uploaded and shared annually on social media sites like Facebook, Instagram, and Tumblr. Streaming services like Netflix, Amazon Prime Video, and YouTube account for $60\%$ of all downstream internet traffic \cite{vidReport}. Being able to understand and predict the perceptual quality of digital pictures, given resource constraints and increasing display sizes, is a high-stakes problem. It is a common misconception that if two pictures are impaired by the same amount of a distortion (e.g., blur), they will have similar perceived qualities. However, this is far from true because of the way the vision system processes picture impairments. For example, Figs.~\ref{fig:teaser}(a) and (b) have identical amounts of JPEG compression applied, but Fig.~\ref{fig:teaser}(a) appears relatively unimpaired perceptually, while Fig.~\ref{fig:teaser}(b) is unacceptable. On the other hand, Fig.~\ref{fig:teaser}(c) has had spatially uniform white noise applied to it, but its perceived distortion severity varies across the picture.
The complex interplay between picture content and distortions (largely determined by masking phenomena~\cite{autoBovik}), and the way distortion artifacts are visually processed, play an important role in how visible or annoying visual distortions may present themselves. Moreover, perceived quality correlates poorly with simple quantities like resolution and bit rate~\cite{mseLoveLeave}. Generally, predicting perceptual picture quality is a hard, long-standing research problem~\cite{mannos, autoBovik, mseLoveLeave, ssim, vif}, despite its deceptive simplicity (we sense distortion easily with little, if any, thought). \begin{figure}[t] \begin{center} $\begin{array}{ccc} \includegraphics[width=0.35\linewidth, height = 3.0cm]{figures/Fig1UpperLeft.jpg} & \hspace{-0.8em} \includegraphics[width=0.25\linewidth, height = 3.0cm]{figures/Fig1UpperRight.jpg} & \hspace{-0.8em} \includegraphics[width=0.35\linewidth, height = 3.0cm]{figures/Fig1LowerLeft.png} \\ \scriptsize(a) & (b) & (c) \\ \end{array}$ \vspace{-0.3cm} \captionof{figure}{\footnotesize{\textbf{Challenges in distortion perception:} Quality of a (distorted) image as perceived by human observers is \textit{perceptual quality}. Distortion perception is highly content-dependent. Pictures (a) and (b) were JPEG compressed using identical encode parameters, but present very different degrees of perceptual distortion. The spatially uniform noise in (c) varies in visibility over the picture content, because of contrast masking~\cite{autoBovik}.}} \label{fig:teaser} \end{center} \end{figure} \begin{figure}[t] \centering \vspace{-0.2in} $\begin{array}{cc} \includegraphics[width=3.6cm, height = 3.6cm]{figures/cat.jpg} & \includegraphics[width=0.36\linewidth, height = 3.6cm]{figures/mouth.jpg} \\ \scriptsize(a) & (b)\\ \vspace{-2em} \end{array}$ \caption{\footnotesize{\textbf{Aesthetics vs. perceptual quality} (a) is blurrier than (b), but likely more aesthetically pleasing to most viewers.}} \label{fig:aesthetics} \vspace{-0.23in} \end{figure} It is important to distinguish between the concepts of \textit{picture quality} \cite{autoBovik} and \textit{picture aesthetics} \cite{ava}. Picture quality is specific to perceptual distortion, while aesthetics also relates to aspects like subject placement, mood, artistic value, and so on. For instance, Fig.~\ref{fig:aesthetics}(a) is noticeably blurred and of lower perceptual quality than Fig.~\ref{fig:aesthetics}(b), which is less distorted. Yet, Fig.~\ref{fig:aesthetics}(a) is more aesthetically pleasing than the unsettling Fig.~\ref{fig:aesthetics}(b). While distortion can detract from aesthetics, it can also contribute to it, as when intentionally adding film grain \cite{filmGrain} or blur (bokeh) \cite{bokeh} to achieve photographic effects. While both concepts are important, picture quality prediction is a critical, high-impact problem affecting several high-volume industries, and is the focus of this work. Robust picture quality predictors can significantly improve the visual experiences of social media, streaming TV and home cinema, video surveillance, medical visualization, scientific imaging, and more. In many such applications, it is greatly desired to be able to assess picture quality at the point of ingestion, to better guide decisions regarding retention, inspection, culling, and all further processing and display steps. Unfortunately, measuring picture quality without a pristine \textit{reference} picture is very hard. 
This is the case at the output of any camera, and at the point of content ingestion by any social media platform that accepts user-generated content (UGC). \textit{No-reference} (NR) or blind picture quality prediction is largely unsolved, though popular models exist \cite{mittal2012no,niqe, deeptiBag,cornia,hosa,nferm, qac}. While these are often predicated on solid principles of visual neuroscience, they are also simple and computationally shallow, and fall short when tested on recent databases containing difficult, complex mixtures of real-world picture distortions~\cite{clive, koniq}. Solving this problem could affect the way billions of pictures uploaded daily are culled, processed, compressed, and displayed. Towards advancing progress on this high-impact unsolved problem, we make several new contributions. \vspace{-0.1in}
\begin{packed_enum}
\item \textbf{We built the largest picture quality database in existence}. We sampled hundreds of thousands of open-source digital pictures to match the feature distributions of the largest use case: pictures shared on social media. The final collection includes about $40,000$ real-world, unprocessed (by us) pictures of diverse sizes, contents, and distortions, and about $120,000$ cropped image patches of various scales and aspect ratios (Sec.~\ref{sec:picture_sampling},~\ref{sec:patch_cropping}).
\item \textbf{We conducted the largest subjective picture quality study to date.} We used Amazon Mechanical Turk to collect about $4$M human perceptual quality judgments from almost $8,000$ subjects on the collected content, about four times more than any prior image quality study (Sec.~\ref{sec:crowdsourcing}).
\item \textbf{We collected both picture and patch quality labels to relate local and global picture quality}. The new database includes about $1$M human picture quality judgments and $3$M human quality labels on patches \textit{drawn from the same pictures}. Local picture quality is deeply related to global quality, although this relationship is not well understood \cite{moorthyPooling}, \cite{vpooling}. This data will help us to learn these relationships and to better model global picture quality.
\item \textbf{We created a series of state-of-the-art deep blind picture quality predictors} that build on existing deep neural network architectures. Using a modified ResNet~\cite{resNet} as a baseline, we (a) use patch and picture quality labels to train a region proposal network~\cite{fastRCNN},~\cite{fasterRCNN} to predict both global picture quality and local patch quality. This model is able to produce better global picture quality predictions by learning relationships between global and local picture quality (Sec.~\ref{sec:p2p_model}). We then further modify this model to (b) predict spatial maps of picture quality, useful for localizing picture distortions (Sec.~\ref{sec:qualityMaps}). Finally, we (c) innovate a local-to-global feedback architecture that produces further improved whole-picture quality predictions using local patch predictions (Sec.~\ref{subsec:patchAugQuality}). This series of models obtains state-of-the-art picture quality performance on the new database, and transfers well -- \emph{without finetuning} -- to smaller ``in-the-wild'' databases such as LIVE Challenge (CLIVE) \cite{clive} and KonIQ-$10$K \cite{koniq} (Sec.~\ref{sec:cross_data}).
\end{packed_enum}
\section{Learning Blind Picture Quality Predictors}\label{sec:modeling}
With the availability of the new dataset comprising pictures and patches associated with human labels (Sec.~\ref{sec:dataset}), we created a series of deep quality prediction models that exploit its unique characteristics. We conducted four picture quality learning experiments, evolving from a simple network into models of increasing sophistication and perceptual relevance, which we describe next.
\subsection{A baseline picture-only model}\label{sec:baseline}
To start with, we created a simple model that only processes pictures and the associated human quality labels. We will refer to this hereafter as the Baseline Model. The basic network that we used is the well-documented pre-trained ResNet-$18$~\cite{resNet}, which we modified (described next) and fine-tuned to conduct the quality prediction task.
\noindent \textbf{Input image pre-processing:} Because picture quality prediction (whether by human or machine) is a psychometric prediction, it is crucial not to modify the pictures being fed into the network. While most visual recognition learners augment input images by cropping, resizing, flipping, etc., doing the same when training a perceptual quality predictor would be a psychometric error. Such input pre-processing would result in perceptual quality scores being associated with different pictures than the ones they were recorded on. The new dataset contains thousands of unique combinations of picture sizes and aspect ratios (see Fig.~\ref{fig:pixel_aspect}). While this is a core strength of the dataset and reflects its realism, it also poses additional challenges when training deep networks. We attempted several ways of training the ResNet on raw multi-sized pictures, but the training and validation losses were not stable, because of the fixed-size pooling and fully connected layers. To tackle this, we white-padded each training picture to size $640\times640$, centering the content in each instance. Pictures having one or both dimensions larger than $640$ were moved to the test set. This approach has the following advantages: (a) it allows supplying constant-sized pictures to the network, causing it to converge stably; (b) it allows large batch sizes, which improves training; (c) it agrees with the experiences of the picture raters, since AMT renders white borders around pictures that do not occupy the full webpage's width.
\noindent \textbf{Training setup:} We divided the picture dataset (and the associated patches and scores) into training, validation and testing sets. Of the collected $39,810$ pictures (and $119,430$ patches), we used about $75\%$ for training ($30$K pictures, along with their $90$K patches), $19\%$ for validation ($7.7$K pictures, $23.1$K patches), and the remainder for testing ($1.8$K pictures, $5.4$K patches). When testing on the validation set, the pictures fed to the trained networks were also white-padded to size $640\times640$. As mentioned earlier, the test set is entirely composed of pictures having at least one linear dimension exceeding $640$. Being able to perform well on larger pictures of diverse aspect ratios was deemed an additional challenge for the models.
\noindent \textbf{Implementation Details:} We used the PyTorch implementation of ResNet-$18$~\cite{torchVision} pre-trained on ImageNet and retained only the CNN backbone during fine-tuning.
To this, we added two pooling layers (adaptive average pooling and adaptive max pooling), followed by two fully-connected (\textit{FC}) layers, such that the final \textit{FC} layer outputs a single score. We used a batch size of $120$ and employed the MSE loss when regressing the single output quality score. We employed the Adam optimizer with $\beta_1=0.9$ and $\beta_2= 0.99$ and a weight decay of $0.01$, and performed full fine-tuning for $10$ epochs. We followed a discriminative learning approach~\cite{howard2018universal}, using a lower learning rate of $3\times10^{-4}$ for the pre-trained backbone, but a higher learning rate of $3\times10^{-3}$ for the head layers. These settings apply to all the models we describe in the following.
\noindent \textbf{Evaluation setup:} Although the baseline model was trained on whole pictures, we tested it on both pictures and patches. For comparison with popular shallow methods, we also trained and tested BRISQUE~\cite{mittal2012no} and the ``completely blind'' NIQE~\cite{niqe}, which does not involve any training. We reimplemented two deep picture quality methods: NIMA~\cite{nima}, which uses a MobileNet-v2~\cite{mobileNetV2} (except that we replaced the output layer to regress a single quality score), and CNNIQA~\cite{cnnIqa}, following the details provided by the authors. As is common practice in the field of picture quality assessment, we report two metrics: (a) the Spearman Rank Correlation Coefficient (\textbf{SRCC}) and (b) the Linear Correlation Coefficient (\textbf{LCC}).
\noindent \textbf{Results:} From Table \ref{tbl:onFlive}, the first thing to notice is the level of performance attained by the popular shallow models (NIQE~\cite{niqe} and BRISQUE~\cite{mittal2012no}), which share the same feature set. The unsupervised NIQE algorithm performed poorly, while BRISQUE did better, yet the reported correlations are far below desired levels. Despite being CNN-based, CNNIQA~\cite{cnnIqa} performed worse than BRISQUE~\cite{mittal2012no}. Our Baseline Model outperformed most methods and competed very well with NIMA~\cite{nima}. The other entries in the table (the RoIPool and Feedback Models) are described later.
\begin{table}[t] \captionsetup{font=scriptsize} \setlength\extrarowheight{1.0pt} \centering \footnotesize \vspace{0.8em}
\begin{tabular}{P{3.1cm}|P{0.85cm}|P{0.85cm}|P{0.85cm}|P{0.85cm}} \hline & \multicolumn{2}{c|}{\textbf{Validation Set}} & \multicolumn{2}{c}{\textbf{Testing Set}} \\ \hline \textbf{Model} & \textbf{SRCC} & \textbf{LCC} & \textbf{SRCC} & \textbf{LCC} \\ \hline NIQE \cite{niqe} & 0.094 & 0.131 & 0.211 & 0.288 \\ BRISQUE \cite{mittal2012no} & 0.303 & 0.341 & 0.288 & 0.373 \\ \hline CNNIQA \cite{cnnIqa} & 0.259 & 0.242 & 0.266 & 0.223 \\ NIMA \cite{nima} & 0.521 & 0.609 & 0.583 & 0.639 \\ \hline Baseline Model (Sec.~\ref{sec:baseline}) & 0.525 & 0.599 & 0.571 & 0.623 \\ RoIPool Model (Sec.~\ref{sec:p2p_model}) & 0.541 & 0.618 & 0.576 & 0.655 \\ Feedback Model (Sec.~\ref{subsec:patchAugQuality}) & \textbf{0.562} & \textbf{0.649} & \textbf{0.601} & \textbf{0.685} \\ \hline \end{tabular}
\caption{\footnotesize{\textbf{Picture quality predictions: } Performance of picture quality models on the full-size validation and test pictures in the new database. A higher value indicates superior performance. NIQE is not trained. \vspace{-0.5em}}} \vspace{-1.5em} \label{tbl:onFlive} \end{table}
Table \ref{tbl:patches} shows the performances of the \textit{same} trained, unmodified models on the associated picture patches of three reduced sizes ($40\%$, $30\%$ and $20\%$ of linear image dimensions).
The Baseline Model maintained or slightly improved its performance across patch sizes, while NIQE continued to lag, despite the greater subject agreement on reduced-size patches (Sec. \ref{subsec:processScores}). The performance of NIMA suffered as the patch sizes decreased. Conversely, BRISQUE and CNNIQA improved as the patch sizes decreased, although they were trained on whole pictures.
\begin{table*}[h] \captionsetup{font=scriptsize} \setlength\extrarowheight{1.0pt} \centering \footnotesize
\begin{tabular}{P{3.1cm}||P{0.7cm}|P{0.7cm}||P{0.7cm}|P{0.7cm}||P{0.7cm}|P{0.7cm}||P{0.7cm}|P{0.7cm}||P{0.7cm}|P{0.7cm}||P{0.7cm}|P{0.7cm}} \hline & \multicolumn{4}{c||}{(a)} & \multicolumn{4}{c||}{(b)} & \multicolumn{4}{c}{(c)} \\ \hline & \multicolumn{2}{c||}{Validation} & \multicolumn{2}{c||}{Test} & \multicolumn{2}{c||}{Validation} & \multicolumn{2}{c||}{Test} & \multicolumn{2}{c||}{Validation} & \multicolumn{2}{c}{Test}\\ \hline \textbf{Model} & \textbf{SRCC} & \textbf{LCC} & \textbf{SRCC} & \textbf{LCC} & \textbf{SRCC} & \textbf{LCC} & \textbf{SRCC} & \textbf{LCC} & \textbf{SRCC} & \textbf{LCC} & \textbf{SRCC} & \textbf{LCC}\\ \hline NIQE \cite{niqe} & 0.109 & 0.106 & 0.251 & 0.271 & 0.029 & 0.011 & 0.217 & 0.109 & 0.052 & 0.027 & 0.154 & 0.031\\ BRISQUE \cite{mittal2012no} & 0.384 & 0.467 & 0.433 & 0.498 & 0.442 & 0.503 & 0.524 & 0.556 & 0.495 & 0.494 & 0.532 & 0.526\\ \hline CNNIQA~\cite{cnnIqa} & 0.438 & 0.400 & 0.445 & 0.373 & 0.522 & 0.449 & 0.562 & 0.440 & 0.580 & 0.481 & 0.592 & 0.475\\ NIMA~\cite{nima} & 0.587 & 0.637 & 0.688 & 0.691 & 0.547 & 0.560 & 0.681 & 0.670 & 0.395 & 0.411 & 0.526 & 0.524\\ \hline Baseline Model (Sec.~\ref{sec:baseline}) & 0.561 & 0.617 & 0.662 & 0.701 & 0.577 & 0.603 & 0.685 & 0.704 & 0.563 & 0.541 & 0.633 & 0.630\\ RoIPool Model (Sec.~\ref{sec:p2p_model}) & 0.641 & 0.731 & 0.724 & 0.782 & 0.686 & 0.752 & 0.759 & 0.808 & 0.733 & 0.760 & 0.769 & 0.792\\ Feedback Model (Sec.~\ref{subsec:patchAugQuality}) & \textbf{0.658} & \textbf{0.744} & \textbf{0.726} & \textbf{0.783} & \textbf{0.698} & \textbf{0.762} & \textbf{0.770} & \textbf{0.819} & \textbf{0.756} & \textbf{0.783} & \textbf{0.786} & \textbf{0.808} \\ \hline \end{tabular}
\caption{\footnotesize{\textbf{Patch quality predictions: } Results on (a) the largest patches ($40\%$ of linear dimensions), (b) middle-size patches ($30\%$ of linear dimensions) and (c) smallest patches ($20\%$ of linear dimensions) in the validation and test sets. Same protocol as used in Table \ref{tbl:onFlive}. \vspace{-1em}}} \label{tbl:patches} \end{table*}
\subsection{RoIPool: a picture $+$ patches model} \label{sec:p2p_model}
Next, we developed a new type of picture quality model that leverages both picture and patch quality information. Our ``RoIPool Model'' is designed in the same spirit as Fast/Faster R-CNN~\cite{fastRCNN, fasterRCNN}, which were originally developed for object detection. As in Fast R-CNN, our model has an \textit{RoIPool} layer, which affords the flexibility to aggregate features at both patch and picture scales. However, it differs from Fast R-CNN~\cite{fastRCNN} in three important ways. First, instead of regressing bounding boxes for detection, we predict full-picture and patch quality. Second, Fast R-CNN performs multi-task learning with two separate heads, one for image classification and another for detection; our model instead shares a single head between patches and images. This was done to allow sharing of the ``quality-aware'' weights between pictures and patches.
Third, while both heads of Fast R-CNN operate solely on features from RoI-pooled region proposals, our model pools over the entire picture to conduct global picture quality prediction.
\noindent\textbf{Implementation details:} As in~Sec.~\ref{sec:baseline}, we added an RoIPool layer followed by two fully-connected layers to the pre-trained CNN backbone of ResNet-18. The output size of the RoIPool unit was fixed at $2\times2$. All of the hyper-parameters are the same as detailed in Sec.~\ref{sec:baseline}.
\noindent\textbf{Train and test setup:} Recall that we sampled $3$ patches per image and obtained picture and patch subjective scores (Sec.~\ref{sec:dataset}). During training, the model receives the following input: (a) the image, (b) the location coordinates \texttt{(left, top, right, bottom)} of all $3$ patches, and (c) the ground truth quality scores of the image and patches. At test time, the RoIPool Model can process both pictures and patches of any size. Thus, it offers the advantage of predicting the qualities of any number of patches at specified locations, in parallel with the picture predictions.
\begin{figure}[t] \centering \vspace{-0.2in} \begin{minipage}[b]{.42\linewidth} \subcaptionbox{} {\hspace{0.1em}\includegraphics[width=0.48\linewidth, trim={2.1in 3.5in 2.55in 2.1in}, clip]{figures/figure8/baseline_model2_vert.pdf}\vspace{-0.5em}} \subcaptionbox{} {\hspace{-1em}\includegraphics[width=0.75\linewidth, trim={1.8in 2.85in 2.1in 1.8in}, clip]{figures/figure8/roi_pool_model2_vert.pdf}\vspace{-0.5em}}% \end{minipage}% \hspace{-3.5em} \begin{minipage}[t]{.71\linewidth} \subcaptionbox{} {\includegraphics[width=\linewidth, trim={1.5in 2.8in 1.5in 1.5in}, clip]{figures/figure8/feedback_model2_vert.pdf}\vspace{-1em}}% \end{minipage}% \vspace{-0.05in} \caption{% \footnotesize{Illustrating the different deep quality prediction models we studied. (a) \textbf{Baseline Model:} ResNet-$18$ with a modified head, trained on pictures (Sec.~\ref{sec:baseline}). (b) \textbf{RoIPool Model:} trained on both picture and patch qualities (Sec.~\ref{sec:p2p_model}). (c) \textbf{Feedback Model:} the local quality predictions are fed back to improve the global quality predictions (Sec.~\ref{subsec:patchAugQuality}).\vspace{-2em}} \label{fig:roiPool}% }% \end{figure}
\noindent\textbf{Results:} As shown in Table \ref{tbl:onFlive}, the RoIPool Model yields better results than the Baseline Model and NIMA on whole pictures, on both the validation and test sets. When the same trained RoIPool Model was evaluated on patches, the performance improvement was more significant. Unlike that of the Baseline Model, the performance of the RoIPool Model increased as the patch sizes were reduced. This suggests that: (i) the RoIPool Model is more scalable than the Baseline Model, hence better able to predict the qualities of pictures of varying sizes; (ii) accurate patch predictions can help guide global picture prediction, as we show in Sec. \ref{subsec:patchAugQuality}; (iii) this novel picture quality prediction architecture allows computing local quality maps, which we explore next.
\subsection{Predicting perceptual quality maps} \label{sec:qualityMaps}
Next, we used the RoIPool Model to produce patch-wise quality maps on each image, since it is flexible enough to make predictions on any specified number of patches. This unique picture quality map predictor is the first deep model that is learned from true human-generated picture and patch labels, rather than from proxy labels delivered by an algorithm, as in \cite{fullyDeepIQA}.
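To make the architecture concrete, the following is a minimal PyTorch sketch of an RoIPool-style quality predictor in the spirit of Sec.~\ref{sec:p2p_model}. It is our illustration rather than the released implementation; in particular, passing the full-image box as an additional RoI is one simple way to realize the whole-picture branch:
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision
from torchvision.ops import roi_pool

class RoIPoolQuality(nn.Module):
    """One quality score per RoI; the full-image box doubles as
    the whole-picture prediction."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(pretrained=True)
        # keep only the convolutional feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(       # shared by pictures and patches
            nn.Flatten(),
            nn.Linear(512 * 2 * 2, 512), nn.ReLU(),
            nn.Linear(512, 1))

    def forward(self, images, boxes):
        # images: (N, 3, H, W); boxes: list of (K_i, 4) tensors holding
        # (left, top, right, bottom) in pixel coordinates
        fmap = self.features(images)     # stride-32 feature map
        scale = fmap.shape[-1] / images.shape[-1]
        rois = roi_pool(fmap, boxes, output_size=(2, 2),
                        spatial_scale=scale)
        return self.head(rois).squeeze(-1)  # one score per box
\end{verbatim}
At test time any number of boxes may be supplied, which is exactly what the quality-map generation described next exploits.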
We generated picture quality maps in the following manner: (a) we partitioned each picture into a grid of $32\times32$ non-overlapping blocks, thus preserving the aspect ratio (this step can easily be extended to process denser, overlapping, or smaller blocks); (b) each block's boundary coordinates \texttt{(left, top, right, bottom)} were provided as input to the RoIPool layer to predict block quality scores; (c) for visualization, we applied bi-linear interpolation to the block predictions, and represented the results as magma color maps. We then $\alpha$-blended the quality maps with the original pictures ($\alpha = 0.8$). From Fig.~\ref{fig:qualMaps}, we may observe that the RoIPool Model is able to accurately distinguish regions that are blurred, washed-out, or poorly exposed from high-quality regions. Such spatially localized quality maps have great potential to support applications like image compression, image retargeting, and so on.
\subsection{A local-to-global feedback model}\label{subsec:patchAugQuality}
As noted in Sec.~\ref{sec:qualityMaps}, local patch quality has a significant influence on global picture quality. Given this, how do we effectively leverage local quality predictions to further improve global picture quality? To address this question, we developed a novel architecture referred to as the Feedback Model (Fig.~\ref{fig:roiPool}(c)). In this framework, the pre-trained backbone has two branches: (i) an RoIPool layer followed by an FC layer for local patch and image quality prediction (\texttt{Head0}), and (ii) a global image pooling layer. The predictions from \texttt{Head0} are concatenated with the pooled image features from the second branch and fed to a new FC layer (\texttt{Head1}), which makes the whole-picture predictions. From Tables \ref{tbl:onFlive} and \ref{tbl:patches}, we observe that the performance of the Feedback Model on both pictures and patches is improved even further by this local-to-global feedback architecture. The model consistently outperformed \underline{all} shallow and deep quality models. The largest improvement is made on the whole-picture predictions, which was the main goal. The improvement afforded by the Feedback Model is understandable from a perceptual perspective: while quality perception by a human involves low-level processes, it also involves the viewer casting their foveal gaze at discrete, localized patches of the picture being viewed. The overall picture quality is likely an integrated combination of the quality information gathered around each fixation point, similar to the Feedback Model.
\noindent \textbf{Failure cases:} While our model attains good performance on the new database, it does make errors in prediction. Fig.~\ref{fig:failure_eg}(a) shows a picture that was considered of very poor quality by the human raters (MOS $=18$), which the Feedback Model overrated with a moderate predicted score of $57$. This may have been because the subjects were less forgiving of the blurred moving object, which may have drawn their attention. Conversely, Fig.~\ref{fig:failure_eg}(b) is a picture that was underrated by our model, receiving a predicted score of $68$ against the subject rating of $82$. It may have been that the subjects discounted the haze in the background in favor of the clearly visible waterplane. These cases further reinforce the difficulty of perceptual picture quality prediction and highlight the strength of our new dataset.
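Returning to the architecture of Fig.~\ref{fig:roiPool}(c), the local-to-global wiring itself is compact. The following sketch is our illustration with hypothetical layer sizes; \texttt{n\_local}~$=4$ assumes that \texttt{Head0} emits one score for the image plus one for each of its three patches:
\begin{verbatim}
import torch
import torch.nn as nn

class FeedbackHead(nn.Module):
    """Head1: concatenates Head0's local quality predictions with
    globally pooled backbone features, then regresses the final
    whole-picture score. Sizes are illustrative."""
    def __init__(self, n_local=4, feat_dim=512):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)         # global image pooling
        self.fc = nn.Linear(feat_dim + n_local, 1)  # Head1

    def forward(self, fmap, local_scores):
        # fmap: (N, feat_dim, h, w) backbone features
        # local_scores: (N, n_local) predictions from Head0
        g = self.pool(fmap).flatten(1)              # (N, feat_dim)
        x = torch.cat([g, local_scores], dim=1)
        return self.fc(x).squeeze(-1)               # (N,) picture scores
\end{verbatim}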
\begin{figure}[t] \vspace{-1em} \begin{center}$ \vspace{-0.3em} \begin{array}{cc} \hspace{-1em} \includegraphics[height=0.39\linewidth, trim = {0.2in, 0.1in, 0.2in, 0.11in}, clip]{figures/failure_cases/FailureCase1.pdf} & \hspace{-0.8em} \includegraphics[height=0.39\linewidth, trim = {0.1in, 0.1in, 0.2in, 0.11in}, clip]{figures/failure_cases/FailureCase2.pdf} \\ \tiny{(a)} & \tiny{(b)}\\ \vspace{-1.5em} \end{array}$ \caption{\footnotesize{\textbf{Failure cases:} Examples where the Feedback Model's predictions differed the most from the ground truth MOS.}} \vspace{-0.3in} \label{fig:failure_eg} \end{center} \end{figure}
\subsection{Cross-database comparisons} \label{sec:cross_data}
Finally, we evaluated the Baseline (Sec.~\ref{sec:baseline}), RoIPool (Sec.~\ref{sec:p2p_model}), and Feedback (Sec.~\ref{subsec:patchAugQuality}) Models, as well as the other baselines -- all trained on the proposed dataset -- on the two other, smaller ``in-the-wild'' databases, CLIVE \cite{clive} and KonIQ-$10$K \cite{koniq}, \underline{without any fine-tuning}. From Table \ref{tbl:cliveKoniq}, we may observe that all three of our models, trained on the proposed dataset, transfer well to the other databases. The Baseline, RoIPool, and Feedback Models all outperformed the shallow and other deep models \cite{nima, cnnIqa} on both datasets. This is a powerful result that highlights the representativeness of our new dataset and the efficacy of our models. The best reported numbers on both databases~\cite{mixedDBiqa} were obtained using a Siamese ResNet-34 backbone trained and tested on the same datasets (along with $5$ other datasets). While that model reportedly attains an SRCC of $0.851$ on CLIVE and $0.894$ on KonIQ-$10$K, we achieved the above results by directly applying pre-trained models, thereby not allowing them to adapt to the distortions of the test data. When we also trained and tested on these datasets, our picture-based Baseline Model also performed at a similar level, obtaining an SRCC of $0.844$ on CLIVE and $0.890$ on KonIQ-$10$K.
\begin{table}[t] \captionsetup{font=scriptsize} \setlength\extrarowheight{1.0pt} \centering \footnotesize
\begin{tabular}{P{3.1cm}||P{0.7cm}|P{0.7cm}||P{0.7cm}|P{0.7cm}} \hline & \multicolumn{4}{c}{\textbf{Validation Set}} \\ \hline & \multicolumn{2}{c||}{\textbf{CLIVE} \cite{clive}} & \multicolumn{2}{c}{\textbf{KonIQ} \cite{koniq}} \\ \hline \textbf{Model} & \textbf{SRCC} & \textbf{LCC} & \textbf{SRCC} & \textbf{LCC} \\ \hline NIQE \cite{niqe} & 0.503 & 0.528 & 0.534 & 0.509 \\ BRISQUE \cite{mittal2012no} & 0.660 & 0.621 & 0.641 & 0.596 \\ \hline CNNIQA \cite{cnnIqa} & 0.559 & 0.459 & 0.596 & 0.403 \\ NIMA \cite{nima} & 0.712 & 0.705 & 0.666 & 0.721 \\ \hline Baseline Model (Sec.~\ref{sec:baseline}) & 0.740 & 0.725 & 0.753 & 0.764 \\ RoIPool Model (Sec.~\ref{sec:p2p_model}) & 0.762 & \textbf{0.775} & 0.776 & 0.794 \\ Feedback Model (Sec.~\ref{subsec:patchAugQuality}) & \textbf{0.784} & 0.754 & \textbf{0.788} & \textbf{0.808} \\ \hline \end{tabular}
\caption{\footnotesize{\textbf{Cross-database comparisons:} Results when models trained on the new database are applied to CLIVE \cite{clive} and KonIQ \cite{koniq} \textbf{without fine-tuning.}\vspace{-1.2em}}} \vspace{-1.5em} \label{tbl:cliveKoniq} \end{table}
\part*{Supplementary Material -- \\ From Patches to Pictures (PaQ-2-PiQ): \\ Mapping the Perceptual Space of Picture Quality}
\section{Performance Summary}
The performance of NIMA~\cite{nima} reported in the paper used a default MobileNet~\cite{mobileNetV2} backbone.
For a fair comparison against the proposed family of models, which used a ResNet-$18$ backbone, we report the performance of NIMA (ResNet-$18$) on the images (Table~\ref{tbl:onFlive}) and patches (Table~\ref{tbl:patches}) of the new database, as well as the cross-database performance on CLIVE~\cite{clive} and KonIQ-$10$K~\cite{koniq} (Table~\ref{tbl:cliveKoniq}). That the proposed models either compete well with or outperform the other models in all categories further demonstrates their quality prediction strength across multiple databases containing diverse image distortions.
\input{tbl/FLIVE.tex} \input{tbl/patches.tex} \input{tbl/CLIVE_KonIQ.tex}
\section{Information on Model Parameters} \input{tbl/total_params.tex}
\section{Picture MOS vs Patch MOS scatter plots}
\begin{figure}[h] \begin{center} $\begin{array}{cc} \includegraphics[height= 0.35\linewidth,width=0.46\linewidth]{figures-suppl/im_p2.png} & \hspace{-1em} \includegraphics[height= 0.35\linewidth,width=0.46\linewidth]{figures-suppl/im_p3.png} \\ \end{array}$ \vspace{-1em} \caption{\scriptsize{\textbf{Scatter plots of picture MOS vs patch MOS}. Left: Scatter plot of picture MOS vs. MOS of the second largest patch ($30\%$ of linear dimension) cropped from each same picture. Right: Scatter plot of picture MOS vs. MOS of the smallest patch ($20\%$ of linear dimension) cropped from each same picture.}} \vspace{-2em} \label{fig:patchCorrelSuppl} \end{center} \end{figure}
\section{Amazon Mechanical Turk Interface}
We allowed the workers on Amazon Mechanical Turk (AMT) to preview the ``Instructions'' page (as shown in Fig.~\ref{fig:instructions}) before they accepted to participate in the study. Once accepted, they were tasked with rating the quality of images on a Likert scale marked with ``Bad'', ``Poor'', ``Fair'', ``Good'' and ``Excellent'', as demonstrated in Figs.~\ref{fig:training1} and~\ref{fig:testing1}. A similar user interface was used for the patch quality rating task.
\begin{figure}[h] \begin{center} \includegraphics[width=0.9\linewidth, height=\textheight, trim={0em 0em 1.5em 0em},clip]{figures-suppl/drawing-6.pdf} \caption{\scriptsize{\textbf{AMT task:} The ``Instructions'' page shown to workers at the beginning of each HIT.}} \label{fig:instructions} \end{center} \end{figure}
\begin{figure}[h] \begin{center} \includegraphics[width=\linewidth, trim={0em 0em 1.5em 0em},clip]{figures-suppl/training_3.png} \vspace{-2em} \caption{\scriptsize{\textbf{AMT task:} Training session interface of the AMT task experienced by crowd-sourced workers when rating pictures.}} \vspace{-2em} \label{fig:training1} \end{center} \end{figure}
\begin{figure}[h] \begin{center} \includegraphics[width=\linewidth, trim={0em 0em 1.5em 0em},clip]{figures-suppl/testing_1.png} \vspace{-2em} \caption{\scriptsize{\textbf{AMT task:} Testing session interface of the AMT task experienced by crowd-sourced workers when rating pictures.}} \vspace{-2em} \label{fig:testing1} \end{center} \end{figure}
\section{Conclusion} \label{sec:conclusion}
In this paper, we have demonstrated a novel method for music-to-body-movement generation in 3-D space. Different from previous studies, which merely apply conventional recurrent neural networks to this task, our model incorporates a U-net with a self-attention mechanism to enrich the expressivity of the skeleton motion. Also, we design a refinement network specifically for the right wrist to generate more reasonable bowing movements. Overall, our proposed model achieves promising results compared to the baselines in the quantitative evaluation, and the generated body movement sequences are perceived as reasonable arrangements in the subjective evaluation, especially by participants with music expertise. Codes, data, and related materials are available at the project link.\footnote{https://github.com/hsuankai/Temporally-Guided-Music-to-Body-Movement-Generation}
\section{Data and pre-processing}
In this section, we introduce the procedure used to compile a new violin performance dataset for this study. The data pre-processing procedure is summarized in Figure~\ref{pre-processing}.
\subsection{Dataset}
We present a newly collected dataset containing 140 violin solo videos with a total length of around 11 hours. Fourteen selected violin solo pieces were performed by 10 violin-major students from a music college. This dataset therefore contains diverse performed versions and individual musical interpretations based on the same set of repertoire, and it is specifically designed for exploring the non-one-to-one correspondence between body movement and music audio. The selected repertoire contains 12 conventional Western classical pieces for violin solo, ranging from the Baroque to post-Romanticism, plus two non-Western folk songs. We collected 10 different versions performing identical music pieces, which allows us to derive 10 sets of bowing and fingering arrangements, as well as pseudo-labels (i.e., the skeleton motion data extracted by the pose estimation method) for each music piece. The multi-version design of the dataset works together with our data splitting strategy to explore the diverse possible motion patterns corresponding to an identical music piece. The skeleton and music data are available at the project link (see Section \ref{sec:conclusion}).
\subsection{Audio feature extraction}
We apply {\it librosa}, a Python library for music signal processing \cite{mcfee2015librosa}, to extract audio features. Each music track is sampled at 44.1 kHz, and the short-time Fourier transform (STFT) is performed with a sliding window (length = 4096 samples; hop size = 1/30 sec). Audio features are then extracted from the STFT, including 13-D Mel-Frequency Cepstral Coefficients (MFCC), the logarithmic mean energy (a representation of sound volume), and their first-order temporal derivatives, resulting in a feature dimension of 28.
\subsection{Skeletal keypoints extraction}
The state-of-the-art pose estimation method of \cite{pavllo20193d} is adopted to extract the 3-D positions of the violinists' 15 body joints, resulting in a 45-D body joint vector for each time frame. The 15 body joints are: head, nose, thorax, spine, right shoulder, left shoulder, right elbow, left elbow, right wrist, left wrist, hip, right hip, left hip, right knee, and left knee. The joints are extracted frame-wise at the video's frame rate (30 fps). All the joint data are normalized, such that the mean of all joints over all time instances is zero.
The normalized joint data are then smoothed over each joint using a median filter (window size = 5 frames).
\subsection{Data pre-processing}
The extracted audio and skeletal data are synchronized with each other at a frame rate of 30 fps. To facilitate the training process, the input data are divided into segments according to the basic metrical unit of the music. Beat positions serve as the reference for slicing the data segments, considering the fact that the arrangement of bowing strokes is highly related to the metrical position. To obtain beat labels from the audio recordings, we first derive the beat positions from the MIDI file of each musical piece, and the dynamic time warping (DTW) algorithm is applied to align the beat positions between the MIDI-synthesized audio and the recorded audio performed by the human violinists. The beat positions are then used for the data segmentation. Each data segment starts from a beat position and has a length of 900 frames, i.e., 30 seconds. According to the average tempo in the dataset, 30 seconds is slightly longer than 16 bars of music, which provides sufficient context for our task. All the segmented data are normalized in the feature dimension by z-score. For the data splitting, a leave-one-piece-out (i.e., 14-fold cross-validation) scheme is performed by assigning each of the 14 pieces to the testing set in turn. We take the recordings of one specific violinist for training and validation, and take the recordings of the remaining nine violinists for testing. For the training and validation data, we choose the recordings played by the violinist whose performance technique is the best among all, according to an expert's opinion. Within the training and validation set, 80\% of the sequences are used for training and 20\% for validation. This 14-fold cross-validation procedure results in 14 models. Each model is evaluated on its held-out piece, as performed by the nine violinists in the testing set. The results are then discussed by comparing the nine different performance versions and their corresponding ground truths. This evaluation procedure reflects the nature of violin performance, in which multiple possible motion patterns may correspond to an identical music piece in different musicians' recordings. For the cross-dataset evaluation, we also evaluate our model on the URMP dataset \cite{li2018creating}, which has been used in previous studies on music-to-body-movement generation \cite{li2018skeleton,liu2020body}. The URMP dataset comprises 43 music performance videos with the individual instruments recorded in separate tracks, and we choose the 33 tracks containing solo violin performances as our test data for the cross-dataset evaluation. For reproducibility, the list of the 33 chosen pieces is provided at the project link.
\section{Experiment}
\subsection{Baselines}
We compare our method with two baseline models, which share a similar objective with our work: generating conditioned body movement based on the given audio data.
\textbf{Audio to body dynamics.} Both our work and \cite{shlizerman2018audio} aim to generate body movement in music performance. \cite{shlizerman2018audio} predicts plausible playing movement based on piano and violin audio. Their model consists of a single LSTM layer with time delay and a fully connected layer with dropout. We follow their setup and use MFCC features as the input.
It should be noted that while PCA is applied in \cite{shlizerman2018audio} to reduce the dimensionality of the hand joints, PCA is not applicable to our task, since our task is to generate the full-body motion, instead of only the hand motion. Their work takes the estimated 2-D arm and hand joint positions from video as the pseudo-labels, whereas we extract 15 body joints in 3-D space.
\textbf{Speech to gesture.} Another work, \cite{ginosar2019learning}, aims to predict a speaker's gestures based on the input speech audio signal. Compared to our task, their predicted gesture motions are short segments ranging from 4 to 12 seconds, while our music pieces generally range from one to ten minutes. A convolutional U-net architecture is applied in their work, and a motion discriminator is introduced to prevent the output from collapsing to a single motion. Although applying the discriminator may increase the distance between the generated motion and the ground truth (i.e., the $L_1$ loss), the model is capable of producing more realistic motion. In this paper, we only take their model without the discriminator as the baseline for comparison.
\subsection{Evaluation metrics}
So far, there is no standard way to measure the performance of a body movement generation system. To provide a comprehensive comparison among the different methods, we propose a rich set of quantitative metrics that measure the overall distance between the skeletons and also the accuracy of bowing attack inference.
$\boldsymbol{L_1}$ \textbf{and PCK.} While the $L_1$ distance is the objective function in the training stage, we also use it to evaluate the difference between the generated motion and the ground truth. Note that we report the results by averaging over the 45-D joint vector and across all frames. Considering that the motion of the right wrist is much larger compared to the other body joints, we calculate another $L_1$ hand loss for the 3-D wrist joint alone. The Percentage of Correct Keypoints (\textbf{PCK}) was applied to evaluate generated gestures from speech in a prior work \cite{ginosar2019learning}, and we adapt PCK to 3-D coordinates in this paper. In the computation of PCK, a predicted keypoint is defined as correct if it falls within $\alpha\times \max(h,w,d)$ pixels of the ground truth keypoint, where $h$, $w$ and $d$ are the height, width and depth of the person bounding box; we average the results using $\alpha=0.1$ and $\alpha=0.2$ as the final PCK score.
\textbf{Bowing attack accuracy.} A bowing attack indicates the time slot in which the direction of the hand movement changes. We first take the two right-wrist joint sequences of length $L$ as $\hat{y}^{(rw)}$ and $y^{(rw)}$. Note that $\hat{y}^{(rw)}$ here represents only one coordinate of the right-wrist joint. For each coordinate, we then compute the direction $D(i)$ of both sequences as:
\begin{equation}\label{eq:9} D(i) = \begin{cases} 1 & \quad \text{if } y^{(rw)}(i+1) - y^{(rw)}(i) > 0,\\ 0 & \quad \text{if } y^{(rw)}(i+1) - y^{(rw)}(i) \leq 0. \end{cases} \end{equation}
Accordingly, we get the right-wrist joint directions for the generated results, $\hat{D}(i)$, and the ground truth, $D(i)$, respectively. Derived from the bowing direction $D(i)$, the bowing attack $A(i)$ at time $i$ is set to 1 if the direction $D(i)$ is different from $D(i-1)$:
\begin{equation} \label{eq:10} A(i) = \begin{cases} 1 & \quad \text{if } D(i) - D(i-1) \neq 0,\\ 0 & \quad \text{otherwise}. \end{cases} \end{equation}
Finally, we compare the predicted bowing attacks $\hat{A}(i)$ and the ground truth ones $A(i)$.
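Eqs.~(\ref{eq:9}) and (\ref{eq:10}) amount to two successive difference operations along one coordinate. A minimal NumPy transcription (our sketch, omitting the tolerance-based matching described next) is:
\begin{verbatim}
import numpy as np

def bowing_attacks(y_rw):
    """y_rw: 1-D array holding one coordinate of a right-wrist
    trajectory. Returns a binary array marking direction changes."""
    d = (np.diff(y_rw) > 0).astype(int)  # D(i): 1 if moving upward
    a = np.abs(np.diff(d))               # A(i): 1 where D flips
    return a
\end{verbatim}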
Additionally, we take a tolerance $\delta$: for a ground truth bowing attack located at time $i$, the ground truth is set to 1 over the range $[i-\delta, i+\delta]$, so that a predicted bowing attack $\hat{A}(i)$ counts as a true positive (i.e., correct) prediction if a real bowing attack is located within the range $[i-\delta, i+\delta]$; otherwise, it is a false prediction. Notice that each ground truth bowing attack is matched only once, which means that if all the real bowing attacks near $\hat{A}(i)$ have already been matched, then $\hat{A}(i)$ is a false prediction. A previous work~\cite{liu2020body} also introduced bowing attack accuracy as an evaluation metric, and set the tolerance value to $\delta=10$ (i.e., 0.333 s). We consider that the tolerance should be stricter, and set $\delta=3$ (i.e., 0.1 s) in this paper. The F1-scores for the bowing attack labels on the x, y, and z axes (width, height, and depth) are calculated and denoted Bowx, Bowy and Bowz, respectively, whereas the average of the three bowing attack accuracies is denoted Bow avg.
\textbf{Cosine similarity.} In this paper, our goal is not to generate playing movement identical to the ground truth, and the cosine similarity is therefore a suitable measurement for evaluating the general trend of the bowing movement. We compute the cosine similarity of the 3-D right-wrist joint between the generated results and the ground truth, and then take the average over the three coordinates across all frames. It should be noted that the above evaluation metrics cannot measure the \emph{absolute} performance of body movement generation, since there is no standard and unique ground truth. Instead, the above evaluation metrics measure the \emph{consistency} between the generated results and one version of human performance.
\subsection{Quantitative results}
We compare our proposed method with the two baselines~\cite{shlizerman2018audio}~\cite{ginosar2019learning} in terms of the average performance over the 14-fold test (as shown in Table~\ref{tab:quantitative results}). Three variants of the proposed method are presented. First, {\it Unet1} represents the model with one single block (i.e., $M=1$, see Figure \ref{overview}) composed of the U-net with self-attention. Second, {\it Unet1 + FFN + Refine Network} adds a position-wise feed-forward network and a refine network after {\it Unet1}. The last one, {\it Unet2 + FFN + Refine Network}, adopts two U-net blocks ($M=2$) instead. The values reported in Table~\ref{tab:quantitative results} can be understood based on a reference measurement: the mean right-arm length in our dataset is 0.13. For example, an average $L_1$ value of 0.0391 means that the average $L_1$ distance between the ground truth and the prediction is around 0.0391/0.13 $\approx$ 30\% of the length of the right arm. In addition, it should be noted that the $L_1$ avg values are generally smaller than the $L_1$ hand avg values, owing to the fact that the joints on the trunk mostly move by small amounts, whereas the right wrist exhibits obvious bowing motion covering a wide range. It can be observed from the table that our model outperforms {\it A2B} both in $L_1$ and PCK, which indicates that our U-net based network with the self-attention mechanism improves the learning of the ground truth movement. Although {\it S2G} has competent performance in terms of $L_1$ avg and PCK, our model boosts the bowing attack accuracy by more than 4\% compared to {\it S2G}.
Returning to Table~\ref{tab:quantitative results}: after adding the position-wise feed-forward network and the refine network, we obtain better performance in $L_1$ hand, bowing attack x, y, z, and cosine similarity, which shows that the two components play a critical role in learning the hand movement. Further, stacking two U-net blocks increases the bowing attack accuracy by about 1\%. Overall, stacking two U-net blocks and adding the two components achieves the best results on most metrics. This best model outperforms the baseline {\it A2B} model significantly in a two-tailed t-test ($p=8.21\times 10^{-8}$, d.f.\ $=250$). \subsection{Cross-dataset evaluation} To explore whether the methodology and the designed process can adapt to other scenarios (e.g., different numbers of recorded joints, different positions such as standing or sitting, etc.), a cross-dataset evaluation is performed on the URMP dataset. The same process described in Section 3 is applied to extract audio features and to estimate the violinists' skeleton motion. However, the URMP dataset contains only 13 body joints, whereas 15 joints are extracted from our recorded videos. Considering the different skeleton layouts of the two datasets, only the averaged bowing attack accuracy and the accuracies in the three directions are computed, as shown in Table~\ref{tab:urmp}. Our method (i.e., \emph{Unet2 + FFN + Refine Network}) in Table~\ref{tab:urmp} denotes the best model from the quantitative results. Our proposed method outperforms the two baselines in bowing attack accuracy, which demonstrates that our model adapts well to different scenarios and datasets. \begin{table}[t] \caption{Comparison of the baselines and the proposed model evaluated on audio input with varying speeds. `1x' means the original speed, `2x' means double speed, and so on.} \begin{tabular}{|l|ccccc|} \hline & 0.5x & 0.75x & 1x & 1.5x & 2x \\ \hline\hline \textit{A2B}~\cite{shlizerman2018audio} & 0.4024 & 0.4217 & 0.4357 & 0.4807 & 0.4971 \\ \textit{S2G}~\cite{ginosar2019learning} & 0.3591 & 0.3744 & 0.3921 & 0.4007 & 0.4111 \\ \hline \textit{Ours} & \textbf{0.4400} & \textbf{0.4367} & \textbf{0.4656} & \textbf{0.4896} & \textbf{0.5182}\\ \hline \end{tabular} \label{tab:varied speed} \end{table} \subsection{Robustness test} To test the robustness of our model to tempo variation, we compare the average bowing attack F1-scores on the same music pieces played at different tempi; the performance of a robust model should be invariant across tempi. Note that only the longest piece in the dataset is tested in this experiment, and all results shown in Table~\ref{tab:varied speed} are Bow avg values only. As shown in Table~\ref{tab:varied speed}, our proposed model achieves better results than the two baselines in all five tempo settings, which verifies the robustness of the proposed method under different performance tempi. The bowing attack accuracy tends to improve with faster tempo, since a predicted attack then has a better chance of falling within the range $[i-\delta, i+\delta]$. \subsection{Subjective evaluation} Since {\it A2B} shows better performance than {\it S2G}, we take only the ground truth, {\it A2B}, and our model for comparison in the subjective evaluation. The evaluation material consists of 14 performances played by one randomly selected violinist (length of each performance: 94 seconds). The ground truth and the movements generated by {\it A2B} and our model are presented in random order for each music piece.
The participants are asked to rank ``the similarity to a human being's playing'' and ``the rationality of the movement arrangement'' among the three versions. Of the 36 participants in this evaluation, 38.9\% have played the violin, and 41.7\% have received music education or worked in a music-related job. The results for all participants are shown in Figure~\ref{subjective_evaluation_all}, whereas the results for the participants who have played the violin are shown in Figure~\ref{subjective_evaluation_vio}. Regarding rationality, our results and the ground truth are rated as much more reasonable than {\it A2B}, and the difference is more evident in Figure~\ref{subjective_evaluation_vio} than in Figure~\ref{subjective_evaluation_all}. For the extent of being human-like, the results in Figure~\ref{subjective_evaluation_vio} closely resemble those for rationality, whereas no obvious trend is observed in Figure~\ref{subjective_evaluation_all}. This may be owing to the limitation that only the violinist's skeleton is included in the evaluation. In future work, we plan to incorporate the violin bow as part of our architecture to generate more vivid animations. \begin{figure}[t] \centering\includegraphics[width=3.5in]{images/subjective_evaluation_all.png} \caption{The subjective evaluation for all participants. Left: The extent to which the playing movement is human-like. Right: The rationality of the playing movement.}\label{subjective_evaluation_all} \end{figure} \begin{figure}[] \centering\includegraphics[width=3.5in]{images/subjective_evaluation_vio.png} \caption{The subjective evaluation restricted to participants who have played the violin. Left: The extent to which the playing movement is human-like. Right: The rationality of the playing movement.}\label{subjective_evaluation_vio} \end{figure} \subsection{Qualitative results} For a more comprehensive demonstration of our results, we illustrate one example of the skeletons generated by the proposed method and the baseline method, together with the ground truth, as shown in Figure~\ref{qualitative evaluation2}. In this example, we choose one bar from one of the music pieces in the testing data and show the corresponding skeletons. Figure~\ref{qualitative evaluation2} clearly shows that the movements of the ground-truth skeletons are consistent with the down-bow and up-bow marks in the score. The skeletons generated by the proposed model also exhibit consistent bowing directions in the right hand, while the skeletons generated by {\it A2B} do not show any change of bowing direction within this music segment. \begin{figure}[t] \centering\includegraphics[width=0.5\textwidth]{images/qualitative_evaluation3.png} \caption{Illustration of the generated playing movement and the ground truth with the corresponding sheet music. $\sqcap$ and $\vee$ indicate down-bow and up-bow, respectively. The example is selected from the 20th bar of the folk song \emph{Craving for the Spring Wind} composed by Teng Yu-Shian.}\label{qualitative evaluation2} \end{figure} \section{Introduction} Music performance is typically presented in both audio and visual forms. A musician's body movement acts as the pivot connecting the audio and visual modalities: musicians employ their body movement to produce the performed sound, and such movement also serves as a means to communicate their musical ideas to the audience.
As a result, the analysis, interpretation, and modeling of musicians' body movement has been an essential research topic in interdisciplinary fields such as music training \cite{farber1987discovering, pierce1997four}, music recognition \cite{li2017audiovisual,huang2019identifying}, biomechanics, and music psychology \cite{davidson2012bodily,huang2017conducting,thompson2012exploring,wanderley2005musical,Burger2013PerceivedEmotions}. Motion capture and pose estimation techniques \cite{pavllo20193d} have facilitated the quantitative analysis of body motion by providing data that describe how each body joint moves over time. Beyond such analysis-oriented research, an emerging focus is to develop generative models that automatically generate body movements from music. Such a technique can be applied to music performance animation and human-computer interaction platforms, in which a virtual character's body movement can be reconstructed from the audio signal alone, without the physical presence of a human musician. Several studies endeavor to generate body movement from audio and music signals, including generating pianists' and violinists' 2-D skeletons from music audio \cite{shlizerman2018audio,li2018skeleton,liu2020body}, generating hand gestures from conversational speech \cite{ginosar2019learning}, and generating choreographic movements from music \cite{kakitsuka2016choreographic,lee2019dancing}. In this paper, we focus on the generation of violinists' body movement. Violinists' body movement is highly complex and intertwined with the performed sound. To investigate musical movement, previous research identified three main types of body movement in music performance: first, the \emph{instrumental movement} leads to the generation of the instrument sound; second, the \emph{expressive movement} induces visual cues of emotion and musical expressiveness; and third, the \emph{communicative movement} interacts with other musicians and the audience \cite{wanderley2005musical}. Taking a violinist's instrumental movement as an example, a \emph{bow stroke} is a movement executed by the right hand to move the bow across a string. For a bowed note, termed \emph{arco}, there are two typical bowing modes: up-bow (the bow moving upward) and down-bow (the bow moving downward). The arrangement of bow strokes depends on how the musician segments a note sequence. In general, a group of notes marked with a \emph{slur} on the score should be played in one bow stroke. Yet music scores do not usually contain detailed bowing annotations for every note throughout a whole piece; they only provide suggested bowing marks at several important instances, which gives musicians considerable freedom to apply diverse bowing strategies according to their own musical interpretations. Despite this flexibility in performance practice, the bowing configuration should still be arranged in a sensible manner that reflects the structure of the music composition; an unreasonable \emph{bowing attack} (i.e., the time instance at which the bowing direction changes) can easily be sensed by experienced violinists. Likewise, the left-hand fingering movement is also flexible to a certain extent: an identical note can be played on different strings at different fingering positions, depending on the pitches of the successive notes.
In addition to the instrumental movements (bowing and fingering motion), which are directly constrained by the written note sequence in the music score, the expressive body movements also reflect context-dependent and subject-dependent musical semantics, including the configuration of beat, downbeat, phrasing, valence, and arousal in music \cite{pierce1997four, Burger2013PerceivedEmotions}. In sum, musical body movements have diverse functions and are attached to various types of musical semantics, which leads to a high degree of freedom in movement patterns during a performance. The connection between the performed notes and the body movements (including the right-hand bowing and the left-hand fingering movements) is not a one-to-one correspondence, but is highly mutually dependent. These characteristics not only make it difficult to model the correspondence between music and body movement, but also raise issues regarding the assessment of a generative model: since there is no exact ground truth of body movement for a given music piece, it is not certain whether the audience's perceptual quality can be represented by a simplified training objective (e.g., the distance between a predicted joint position and a joint position taken from one known performance). \begin{figure*}[ht] \centering\includegraphics[width=\textwidth]{images/preprocessing.png} \caption{The full process of data pre-processing.}\label{pre-processing} \end{figure*} In this paper, we propose a 3-D violinist body movement generation system that incorporates musical semantics, including beat timing and a bowing attack inference mechanism. Following the approach of \cite{liu2020body}, we model the trunk and the right-hand segments separately, and further develop this approach into an end-to-end, multi-task learning framework. To incorporate musical semantic information in model training, a beat tracking technique is applied to guide the processing of the input data. Moreover, a state-of-the-art 3-D pose estimation technique is employed to capture the depth information of the skeleton joints, which is critical for identifying bowing attacks. The pose estimation process provides reliable \emph{pseudo-label} motion data to facilitate the training process. To investigate the non-one-to-one motion-music correspondence, we propose a new dataset which contains multiple performance versions, by different violinists, of the same set of repertoire. The generative models are evaluated on multiple performances in order to reduce bias. To the best of our knowledge, this work represents the first attempt to generate 3-D violinists' body movements, as well as the first to consider information from multiple performance versions in the development of a body movement generation system. The rest of this paper is organized as follows. Section 2 surveys recent research on body movement generation techniques. The proposed method is introduced thereafter: Section 3 describes the data processing, and Section 4 describes the model implementation. Section 5 reports the experiments and results, followed by the conclusion in Section 6. \section{Proposed model} The architecture of the proposed music-to-body-movement generation model is shown in Figure \ref{overview}. It is constructed from two branches of networks: a body network and a right-hand network.
In order to capture the detailed variation of the right-hand keypoints during a performance, the right-hand network includes one encoder and one decoder, while the body network includes only a decoder. Both networks take the audio features described in Section 3.2 as input. The feature sequence is represented as $X:=\{x_i\}^L_{i=1}$, where $x_i\in\mathbb{R}^{28}$ is the feature at the $i$th frame; in this paper, $L=900$. The right-hand encoder combines a U-net architecture~\cite{ronneberger2015u} with a self-attention mechanism~\cite{vaswani2017attention} at the bottleneck layer of the U-net. Following the design of the Transformer model~\cite{vaswani2017attention}, the output of the U-net is fed into a position-wise feed-forward network. Its output is then fed into a recurrent model for body movement generation, which is constructed from an LSTM RNN followed by a linear projection layer. The final output of the model is the combination of the generated body skeleton $Y^{(body)}:=\{y^{(body)}_i\}^L_{i=1}$ and the right-hand skeleton $Y^{(rh)}:=\{y^{(rh)}_i\}^L_{i=1}$, where $y^{(body)}_i\in\mathbb{R}^{39}$ and $y^{(rh)}_i\in\mathbb{R}^{6}$. In addition, to enhance the modeling of the right-hand joint, another linear projection layer is imposed on the right-hand wrist joint and outputs a wrist calibration vector $y^{(rw)}_i\in\mathbb{R}^3$. This term is added to the corresponding wrist elements of $y^{(rh)}_i$, and the right-hand decoder outputs the whole estimated right-hand skeleton. Finally, we combine the right-hand and body skeletons to output the full estimated skeleton $Y^{(full)}:=\{y^{(full)}_i\}^L_{i=1}$, where $y^{(full)}_i\in\mathbb{R}^{45}$. Note that our decoder mainly follows the design in \cite{shlizerman2018audio}; the goal of our model is to demonstrate the benefit of a U-net-based encoder architecture with a self-attention mechanism. \subsection{U-net} The U-net architecture~\cite{ronneberger2015u} was originally proposed to solve the image segmentation problem. Recently, it has also been widely applied to generation tasks across data modalities, owing to its strength in translating features from one modality to another; examples include sketch-to-RGB-pixel translation~\cite{isola2017image}, audio-to-pose generation~\cite{shlizerman2018audio}, and music transcription~\cite{wu2018automatic}. In this work, we first map the input features to a high-dimensional space through a linear layer, whose output is taken as the input of the U-net. The encoding (left) part of the U-net starts with an average pooling layer that downsamples the full sequence, followed by two consecutive convolutional blocks, each consisting of one convolutional layer, a batch normalization layer, and a ReLU activation. This computation is repeated $N$ times down to the bottleneck layer of the U-net; in this paper, we set $N=4$. The main function of the encoding process is to extract high-level features from low-level ones; in our scenario, it learns structural features from the frame-level audio features. The self-attention layer between the encoding and decoding parts of the U-net is introduced in the next section. Next, the decoding part of the U-net starts with an upsampling layer using linear interpolation, whose output is concatenated with the corresponding down-sampling convolutional layer of the encoding part through a skip connection and then followed by two convolutional blocks; this computation is likewise repeated $N$ times until the features are converted to the target modality.
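For illustration, one down-sampling and one up-sampling stage of this U-net might be implemented as follows (a PyTorch sketch under the configuration stated above, with channel width 512; the kernel size and the exact channel bookkeeping across the skip connection are our assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # one convolutional block: conv -> batch norm -> ReLU
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm1d(c_out), nn.ReLU())

class DownStage(nn.Module):
    """One encoder stage: average pooling, then two conv blocks."""
    def __init__(self, ch=512):
        super().__init__()
        self.pool = nn.AvgPool1d(2)
        self.body = nn.Sequential(conv_block(ch, ch),
                                  conv_block(ch, ch))

    def forward(self, x):              # x: (batch, ch, frames)
        return self.body(self.pool(x))

class UpStage(nn.Module):
    """One decoder stage: linear upsampling, concatenation with
    the skip connection from the encoder, then two conv blocks."""
    def __init__(self, ch=512):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='linear',
                              align_corners=False)
        self.body = nn.Sequential(conv_block(2 * ch, ch),
                                  conv_block(ch, ch))

    def forward(self, x, skip):
        x = torch.cat([self.up(x), skip], dim=1)
        return self.body(x)
\end{verbatim}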
Compared to the original U-net architecture, we do not directly transform the audio features into a skeleton; rather, we first convert the output representation into another high-dimensional feature, leaving the generation task to the subsequent LSTM network. Moreover, we find that the bowing attack accuracy can be improved by stacking multiple U-net-with-self-attention blocks. The whole block is framed by the dashed line in Figure~\ref{overview}. \subsection{Self-attention} Music typically has a long-term hierarchical structure, and similar patterns may appear repeatedly within a training sample, so decoding the order of the body movement sequence is a critical issue. However, while the U-net utilizes convolutional blocks in the downsampling path to encode the audio features, it only captures local structure within a limited kernel size. To address long-term sequential inference, the self-attention mechanism \cite{vaswani2017attention} has recently been widely applied in sequence-to-sequence tasks such as machine translation and text-to-speech synthesis. Unlike RNN-based models, in the Transformer the representation at each position is calculated as a weighted sum over all frames of the input sequence, with the more relevant states receiving larger weights. Accordingly, each state perceives the global information, which is helpful for modeling long-term information such as notes and musical structure. We therefore apply the self-attention mechanism at the bottleneck layer of the U-net. \textbf{Scaled Dot-Product Attention} Given an input sequence $X \in \mathbb{R}^{L\times d}$, we first project $X$ into three matrices, namely the query $Q:=XW^Q$, key $K:=XW^K$, and value $V:=XW^V$, where $W^Q, W^K, W^V \in \mathbb{R}^{d\times d}$ and $Q, K, V \in \mathbb{R}^{L\times d}$. The scaled dot-product attention computes the output for a sequence $X$ as: \begin{equation} \label{eq:1} \text{Attention}(Q, K, V)=\text{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)V\,, \end{equation} where the scaling factor $\frac{1}{\sqrt{d}}$ prevents large dot products from pushing the softmax into regions with very small gradients. \textbf{Multi-Head Attention} Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. The scaled dot-product attention is computed $h$ times in parallel with different \emph{heads}, and the $h$th head can be expressed as follows: \begin{equation} \label{eq:2} \text{Head}_h(Q_h, K_h, V_h)=\text{softmax}\left(\frac{Q_hK_h^T}{\sqrt{d_{h}}}\right)V_h\,. \end{equation} For each head, the queries, keys, and values are projected into a subspace of dimension $d_h=d/h$, with $W_h^Q, W_h^K, W_h^V \in \mathbb{R}^{d\times d_h}$ and $Q_h, K_h, V_h \in \mathbb{R}^{L\times d_{h}}$. The outputs of all heads are concatenated and linearly projected, and a skip connection with the input $X$ is applied: \begin{align} \text{MultiHead}&=\text{Concat}(\text{Head}_1,...,\text{Head}_h)W^O\,, \\ \text{MidLayer}&=\text{MultiHead}+X\,, \end{align} where $W^O \in \mathbb{R}^{(h \times d_h) \times d}$.
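The scaled dot-product and multi-head computations above can be sketched as follows (a minimal PyTorch sketch; the relative-position terms introduced next would additionally be added to the attention logits and values):
\begin{verbatim}
import math
import torch

def multi_head_attention(x, wq, wk, wv, wo, h):
    # x: (L, d); wq, wk, wv, wo: (d, d); h: number of heads
    L, d = x.shape
    dh = d // h
    # project and split into h heads of width dh
    q = (x @ wq).view(L, h, dh).transpose(0, 1)  # (h, L, dh)
    k = (x @ wk).view(L, h, dh).transpose(0, 1)
    v = (x @ wv).view(L, h, dh).transpose(0, 1)
    # scaled dot-product attention per head
    att = torch.softmax(q @ k.transpose(1, 2) / math.sqrt(dh),
                        dim=-1)
    heads = att @ v                              # (h, L, dh)
    # concatenate heads, project, and add the skip connection
    return heads.transpose(0, 1).reshape(L, d) @ wo + x
\end{verbatim}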
\textbf{Relative Position Representations} Since no positional information enters the scaled dot-product attention, identical inputs at different time steps would contribute the same attention weights. To solve this problem, we apply the relative position encoding~\cite{shaw2018self} in the scaled dot-product self-attention. Two learnable embeddings $R^K$ and $R^V$ represent the distance between two positions in the sequence $X$, where $R^V, R^K \in \mathbb{R}^{L\times d}$; they are shared across all attention heads. We then modify Equation~\ref{eq:1} as follows: \begin{equation} \begin{aligned} \text{Attention}(Q, K, V)= \text{softmax}\left(\frac{QK^T + Q(R^K)^T}{\sqrt{d}}\right)(V + R^V)\,. \label{eq:5} \end{aligned} \end{equation} The term $Q(R^K)^T$ in the numerator injects the relative position information into the matrix multiplication of Equation~\ref{eq:1}; the same approach is applied to the value term, $V + R^V$. \textbf{Position-wise Feed-Forward Network} The other sub-layer in the self-attention block is a position-wise feed-forward network. It consists of two linear transformation layers with a ReLU activation between them, applied to each position separately and identically. The dimensionality of the input and output is $d$, and the inner layer has dimensionality $d_{ff}$. The output of this sub-layer is computed as: \begin{equation} \text{FFN}(X)=\max(0,XW_1+b_1)W_2 + b_2\,, \end{equation} where the weights are $W_1 \in \mathbb{R}^{d \times d_{ff}}$, $W_2 \in \mathbb{R}^{d_{ff} \times d}$ and the biases are $b_1 \in \mathbb{R}^{d_{ff}}$, $b_2 \in \mathbb{R}^{d}$. Additionally, we place an extra position-wise feed-forward network after the last layer of the U-net: while the output of the U-net is contextualized, the position-wise feed-forward network makes it more similar to the skeletal representation. \subsection{Generation} For the generated body sequence $\hat{Y}^{(body)}$, we directly feed the audio features into the LSTM RNN, followed by a dropout and a linear projection layer, as shown in the lower branch of Figure \ref{overview}. This branch directly generates the sequence of 39-D body skeletons $\hat{Y}^{(body)}$. For the right-hand sequence $\hat{Y}^{(rh)}$, the output of the position-wise feed-forward network is fed into two components: the first is identical to the body sequence generation network, and the second is a network that refines the right-hand motion. Since directly producing the full right hand from one branch may limit the variation of the wrist joint, we feed the output of the position-wise feed-forward network into another branch to generate the 3-D coordinates of the right-hand wrist joint. The right-hand output is therefore a 6-D sequence whose last three dimensions (representing the wrist joint) are augmented by the output of the refine network. Finally, we concatenate the body and right-hand outputs: \begin{equation} \hat{Y}^{(full)} = \text{Concat}(\hat{Y}^{(body)},\hat{Y}^{(rh)})\,. \end{equation} The model is optimized by minimizing the $L_1$ distance between the generated skeleton $\hat{Y}^{(full)}$ and the ground-truth skeleton $Y^{(full)}$: $\mathcal{L}_{\text{full}}:=\|Y^{(full)}-\hat{Y}^{(full)}\|_1$. \subsection{Implementation details} In our experiments, we use 4 convolutional blocks $(N=4)$ in each of the downsampling and upsampling subnetworks of the U-net, and all convolutional layers in the U-net have dimension 512. At the bottleneck layer of the U-net, we adopt 1 attention block with 4 heads and $d=512$. The inner dimension of the feed-forward network is $d_{ff}=2048$. The dimension of the LSTM unit is 512, and the dropout rate of all dropout layers is 0.1.
In addition, we stack two full blocks $(M=2)$ composed of a U-net and self-attention as our final network architecture. The model is optimized by Adam with $\beta_1=0.9$, $\beta_2=0.98$, and $\epsilon=10^{-9}$, and an adaptive learning rate is adopted over the course of training: \begin{equation} lr=k\cdot d^{-0.5}\cdot \min\left(n^{-0.5},\, n\cdot \mathit{warmup}^{-1.5}\right), \end{equation} where $n$ is the step number and $k$ is a tunable scalar. The learning rate increases linearly during the first $\mathit{warmup}$ training steps and decreases thereafter proportionally to the inverse square root of the step number; with $d=512$ and the settings below, it peaks near $2\times 10^{-3}$ at step $n=\mathit{warmup}$. We set $\mathit{warmup}=500$ and $k=1$, train the model for 100 epochs, and use a batch size of 32. Furthermore, we use an early-stopping scheme that selects the optimal model when the validation loss stops decreasing for 5 epochs.
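For reference, the schedule can be written compactly as follows (a sketch, valid for step numbers $n\geq 1$; the function name is ours):
\begin{verbatim}
def learning_rate(n, d=512, warmup=500, k=1.0):
    # linear warm-up followed by inverse square-root decay;
    # peaks near 2e-3 at n = warmup for d = 512, k = 1
    return k * d ** -0.5 * min(n ** -0.5, n * warmup ** -1.5)
\end{verbatim}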
\section{Related work} \textbf{Music body movement analysis} The role of body movement in music has been discussed in many studies \cite{davidson2012bodily}. Music performers' body movements can be divided into three types: 1) instrumental movements, such as striking the keyboard on the piano or pressing the strings on the violin; 2) expressive movements, such as body swaying and head nodding; and 3) communicative movements, such as cueing movements suggesting tempo changes in the music \cite{wanderley2005musical}. Studies have shown that music performers can intentionally adopt different body movements to achieve the planned performance sound according to the musical context \cite{macritchie2012intentions, davidson2012bodily, macritchie2013inferring, huang2017conducting}. For instance, different violinists may choose different bowing and fingering strategies depending on the musical interpretations they intend to deliver. Previous research has shown that the body movements of different music performers generate diverse instrumental sounds \cite{dahl2010gestures, macritchie2012intentions}. The correspondence between a music performer's movement and the musical composition being performed has also been discussed \cite{haga2008correspondences, thompson2012exploring}. Recently, a study employed body movement data with a recurrent neural network (RNN) model to predict the dynamic levels, articulation styles, and phrasing cues instructed by an orchestral conductor \cite{huang2019identifying}. Since detecting musical semantics from body movement data is possible, an interesting yet challenging task is to generate body movement data from a given musical sound \cite{shlizerman2018audio, li2018skeleton}. \textbf{Generating audio from body movement} Techniques have been developed to generate speech or music signals from body movement \cite{chung2017lip, yoon2019robots}. \cite{chung2017lip} generated human speech audio by automatic lip reading from face videos, whereas \cite{yoon2019robots} generated co-speech movements, including iconic and metaphoric gestures, from speech audio. \cite{chen2017deep} applied Generative Adversarial Networks (GANs) to produce images of music performers conditioned on different types of timbre, and \cite{berg2012interactive} generated music from motion capture data. In the field of interactive multimedia, using gesture data to drive sound morphing and related generation tasks is also common. \textbf{Body movement generation from audio} Several attempts have been devoted to generating music-related movement. Common topics include generating body movements from music, generating gestures from speech, and generating dance from music. \cite{shlizerman2018audio} used an RNN with long short-term memory (LSTM) units to encode audio features, and then employed a fully-connected (FC) layer to decode them into the body skeleton keypoints of either pianists or violinists. In \cite{kakitsuka2016choreographic}, choreographic movements are automatically generated from music according to the user's preference and the musical structural context, such as the metrical and dynamic arrangement of the music. Another recent work on pianist body skeleton generation \cite{li2018skeleton} also considers musical information, including bar and beat positions. Its model, combining a CNN and an RNN, was shown to be capable of learning the body movement characteristics of each pianist.
\section{Introduction} Recent realistic MHD simulations have revealed that the complex brightness and flow structure of sunspots results from convective energy transport dominated by a strong vertical (in the umbra) or inclined (in the penumbra) magnetic field \citep{Heinemann:etal:2007, Scharmer:etal:2008, Rempel:etal:2009a, Rempel:etal:2009b}. In the umbra, such magneto-convection occurs in the form of narrow upflow plumes with adjacent downflows \citep[][henceforth referred to as SV06]{Schuessler:Voegler:2006}. In computed intensity images, these plumes appear as bright features that share various properties with observed umbral dots. Here we provide a systematic study of the umbral simulation results, with specific emphasis on the properties of the bright structures and their comparison with recent results from high-resolution observations. \section{MHD simulation} We used the MURaM code \citep{Voegler:2003, Voegler:etal:2005} with nearly the same simulation setup as SV06. The dimensions of the computational box are $5.76\,$Mm $\times$ $5.76\,$Mm in the horizontal directions and $1.6\,$Mm in the vertical. Rosseland optical depth unity is about $0.4\,$Mm below the upper boundary. The grid cells have a size of $20\,$km in the horizontal directions and $10\,$km in the vertical. The magnetic diffusivity is $2.8\cdot10^6\,$m$^2\,$s$^{-1}$, the lowest value compatible with the grid resolution. The hyperdiffusivities for the other diffusive processes \citep[for details, see][]{Voegler:etal:2005} in the deepest layers of the box were chosen lower than the values used by SV06 in order to minimize their contribution to the energy transport. Since we do not synthesize spectral lines or Stokes profiles, the radiative transfer can be treated in the gray approximation. The fixed vertical magnetic flux through the computational box corresponds to a mean field strength of $2500\,$G. The side boundaries are periodic, and the top boundary is closed for the flow. The lower boundary is open, and the thermal energy density of the inflowing matter has been fixed to a value of $3.5\cdot10^{12}\,$erg$\cdot$cm$^{-3}$, leading to a total surface energy output comparable to that of a typical sunspot umbra, i.e., about 20\% of the value for the undisturbed solar surface. The magnetic field is assumed to be vertical at the top and bottom boundaries. We started our simulation from the last snapshot of the run analyzed by SV06 and continued for 10.8 hours of solar time in order to obtain a sufficiently large statistical sample of simulated umbral dots. Since the average magnetic field in the simulation is vertical, our results represent the inner part of a sunspot umbra with central umbral dots -- as opposed to peripheral umbral dots, which are often related to penumbral filaments \citep[e.g.,][]{Grossmann-Doerth:etal:1986}. \section{Image segmentation} To obtain statistical properties of the large number of simulated umbral dots (UDs), we used an automated detection method to distinguish between bright features and the surrounding dark background (DB). Since both the UDs and the background cover a broad range of intensities, we chose the MLT (Multi Level Tracking) algorithm of \citet{Bovelet:Wiehr:2001} as the basis of our image segmentation. This method has been successfully applied to various observational data \citep[e.g.,][]{Wiehr:etal:2004, Bovelet:Wiehr:2007, Bovelet:Wiehr:2008, Kobel:etal:2009}, including high-resolution images of sunspot umbrae \citep{Riethmueller:etal:2008b}.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f1.eps}} \caption{ Over-segmentation of simulated umbral dots due to dark lanes. The section of the original image (middle panel) shows umbral dots with dark lanes. The unmodified MLT algorithm leads to an undesired splitting of many dots in the segmented image (left panel). Introducing a modification of the algorithm controlling the merging of segments removes the problem in most cases (right panel).} \label{fig:DL_problem} \end{figure} The basic concept of the MLT algorithm is to segment the image by applying a sequence of intensity thresholds with decreasing values, keeping all features already detected. In our case, a complication arises from the dark lanes shown by many of our simulated UDs, which can lead the algorithm to split an UD into two parts (see Fig.~\ref{fig:DL_problem}). In order to avoid such undesired splitting, we permit the merging of segments that were separate at threshold $n$ after applying threshold level $n+1$, provided that their maximum intensity does not exceed the latter threshold multiplied by a suitably chosen factor $c$. For our analysis, we used 15 intensity thresholds ranging from about 2.5 to 1.5 times the intensity of the dark background (somewhat depending on the dataset used, see Sec.~4.2) and a value of $c=1.45$, which suppresses most cases of unwanted splitting of UDs (see the sketch at the end of this section). In a final step of the segmentation process, we removed from the list of UDs all segmented structures whose maximum intensities do not exceed the lowest threshold by at least 10\%; such features are considered to be mere fluctuations of the background. Fig.~\ref{fig:segmentation} shows an example of the resulting segmentation of a typical simulated image. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f2.eps}} \caption{ Result of the MLT image segmentation. {\em Left:} original image; {\em Middle:} segmented image (binary map); {\em Right:} segmented umbral dot areas with their original intensities.} \label{fig:segmentation} \end{figure} In order to study UD evolution and lifetimes, we extended the MLT algorithm to include time, such that the segmentation is carried out in three dimensions (two spatial and one temporal) and the temporal evolution of individual UDs can be followed.
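The following schematic sketch summarizes the (two-dimensional) MLT segmentation described above; it is an illustration only, with SciPy's connected-component labelling standing in for the actual MLT implementation, and with the treatment of non-merging overlaps and of the `trenches' between clustered UDs simplified:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def mlt_segment(img, thresholds, c=1.45, min_excess=1.10):
    # thresholds are applied in decreasing order; features that
    # were separate at one level are merged at the next, lower
    # level only if their peak intensity stays below c * level
    labels = np.zeros(img.shape, dtype=int)
    for t in sorted(thresholds, reverse=True):
        comps, n = ndimage.label(img >= t)
        for s in range(1, n + 1):
            region = comps == s
            old = [o for o in np.unique(labels[region]) if o > 0]
            if len(old) <= 1 or \
               max(img[labels == o].max() for o in old) < c * t:
                labels[region] = old[0] if old else labels.max() + 1
    # final cleanup: drop features whose peak intensity lies
    # less than 10% above the lowest threshold
    for o in np.unique(labels):
        if o and img[labels == o].max() < min_excess * min(thresholds):
            labels[labels == o] = 0
    return labels
\end{verbatim}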
\section{Results} \subsection{Overall temporal evolution} \label{subsec:temporal} The relative inefficiency of the convective energy transport in the strong umbral magnetic field leads to a rather long thermal relaxation time of the system, so that it can take hours until it settles into a statistically stationary state. With our long simulation run of 10.8 hours of solar time, we were able to follow the overall changes of the system throughout the relaxation phase. \begin{figure*} \centering \resizebox{\hsize}{!} {\includegraphics{13328f3a.eps} \includegraphics{13328f3b.eps}} \caption{Thermal relaxation of the system. {\em Left panel:} time evolution of the total internal energy (normalized to its initial value); {\em Right panel:} total (radiative) energy output (solid line) and loss of total internal energy (dashed line), both integrated in time from $t=0$ onward.} \label{fig:eint} \end{figure*} \begin{figure*} \centering \resizebox{\hsize}{!} {\includegraphics{13328f4a.eps} \includegraphics{13328f4b.eps}} \caption{Temporal evolution of the system. {\em Left panel:} radiative energy flux from the box, normalized by the corresponding value for the same area of undisturbed solar photosphere; {\em Right panel:} total kinetic energy (solid line) and magnetic energy (dashed line), both normalized to their values at $t=0$.} \label{fig:fout+ek+em} \end{figure*} The steady decrease of the total internal energy in the computational box shown in Fig.~\ref{fig:eint} (left panel) clearly indicates that the system is not relaxed until about 6--7 hours after the start of the simulation. Comparing the time-integrated total radiative output from the box with the integrated decrease of the total internal energy, given in the right panel of Fig.~\ref{fig:eint}, suggests that during the first half of the evolution the energy output was mostly covered by a loss of internal energy, i.e., the plasma in the box cooled. Only after about 6 hours do the two curves start to diverge, and the internal energy no longer decreases. At about the same time, the slope of the integrated energy output steepens, indicating a somewhat higher energy output. This is confirmed by Fig.~\ref{fig:fout+ek+em} (left panel), which shows that the total radiative flux is higher by about 10\% in the second half of the run. These results suggest that the character of the energy supply changed halfway through the run: while the radiated energy was taken from the internal energy in the first half and was not significantly replenished by convective inflows through the bottom of the box, convection became more effective in the second half and led to a thermally relaxed, stationary situation with a somewhat higher total radiative output. This interpretation is supported by the time evolution of the kinetic energy (Fig.~\ref{fig:fout+ek+em}, right panel), which increases by a factor of 3 in the second half of the run. The magnetic energy also grows slightly, by about 10\%, probably owing to the stronger fluctuations (local compressions of the magnetic flux) caused by the higher velocities. \subsection{Statistical properties of umbral dots} \label{subsec:propUD} The existence of the two phases (cooling phase and convective phase) in the temporal evolution of the umbra simulation is also reflected in differences in the average UD properties between the two phases. The snapshots of (bolometric) brightness in Fig.~\ref{fig:snapshots} indicate that larger UDs appear in the second phase while the total number of UDs decreases. We therefore defined separate datasets for each phase and analyzed them individually. The first part (P1) covers the time between 0.5~h and 4.5~h after the beginning of the run, while the second part (P2) extends from 6.2~h to 10.3~h. We excluded the transition between the two phases and only kept periods for which the average UD properties do not show significant secular trends.
For the MLT segmentation considered in this section, we used maps of the bolometric (frequency-integrated) emergent intensity in the vertical direction. The maps are separated in time by about 1 min, so that the UDs can be analyzed in a statistical manner and their evolution can be followed in time. Owing to the different characteristics of the UDs, slightly different MLT threshold levels were chosen for P1 and P2. For dataset P1, the 15 intensity thresholds range between 20\% and 31\% of the average quiet-Sun brightness, while for P2 they range from 22\% to 35\%. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f5a.eps} \includegraphics{13328f5b.eps}} \caption{ Snapshots of the bolometric emergent intensity from the two phases of the simulation. {\em Upper panel:} initial cooling phase ($t\simeq 2\,$h); {\em Lower panel:} convective phase ($t\simeq 9\,$h).} \label{fig:snapshots} \end{figure} \begin{figure} \centering \resizebox{1.\hsize}{!}{\includegraphics{13328f6.eps}} \caption{ Time evolution of UD properties, given separately for the two phases P1 (cooling phase, red curves) and P2 (convective phase, green curves). {\em A:} UD area fraction; {\em B:} number of UDs; {\em C:} average UD area; {\em D:} mean bolometric UD intensity (normalized by the mean quiet-Sun value); {\em E:} average bolometric intensity of DB (normalized by the mean quiet-Sun value); {\em F:} mean bolometric UD intensity (normalized by the corresponding average for the DB). Time is counted from the start of the respective dataset (0.5 h and 6.2 h after the start of the simulation for P1 and P2, respectively).} \label{fig:UDtemporal} \end{figure} Fig.~\ref{fig:UDtemporal} shows, separately for P1 and P2, the temporal evolution of various average quantities of the segmented UDs and of the remaining domain area, the DB. The area fraction of UDs (top left panel) tends to be somewhat higher in P2 than in P1; its rather strong fluctuations result from the relatively small size of the computational box. In P2, there are fewer (top right panel) but, on average, larger (middle right panel) and brighter (middle left and bottom left panels) UDs, as already indicated by the snapshots shown in Fig.~\ref{fig:snapshots}. The DB is also slightly brighter in P2 (bottom left panel). The mean bolometric intensity of the UDs approaches roughly stationary values of about 0.23 (P1) and 0.26 (P2) of the quiet-Sun value, or about 1.6 (P1) and 1.7 (P2), respectively, in units of the corresponding DB intensity (bottom right panel). These results indicate that the upflows underlying the UDs contribute more strongly to the convective energy transport in P2. This includes not only the fraction of the radiative output directly covered by the UDs (about 30\%), but also the horizontal radiative losses of the upflow plumes, which heat their environment and thus contribute to the energy output of the DB. The higher average brightness of the DB in the second phase (P2) is thus consistent with a bigger contribution of the UDs to the overall convective energy transport. The average UD area resulting from our simulation and segmentation analysis is 0.08 Mm{$^2$} for P1 and 0.14 Mm{$^2$} for P2, corresponding to average diameters of $320$~km and $420$~km under the assumption of circular areas.
These values are higher than those reported in recent observational studies of UDs with the 1-m SST on La Palma: \citet{Riethmueller:etal:2008b} give an average area of 0.04~Mm{$^2$}, \citet{Sobotka:Hanslmeier:2005} estimate 0.025~Mm{$^2$}, while \citet{Sobotka:Puschmann:2009} even find a value near 0.01~Mm{$^2$}. On the other hand, the UDs with dark lanes studied by the latter authors have an average area around 0.08~Mm{$^2$}, and even larger UDs have been studied by \citet{Bharti:etal:2007b}. Also, \citet{Watanabe:etal:2009a} find an average area of about 0.1~Mm{$^2$} from Hinode data. In such a comparison one has to take into account that the lowest intensity thresholds in our MLT segmentation are not far above the DB values, while the observational analyses typically define the edges of an UD by the half-width of the local intensity contrast or by the inflection point of the intensity profile. These procedures tend to yield smaller areas than our approach, which covers the full extension of the UDs \citep[cf., for example, the UD outlines shown in Figs.~5 and 6 of][]{Riethmueller:etal:2008b}. We tested this conjecture by applying the procedure of \citet{Riethmueller:etal:2008b} to a few representative snapshots in our datasets and found that the resulting UD areas are, on average, about 50\% smaller than the values determined by our MLT procedure. This brings the simulation and the observational results into rough agreement -- the simulations possibly lacking a population of very small, short-lived UDs \citep{Sobotka:Puschmann:2009}. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f7.eps}} \caption{ Area histograms of bolometric intensity (upper panel) and vertical field strength (lower panel). The field strength is taken at $z=0$, corresponding to the average geometrical height level of Rosseland optical depth unity. Solid curves refer to UDs, dashed curves to DB. Red and green curves indicate datasets P1 and P2, respectively.} \label{fig:hist_I_B_bol} \end{figure} \begin{figure} \hspace{-3mm} \centering \resizebox{\hsize}{!}{\includegraphics{13328f8.eps}} \caption{ Histograms of UD area (left) and mean bolometric brightness (right, normalized to the average DB) for all segmented maps. Red and green curves refer to datasets P1 and P2, respectively.} \label{fig:hist_allUD} \end{figure} For the full datasets P1 and P2, Fig.~\ref{fig:hist_I_B_bol} shows area histograms of the bolometric intensity and of the vertical component of the magnetic field at constant height $z=0$, which corresponds to the mean level of Rosseland optical depth unity. Histograms for UDs and DB are given separately (including all image pixels belonging to each class). The UD intensity distributions reflect the fact that UDs tend to be brighter in P2. The tail of DB intensities overlapping with the UD distribution results from the `trenches' generated by the MLT algorithm to separate closely neighboring or clustered UDs and from the exclusion of the faintest UDs. The area histograms of the vertical magnetic field given in the lower panel of Fig.~\ref{fig:hist_I_B_bol} show a very broad distribution of field strength in the UDs, ranging from slightly negative values up to about 3000~G. While the field strength becomes very low owing to expansion and flux expulsion in the cores of the rising plumes that generate the UDs, horizontal radiative heating extends the wings of the bright intensity structure (i.e., the UD) into the surrounding umbra with high field strength.
Since we segment the images according to the intensity structure, the peripheries of the so-defined UDs harbor strong magnetic fields. Negative field strength arises when a strong downflow catches and drags a magnetic field line, stretching it into a hairpin-like shape (see Fig.~\ref{fig:maps_B_v} and Sec.~4.3). The average field strength in the DB is higher in P2, since the larger area fraction of UDs compresses the magnetic flux in the DB. The distributions of UD area and mean brightness for all segmented maps (with one-minute cadence, so that UDs are counted more than once, in various stages of their evolution) are illustrated by the histograms shown in Fig.~\ref{fig:hist_allUD}. UDs with areas below 0.02~Mm{$^2$} were omitted, as most of them represent fluctuations in the DB. The area distribution for P2 is broader, with fewer small and more large UDs than for P1. The largest UDs have areas of 0.44~Mm{$^2$} and 0.79~Mm{$^2$} for P1 and P2, respectively. The distribution of average UD bolometric brightness contrast (right panel of Fig.~\ref{fig:hist_allUD}) is shifted towards higher values for P2, the mean values being 1.62 (P1) and 1.72 (P2). \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f9.eps}} \caption{ Histogram of lifetimes for UDs that could be tracked during their whole evolution. Red and green curves refer to datasets P1 and P2, respectively.} \label{fig:hist_lifetime} \end{figure} Since the MLT tracking algorithm identifies and tracks UDs during their whole life cycle, we determined lifetimes for 558 UDs from P1 and 500 UDs from P2. The lifetime distributions are shown in the form of histograms in Fig.~\ref{fig:hist_lifetime} (excluding the roughly 10\% of the UDs with lifetimes below 8 minutes, which seem to constitute a separate group of low-amplitude fluctuations). The lifetime distributions are similar for both datasets, with average values of 28.2~min (P1) and 25.1~min (P2). The mean lifetimes reported from observations range from 3~min to about an hour, with recent high-resolution observations typically indicating rather short mean lifetimes below 10 minutes \citep[e.g.,][]{Kusoffsky:Lundstedt:1986, Ewell:1992, Sobotka:etal:1997, Kitai:etal:2007, Riethmueller:etal:2008b, Sobotka:Puschmann:2009, Watanabe:etal:2009a}. However, similar to our simulation results, the distribution of lifetimes from any observation is rather broad, and `typical' lifetimes cannot be defined easily \citep{Socas-Navarro:etal:2004}. Taken at face value, the observations indicate the existence of a large number of short-lived UDs that are not shown by the simulations. However, it is also possible that some observational lifetimes are affected by brightness fluctuations of UDs and by seeing effects. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f10.eps}} \caption{ Relationship between UD lifetime, area, and brightness. {\em Left:} scatter plot of lifetime vs. maximum area; {\em Right:} scatter plot of lifetime vs. maximum area-averaged bolometric brightness reached during the lifetime of an UD (normalized by the average brightness of the DB). Red and green dots refer to datasets P1 and P2, respectively; solid black (P1) and blue (P2) lines connect binned values for 50 points each.} \label{fig:scatter_lifetime} \end{figure} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f11.eps}} \caption{ Relation between area and brightness for the same set of UDs as in Fig.~\ref{fig:scatter_lifetime}.
{\em Left:} scatter plot of maximum area vs. maximum (normalized) bolometric brightness during the lifetime of an UD; {\em Right:} scatter plot of mean UD area (over UD lifetime) vs. mean area-averaged brightness (over UD lifetime). Red and green colors refer to datasets P1 and P2, respectively; solid black (P1) and blue (P2) lines connect binned values for 50 points each.} \label{fig:scatter_lifetime2} \end{figure} The relationship between lifetime and the maximum UD area and brightness reached in the course of the UD evolution is shown in Fig.~\ref{fig:scatter_lifetime} in the form of scatter plots with curves connecting binned values. Although there is a significant amount of scatter, the plots indicate that longer-lasting UDs tend to be larger and brighter. A qualitatively similar lifetime-size relation is reported by \citet{Riethmueller:etal:2008b} for UDs with lifetimes below 20 min, while longer-lived UDs are not found to be systematically larger. The lifetime-area and lifetime-brightness correlations are probably due to the fact that stronger and more extended convective upflows are maintained longer and create larger and brighter UDs: a higher upflow speed entails a bigger convective energy flux density (brighter UDs) as well as more mass flux and kinetic energy available for the sideways expansion of the upflow plume (bigger UDs). This explanation is in line with the relationship between UD area and (maximum and mean) brightness shown in Fig.~\ref{fig:scatter_lifetime2}. Similar trends were found observationally by \citet{Tritschler:Schmidt:2002} and by \citet{Sobotka:Puschmann:2009}, while \citet{Riethmueller:etal:2008b} do not find a clear relationship. \subsection{UD properties and optical-depth dependence at 630 nm} \label{subsec:prop630} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f12.eps}} \caption{ Area histograms of the continuum intensity at 630 nm. Solid curves refer to UDs, dashed curves to DB. Red and green curves indicate datasets P1 and P2, respectively.} \label{fig:hist_630} \end{figure} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f13.eps}} \caption{ Profiles of vertically emerging intensity along a cut at $x=4.6$~Mm through the image shown in the lower panel of Fig.~\ref{fig:snapshots}. The solid line refers to the continuum intensity at 630~nm, the dashed line to the bolometric intensity. Intensities are normalized individually by the corresponding local DB intensities.} \label{fig:Icomp} \end{figure} The iron line pair near 630 nm and the nearby continuum are often used for observations of sunspot fine structure. In order to compare with observational results derived from such data, we considered 57 (P1) and 74 (P2) snapshots from our simulation and calculated images of the (vertically emerging) continuum intensity at 630~nm. These images were segmented by the MLT procedure to obtain maps of UDs. In addition to the UD properties studied in the previous section, we correlated the UD maps with the distributions of the vertical magnetic field and vertical velocity on surfaces of equal optical depth at 630 nm. A full synthesis of line profiles and Stokes parameters for a detailed comparison with observed spectro-polarimetric data is beyond the scope of this work and will be carried out in a forthcoming paper (Vitas, V{\"o}gler \& Keller, in preparation).
\begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{13328fxa.eps} \includegraphics{13328fxb.eps} \includegraphics{13328fxc.eps}} \resizebox{\hsize}{!}{\includegraphics{13328fxd.eps} \includegraphics{13328fxe.eps} \includegraphics{13328fxf.eps}} \caption{Maps of vertical magnetic field strength (upper row) and vertical velocity (lower row) on surfaces of constant optical depth at 630~nm (from left to right: $\tau_{630}=1, 0.1, 0.01$, respectively). The color table for the magnetic field ranges from 3780~G (blue) over zero (white) to $-190$~G (red). For the velocity, the range is from 1.9~km$\,$s$^{-1}$ (blue, upflow) over zero (white) to $-1.7$~km$\,$s$^{-1}$ (red, downflow).} \label{fig:maps_B_v} \end{figure*} Area histograms of the continuum intensity at 630~nm are shown in Fig.~\ref{fig:hist_630}. Compared to the histograms of the bolometric intensity shown in Fig.~\ref{fig:hist_I_B_bol}, the intensity contrast between UDs and DB is much higher, resulting from the fact that 630 nm lies on the blueward side of the maximum of the respective Planck functions. The significantly higher contrast between DB and UD is also illustrated by the intensity profiles shown in Fig.~\ref{fig:Icomp}. The mean UD intensity values in Fig.~\ref{fig:hist_630} are 2.58 for P1 and 2.88 for P2 (both relative to the corresponding average DB). These values are consistent with the intensity ratios reported from observations at 602~nm \citep[e.g.,][]{Sobotka:Hanslmeier:2005, Sobotka:Puschmann:2009}. Individual UDs in the simulation can reach significantly higher intensities (see Fig.~\ref{fig:Icomp}). \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f15.eps}} \caption{ Area histograms of the vertical magnetic field component on surfaces of constant optical depth at 630~nm. {\em Upper panel:} $\tau_{630}=1$; {\em middle panel:} $\tau_{630}=0.1$; {\em lower panel:} $\tau_{630}=0.01$. Solid lines refer to UDs, dashed lines to the DB. The red and green colors indicate datasets P1 and P2, respectively. } \label{fig:hist_630_mag} \end{figure} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{13328f16.eps}} \caption{ Same as Fig.~\ref{fig:hist_630_mag}, but for the vertical component of the velocity. Positive velocity corresponds to upward motion, negative velocity to a downward flow.} \label{fig:hist_630_vel} \end{figure} Maps of the vertical magnetic field and vertical velocity on surfaces of equal optical depth at 630~nm are given in Fig.~\ref{fig:maps_B_v}. While the magnetic field distribution becomes more diffuse and homogeneous with height, the velocity remains rather intermittent. Strong downflows at the edges of the largest UDs at $\tau_{630}=1$ (left panels) can capture inclined magnetic field lines and drag them downwards, thereby creating a hairpin-like structure with a patch of reversed polarity, indicated by red color in the magnetic-field map. Quantitative information on the distributions is provided by the area histograms for the same optical-depth levels shown in Fig.~\ref{fig:hist_630_mag}. The histograms at $\tau_{630}=0.01$ roughly represent the results that would be obtained by carrying out inversions of spectro-polarimetric observations in the neutral iron lines at 630.15~nm and 630.25~nm. The histograms at $\tau_{630}=1$ are roughly similar to those at constant geometrical depth $z=0$ shown in the lower panel of Fig.~\ref{fig:hist_I_B_bol}, with a broad range of values in the UDs.
At $\tau_{630}=0.1$, field strengths below 1000~G are rarely found in UDs (particularly for those from P1), and at $\tau_{630}=0.01$ the UD distributions are shifted further towards higher field strengths. For instance, 90\% of the UDs of dataset P1 have field strengths between 1800~G and 2200~G at $\tau_{630}=0.01$. This effect is due to the elevation of the surfaces of constant optical depth in UDs (in comparison to the DB) and the cusp-like shape of the UD structure \citep[cf.][]{Schuessler:Voegler:2006, Riethmueller:etal:2008a, Bharti:etal:2009}. The larger, stronger UDs in the convective phase (dataset P2, green lines) reach higher into the umbral atmosphere (cf. Fig.~\ref{fig:scat_B_v_z}) and therefore still show a stronger signature of the reduced field strength at $\tau_{630}=0.01$ than the UDs from P1. For both datasets, the distributions for UDs and DB approach each other as the optical depth decreases; this reflects the decreasing area fraction of the cusp-shaped UD structures with increasing height. The corresponding area histograms of the vertical velocity given in Fig.~\ref{fig:hist_630_vel} show rather weak motion in the DB: at all three $\tau$ levels, about 50\% of the DB shows almost no vertical flow, while the rest of the area has velocities in the range $\pm 200\,$m$\,$s$^{-1}$. The strong upflows creating the UDs have already been considerably braked by the time they reach the level $\tau_{630}=1$. Higher up, at the levels $\tau_{630}=0.1$ and $\tau_{630}=0.01$, high-velocity tails (up to $\sim \pm 1\,$km$\,$s$^{-1}$) appear in the distributions. These are mostly due to the upward-directed, jet-like `valve flows' from the UD cusps \citep{Choudhuri:1986, Schuessler:Voegler:2006} and the corresponding return flows. The latter, however, may be affected by the presence of the closed upper boundary of the simulation box. Figure~\ref{fig:histograms_B_v_z} illustrates, by means of histograms, the properties of the average vertical magnetic field and velocity at various optical depths, together with the average elevations of the optical-depth levels, for all UDs from the datasets P1 and P2. The magnetic field histograms show that, at all optical-depth levels, the expulsion of magnetic flux from the expanding upflow plumes underlying the UDs is more clearly reflected in the case of P2. However, in most cases the region of very low field strength does not extend above the surface of $\tau_{630}=1$. The cusp shape of the vertical UD structure \citep[see Fig.~2 of][]{Schuessler:Voegler:2006} leads to an increase of the field strength on the iso-$\tau$ surfaces above $\tau_{630}=1$. In addition, the UD area is defined by the intensity structure corresponding to a height near $\tau_{630}=1$; the cusp's cross section shrinks towards higher iso-$\tau$ surfaces, so that we progressively sample more of the strong-field region surrounding the plume. A similar situation is found for the upflows (second row), where the average velocities do not exceed a few hundred m$\,$s$^{-1}$ at $\tau_{630}=1$. At higher levels, the jet-like outflows along the cusps of the upflow plumes are seen. The distribution of upflows is shifted towards lower velocities for the UDs from P2, since bigger and flatter UDs show weaker outflows.
This results from the fact that the height reached by the upflow plume is only weakly dependent on its size (since the stratification above optical depth unity is strongly subadiabatic), so that the cusp overlying a large UD is broader (has a larger aspect angle) and the whole structure is thus flatter. Therefore, the channelling of the upflowing matter into the top of the cusp is less efficient and the jet-like outflows are weaker. The downflows (third row) at the UD periphery (for $\tau_{630}=1$) and adjacent to the upflow jets (for the other levels) are similar in average magnitude for P1 and P2, while the distributions for P2 are somewhat broader. The bottom row of Fig.~\ref{fig:histograms_B_v_z} shows histograms of the height difference between the levels of constant optical depth in UDs (averaged over UD area) and in the mean DB, thus representing the elevation of the optical-depth levels in the UDs relative to the DB. While the mean elevations range between 60--90~km, the levels for the UDs from P2 are, on average, 10--20~km higher than the corresponding levels of UDs from P1. Thus, the P2 UDs tend to penetrate somewhat higher into the umbral photosphere than their counterparts from P1. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics{13328f17.eps}} \caption{ Distributions of averages over UD area of vertical magnetic field (top row) and vertical velocity (separately for upflows, $v_z>0$, and downflows, $v_z<0$ in the second and third row, respectively), all on surfaces of constant optical depth at 630~nm (left column: $\tau_{630}=1$, middle column: $\tau_{630}=0.1$, right column: $\tau_{630}=0.01$). The bottom row gives the elevation of the optical-depth levels of the UDs with respect to the corresponding mean levels in the DB. Red and green lines refer to UDs from datasets P1 and P2, respectively.} \label{fig:histograms_B_v_z} \end{figure} \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics{13328f18.eps}} \caption{ Relationship between properties of UDs defined by segmentation of continuum images at 630~nm. Mean quantities are averages over the area of the individual UDs. Red and green dots refer to datasets P1 and P2, respectively; solid black (P1) and blue (P2) lines connect binned values for 100 points each. Positive velocity corresponds to upward motion, negative velocity to a downward flow.} \label{fig:scat_B_v_z} \end{figure} The elevation of the surfaces of constant optical depth above the upflow plumes tends to hide their magnetic and flow structure from spectroscopic observations \citep[e.g.,][]{Degenhardt:Lites:1993a, Degenhardt:Lites:1993b}. It is therefore not surprising that observational results for the magnetic and flow properties of UDs do not provide a unique picture. A reduced field strength in UDs has been repeatedly reported \citep[e.g.,][]{Wiehr:Degenhardt:1993, Socas-Navarro:etal:2004} and some authors also find indications of a decrease of field strength with depth \citep{Riethmueller:etal:2008a, Bharti:etal:2009}. Other studies \citep{Sobotka:Jurcak:2009, Watanabe:etal:2009b} do not show significantly lower field strengths in UDs. While upwellings in the deep layers of UDs were found \citep{Socas-Navarro:etal:2004, Rimmele:2004, Riethmueller:etal:2008a, Bharti:etal:2007a, Bharti:etal:2009}, the jet-like upflows predicted by the simulations in the higher layers have not been observed so far. This may be due to the fact that spectroscopic studies mostly refer to large UDs, for which the jets are weaker or absent.
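The sampling of simulation variables on iso-$\tau$ surfaces that underlies the maps and histograms discussed above can be illustrated with a short, schematic Python sketch (an illustration only, not the original analysis pipeline; the array names, the simulation cube, and the UD mask are assumptions):
\begin{verbatim}
import numpy as np

def sample_on_iso_tau(tau, q, tau_level):
    """Interpolate quantity q onto the surface tau = tau_level.

    tau, q: arrays of shape (nx, ny, nz); tau is assumed to
    increase monotonically with the depth index along z.
    Returns an (nx, ny) map of q on the iso-tau surface.
    """
    nx, ny, nz = tau.shape
    out = np.empty((nx, ny))
    for i in range(nx):
        for j in range(ny):
            lt = np.log10(tau[i, j, :])
            # linear interpolation in log10(tau) along the column
            out[i, j] = np.interp(np.log10(tau_level), lt, q[i, j, :])
    return out

# Area histograms of B_z on tau_630 = 0.01, split into umbral
# dots and diffuse background by a (hypothetical) mask ud_mask:
# bz_map = sample_on_iso_tau(tau630, bz, 0.01)
# hist_ud, edges = np.histogram(bz_map[ud_mask], bins=50)
# hist_db, _ = np.histogram(bz_map[~ud_mask], bins=edges)
\end{verbatim}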
The relations between various mean quantities for the full set of UDs found by segmentation of continuum images at 630~nm are illustrated by scatter plots in Fig.~\ref{fig:scat_B_v_z}. The positive correlation between UD area and brightness seen in Fig.~\ref{fig:scatter_lifetime2} (for bolometric brightness) is also confirmed at 630~nm (top left panel). Although there is a large amount of scatter, UDs from P2 tend to be brighter for the same area, possibly reflecting higher upflow speeds (below optical depth unity) and thus a higher convective energy flux. This interpretation is supported by the decrease of the magnetic field strength with increasing UD area at $\tau_{630}=0.1$ (top right panel) and with increasing intensity at $\tau_{630}=1$ (bottom left panel) for P2, as well as by the rise of the elevation of the level $\tau_{630}=1$ with increasing UD intensity (bottom right panel). The downflows at $\tau_{630}=1$ intensify with increasing brightness (middle right panel). The upflows at $\tau_{630}=1$ are largely independent of brightness (middle left panel); since the level surface is higher for brighter UDs, this also indicates stronger underlying upflows. \section{Conclusions} The simulation of magneto-convective energy transport in a strong vertical magnetic field exhibits narrow upflow plumes, whose properties can be compared with observations of (central) umbral dots in sunspots. Using a multi-level image segmentation and tracking algorithm, we analyzed sets of simulated UDs from two phases of the simulation, the thermal relaxation phase (P1) and the quasi-stationary phase (P2). This led to the following results: \begin{enumerate} \item {\em Size:} Histograms of UD area indicate that there is no `typical' size of the simulated UDs. The average area is 0.08~Mm$^2$ in P1 and 0.14~Mm$^2$ in P2. Reported values from recent high-resolution observations are somewhat lower. This is partly due to different size definitions, but could also indicate a lack of small, short-lived UDs in the simulation. \item {\em Brightness:} Averaged over their area, the bolometric brightness of the simulated UDs exceeds the surrounding dark background by factors of 1.6 (P1) and 1.7 (P2), respectively. The corresponding values for the continuum brightness at 630~nm are 2.6 (P1) and 2.9 (P2). For the peak intensities, the factors can reach markedly higher values between about 3 (bolometric) and 8 (630~nm). None of the simulated UDs exceeds the corresponding brightness values of the quiet Sun. Comparison with observations is complicated by the often unknown amount of straylight contamination. \item {\em Lifetime:} Average lifetimes for the simulated UDs are 28~min (P1) and 25~min (P2). Reported lifetimes from observations vary significantly, but recent high-resolution data typically indicate shorter lifetimes, although long-lived UDs are also found. It is possible that the simulations, owing to limitations in spatial resolution, miss a population of small, short-lived UDs. \item {\em Correlations:} Larger UDs tend to be brighter and live longer, although there is a significant amount of scatter. Similar trends have been reported from observations. \item {\em Magnetic field and flows:} The drastic reduction of the magnetic field and the strong flows in the near-photospheric parts of the upflow plumes are largely hidden from spectro-polarimetric observations. This is caused by the elevation of the surfaces of optical depth unity, which bulge upward over columns of hot rising plasma.
Consistent with observational results, only a moderate field reduction and weak flow signatures are expected at optical-depth levels where relevant photospheric lines are formed. \end{enumerate} In summary, the comparison of the properties of simulated and observed (central) UDs indicates that the simulations have captured the basic underlying mechanism for the formation of these bright structures. Differences in detail are not surprising, given the still insufficient spatial resolution of both simulations and observations. Simulations of full sunspots, which have recently become available \citep{Rempel:etal:2009b}, show that similar magneto-convective processes are responsible for the formation of umbral dots and penumbral filaments, also clarifying the relationship between central and peripheral UDs. More comprehensive parameter studies are needed to reveal the dependence of UD properties on background field strength, spatial resolution, and vertical extension of the computational box. Also, the calculation of synthetic Stokes profiles will permit a direct comparison with spectro-polarimetric observations and inversions. \bibliographystyle{aa}
\section{INTRODUCTION} Deep inelastic scattering (DIS) off protons has provided decisive information on the parton distribution functions (PDFs) of the proton. Inclusive measurements of the cross section for the reaction $ep\rightarrow e{\rm X}$ as a function of the virtuality of the exchanged boson ($Q^2$) and of the Bjorken scaling variable ($x$) have been used to determine the proton structure function $F^p_2(x,Q^2)$. Perturbative QCD (pQCD) in the next-to-leading-order (NLO) approximation has been widely used to extract the proton PDFs from such measurements and to test the validity of pQCD. In the standard approach (DGLAP~\cite{dglap}), the evolution equations sum up all leading double logarithms in $\ln{Q^2}\cdot \ln{1/x}$ along with single logarithms in $\ln{Q^2}$ and are expected to be valid for $x$ not too small. At low $x$, a better approximation is expected to be provided by the BFKL formalism~\cite{bfkl} in which the evolution equations sum up all leading double logarithms along with single logarithms in $\ln{1/x}$. The DGLAP evolution equations have been tested extensively at HERA and were found to describe, in general, the data. In particular, the striking rise of the measured $F^p_2(x,Q^2)$ at HERA with decreasing $x$ can be accommodated in the DGLAP approach. Nevertheless, the inclusive character of $F^p_2(x,Q^2)$ may obscure the underlying dynamics at low $x$, and more exclusive final states like forward~\footnote{The coordinate system used is a right-handed Cartesian system with the $Z$ axis pointing in the proton beam direction, referred to as the ``forward direction''.} jets~\cite{mueller} need to be studied. BFKL evolution predicts a larger fraction of small-$x$ events containing high-$E_T$ forward jets than predicted by DGLAP~\cite{mueller,otros}. Parton dynamics at low $x$ is particularly relevant for the LHC given that most of the interesting hard processes involve partons with low fractional momenta. \section{FORWARD JET PRODUCTION} Forward jet production in neutral current (NC) DIS has been studied extensively at HERA. As an example, measurements of forward jet production with $p_{t,{\rm jet}}> 3.5$~GeV, polar angle $\theta_{\rm jet}$ between $7^{\circ}$ and $20^{\circ}$, $0.5 < p^2_{t,{\rm jet}}/Q^2 < 5$ and $x_{\rm jet}\equiv E_{\rm jet}/E_p>0.035$, where $E_p$ is the proton-beam energy, in the kinematic region defined by $10^{-4} < x < 4 \cdot 10^{-3}$ and $5 < Q^2 < 85$~GeV$^2$ are shown in Figure~\ref{figura1}a and exhibit a strong rise towards low $x$~\cite{h1forw}. Perturbative QCD apparently does not account for such a rise: the leading-order (LO) QCD calculation does not predict any rise, while the rise predicted by NLO~\cite{disent} is still much too low (see Figure~\ref{figura1}a). Some of the Feynman diagrams that are accounted for in the LO (${\cal O}(\alpha_s)$) and NLO (${\cal O}(\alpha_s^2)$) calculations are shown in Figure~\ref{figura1}b. The former has no additional gluon radiation and helps to understand why in the LO calculation there is hardly any phase space left for forward jet production. In contrast, the NLO calculation accounts for the radiation of one additional gluon. This explains the huge increase from LO to NLO: a new channel opens up, namely gluon exchange in the $t$ channel. However, this means that the NLO calculation is effectively a ``LO'' calculation for this mechanism, since no corrections to it are included. The NLO calculation should thus have large theoretical uncertainties from higher orders.
A variation of the renormalisation scale around $\mu^2_R=\langle p^2_{t,{\rm dijets}}\rangle$ does not give rise to such large theoretical uncertainties (see Figure~\ref{figura1}a). However, if $Q^2$ is instead chosen as the renormalisation scale, the resulting theoretical uncertainties are large, as pointed out in~\cite{h1forw} and shown in Figure~\ref{figura1}c: measurements of forward jet production~\cite{zeusforw} with $E^{\rm jet}_T>5$~GeV, pseudorapidity in the range $2 < \eta^{\rm jet}<4.3$, $0.5 < (E^{\rm jet}_T)^2/Q^2<2$ and $x_{\rm jet}>0.036$ in the kinematic region given by $4 \cdot 10^{-4} < x < 5 \cdot 10^{-3}$ and $20 < Q^2 < 100$~GeV$^2$ are compared to NLO QCD calculations~\cite{disent} with $\mu^2_R=Q^2$. Large theoretical uncertainties which arise from higher orders in the pQCD calculations prevent a firm conclusion. Further progress can be made by performing measurements for which genuine NLO calculations are available, i.e. with one radiated gluon at LO and two at NLO. That is the case for three-jet production, which was already studied in~\cite{h1forw} and has been investigated more thoroughly in~\cite{h1three}. The latter is discussed next. \begin{figure*}[h] \setlength{\unitlength}{1.0cm} \begin{picture} (10.0,8.0) \put (-5.0,0.0){\includegraphics[width=70mm]{d05-135f3a.eps}} \put (2.0,2.0){\includegraphics[width=50mm]{forwjet2lo.eps}} \put (7.1,-0.3){\includegraphics[width=60mm]{figurazeus071002a.eps}} \put (-1.1,-0.3){(a)} \put ( 4.1,-0.3){(b)} \put (10.1,-0.3){(c)} \put (2.3,1.3){LO ${\cal O}(\alpha_s)$} \put (5.0,1.3){NLO ${\cal O}(\alpha^2_s)$} \end{picture} \caption{Measurements of forward jet production in NC DIS as functions of $x$ (a,c). Examples of Feynman diagrams (b).} \label{figura1} \end{figure*} \section{MULTIJET PRODUCTION AT LOW {\boldmath $x$}} Some of the Feynman diagrams accounted for in the pQCD calculations for three-jet production in NC DIS are shown in Figure~\ref{figura2}a. A measurement of the differential cross section $d\sigma/dx$ as a function of $x$ for three-jet production~\cite{h1three} is presented in Figure~\ref{figura2}b. The jets are reconstructed in the $\gamma^{*}p$ frame and are required to fulfill the following conditions: the transverse momentum of each jet $p_{t,i} >4$~GeV, the sum of the transverse momenta of the two highest-$p_t$ jets above 9~GeV, the jet pseudorapidity in the laboratory frame to lie between $-1$ and $2.5$, and at least one of the jets in the central region ($-1 < \eta^{\rm lab}_{\rm jet}<1.3$). The kinematic region is defined by $10^{-4} < x < 10^{-2}$, $5 < Q^2 < 80$~GeV$^2$ and $0.1 < y <0.7$, where $y$ is the inelasticity variable. Perturbative QCD calculations are compared to the data in Figure~\ref{figura2}b: the LO (${\cal O}(\alpha_s^2)$) calculation still falls short of the data, but the NLO (${\cal O}(\alpha_s^3)$) calculation~\cite{nlojet} dramatically improves the description of the data at low $x$. \begin{figure*}[h] \setlength{\unitlength}{1.0cm} \begin{picture} (10.0,7.0) \put (-4.1,1.0){\includegraphics[width=70mm]{forwjet3lo.eps}} \put (3.1,0.7){\includegraphics[width=60mm]{d07-200f4b.eps}} \put (10.1,2.3){\includegraphics[width=40mm]{d07-200f3.eps}} \put (-1.1,-0.3){(a)} \put ( 6.1,-0.3){(b)} \put (12.1,-0.3){(c)} \put (-3.3,0.3){LO ${\cal O}(\alpha^2_s)$} \put (0.0,0.3){NLO ${\cal O}(\alpha^3_s)$} \end{picture} \caption{Examples of Feynman diagrams (a). Measurement of $d\sigma/dx$ for three-jet production in NC DIS as a function of $x$ (b).
Definition of $\theta^{\prime}$ and $\psi^{\prime}$ in the three-jet centre-of-mass frame (c).} \label{figura2} \end{figure*} The inclusion of the ${\cal O}(\alpha_s^3)$ QCD corrections improves not only the description of the measured rate but also that of the topology of the events. Measurements have been made of the distributions in the variables used to describe the topology of three-jet events in the three-jet centre-of-mass frame: the scaled energy of the jets $X^{\prime}_i\equiv 2E^{\prime}_i/(E^{\prime}_1+E^{\prime}_2+E^{\prime}_3)$ ($i=1,2$; $E^{\prime}_1 > E^{\prime}_2 > E^{\prime}_3$) and the two angles $\theta^{\prime}$ and $\psi^{\prime}$ (see Figure~\ref{figura2}c). The measurements are shown in Figure~\ref{figura3} and compared to NLO calculations~\cite{nlojet}. The inclusion of additional gluon radiation provides an improved description of the data, for example, in the $\cos{\theta^{\prime}}$ distribution: the NLO calculation follows the data and exhibits peaks at $\cos{\theta^{\prime}}=-1$ and $1$, whereas the LO calculation flattens out at $\cos{\theta^{\prime}}=-1$. \begin{figure*}[h] \setlength{\unitlength}{1.0cm} \begin{picture} (10.0,4.5) \put (-4.1,-0.3){\includegraphics[width=40mm]{d07-200f6a.eps}} \put (0.6,-0.3){\includegraphics[width=40mm]{d07-200f6b.eps}} \put (5.1,-0.3){\includegraphics[width=40mm]{d07-200f6c.eps}} \put (9.8,-0.3){\includegraphics[width=40mm]{d07-200f6d.eps}} \end{picture} \caption{Measurements of the differential cross sections as functions of the variables used to describe the topology of three-jet events in the three-jet centre-of-mass frame.} \label{figura3} \end{figure*} Further investigations of low-$x$ parton dynamics have been made by studying transverse-energy and angular correlations in dijet and trijet production in NC DIS~\cite{zeusthree}. Jets are reconstructed in the hadronic centre-of-mass (HCM) frame and required to fulfill the following conditions: $E^{\rm jet 1}_{T,{\rm HCM}}>7$~GeV, $E^{\rm jet 2,3}_{T,{\rm HCM}}>5$~GeV and $-1 < \eta^{jet 1,2,3}_{\rm lab}<2.5$. The kinematic region is given by $10^{-4} < x < 10^{-2}$, $10 < Q^2 < 100$~GeV$^2$ and $0.1 < y <0.6$. One of the most interesting angular correlations is provided by the variable $|\Delta\phi^{jet 1,2}_{\rm HCM}|$, which is defined as the azimuthal separation of the two jets with largest $E^{\rm jet}_{T,{\rm HCM}}$. For dijet events, ${\cal O}(\alpha_s)$ kinematics constrain $|\Delta\phi^{jet 1,2}_{\rm HCM}|$ to $\pi$, and ${\cal O}(\alpha^2_s)$ calculations provide the LO contribution; ${\cal O}(\alpha^3_s)$ calculations give the NLO correction. Measurements of the doubly differential cross section $d^2\sigma/d|\Delta\phi^{jet 1,2}_{\rm HCM}|dx$ for dijet production in different regions of $x$ are presented in Figure~\ref{figura4}. The ${\cal O}(\alpha^2_s)$ predictions increasingly deviate from the data as $x$ decreases, whereas ${\cal O}(\alpha^3_s)$ calculations~\cite{nlojet} provide a good description of the data even at low $x$. In summary, the study of parton dynamics at low $x$ is vigorously pursued at HERA. Precise measurements of multijet production in NC DIS have been made down to $x\sim 10^{-4}$ in terms of jet rates, topologies and correlations. Comparisons of perturbative QCD calculations with these measurements demonstrate the large impact of initial-state gluon radiation. Perturbative QCD at ${\cal O}(\alpha_s^3)$ successfully reproduces the measurements.
However, the theoretical uncertainties are still significant and the precision of the data demands that next-to-next-to-leading-order corrections be included. \begin{figure*}[t] \setlength{\unitlength}{1.0cm} \begin{picture} (10.0,12.50) \centering \put (0.0,-0.3){\includegraphics[width=100mm]{DESY-07-062_10.eps}} \end{picture} \caption{\mbox{Measurements of the doubly differential cross section $d^2\sigma/d|\Delta\phi^{jet 1,2}_{\rm HCM}|dx$ for dijet production in different regions of~$x$.}} \label{figura4} \end{figure*}
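As an illustration of the correlation observable shown in Figure~\ref{figura4}, the quantity $|\Delta\phi^{jet 1,2}_{\rm HCM}|$ can be computed from reconstructed jets in a few lines; the following toy Python sketch (an illustration only, assuming jet transverse energies and azimuthal angles already given in the HCM frame) selects the two highest-$E_T$ jets and folds the azimuthal separation into $[0,\pi]$:
\begin{verbatim}
import numpy as np

def delta_phi_leading_jets(jet_et, jet_phi):
    """|Delta phi| between the two highest-E_T jets, in [0, pi].

    jet_et, jet_phi: 1-D arrays of jet transverse energies and
    azimuthal angles (assumed to be given in the HCM frame).
    """
    order = np.argsort(jet_et)[::-1]               # leading jet first
    dphi = jet_phi[order[0]] - jet_phi[order[1]]
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi  # wrap to (-pi, pi]
    return abs(dphi)

# A back-to-back dijet topology (the O(alpha_s) limit) gives pi:
print(delta_phi_leading_jets(np.array([9.0, 7.5]),
                             np.array([0.3, 0.3 + np.pi])))
\end{verbatim}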
\section{Introduction} Let $C$ be a smooth irreducible projective curve of genus $2$ over an algebraically closed field $k$ of characteristic zero, and let $\mathcal{SU}_C(3)$ be the moduli space of rank $3$ vector bundles over $C$ with trivial determinant. Laszlo began to investigate the local structure of this moduli space in \cite[V]{localstr}: Luna's \'etale slice theorem provides a way to compute the completed local ring at any point of $\mathcal{SU}_C(3)$ as GIT quotients of affine spaces, but, as soon as the isotropy group becomes too complicated, this leads to quite intricate calculations. By translating this situation into the language of representations of quivers, we managed to work out the local structure at any point of $\mathcal{SU}_C(3)$. We have in particular obtained the following result: \begin{theorem} The moduli space of rank $3$ vector bundles over a curve of genus $2$ is a local complete intersection. \end{theorem} As we have already seen in \cite{orth}, the notion of representations of quivers proves very helpful in understanding the quotients given by Luna's result. Although it may not be clear in this note, where we could have given direct proofs avoiding such considerations, this quiver setting was the starting point which led to generating sets for the coordinate rings of the quotients. Let now $\Theta$ be the canonical Theta divisor on the variety $J^{1}$ which parametrizes line bundles of degree $1$ on $C$. It has long been known that the theta map $\theta \colon \mathcal{SU}_C(3) \to |3\Theta|$ is a double covering. Ortega has shown in \cite{angela} that its branch locus $\mathcal S \subset |3\Theta|$ is a sextic hypersurface which is the dual of the Coble cubic $\mathcal C \subset |3\Theta|^\ast$, where the Coble cubic is the unique cubic in $|3\Theta|^\ast$ which is singular along $J^1 \buildrel{|3\Theta|}\over{\longrightarrow}|3\Theta|^\ast$ (note that a different proof of this statement has been given by Nguy$\tilde{\hat{\text{e}}}$n in \cite{minh}). The last part of this paper is devoted to the local structure of the sextic $\mathcal S$. \section{Local structure of $\mathcal{SU}_C(3)$} The starting point of the local study of moduli spaces of vector bundles, which follows from Luna's slice theorem, can be found in \cite[II]{localstr}: it states that, at a closed point representing a polystable bundle $E$, the moduli space $\mathcal{SU}_C(r)$ of rank $r$ vector bundles with trivial determinant is \'etale locally isomorphic to the quotient $\mathrm{Ext}^1(E,E)_0 \git \mathrm{Aut}(E)$ at the origin, where $\mathrm{Ext}^1(E,E)_0$ denotes the kernel of $\tr \colon \mathrm{Ext}^1(E,E) \to H^1(C,\O_C)$. We thus have to understand the ring of invariants of the polynomial algebra $k[\mathrm{Ext}^1(E,E)_0]=\mathrm{Sym}({\mathrm{Ext}^1(E,E)_0}^\ast)$ under the action of $\mathrm{Aut}(E)$. A polystable bundle $E$ can be written \begin{align} \label{polystable}E=\bigoplus_{i=1}^s E_i \otimes V_i,\end{align} \noindent where the $E_i$'s are mutually non-isomorphic stable bundles (of rank $r_i$ and degree $0$), and the $V_i$'s are vector spaces (of dimension $\rho_i$). Through this splitting our data become \begin{align} \label{Ext}\mathrm{Ext}^1(E,E)=\bigoplus_{i,j} \mathrm{Ext}^1(E_i,E_j)\otimes \mathrm{Hom}(V_i,V_j),\end{align} \noindent endowed with an operation of $\displaystyle{\mathrm{Aut}(E)=\prod_i \gl(V_i)}$ coming from the natural actions of $\gl(V_i) \times \gl(V_j)$ on $\mathrm{Hom}(V_i,V_j)$.
We recognize here the setting of representations of quivers (see \cite{LBP}): consider indeed the quiver $Q$ with $s$ vertices $1,\ldots,s$, and $\dim\mathrm{Ext}^1(E_i,E_j)$ arrows from $i$ to $j$, and define $\alpha \in \mathbb N^s$ by $\alpha_i=\rho_i$. The $\mathrm{Aut}(E)$-module $\mathrm{Ext}^1(E,E)$ is then exactly the $\gl(\alpha)$-module $R(Q,\alpha)$ consisting of all representations of $Q$ of dimension $\alpha$ (we refer to (\textit{loc. cit.}) for the notation). This point of view identifies the quotient $\mathrm{Ext}^1(E,E)_0 \git \mathrm{Aut}(E)$ we have in mind with a closed subscheme of $R(Q,\alpha)\git \gl(\alpha)$, and (\textit{loc. cit.}) shows that the coordinate ring of the latter is generated by traces along oriented cycles in the quiver $Q$. But we also need a precise description of the relations between these generators (the \textit{second fundamental theorem of invariant theory}). Once we have a sufficiently convenient statement about these relations, we can describe the completed local ring of $\mathcal{SU}_C(r)$ at $E$. When $r=3$, the decomposition (\ref{polystable}) ensures that there are only five cases to deal with, according to the values of the $r_i$'s and $\rho_i$'s. \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) The case of a stable bundle is obvious, and the case $r_1=2, r_2=1$ is a special case of the situation studied in \cite[III]{localstr}: $\mathcal{SU}_C(3)$ is \'etale locally isomorphic at $E$ to a rank $4$ quadric in $\mathbb A^9$. Here quivers do not provide a shorter proof. \mylabel{theorem}{easycase} \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) Let us look at the three other cases, where every $E_i$ in (\ref{polystable}) is invertible. The generic case consists of bundles $E$ which are direct sums of $3$ distinct line bundles. This case has already been treated in \cite[V]{localstr}, but may also be recovered in a more convenient fashion as an easy consequence of \cite{LBP}: the generators of \cite[Lemma V.1]{localstr} then arise nicely as traces along closed cycles in the quiver \mylabel{theorem}{generic} \begin{equation}\label{quiver} \begin{array}{c} \xymatrix@R=34pt@C=20pt{ \bullet \ar@{-}|@{>}@/^1.5mm/[rr] \ar@{-}|@{>}@/^1.5mm/[rd]& & \bullet \ar@{-}|@{>}@/^1.5mm/[ll] \ar@{-}|@{>}@/^1.5mm/[ld]\\ & \bullet \ar@{-}|@{>}@/^1.5mm/[ul] \ar@{-}|@{>}@/^1.5mm/[ur]& } \end{array} \end{equation} \noindent (note that there should be two loops on each vertex; but $\alpha=(1,1,1)$ implies that we can restrict ourselves to the quiver (\ref{quiver})). It is also easy to infer from (\ref{quiver}) the relation found by Laszlo; but, although \cite{LBP} gives a way to produce all the relations, this description turns out to be quite inefficient even in the present case (note however that, in order to conclude here, it is enough to recall that we know a priori the dimension of $\mathrm{Ext}^1(E,E) \git \mathrm{Aut}(E)$). In the remaining two cases we already know that the tangent cone at $E$ must be a quadric (in $\mathbb A^9$) of rank $\leqslant 2$ (see \cite[V]{localstr}). We now give more precise statements. \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) \mylabel{theorem}{2by2} Suppose that $\rho_1=2$, i.e. that $E=(L\otimes V) \oplus L^{-2}$ where $L$ is a line bundle of degree $0$ with $L^3 \not\simeq \O$ and $V$ a vector space of dimension $2$.
We have to consider here the ring of invariant polynomials on the representation space $R(Q,(2,1))$ of the quiver $Q$ \begin{equation}\label{quiver2} \begin{array}{c} \xymatrix{ \bullet \ar@{-}|@{>}@/^3mm/[rr] \ar@(ur,ul)[] \ar@(dr,dl)[] & & \bullet \ar@{-}|@{>}@/^3mm/[ll] \ar@(ul,ur)[] \ar@(dl,dr)[] } \end{array} \end{equation} \noindent under the action of $\gl(V)\times\mathbb G_m$. Since the second vertex corresponds to a $1$-dimensional vector space, it is enough to consider the quiver obtained by deleting the two loops on the right, and in fact we are reduced to the action of $\gl(V)$ on $\mathrm{End}(V) \oplus \mathrm{End}(V) \oplus \mathrm{End}(V)^{\leqslant 1} \subset \mathrm{End}(V)^{\oplus 3}$, where $\mathrm{End}(V)^{\leqslant 1}$ denotes the space of endomorphisms of $V$ of rank at most $1$: this simply means that $$k[R(Q,(2,1))]^{\gl(V)\times\mathbb G_m}\simeq \left(k[R(Q,(2,1))]^{\mathbb G_m}\right)^{\gl(V)},$$ \noindent and that $k[R(Q,(2,1))]^{\mathbb G_m}$ gets naturally identified (as a $\gl(V)$-module) with $\mathrm{End}(V) \oplus \mathrm{End}(V) \oplus \mathrm{End}(V)^{\leqslant 1} \oplus k \oplus k$, the last two summands being fixed under the induced operation of $\gl(V)$. Let us now translate this discussion into a more geometric setting. Since (\ref{Ext}) identifies here $\mathrm{Ext}^1(E,E)$ with $$\left(H^1(C,\O)\otimes\mathrm{End}(V)\right)\oplus \left(H^1(C,L^{-3})\otimes V^\ast\right)\oplus\left(H^1(C,L^3)\otimes V\right)\oplus \left(H^1(C,\O)\otimes k\right),$$ \noindent we can identify the $\mathrm{Aut}(E)$-module $\mathrm{Ext}^1(E,E)_0$ with the $\gl(V)\times\mathbb G_m$-module $$\left(H^1(C,\O)\otimes\mathrm{End}(V)\right)\oplus \left(H^1(C,L^{-3})\otimes V^\ast\right)\oplus\left(H^1(C,L^3)\otimes V\right),$$ so that, up to the choice of bases of the different cohomology spaces, any element of $\mathrm{Ext}^1(E,E)_0$ can be written $(a_1,a_2,\lambda,v) \in \mathrm{End}(V) \oplus \mathrm{End}(V) \oplus V^\ast \oplus V$. The map $(a_1,a_2,\lambda,v) \mapsto (a_1,a_2,a_3=\lambda \otimes v) \in \mathrm{End}(V)^{\oplus 3}$ identifies the quotient $\mathrm{Ext}^1(E,E)_0 \git \mathrm{Aut}(E)$ with the closed subscheme of $\mathrm{End}(V)^{\oplus 3} \git \gl(V)$ defined by the equation $\det a_3=0$. A presentation of the invariant algebra $k[\mathrm{End}(V)^{\oplus 3}]^{\gl(V)}$ can be found in \cite{drensky} (note that another presentation of this ring had been previously given in \cite{formanek}): if we let $b_i$ denote the traceless endomorphism $a_i-\frac{1}{2}\tr(a_i)\mathrm{id}$, this invariant ring is generated by the following ten functions \begin{equation} \label{gen} \begin{array}{c} \displaystyle{u_i=\tr(a_i)\ \text{with $1 \leqslant i \leqslant 3$},\ v_{ij}=\tr(b_i b_j)\ \text{with $1\leqslant i \leqslant j \leqslant 3$},} \\ \\ \displaystyle{w=\sum_{\sigma\in\mathfrak{S}_3} \varepsilon(\sigma) \tr(b_{\sigma(1)}b_{\sigma(2)}b_{\sigma(3)}),} \end{array} \end{equation} \noindent subject to the single relation $w^2+18 \det(v_{ij})=0$. We have thus obtained the following result: \begin{e-proposition} If $E=\left(L \otimes V\right) \oplus L^{-2}$ with $L^3 \not\simeq \O$, then $\mathcal{SU}_C(3)$ is \'etale locally isomorphic at $E$ to the subscheme of $\mathbb A^{10}$ defined by the two equations $$X_{10}^2+18(X_4X_5X_6+2X_7X_8X_9-X_6X_7^2-X_5X_8^2-X_4X_9^2)=0$$ $$\text{ and } X_3^2-2X_6=0$$ \noindent at the origin. Its tangent cone is a double hyperplane in $\mathbb A^9$.
\end{e-proposition} \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) \mylabel{theorem}{3by3} Suppose now that $\rho_1=3$, i.e. that $E = L \otimes V$ where $V$ is a vector space of dimension $3$ (and $L$ a line bundle of order $3$). By the same argument as in \cite[Proposition V.4]{localstr}, we know that the tangent cone at such a point is a rank $1$ quadric. But an explicit description of an \'etale neighbourhood is available, thanks to \cite{two3by3}. The space $\mathrm{Ext}^1(E,E)_0$ is isomorphic to $H^1(C,\O)\otimes \mathrm{End}_0(V)$ and, if we fix a basis of $H^1(C,\O)$, any of its elements can be written $(x,y)\in \mathrm{End}_0(V)\oplus\mathrm{End}_0(V)$. The ring of invariants $k[H^1(C,\O) \otimes \mathrm{End}_0(V)]^{\gl(V)}$ is then generated by the nine functions $\tr(x^2)$, $\tr(xy)$, $\tr(y^2)$, $\tr(x^3)$, $\tr(x^2y)$, $\tr(xy^2)$, $\tr(y^3)$, $v=\tr(x^2y^2)-\tr(xyxy)$ and $w=\tr(x^2y^2xy)-\tr(y^2x^2yx)$; moreover the ideal of relations is principal, generated by an explicit equation (see (\textit{loc. cit.})). As a result of this case-by-case analysis, we conclude that $\mathcal{SU}_C(3)$ is a local complete intersection, as announced in the introduction. \section{On the local structure of $\mathcal S$} We know from \cite{angela} that the involution $\sigma$ associated to the double covering given by the theta map $$\theta \colon \mathcal{SU}_C(3) \to |3\Theta|$$ \noindent acts by $E \mapsto \iota^\ast E^\ast$, where $\iota$ stands for the hyperelliptic involution. The local study of its ramification locus thus reduces to an explicit analysis of the behaviour of $\sigma$ through the \'etale morphisms resulting from Luna's theorem. Once again this comes down to a case-by-case investigation. \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) When $E$ is stable there is nothing to say. If $E=F \oplus L$ (with $F$ a stable bundle of rank $2$ and $L=(\det F)^{-1}$) we have to understand the action of the linearization of $\sigma$ on $$\mathrm{Ext}^1(E,E)_0\simeq \mathrm{Ext}^1(F,F) \oplus \mathrm{Ext}^1(F,L) \oplus \mathrm{Ext}^1(L,F)$$ \noindent (note that we tacitly identify $\mathrm{Ext}^1(F,F)$ with its image in $\mathrm{Ext}^1(F,F) \oplus H^1(C,\O) \subset \mathrm{Ext}^1(E,E)$ by the map $\omega \mapsto (\omega,-\tr (\omega))$). Since $\sigma(E)=E$, $\iota^\ast F^\ast$ must be isomorphic to $F$, and $\sigma$ identifies $\mathrm{Ext}^1(F,L)$ and $\mathrm{Ext}^1(L,F)$; let us choose a basis $X_1,X_2$ of $\mathrm{Ext}^1(F,L)^\ast$, and call $Y_1,Y_2$ the corresponding basis of $\mathrm{Ext}^1(L,F)^\ast$. We need here to recall precisely from \cite{localstr} the explicit description of the coordinate ring of $\mathrm{Ext}^1(E,E)_0\git \mathrm{Aut}(E)$ mentioned in \ref{easycase}: it is generated by $k[\mathrm{Ext}^1(F,F)]$ and the four functions $u_{ij}=X_iY_j$, subject to the relation $u_{11}u_{22}-u_{12}u_{21}=0$. It follows from our choice that $\sigma$ maps $u_{ij}$ to $u_{ji}$. Furthermore we claim that $\sigma$ acts as the identity on $\mathrm{Ext}^1(F,F)$: as a stable bundle, $F$ corresponds to a point of the moduli space $\mathcal{U}(2,0)$, whose tangent space is precisely isomorphic to $\mathrm{Ext}^1(F,F)$. The action of $\sigma$ on this vector space is the linearization of the involution $F \in \mathcal U(2,0) \mapsto \iota^\ast F^\ast$. Using that $\mathcal U(2,0)$ is a Galois quotient of $J_C \times \mathcal{SU}_C(2)$, our claim follows from the fact that $\sigma$ is trivial on both $J_C$ and $\mathcal{SU}_C(2)$.
Since the coordinate ring of the fixed locus of $\sigma$ in $\mathrm{Ext}^1(E,E)_0\git \mathrm{Aut}(E)$ is the quotient of that of $\mathrm{Ext}^1(E,E)_0\git \mathrm{Aut}(E)$ by the involution induced by $\sigma$, we may conclude that $\mathcal S$ is \'etale locally isomorphic at $E$ to the quadric cone in $\mathbb A^8$ defined by $X_3^2-X_1 X_2=0$. \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) Consider now the situation of \ref{generic}: let us write $E=L_1\oplus L_2 \oplus L_3$ with $L_i \not\simeq L_j$ if $i\neq j$. We have $\mathrm{Ext}^1(E,E) \simeq \bigoplus_{i,j} \mathrm{Ext}^1(L_i,L_j)$; let us choose for $i \neq j$ a non-zero element $X_{ij}$ of $\mathrm{Ext}^1(L_i,L_j)^\ast$ such that $X_{ji}$ corresponds to $X_{ij}$ through the isomorphism $\mathrm{Ext}^1(L_i,L_j) \simeq \mathrm{Ext}^1(L_j,L_i)$ induced by $\sigma$ and the natural isomorphisms $\iota^\ast L_i^\ast \simeq L_i$. It then follows from \ref{generic} (see \cite{localstr} for a complete proof) that the ring $k[\mathrm{Ext}^1(E,E)_0]^{\mathrm{Aut}(E)}$ is generated by $k[\ker(\bigoplus_i \mathrm{Ext}^1(L_i,L_i) \to H^1(C,\O))]$ and the five functions $Y_1=X_{23}X_{32}$, $Y_2=X_{13}X_{31}$, $Y_3=X_{12}X_{21}$, $Y_4=X_{12}X_{23}X_{31}$, $Y_5=X_{13}X_{32}X_{21}$, subject to the relation $Y_4Y_5-Y_1Y_2Y_3=0$. One easily checks that the involution $\sigma$ fixes $k[\ker(\bigoplus_i \mathrm{Ext}^1(L_i,L_i) \to H^1(C,\O))]$, $Y_1$, $Y_2$ and $Y_3$, while it sends $Y_4$ to $Y_5$. The fixed locus $\mathrm{Fix}(\sigma)$ is then defined by the equation $Y_4-Y_5=0$, so that $\mathcal S$ is \'etale locally isomorphic to the hypersurface in $\mathbb A^8$ defined by $Z_4^2-Z_1 Z_2 Z_3=0$. Its tangent cone is a double hyperplane. \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) In the situation of \ref{2by2}, we have to make a more precise choice of the non-zero elements of $\mathrm{Ext}^1(L^{-2},L)$ and $\mathrm{Ext}^1(L,L^{-2})$, so as to make them correspond through $\sigma$ and the natural isomorphism $\iota^\ast L^\ast \simeq L$; such a choice ensures that $\sigma$ operates on $\mathrm{Ext}^1(E,E)_0$ in the following way: $$(x,y,\lambda,v)\in \mathrm{End}(V)^{\oplus 2} \oplus V{}^\ast \oplus V\mapsto ({}^tx,{}^ty,{}^t v,{}^t\lambda),$$ \noindent so that we know how it acts on the generators of $k[\mathrm{Ext}^1(E,E)_0 \git \mathrm{Aut}(E)]$ given in (\ref{gen}): $\sigma$ fixes $u_i$, $v_{ij}$, and sends $w$ to $-w$. This implies that the fixed locus is defined by the equation $w=0$. The sextic $\mathcal S$ is \'etale locally isomorphic to the subscheme of $\mathbb A^9$ whose ideal is generated by the two equations $$X_4X_5X_6+2X_7X_8X_9-X_6X_7^2-X_5X_8^2-X_4X_9^2=0\ \text{and}\ X_3^2-2X_6=0;$$ \noindent its tangent cone is therefore the cubic hypersurface of $\mathbb A^8$ defined by $2 X_7 X_8 X_9 -X_5 X_8^2-X_4 X_9^2=0$. \vspace{0.2cm}\stepcounter{theorem}\noindent(\thetheorem) We are now left with the last case, where $E$ is of the form $L \otimes V$ (with $L^3=\O$): $\mathrm{Ext}^1(E,E)_0$ is then isomorphic to $H^1(C,\O) \otimes \mathrm{End}_0(V)$, and $\sigma$ acts by $\omega\otimes a \in H^1(C,\O) \otimes \mathrm{End}_0(V) \longmapsto \omega \otimes {}^t a$. This induces an action on $k[H^1(C,\O)\otimes\mathrm{End}_0(V)]^{\mathrm{Aut}(E)}$ which fixes the first eight generators of \ref{3by3}, and acts by $-1$ on the last one, namely $w$; the fixed locus is thus defined in $\mathrm{Ext}^1(E,E)_0 \git \mathrm{Aut}(E)$ by the linear equation $w=0$.
The sextic $\mathcal S$ is then \'etale locally isomorphic to a hypersurface in $\mathbb A^8$ defined by an explicit equation; writing down this equation shows that its tangent cone is a triple hyperplane.
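As a numerical consistency check (not part of the argument above), the trace relation $w^2+18\det(v_{ij})=0$ used in the case \ref{2by2} can be verified for random traceless $2\times 2$ matrices; a minimal Python sketch:
\begin{verbatim}
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def traceless(n=2):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return a - np.trace(a) / n * np.eye(n)

b = [traceless() for _ in range(3)]
v = np.array([[np.trace(b[i] @ b[j]) for j in range(3)]
              for i in range(3)])

def sign(p):  # signature of a permutation of (0, 1, 2)
    return 1 if p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1

w = sum(sign(p) * np.trace(b[p[0]] @ b[p[1]] @ b[p[2]])
        for p in permutations(range(3)))

# vanishes up to rounding errors:
print(abs(w**2 + 18 * np.linalg.det(v)))
\end{verbatim}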
\section{Introduction} Hybrid analog and digital beamforming (HBF) design has recently been recognized as a key technology in millimeter wave (mmWave) communication systems to improve the spectral efficiency and/or energy efficiency at affordable hardware cost and power consumption \cite{sohrabi2016hybrid}-\cite{Tsinos2017on}. Although its application to multiuser multiple input and multiple output (MIMO) mmWave systems enables spatial division multiple access, there also exist significant challenges since the signals at different users cannot be cooperatively processed \cite{sohrabi2016hybrid},\cite{Payami2016hybrid}-\cite{cong2017hybrid}. In the existing studies on the multiuser HBF design, the authors in \cite{sohrabi2016hybrid} and \cite{Payami2016hybrid} investigated the HBF design in the multiuser multiple input and single output (MISO) scenario aiming at maximizing the sum achievable rate. In \cite{alkhateeb2015limited}, the authors proposed a low-complexity HBF scheme in the multiuser MIMO scenario supporting multiple data streams for each user. More recently, the authors in \cite{nguyen2017hybrid} investigated the orthogonal matching pursuit (OMP) based HBF algorithm under the minimum mean square error (MMSE) criterion. To enhance the performance, in \cite{cong2017hybrid}, the authors proposed a near-optimal multiuser MMSE HBF scheme in the MISO scenario. In this paper, we investigate the HBF design aiming to minimize the sum of the mean square errors (sum-MSE) of all users' multiple streams in a downlink multiuser MIMO mmWave system \footnote{As shown in the traditional fully digital MIMO beamforming designs \cite{palomar2003joint}, the objective of minimizing the sum-MSE results in fairer beamforming and power allocation among data streams than that of maximizing the sum-rate.}. Using the alternating minimization method \cite{Csi1984Alt}, we decompose the original problem into the hybrid precoding and combining sub-problems. For the former sub-problem, we derive the optimal digital precoder based on the Karush-Kuhn-Tucker (KKT) conditions and optimize the analog one via generalized eigen-decomposition (GEVD). For the latter, we derive a closed-form expression of the digital combiners under the unitary constraint and optimize the analog combiners via GEVD by replacing the sum-MSE by its lower bound. Simulation results show that the proposed MMSE HBF scheme outperforms the conventional HBF schemes and performs close to fully digital beamforming. \textit{Notations}: $\mathbf{A}$ is a matrix, $\mathbf{a}$ is a vector, and $a$ is a scalar. $\mathbf{I}_N$ is an $N\times N$ identity matrix. $\mathrm{blkdiag}\lbrace \mathbf{A}_1, \mathbf{A}_2, \dots, \mathbf{A}_N\rbrace$ returns a block diagonal matrix with sub-matrices $\mathbf{A}_1, \mathbf{A}_2, \dots, \mathbf{A}_N$ on its diagonal. $\mathbf{A}^T$, $\mathbf{A}^H$ and $\mathbf{A}^{-1}$ are the transpose, conjugate transpose and inverse of matrix $\mathbf{A}$. $\mathrm{tr}\left(\mathbf{A} \right)$ denotes the trace of matrix $\mathbf{A}$. $\mathrm{Re}\lbrace\cdot\rbrace$ denotes the real component of a complex variable. $\|\cdot\|_1$, $\|\cdot\|$ and $\|\cdot\|_\infty$ are the $1$-norm, $2$-norm and $\infty$-norm, respectively. $\mathcal{CN}\left(\mathbf{a},\mathbf{A}\right)$ denotes the circularly symmetric complex Gaussian distribution with mean $\mathbf{a}$ and covariance matrix $\mathbf{A}$. $\mathrm{E}\{\cdot \}$ denotes the expectation operator.
\section{System Model}\label{sec:systemmodel} \begin{figure}[t] \begin{center} \centering \includegraphics*[width=3.5in]{systemmodel} \caption{Diagram of the downlink of a multiuser MIMO mmWave system with the hybrid precoding and combining architecture.} \label{model} \end{center} \end{figure} Consider the downlink of a narrowband multiuser mmWave MIMO system shown in Fig. \ref{model}, where a base station (BS) with $N_\mathrm{t}$ transmit antennas and $N_\mathrm{t}^\mathrm{RF}$ RF chains serves a total of $K$ users, each of which is equipped with $N_\mathrm{r}$ receive antennas and $N_\mathrm{r}^\mathrm{RF}$ RF chains and requires $N_\mathrm{s}$ independent data streams. It is assumed that $KN_\mathrm{s}\leq N_\mathrm{t}^\mathrm{RF}\ll N_\mathrm{t}$ and $N_\mathrm{s}\leq N_\mathrm{r}^\mathrm{RF}\leq N_\mathrm{r}$ due to the high cost and power consumption of RF devices. Throughout this letter, the fully connected RF precoder/combiner structure \cite{yu2016alternating} is considered. At the BS, the users' data streams are processed with a baseband precoder $\mathbf{V}_\mathrm{D}$ followed by an RF precoder $\mathbf{V}_\mathrm{RF}$. Thus, the precoded signal is given by $\mathbf{x} = \mathbf{V}\mathbf{s} = \mathbf{V}_\mathrm{RF}\mathbf{V}_\mathrm{D}\mathbf{s} = \sum_{k=1}^K\mathbf{V}_\mathrm{RF}\mathbf{V}_{\mathrm{D},k}\mathbf{s}_{k}$, where $\mathbf{V}=\mathbf{V}_\mathrm{RF}\mathbf{V}_\mathrm{D}$ denotes the hybrid precoding matrix with $\mathbf{V}_\mathrm{D}=\left[\mathbf{V}_\mathrm{D,1},\dots,\mathbf{V}_{\mathrm{D},K}\right]$ and $\mathbf{V}_{\mathrm{D},k}$ being an $N_\mathrm{t}^\mathrm{RF}\times N_\mathrm{s}$ matrix for $k=1,\dots,K$, and $\mathbf{s}=[\mathbf{s}_{1}^T,\dots,\mathbf{s}_{K}^T]^T$ with $\mathrm{E}\{\mathbf{s}\mathbf{s}^H\}=\mathbf{I}_{KN_\mathrm{s}}$ is the $KN_\mathrm{s}\times1$ vector of all users' transmitted symbols, with $\mathbf{s}_{k}$ defined as the symbol vector of user $k$. Furthermore, it is assumed that $\mathrm{tr}\left(\mathbf{V}_\mathrm{RF}\mathbf{V}_\mathrm{D}\mathbf{V}_\mathrm{D}^H\mathbf{V}_\mathrm{RF}^H\right)\leq P$, where $P$ is the maximum transmit power of the BS. Assuming a frequency-flat fading MIMO channel between the BS and user $k$, the received signal vector at user $k$ is $\mathbf{y}_{k} = \mathbf{H}_k\mathbf{x}+\mathbf{e}_{k}$, where $\mathbf{e}_{k}\sim\mathcal{CN}\left(0,\sigma^2\mathbf{I}_{N_\mathrm{r}}\right)$ denotes the noise vector, and the channel response $\mathbf{H}_k$ is modeled as \begin{equation} \mathbf{H}_k=\sqrt{\frac{N_\mathrm{t}N_\mathrm{r}}{L}}\sum\limits_{l=1}^L\alpha_{l,k}\mathbf{a}_\mathrm{r} \left(\phi_{\mathrm{r},k}^l\right)\mathbf{a}_\mathrm{t}^H\left(\phi_{\mathrm{t},k}^l\right), \label{channel} \end{equation} where $\alpha_{l,k}$, $\phi_{\mathrm{r},k}^l$ and $\phi_{\mathrm{t},k}^l$ denote the complex gain, the angle of arrival (AoA) and the angle of departure (AoD) corresponding to the $l$th path, respectively. Further, $\mathbf{a}_\mathrm{r}\left(.\right)$ and $\mathbf{a}_\mathrm{t}\left(.\right)$ are the antenna array response vectors at a user and the BS, respectively. Considering uniform linear arrays, we have $\mathbf{a}_i\left(\phi\right)=\frac{1}{\sqrt{N_i}}\left[1,e^{\mathrm{j}k_0d\sin\left(\phi\right)},\dots, e^{\mathrm{j}k_0d\left(N_i-1\right)\sin\left(\phi\right)}\right]^T$, where $i\in \{ \mathrm{r},\mathrm{t}\}$, $\mathrm{j}=\sqrt{-1}$, $k_0=2\pi/\lambda_c$, $\lambda_c$ is the wavelength, and $d$ is the antenna spacing. It is assumed that $\mathbf{H}_1,\dots,\mathbf{H}_K$ are perfectly known at the BS.
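The geometric channel model \eqref{channel} is straightforward to implement; the following Python sketch (an illustrative implementation under the stated assumptions, with variable names of our choosing) generates one user's channel matrix:
\begin{verbatim}
import numpy as np

def ula_response(n, phi, d_over_lambda=0.5):
    """Uniform linear array response vector a(phi) of length n."""
    k0d = 2.0 * np.pi * d_over_lambda
    return np.exp(1j * k0d * np.arange(n) * np.sin(phi)) / np.sqrt(n)

def mmwave_channel(nt, nr, L, rng):
    """Geometric channel model above: L paths with CN(0,1) gains
    and uniformly distributed AoAs/AoDs in [0, 2*pi)."""
    alpha = (rng.standard_normal(L)
             + 1j * rng.standard_normal(L)) / np.sqrt(2.0)
    phi_r = rng.uniform(0.0, 2.0 * np.pi, L)
    phi_t = rng.uniform(0.0, 2.0 * np.pi, L)
    H = np.zeros((nr, nt), dtype=complex)
    for l in range(L):
        H += alpha[l] * np.outer(ula_response(nr, phi_r[l]),
                                 ula_response(nt, phi_t[l]).conj())
    return np.sqrt(nt * nr / L) * H

H1 = mmwave_channel(nt=256, nr=16, L=20,
                    rng=np.random.default_rng(0))
\end{verbatim}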
For each user, the received signal is first processed with an analog combiner $\mathbf{W}_{\mathrm{RF},k}$, then a low-dimensional digital combiner $\mathbf{W}_{\mathrm{D},k}$, and finally a symbol estimator denoted by a scalar factor $\beta$ \cite{joung2007regularized}. That is, \begin{equation} \begin{split} \mathbf{\hat{s}}_{k}=&\beta\mathbf{W}_{\mathrm{D},k}^H\mathbf{W}_{\mathrm{RF},k}^H\mathbf{H}_k\mathbf{V}_k \mathbf{s}_{k}+\beta\mathbf{W}_{\mathrm{D},k}^H\mathbf{W}_{\mathrm{RF},k}^H\mathbf{H}_k\sum_{f\neq k}\mathbf{V}_f\mathbf{s}_{f}\\ &+\beta\mathbf{W}_{\mathrm{D},k}^H\mathbf{W}_{\mathrm{RF},k}^H\mathbf{e}_{k},\nonumber \end{split} \end{equation} where the three terms on the right-hand side represent the desired signal, the inter-user interference and the noise, respectively. Define the MSE of user $k$ as $J_k=\mathrm{E}\{||\mathbf{s}_{k}-\mathbf{\hat{s}}_{k}||^2\}$. By substituting the above equation into this definition, we have \begin{equation} \begin{split} J_k=&\mathrm{tr}\left(\beta^2\mathbf{W}_k^H\mathbf{H}_k\mathbf{V}\mathbf{V}^H\mathbf{H}_k^H\mathbf{W}_k+ \beta^2\sigma^2\mathbf{W}_k^H\mathbf{W}_k+\mathbf{I}_{N_\mathrm{s}}\right)\\ &-2\mathrm{Re}\lbrace\mathrm{tr}\left(\beta\mathbf{V}_k^H\mathbf{H}_k^H\mathbf{W}_k\right)\rbrace, \end{split} \label{sumMSE} \end{equation} where $\mathbf{W}_k=\mathbf{W}_{\mathrm{RF},k}\mathbf{W}_{\mathrm{D},k}$ and $\mathbf{V}_k=\mathbf{V}_\mathrm{RF}\mathbf{V}_{\mathrm{D},k}$. Since $\mathbf{V}_\mathrm{RF}$ and $\mathbf{W}_{\mathrm{RF},k}$ are implemented using phase shifters, we introduce the constant modulus constraint on each entry of the analog beamformers. The objective in this letter is to minimize the sum-MSE of all users' multiple streams. Thus, the HBF optimization problem is formulated as follows: \begin{equation} \begin{split} \underset{\mathbf{V}_\mathrm{D},\mathbf{V}_\mathrm{RF},\mathbf{W}_{\mathrm{D},k},\mathbf{W}_{\mathrm{RF},k},\beta}{\text{minimize}} & J_{\mathrm{sum}}=\sum_{k=1}^KJ_k \\ \text{subject to}\;\;\;\;\;\;\;\;\; & \mathrm{tr}\left(\mathbf{V}_\mathrm{RF}\mathbf{V}_\mathrm{D}\mathbf{V}_\mathrm{D}^H\mathbf{V}_\mathrm{RF}^H\right)\leq P\\ & |\mathbf{V}_\mathrm{RF}(\mathit{i},\mathit{j})|^2=1,\;\forall \mathit{i},\mathit{j} \\ & |\mathbf{W}_{\mathrm{RF},k}(\mathit{p},\mathit{q})|^2=1,\;\forall \mathit{p},\mathit{q},\mathit{k}. \end{split} \label{optproblem} \end{equation} \section{Hybrid MMSE Precoder and Combiners Design}\label{design} As the problem in \eqref{optproblem} is nonconvex and difficult to solve optimally, based on the alternating minimization method, we propose an HBF scheme to alternately optimize the hybrid precoder of the BS and the hybrid combiners of the users. \subsection{Hybrid Precoder Design}\label{subsec:hybrid-prec} By fixing all users' hybrid combiners, we have the following BS hybrid precoding optimization sub-problem: \begin{equation} \begin{split} \underset{\mathbf{V}_\mathrm{D},\mathbf{V}_\mathrm{RF},\beta}{\text{minimize}}\;\;\;\; & J_{\mathrm{sum}} \\ \text{subject to}\;\;\;\; & \mathrm{tr}\left(\mathbf{V}_\mathrm{RF}\mathbf{V}_\mathrm{D}\mathbf{V}_\mathrm{D}^H\mathbf{V}_\mathrm{RF}^H\right)\leq P\\ &|\mathbf{V}_\mathrm{RF}(\mathit{i},\mathit{j})|^2=1,\;\forall \mathit{i},\mathit{j}, \end{split} \end{equation} where the scalar factor $\beta$ is jointly optimized with $\mathbf{V}_\mathrm{D}$ and $\mathbf{V}_\mathrm{RF}$ for better performance since now the noise effect is considered in the precoder design.
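Before deriving the solutions, note that the objective of \eqref{optproblem} can be evaluated directly from \eqref{sumMSE}; the following Python sketch (our own notation, not part of the letter) computes the sum-MSE for given hybrid beamformers:
\begin{verbatim}
import numpy as np

def sum_mse(H_list, V, W_list, beta, sigma2, Ns):
    """Sum-MSE over all users, following the MSE expression above.

    H_list: per-user channels H_k (Nr x Nt)
    V:      overall precoder V_RF @ V_D (Nt x K*Ns)
    W_list: per-user combiners W_k = W_RF,k @ W_D,k (Nr x Ns)
    """
    J = 0.0
    for k, (Hk, Wk) in enumerate(zip(H_list, W_list)):
        Vk = V[:, k * Ns:(k + 1) * Ns]   # user k's block of V
        J += np.real(np.trace(
                beta**2 * Wk.conj().T @ Hk @ V @ V.conj().T
                @ Hk.conj().T @ Wk
                + beta**2 * sigma2 * Wk.conj().T @ Wk
                + np.eye(Ns)))
        J -= 2.0 * np.real(np.trace(
                beta * Vk.conj().T @ Hk.conj().T @ Wk))
    return J
\end{verbatim}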
\subsubsection{Digital Precoder Design} We first fix $\mathbf{V}_\mathrm{RF}$ and optimize $\beta$ and $\mathbf{V}_\mathrm{D}$. As shown in \cite{cong2017hybrid}, the original precoder $\mathbf{V}_\mathrm{D}$ can be separated as $\mathbf{V}_\mathrm{D} =\beta^{-1}\widetilde{\mathbf{V}}_\mathrm{D}$, where $\widetilde{\mathbf{V}}_\mathrm{D}$ is an unconstrained baseband precoder and $\beta$ serves to guarantee the transmit power constraint. Based on the KKT conditions, it can be shown that the optimal $\mathbf{V}_\mathrm{D}$ and $\beta$ are given by \begin{equation} \begin{split} \widetilde{\mathbf{V}}_\mathrm{D}&=\left(\mathbf{V}_\mathrm{RF}^H\mathbf{H}^H\mathbf{W}\mathbf{W}^H\mathbf{H} \mathbf{V}_\mathrm{RF}+\lambda\mathbf{V}_\mathrm{RF}^H\mathbf{V}_\mathrm{RF}\right)^{-1}\mathbf{V}_\mathrm{RF}^H \mathbf{H}^H\mathbf{W},\\ \beta&=\sqrt{\mathrm{tr}(\mathbf{V}_\mathrm{RF}\widetilde{\mathbf{V}}_\mathrm{D} \widetilde{\mathbf{V}}_\mathrm{D}^H\mathbf{V}_\mathrm{RF}^H)/P},\nonumber \end{split} \end{equation} where $\mathbf{H}=\left[\mathbf{H}_1^T, \dots, \mathbf{H}_K^T\right]^T$, $\lambda=\sigma^2\mathrm{tr}\left(\mathbf{W}^H\mathbf{W}\right)\big/P$, and $\mathbf{W}=\mathrm{blkdiag}\lbrace \mathbf{W}_1, \dots, \mathbf{W}_K\rbrace$ is a block diagonal matrix with all users' hybrid combining matrices on the diagonal. \subsubsection{Analog Precoder Design} By substituting the above optimal digital precoder into the sum-MSE and using the matrix inversion lemma, we have \begin{equation} \begin{split} J_{\mathrm{sum}}=&\mathrm{tr}\Big(\big(\mathbf{I}_{KN_\mathrm{s}} +\frac{1}{\lambda} \mathbf{W}^H\mathbf{H}\mathbf{V}_\mathrm{RF}(\mathbf{V}_\mathrm{RF}^H\mathbf{V}_\mathrm{RF})^{-1}\\ &\quad\quad\times\mathbf{V}_\mathrm{RF}^H\mathbf{H}^H\mathbf{W}\big)^{-1}\Big), \end{split} \label{MSEequation} \end{equation} which is now a function of $\mathbf{V}_\mathrm{RF}$ to be further optimized. Due to the fact that the BS is equipped with a large number of transmit antennas, the analog beamforming vectors are approximately orthogonal to each other \cite{sohrabi2016hybrid}, i.e., $\mathbf{V}_\mathrm{RF}^H\mathbf{V}_\mathrm{RF}\approx N_\mathrm{t}\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}}$. Under this approximation and further using the Sherman-Morrison formula, the sum-MSE in \eqref{MSEequation} can be separated into two terms that are related respectively to a column in $\mathbf{V}_\mathrm{RF}$, denoted by $\mathbf{v}^{(j)}_\mathrm{RF}$, and the remaining sub-matrix, denoted by $\overline{\mathbf{V}}^{(j)}_\mathrm{RF}$, after removing $\mathbf{v}^{(j)}_\mathrm{RF}$ from $\mathbf{V}_\mathrm{RF}$. That is, \begin{equation} \begin{split} J_{\mathrm{sum}}{\approx}&\mathrm{tr}\left(\left(\mathbf{I}_{KN_\mathrm{s}}+\frac{1}{\eta}\mathbf{W}^H \mathbf{H}\mathbf{V}_\mathrm{RF}\mathbf{V}_\mathrm{RF}^H\mathbf{H}^H\mathbf{W}\right)^{-1}\right)\\ =&\mathrm{tr}\left(\mathbf{A}_{\mathrm{t},j}^{-1}\right)-\frac{\mathbf{v}_\mathrm{RF}^{(j) H}\left(\frac{1}{\eta}\mathbf{H}^H\mathbf{W}\mathbf{A}_{\mathrm{t},j}^{-2}\mathbf{W}^H\mathbf{H}\right) \mathbf{v}_\mathrm{RF}^{(j)}}{\mathbf{v}_\mathrm{RF}^{(j) H}\left(\frac{1}{N_\mathrm{t}}\mathbf{I}+\frac{1}{\eta}\mathbf{H}^H\mathbf{W}\mathbf{A}_{\mathrm{t},j}^{-1} \mathbf{W}^H\mathbf{H}\right)\mathbf{v}_\mathrm{RF}^{(j)}}, \end{split} \label{process_t} \end{equation} where $\mathbf{A}_{\mathrm{t},j}=\mathbf{I}+\frac{1}{\eta}\mathbf{W}^H\mathbf{H}\overline{\mathbf{V}}_\mathrm{RF}^{(j)} (\overline{\mathbf{V}}_\mathrm{RF}^{(j)})^H\mathbf{H}^H\mathbf{W}$ and $\eta=N_\mathrm{t}\lambda$.
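The last term of \eqref{process_t} is a generalized Rayleigh quotient in $\mathbf{v}_\mathrm{RF}^{(j)}$, so each column update reduces to a generalized eigenvalue problem, as discussed next; a minimal Python sketch of one such update (with $\mathbf{B}$ and $\mathbf{D}$ standing for the numerator and denominator matrices of the quotient, and including the phase-extraction step for the constant modulus constraint described below):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def update_rf_column(B, D):
    """One analog beamformer column update via GEVD.

    B, D: Hermitian Nt x Nt matrices (D positive definite) built
    from the numerator/denominator of the Rayleigh quotient.
    Returns a constant-modulus vector approximately maximizing
    v^H B v / v^H D v.
    """
    w, U = eigh(B, D)                 # generalized eigendecomposition
    v = U[:, np.argmax(w)]            # dominant generalized eigenvector
    return np.exp(1j * np.angle(v))   # phase extraction
\end{verbatim}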
A closer inspection of \eqref{process_t} reveals that $\mathbf{V}_\mathrm{RF}$ can be optimized column-by-column. Specifically, $\mathbf{v}_\mathrm{RF}^{(j)}$ can be optimized by maximizing the last term in \eqref{process_t}. Define $\mathbf{B}_{\mathrm{t},j}=\frac{1}{\eta}\mathbf{H}^H\mathbf{W}\mathbf{A}_{\mathrm{t},j}^{-2}\mathbf{W}^H\mathbf{H}$ and $\mathbf{D}_{\mathrm{t},j}=\frac{1}{N_\mathrm{t}}\mathbf{I}+\frac{1}{\eta}\mathbf{H}^H \mathbf{W}\mathbf{A}_{\mathrm{t},j}^{-1}\mathbf{W}^H\mathbf{H}$. It can be shown that, by fixing the other columns of the RF precoder and ignoring the constant modulus constraint, the optimal $\mathbf{v}_\mathrm{RF}^{(j)}$ is the eigenvector associated with the largest generalized eigenvalue of the matrix pair $\mathbf{B}_{\mathrm{t},j}$ and $\mathbf{D}_{\mathrm{t},j}$. Considering the constant modulus constraint, a sub-optimal solution of $\mathbf{v}_\mathrm{RF}^{(j)}$ can be obtained by directly extracting the phase of each element of the eigenvector, similar to that in \cite{yu2016alternating,Payami2016hybrid}. Here the phase extraction is performed before the optimization of the next column, i.e., $\mathbf{v}_\mathrm{RF}^{(j+1)}$. Note that although convergence of the iteration cannot be proved due to the phase extraction, simulation results in Section IV will show that the overall performance of the proposed HBF scheme converges fast. \subsection{Hybrid Combiners Design}\label{subsec:hybrid-comb} We now consider the hybrid combiners design with the optimized precoder. We first optimize the users' digital combiners by fixing the analog ones. Inspired by \cite{sohrabi2016hybrid,yu2016alternating}, a similar constraint that the columns of the digital combiner of user $k$ are mutually orthogonal is imposed. That is, \begin{equation} \mathbf{W}_{\mathrm{D},k}^H\mathbf{W}_{\mathrm{D},k}=\gamma\mathbf{I}_{N_\mathrm{s}}, \label{supposing} \end{equation} where $\gamma>0$. Note that $\gamma$ can be absorbed into the $\beta$ factor. Thus, in the following, $\gamma$ is set to 1 without loss of generality. \subsubsection{Digital Combiners Design} From the constraint \eqref{supposing}, it can be shown that $\mathbf{W}_{\mathrm{D},k}\mathbf{W}_{\mathrm{D},k}^H=\mathbf{Z}_k\begin{bmatrix} \mathbf{I}_{N_\mathrm{s}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0} \end{bmatrix}\mathbf{Z}_k^H$, where $\mathbf{Z}_k$ is an $N_\mathrm{r}^\mathrm{RF}\times N_\mathrm{r}^\mathrm{RF}$ unitary matrix. By substituting this result into \eqref{sumMSE} and further fixing $\mathbf{V}_\mathrm{RF}$, $\mathbf{V}_\mathrm{D}$, $\mathbf{W}_{\mathrm{RF},k}$ and $\beta$ in \eqref{sumMSE}, it can be found that only the last term in \eqref{sumMSE} is a function of $\mathbf{W}_{\mathrm{D},k}$. By applying this observation to the objective function of \eqref{optproblem} and removing the terms that are not related to $\mathbf{W}_{\mathrm{D},k}$, the optimization problem \eqref{optproblem} is now converted into \begin{equation} \begin{split} \underset{\mathbf{W}_{\mathrm{D},k}}{\text{maximize}} \;\;\; & \sum_{k=1}^K\mathrm{Re}\{\mathrm{tr}\left(\beta\mathbf{V}_{\mathrm{D},k}^H\mathbf{V}_\mathrm{RF}^H \mathbf{H}_k^H\mathbf{W}_{\mathrm{RF},k}\mathbf{W}_{\mathrm{D},k}\right)\} \\ \text{subject to}\;\;\; & \mathbf{W}_{\mathrm{D},k}^H\mathbf{W}_{\mathrm{D},k}=\mathbf{I}_{N_\mathrm{s}}, \text{for}\;k=1,\dots,K. \end{split}\nonumber \end{equation} It turns out that this problem is still difficult to solve directly.
Instead, the optimization can be carried out by aiming at its upper bound, which is $\sum_{k=1}^K\mathrm{Re}\{\mathrm{tr}(\mathbf{G}_k\mathbf{W}_{\mathrm{D},k})\}\leq \sum_{k=1}^K|\mathrm{tr}(\mathbf{G}_k\mathbf{W}_{\mathrm{D},k})|$, where $\mathbf{G}_k=\beta\mathbf{V}_{\mathrm{D},k}^H\mathbf{V}_\mathrm{RF}^H\mathbf{H}_k^H\mathbf{W}_{\mathrm{RF},k}$. By using H\"{o}lder's inequality \cite{horn1990matrix}, we have \begin{equation} \sum_{k=1}^K\mid\mathrm{tr}\left(\mathbf{G}_k\mathbf{W}_{\mathrm{D},k}\right)\mid \leq\sum_{k=1}^K\|\mathbf{W}_{\mathrm{D},k}^H\|_\infty\cdot\|\mathbf{G}_k\|_1. \label{inequ} \end{equation} With the unitary constraint in \eqref{supposing}, we have $\|\mathbf{W}_{\mathrm{D},k}^H\|_\infty=1$. Taking the singular value decomposition (SVD) of $\mathbf{G}_k$, we have $\mathbf{G}_k=\mathbf{U}\Sigma\mathbf{R}^H=\mathbf{U}\mathbf{S}\mathbf{R}_1^H$, where $\mathbf{S}$ is a diagonal matrix containing the first $N_\mathrm{s}$ nonzero singular values, and $\mathbf{R}_1$ contains the associated singular vectors in $\mathbf{R}$. It can be shown that the equality in \eqref{inequ} is satisfied when $\mathbf{W}_{\mathrm{D},k}=\mathbf{R}_1\mathbf{U}^H$. \subsubsection{Analog Combiners Design} Recall the expression of the sum-MSE in \eqref{MSEequation} after the optimization of the digital precoder. Due to the constant modulus constraint on $\mathbf{W}_{\mathrm{RF},k}$ and the unitary constraint of \eqref{supposing}, the variable $\lambda$ in \eqref{MSEequation} is equal to $\lambda=\frac{\sigma^2KN_\mathrm{r}N_\mathrm{s}}{P}$. The sum-MSE in \eqref{MSEequation} under the approximation of $\mathbf{V}_\mathrm{RF}^H\mathbf{V}_\mathrm{RF}\approx N_\mathrm{t}\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}}$ can be expressed as \begin{equation} \begin{split} J_{\mathrm{sum}} = & \lambda\mathrm{tr} \big((\mathbf{V}_\mathrm{RF}^H\mathbf{H}^H\mathbf{W} \mathbf{W}^H \mathbf{H} \mathbf{V}_\mathrm{RF} +\lambda\mathbf{V}_\mathrm{RF}^H\mathbf{V}_\mathrm{RF})^{-1} \\ & \quad\quad\times\mathbf{V}_\mathrm{RF}^H\mathbf{V}_\mathrm{RF}\big) + KN_\mathrm{s}-N_\mathrm{t}^\mathrm{RF}\\ \approx & \eta J(\mathbf{W}_{\mathrm{RF},k}) + KN_\mathrm{s}-N_\mathrm{t}^\mathrm{RF}, \label{MSEr} \end{split} \end{equation} where \begin{equation} \begin{split} &J(\mathbf{W}_{\mathrm{RF},k})\\=&\mathrm{tr}\big((\mathbf{V}_\mathrm{RF}^H\mathbf{H}^H\mathbf{W}\mathbf{W}^H \mathbf{H}\mathbf{V}_\mathrm{RF}+\eta\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}\big)\\ =&\mathrm{tr}\Big(\big(\sum\limits_{k=1}^K\overline{\mathbf{H}}_k^H\mathbf{W}_{\mathrm{RF},k} \mathbf{W}_{\mathrm{D},k}\mathbf{W}_{\mathrm{D},k}^H\mathbf{W}_{\mathrm{RF},k}^H \overline{\mathbf{H}}_k+\eta \mathbf{I}_{N_\mathrm{t}^\mathrm{RF}}\big)^{-1}\Big), \end{split}\nonumber \end{equation} with $\overline{\mathbf{H}}_k=\mathbf{H}_k\mathbf{V}_\mathrm{RF}$. It turns out that it is still difficult to minimize $J\left(\mathbf{W}_{\mathrm{RF},k}\right)$ and further mathematical manipulation is needed. Thus, we introduce the following proposition. \begin{proposition} Define $\mathbf{\Omega}=\left[ \begin{array}{cc} \mathbf{I}_{N_\mathrm{s}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\\ \end{array} \right]$. It can be shown that $\mathrm{tr}\left((\mathbf{A}^H\mathbf{{\Omega}}\mathbf{A}+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}\right)\geq\mathrm{tr} \left((\mathbf{A}^H\mathbf{A}+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}\right)$, where $\mathbf{A}$ is an arbitrary $N_\mathrm{r}^\mathrm{RF}\times N_\mathrm{t}^\mathrm{RF}$ matrix.
\label{theo1} \end{proposition} $\mathnormal{Proof}$: First define two matrices $\mathbf{A}_1=\mathbf{A}^H\mathbf{{\Omega}}\mathbf{A}$ and $\mathbf{A}_2=\mathbf{A}^H(\mathbf{I}_{N_\mathrm{r}^\mathrm{RF}}-\mathbf{{\Omega}})\mathbf{A}$. It can be shown that $\mathrm{tr}\left((\mathbf{A}^H\mathbf{A}+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}\right)=\mathrm{tr} \left((\mathbf{A}_1+\mathbf{A}_2+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}\right)$. Denote the eigenvalues of $\mathbf{A}_1+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}}$ and those of $\mathbf{A}_1+\mathbf{A}_2+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}}$ by $\mu_1\leq\mu_2\leq\ldots\leq\mu_{N_\mathrm{t}^\mathrm{RF}}$ and $\upsilon_1\leq\upsilon_2\leq\ldots\leq\upsilon_{N_\mathrm{t}^\mathrm{RF}}$, respectively. According to Weyl's theorem \cite{horn1990matrix}, we have $\mu_j\leq\upsilon_j$, for $j=1,\dots,N_\mathrm{t}^\mathrm{RF}$ and \begin{equation} \mathrm{tr}((\mathbf{A}^H\mathbf{A}+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1})=\sum_j\frac{1}{\upsilon_j} \leq \sum_j\frac{1}{\mu_j}=\mathrm{tr}((\mathbf{A}_1+\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}),\nonumber \end{equation} where the equality holds when $N_\mathrm{r}^\mathrm{RF}=N_\mathrm{s}$. \hfill $\blacksquare$ \begin{figure*}[!t] \centering \begin{center} \centerline{\subfigure[]{\includegraphics[width=3.6cm]{BERvsSNR_256_16_2_8}\label{fig:BERvsSNR_256_16_2_8}} \hfil \subfigure[]{\includegraphics[width=3.6cm]{MSEvsNUM256_16_2_8_1}\label{fig:MSEvsNUM256_16_2_8}}\hfil \subfigure[]{\includegraphics[width=3.6cm]{BERvsRFnew}\label{fig:BERvsRFnew}}\hfil}\caption{Comparison of different beamforming schemes in an 8-user mmWave MIMO system. (a) BER vs. SNR. (b) Sum-MSE vs. $N_\mathrm{it}$. (c) BER vs. $N_\mathrm{t}^\mathrm{RF}$.} \label{fig:sim} \end{center} \end{figure*} By using Proposition \ref{theo1} and the fact that $\mathbf{W}_{\mathrm{D},k}\mathbf{W}_{\mathrm{D},k}^H=\mathbf{Z}_k\mathbf{\Omega}\mathbf{Z}_k^H$ from the orthogonal constraint in (\ref{supposing}), we have \begin{equation} \begin{split} J(\mathbf{W}_{\mathrm{RF},k}) =&\mathrm{tr}\Big((\sum\limits_{k=1}^K\overline{\mathbf{H}}_k^H\mathbf{W}_{\mathrm{RF},k}\mathbf{Z}_k \mathbf{{\Omega}}\mathbf{Z}_k^H\mathbf{W}_{\mathrm{RF},k}^H \overline{\mathbf{H}}_k+\eta\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}\Big)\\ \geq&\mathrm{tr}\Big((\sum\limits_{k=1}^K\overline{\mathbf{H}}_k^H\mathbf{W}_{\mathrm{RF},k} \mathbf{W}_{\mathrm{RF},k}^H\overline{\mathbf{H}}_k+\eta\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}})^{-1}\Big). \end{split}\nonumber \end{equation} Now the users' analog combiners can be optimized by minimizing the lower bound of $J\left(\mathbf{W}_{\mathrm{RF},k}\right)$, which is denoted by $J_\mathrm{LB}\left(\mathbf{W}_{\mathrm{RF},k}\right)$. It turns out that $\mathbf{W}_{\mathrm{RF},k}$ can be optimized column-by-column via the GEVD method.
Specifically, with the definition of $\mathbf{A}_{\mathrm{r},j,k}=\sum_{f=1,f\neq k}^K\overline{\mathbf{H}}_f^H\mathbf{W}_{\mathrm{RF},f}\mathbf{W}_{\mathrm{RF},f}^H \overline{\mathbf{H}}_f+\overline{\mathbf{H}}_k^H\overline{\mathbf{W}}_{\mathrm{RF},k}^{(j)} (\overline{\mathbf{W}}_{\mathrm{RF},k}^{(j)})^H\overline{\mathbf{H}}_k+\eta\mathbf{I}_{N_\mathrm{t}^\mathrm{RF}}$, $J_\mathrm{LB}\left(\mathbf{W}_{\mathrm{RF},k}\right)$ becomes \begin{equation} \begin{split} &J_\mathrm{LB}(\mathbf{W}_{\mathrm{RF},k})\\ =&\mathrm{tr}(\mathbf{A}_{\mathrm{r},j,k}^{-1})-\frac{\mathbf{w}_{\mathrm{RF},k}^{(j)H} (\overline{\mathbf{H}}_k\mathbf{A}_{\mathrm{r},j,k}^{-2}\overline{\mathbf{H}}_k^H) \mathbf{w}_{\mathrm{RF},k}^{(j)}}{\mathbf{w}_{\mathrm{RF},k}^{(j)H} (\frac{1}{N_\mathrm{r}}\mathbf{I}+\overline{\mathbf{H}}_k\mathbf{A}_{\mathrm{r},j,k}^{-1} \overline{\mathbf{H}}_k^H)\mathbf{w}_{\mathrm{RF},k}^{(j)}}. \end{split}\nonumber \end{equation} By comparing it with \eqref{process_t}, it can be found that they have the same form and thus $\mathbf{w}_{\mathrm{RF},k}^{(j)}$ can be optimized in the same way. Finally, by using the alternating minimization method, the hybrid precoder and the hybrid combiners are alternately optimized until a stopping condition is satisfied. \section{Simulation Results and Conclusion}\label{result} Consider a multiuser ($K=8$) mmWave MIMO system with $N_\mathrm{t}^\mathrm{RF}=16$, $N_\mathrm{t}=256$, $N_\mathrm{s}=2$, $N_\mathrm{r}^\mathrm{RF}=2$ and $N_\mathrm{r}=16$. The channels are generated according to the geometric channel model in \eqref{channel} with $L=20$, $\alpha_{l,k}\sim\mathcal{CN}\left(0,1\right)$, $d=\lambda_c/2$ and AoAs and AoDs uniformly distributed in $[0,2\pi]$. Fig. \ref{fig:BERvsSNR_256_16_2_8} shows the bit error rate (BER) versus the signal-to-noise ratio (SNR) for the proposed HBF, the conventional phase extraction alternating minimization (PE-AltMin) HBF \cite{yu2016alternating}, the MMSE-OMP HBF \cite{nguyen2017hybrid}, and the fully digital beamforming (FDBF) schemes \cite{joung2007regularized} with quadrature phase-shift keying (QPSK) modulation. Note that both the proposed HBF scheme and the conventional MMSE-OMP and FDBF schemes apply the alternating minimization method to alternately optimize the BS's precoder and the users' combiners. In these schemes, the iteration is stopped when the difference between the sum-MSE values in two consecutive iterations is less than $10^{-6}$. For the PE-AltMin scheme, as the original HBF problem is decoupled into two matrix approximation sub-problems at the BS and users' sides \cite{yu2016alternating}, the matrices to be approximated are set to the ones in the FDBF scheme \cite{joung2007regularized}. Fig. \ref{fig:BERvsSNR_256_16_2_8} shows that the proposed HBF scheme significantly outperforms the conventional ones. This is because in the MMSE-OMP scheme the analog beamformers are limited to a predefined set consisting of only the antenna array response vectors, while in the PE-AltMin scheme the original sum-MSE optimization problem is solved only indirectly, being converted into a matrix approximation problem. Fig. \ref{fig:MSEvsNUM256_16_2_8} shows the sum-MSE averaged over 1000 channel realizations as a function of the number of iterations, $N_\mathrm{it}$, in the alternating minimization between the BS and users' sides for different schemes when $\mathrm{SNR}=-4\mathrm{dB}$ and $0\mathrm{dB}$.
It can be seen that the proposed HBF scheme converges quickly to a lower sum-MSE value than the MMSE-OMP scheme, and this value is close to that of the fully digital scheme. Fig. \ref{fig:BERvsRFnew} shows the BER performance as a function of $N_\mathrm{t}^\mathrm{RF}$ when $\mathrm{SNR}=-4\mathrm{dB}$. The other system parameters are the same as those in Fig. \ref{fig:BERvsSNR_256_16_2_8}. It can be seen that with more RF chains the proposed HBF scheme approaches the fully digital one more quickly than the other HBF schemes do. Comparing the computational complexity of different schemes in terms of the number of complex multiplications, the complexity of MMSE-OMP is on the order of $\mathcal{O}(N_\mathrm{it}(N_\mathrm{t}^\mathrm{RF}N_\mathrm{t}^3+KN_\mathrm{t}^3))$, as shown in \cite{nguyen2017hybrid}, and that of FDBF is $\mathcal{O}(N_\mathrm{it}(N_\mathrm{t}^3+KN_\mathrm{r}^3))$ because of the matrix inversion at both the BS and the users. The PE-AltMin scheme needs at least the complexity of FDBF to obtain the target fully digital matrices. The complexity of the proposed scheme lies mainly in the GEVD, which is on the order of $\mathcal{O}(N_\mathrm{it}(N_\mathrm{t}^{\mathrm{RF}}N_\mathrm{t}^3+KN_\mathrm{r}^{\mathrm{RF}}N_\mathrm{r}^3))$. However, it can be reduced to $\mathcal{O}(N_\mathrm{it}(N_\mathrm{t}^{\mathrm{RF}}N_\mathrm{t}^2+KN_\mathrm{r}^{\mathrm{RF}}N_\mathrm{r}^2))$ by using the power method \cite{horn1990matrix} since only the largest generalized eigenvector needs to be computed. Thus, the complexity of the proposed HBF scheme is no higher than that of the conventional HBF schemes. In conclusion, we have proposed an MMSE HBF scheme for multiuser MIMO mmWave systems based on the alternating minimization method. In particular, we showed that the RF beamformers can be optimized via GEVD. Simulation results showed that the proposed HBF scheme is able to approach the performance of the fully digital beamforming scheme.
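As a concrete illustration of the GEVD step at the core of the scheme, the following minimal sketch (in Python with numpy; the function names are ours, and this illustrates the technique rather than reproducing the authors' code) computes one constant-modulus beamformer column from a Hermitian matrix pair $(\mathbf{B},\mathbf{D})$, such as $(\mathbf{B}_{\mathrm{t},j},\mathbf{D}_{\mathrm{t},j})$ above, using the power method mentioned in the complexity discussion, followed by phase extraction.
\begin{verbatim}
import numpy as np

def principal_gen_eigvec(B, D, n_iter=100):
    # Power iteration on D^{-1} B: for Hermitian positive semidefinite B
    # and Hermitian positive definite D, this converges to the generalized
    # eigenvector of (B, D) with the largest generalized eigenvalue.
    n = B.shape[0]
    C = np.linalg.cholesky(D)                 # factorize D once
    x = np.random.randn(n) + 1j * np.random.randn(n)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):                   # O(n^2) work per iteration
        y = np.linalg.solve(C.conj().T, np.linalg.solve(C, B @ x))
        x = y / np.linalg.norm(y)
    return x

def constant_modulus_column(B, D):
    # GEVD solution followed by phase extraction, enforcing the
    # constant modulus constraint on the analog beamformer column.
    v = principal_gen_eigvec(B, D)
    return np.exp(1j * np.angle(v))
\end{verbatim}
As in the text, the extracted-phase column would be fixed before the next column is optimized, and the alternating minimization then proceeds between the BS and user sides.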
\section{Introduction} An interesting development in black hole studies is the discovery of the black ring solutions of the five dimensional Einstein equations by Emparan and Reall \cite{ER1}, \cite{ER2}. These are asymptotically flat solutions with an event horizon of topology $S^2\times S^1$ rather than the much more familiar $S^3$ topology. Since Emparan and Reall's discovery, many explicit examples of black ring solutions have been found in various gravity theories \cite{E}-\cite{P}. Elvang was able to apply the Hassan-Sen transformation to the solution \cite{ER2} to find a charged black ring in the bosonic sector of the truncated heterotic string theory \cite{E}. A supersymmetric black ring in five dimensional minimal supergravity was derived in \cite{EEMR1} and then generalized to the case of concentric rings in \cite{GG1} and \cite{GG2}. A static black ring solution of the five dimensional Einstein-Maxwell gravity was found by Ida and Uchida in \cite{IU}. In \cite{EMP} Emparan derived ``dipole black rings'' in Einstein-Maxwell-dilaton (EMd) theory in five dimensions. Static and asymptotically flat black ring solutions in five dimensional EMd gravity with arbitrary dilaton coupling parameter $\alpha$ were presented in \cite{KL} without giving any derivation. In this paper we systematically derive asymptotically flat EMd black ring solutions. Moreover, we also derive new non-asymptotically flat EMd black ring solutions and analyze their properties. Our motivation is the following. First, such solutions are interesting in their own right. Second, despite the discovery of these explicit examples of EMd black rings, the systematic construction of new higher dimensional solutions in EMd gravity has not been developed as far as in the four dimensional case; in particular, relatively few solutions have been found so far. In order to achieve our goals we first consider the $D$-dimensional EMd theory in static spacetimes. After performing dimensional reduction along the timelike Killing vector we show that the norm of the Killing vector and the electric potential parameterize a $GL(2,R)/SO(1,1)$ sigma model coupled to $(D-1)$-dimensional Euclidean gravity. Then, we show that the $GL(2,R)$ subgroup that preserves the asymptotic flatness is $SO(1,1)$. Applying the $SO(1,1)$ transformation to the five dimensional static neutral black ring solution we obtain the static EMd black rings. The non-asymptotically flat EMd black rings can be obtained by acting on the neutral black rings with special elements of $SL(2,R)\subset GL(2,R)$. \section{General equations and solution construction} The EMd gravity in $D$-dimensional spacetimes is described by the action\footnote{In what follows we consider theories with $\alpha\ne 0$.} \begin{equation} S= {1\over 16\pi} \int d^Dx \sqrt{-g}\left(R - 2g^{\mu\nu}\partial_{\mu}\varphi \partial_{\nu}\varphi - e^{-2\alpha\varphi}F^{\mu\nu}F_{\mu\nu} \right). \end{equation} The field equations derived from the action are \begin{eqnarray} R_{\mu\nu} &=& 2\partial_{\mu}\varphi \partial_{\nu}\varphi + 2e^{-2\alpha\varphi} \left[F_{\mu\rho}F_{\nu}^{\rho} - {g_{\mu\nu}\over 2(D-2)} F_{\beta\rho} F^{\beta\rho}\right], \\ \nabla_{\mu}\nabla^{\mu}\varphi &=& -{\alpha\over 2} e^{-2\alpha\varphi} F_{\nu\rho}F^{\nu\rho}, \\ &\nabla_{\mu}&\left[e^{-2\alpha\varphi} F^{\mu\nu} \right] = 0 . \end{eqnarray} We consider static spacetimes and denote the Killing vector by $\xi$ ($\xi={\partial\over \partial t}$).
The metric of the static spacetime can be written in the form \begin{equation} ds^2 = - e^{2U}dt^2 + e^{-{2U\over D-3 }} h_{ij}dx^idx^j \end{equation} where $U$ and $h_{ij}$ are independent of the time coordinate $t$. The staticity of the dilaton and electromagnetic fields implies \begin{equation} {\cal L}_{\xi}\varphi = 0, \,\,\, {\cal L}_{\xi} F = 0 . \end{equation} Since ${\cal L}_{\xi} F= di_{\xi}F$ there exists locally an electromagnetic potential $\Phi$ such that \begin{equation} F = e^{-2U}\xi \wedge d\Phi. \end{equation} In terms of the potentials $U$, $\Phi$, $\varphi$ and the $(D-1)$-dimensional metric $h_{ij}$ the field equations become \begin{eqnarray} {\cal D}_{i}{\cal D}^{i}U &=& 2{D-3\over D-2 }e^{-2U-2\alpha\varphi}h^{ij}{\cal D}_{i}\Phi {\cal D}_{j}\Phi , \\ {\cal D}_{i}{\cal D}^{i}\varphi &=& \alpha e^{-2U-2\alpha\varphi}h^{ij}{\cal D}_{i}\Phi {\cal D}_{j}\Phi ,\\ &{\cal D}_{i}&\left( e^{-2U-2\alpha\varphi}h^{ij}{\cal D}_{j}\Phi \right) = 0 ,\\ {\cal R}(h)_{ij} &=& {D-2\over D-3} {\cal D}_{i}U {\cal D}_{j}U + 2{\cal D}_{i}\varphi {\cal D}_{j}\varphi - 2e^{-2\alpha\varphi -2U}{\cal D}_{i}\Phi {\cal D}_{j}\Phi , \end{eqnarray} where ${\cal D}_{i}$ and ${\cal R}(h)_{ij}$ are the covariant derivative and the Ricci tensor with respect to the metric $h_{ij}$. These equations can be derived from the action \begin{equation} S = \int d^{(D-1)}x \sqrt{h} \left[{\cal R}(h) - {D-2\over D-3}h^{ij}{\cal D}_{i}U {\cal D}_{j}U - 2h^{ij}{\cal D}_{i}\varphi {\cal D}_{j}\varphi + 2e^{-2\alpha\varphi - 2U}h^{ij}{\cal D}_{i}\Phi {\cal D}_{j}\Phi\right]. \end{equation} Let us introduce the symmetric matrix \begin{eqnarray} P = e^{(\alpha_{D} -1)U} e^{- (\alpha_{D} + 1)\varphi_{D}}\left(% \begin{array}{cc} e^{2U + 2\alpha_{D}\varphi_{D}} - (1 + \alpha^2_{D})\Phi^2_{D} & - \sqrt{1 + \alpha^2_{D}}\Phi_{D} \\ - \sqrt{1 + \alpha^2_{D}}\Phi_{D} & -1 \\\end{array}% \right) \end{eqnarray} where \begin{equation} \alpha_{D}= \sqrt{{D-2\over 2(D-3)}} \alpha, \,\,\, \varphi_{D} = \sqrt{{2(D-3)\over (D-2)}} \varphi, \,\,\, \Phi_{D}= \sqrt{{2(D-3)\over (D-2)}} \Phi . \end{equation} Then the action can be written in the form \begin{equation} S = \int d^{(D-1)}x \sqrt{h} \left[{\cal R}(h) - {1\over 2(1+ \alpha_{D}^2) }{(D-2)\over (D-3)} h^{ij} Sp \left({\cal D}_{i}P {\cal D}_{j}P^{-1} \right) \right]. \end{equation} The action is invariant under the symmetry transformations \begin{equation} P \longrightarrow GPG^{T} \end{equation} where $G\in GL(2,R)$. The matrix $P$ parameterizes the coset $GL(2,R)/SO(1,1)$, where $SO(1,1)$ is the stationary subgroup (see below). \section{Asymptotically flat solutions} In this section we will restrict ourselves to asymptotically flat solutions with \begin{equation} U(\infty)=0 ,\,\,\, \varphi(\infty)=0 , \,\,\, \Phi(\infty)=0 \end{equation} which corresponds to \begin{equation} P(\infty) = \sigma_{3} \end{equation} where $\sigma_{3}$ is the third Pauli matrix. The $GL(2,R)$ transformations which preserve the asymptotics satisfy \begin{equation} B\sigma_{3}B^{T} = \sigma_{3}. \end{equation} Therefore we conclude that $B\in SO(1,1)$. We parameterize the $SO(1,1)$ group in the standard way \begin{equation} B = \left(% \begin{array}{cc} \cosh(\gamma) & \sinh(\gamma) \\ \sinh(\gamma) &\cosh(\gamma) \\\end{array}% \right) .
\end{equation} Let us consider a static asymptotically flat solution of the $D$-dimensional Einstein equations \begin{equation} ds_{0}^2 = -e^{2U_{0}}dt^2 + e^{-{2U_{0}\over D-3}}h_{ij}dx^idx^j \end{equation} which is encoded into the matrix \begin{equation} P_{0}= e^{(\alpha_{D} -1)U_{0} } \left(% \begin{array}{cc} e^{2U_{0}} & 0 \\ 0 & -1 \\\end{array}% \right) \end{equation} and the metric $h_{ij}$. The $SO(1,1)$ transformations then generate a solution of the $D$-dimensional EMd gravity given by the matrix \begin{equation} P = BP_{0}B^{T} \end{equation} and the same metric $h_{ij}$. In more explicit form we have \begin{eqnarray} e^{U} &=& {e^{U_{0}}\over \left[\cosh^2(\gamma) - e^{2U_{0}}\sinh^2(\gamma) \right]^{1\over 1 + \alpha^2_{D} } },\\ e^{-\varphi_{D}} &=& \left[\cosh^2(\gamma) - e^{2U_{0}}\sinh^2(\gamma) \right]^{\alpha_{D}\over 1 + \alpha^2_{D} },\\ \Phi_{D} &=& {\tanh(\gamma) \over \sqrt{1 + \alpha^2_{D}} } {1 - e^{2U_{0}}\over 1 - e^{2U_{0}}\tanh^2(\gamma) }. \end{eqnarray} Let us restrict the considerations to five dimensional spacetimes. One of the most interesting solutions of the five dimensional Einstein equations is the black ring solution given by the metric \begin{eqnarray}\label{BRS} ds^2_{0} &=& - {F(x)\over F(y)}dt^2 \\ &+& {1\over A^2(x-y)^2} \left[ F(x)(y^2-1)d\psi^2 + {F(x)F(y)\over y^2 -1}dy^2 + {F^2(y)\over 1-x^2 }dx^2 + F^2(y){1-x^2\over F(x) }d\phi^2\right] \nonumber \end{eqnarray} where $F(x)= 1 - \mu x$, $A>0$ and $0<\mu < 1$. The coordinate $x$ is in the range $-1\le x\le 1$ and the coordinate $y$ is in the range $y\le -1$. The topology of the horizon is $S^2\times S^1$, with the $S^2$ parameterized by $(x,\phi)$ and the $S^1$ by $\psi$. In order to avoid a conical singularity at $y=-1$ one must demand that the period of $\psi$ satisfies $\Delta \psi=2\pi\sqrt{1 + \mu}$. If one demands regularity at $x=-1$ the period of $\phi$ must be $\Delta \phi=2\pi \sqrt{1 +\mu}$. In this case the solution is asymptotically flat and the ring is sitting on the rim of a disk-shaped membrane with a negative deficit angle. To enforce regularity at $x=1$ one must take $\Delta \phi = 2\pi \sqrt{1-\mu}$, and the solution then describes a black ring sitting on the rim of a disk-shaped hole in an infinitely extended membrane with a positive deficit. A more detailed analysis of the black ring solution can be found in \cite{ER1}. The $SO(1,1)$ transformations generate the following EMd solution \begin{eqnarray} ds^2 = - \left[\cosh^2(\gamma) - \sinh^2(\gamma){F(x)\over F(y)} \right]^{-2\over 1 + \alpha^2_{5}}{F(x)\over F(y)}dt^2 \\ + {\left[\cosh^2(\gamma) - \sinh^2(\gamma){F(x)\over F(y)} \right]^{1\over 1 + \alpha^2_{5}}\over A^2(x-y)^2} \left[ F(x)(y^2-1)d\psi^2 \right. \nonumber \\ \left. + {F(x)F(y)\over y^2 -1}dy^2 + {F^2(y)\over 1-x^2 }dx^2 + F^2(y){1-x^2\over F(x) }d\phi^2\right] ,\nonumber \\ e^{-\varphi_{5}} = \left[\cosh^2(\gamma) - \sinh^2(\gamma){F(x)\over F(y)} \right]^{\alpha_{5}\over 1 + \alpha^2_{5}} , \\ \Phi_{5} = {\tanh(\gamma)\over \sqrt{1 + \alpha^2_{5}} } {1 - {F(x)\over F(y) }\over 1 - \tanh^2(\gamma) {F(x)\over F(y)} } . \end{eqnarray} This solution was first presented in \cite{KL} without derivation; here we have systematically derived it from the symmetries of the dimensionally reduced EMd equations. A detailed analysis of the solution is given in \cite{KL}, so here we present only some basic results. The physical properties naturally share many features with the neutral black ring solution; in particular, the horizon topology is $S^2\times S^1$.
The area of the horizon is \begin{equation} {\cal S}_{h\pm} = 8\pi^2 \cosh^{3\over 1 + \alpha^2_{5}}(\gamma) {\mu^2 \sqrt{(1+ \mu)(1 \pm \mu)}\over A^3} \end{equation} where the sign $\pm$ corresponds to taking the conical singularity at $x=\pm 1$. The solution is asymptotically flat, as can be seen by the change of coordinates \begin{equation} \label{COOR} \rho_{1}= {(1+\mu)\sqrt{y^2-1}\over A(x-y)}, \,\,\, \rho_{2}={(1+\mu)\sqrt{1-x^2}\over A(x-y)}, \,\,\, {\tilde \psi}= {\psi\over \sqrt{1+\mu}} ,\,\,\, {\tilde \phi}= {\phi\over \sqrt{1+\mu}} . \end{equation} Defining $\rho= \sqrt{\rho^2_{1} + \rho^2_{2}}$ and taking the limit $\rho \to \infty$ we obtain \begin{equation} ds^2 \sim - dt^2 + d\rho^2_{1} + d\rho^2_{2} + \rho^2_{1} d{\tilde \psi}^2 + \rho^2_{2} d{\tilde \phi}^2 \end{equation} i.e. the solution is asymptotically flat. Note, however, that if the conical singularity lies at $x=-1$ the asymptotic metric is a deficit membrane. The mass is given by \begin{equation} M_{\pm} = {3\pi \over 4} {\mu \sqrt{(1+\mu)(1\pm \mu)}\over A^2} \left(1 + {2\over 1 + \alpha^2_{5} }\sinh^{2}(\gamma) \right) \end{equation} where the sign $\pm$ corresponds to the location of the conical singularity. The charge is found to be \begin{equation} Q_{\pm} = \pi \sqrt{{3\over 1 + \alpha^2_{5}}} \sinh(\gamma)\cosh(\gamma) { \mu\sqrt{(1+\mu)(1\pm \mu)} \over A^2 }. \end{equation} The temperature can be found in the standard way by Euclideanizing the metric: \begin{equation} T = {A\over 4\pi\mu \cosh^{3\over 1 + \alpha^2_{5}}(\gamma)}. \end{equation} A Smarr-type relation is also satisfied \begin{equation} M_{\pm} = {3\over 8}T{\cal S}_{h\pm} + \Phi_{h}Q_{\pm} \end{equation} where $\Phi_{h}$ is the electric potential evaluated at the horizon. \section{Non-asymptotically flat solutions} In order to generate non-asymptotically flat solutions we shall consider the matrix \begin{equation} N = \left(% \begin{array}{cc} 0 & - a^{-1} \\ a & a \\\end{array}% \right) \in SL(2,R)\subset GL(2,R) . \end{equation} Then we obtain the following EMd solution represented by the matrix \begin{equation} P = N P_{0} N^{T} \end{equation} i.e. \begin{eqnarray} e^{U} &=& {e^{U_{0}}\over \left[a^2(1 - e^{2U_{0}}) \right]^{1\over 1 + \alpha^2_{D} } }, \\ e^{-\varphi_{D}} &=& \left[a^2(1 - e^{2U_{0}}) \right]^{\alpha_{D}\over 1 + \alpha^2_{D} } ,\\ \Phi_{D} &=& - {a^{-2} \over \sqrt{1 + \alpha^2_{D} } } {1\over 1 - e^{2U_{0}} } . \end{eqnarray} Applying this transformation to the neutral black ring solution (\ref{BRS}) we obtain the following EMd solution \begin{eqnarray} ds^2 &=& - \left[a^2\left(1 - {F(x)\over F(y) }\right) \right]^{{-2\over 1 + \alpha^2_{5}}} {F(x)\over F(y)}dt^2 \\ &+& {\left[a^2\left(1 - {F(x)\over F(y) }\right) \right]^{{1\over 1 + \alpha^2_{5}}} \over A^2 (x-y)^2 } \left[ F(x)(y^2-1)d\psi^2 + {F(x)F(y)\over y^2 -1}dy^2 + {F^2(y)\over 1-x^2 }dx^2 + F^2(y){1-x^2\over F(x) }d\phi^2\right] \nonumber \\ e^{-\varphi_{5}} &=& \left[a^2\left(1 - {F(x)\over F(y) }\right) \right]^{\alpha_{5}\over 1 + \alpha^2_{5} } ,\\ \Phi_{5} &=& - {a^{-2} \over \sqrt{1 + \alpha^2_{5} } } {1\over 1 - {F(x)\over F(y)} } . \end{eqnarray} \subsection{Analysis of the solution} From the explicit form of the solution it is clear that there is a horizon at $y=-\infty$.
In the near-horizon limit the metric of the $t-y$ plane is \begin{equation} ds^2_{ty} \approx a^{2\over 1 + \alpha^2_{5}} F(x) \left[- {dt^2\over a^{6\over 1 + \alpha^2_{5}}\mu |y|} + {\mu \over A^2 |y|^3 }dy^2 \right] \end{equation} After performing a coordinate transformation $y = - {4\mu \over A^2 Y^2 }$ we obtain the metric \begin{equation} ds^2_{ty} \approx a^{2\over 1 + \alpha^2_{5}} F(x) \left[ - {A^2 Y^2 \over 4 a^{6\over 1 + \alpha^2_{5}} \mu^2}dt^2 + dY^2 \right] \end{equation} which is conformal to the Rindler metric with acceleration parameter $\omega = {A\over 2 \mu}a^{-3\over 1 + \alpha^2_{5}}$ (note that $F(x)>0$). Further, performing the coordinate transformation $X=Y\cosh(\omega t)$ and $T=Y\sinh(\omega t)$ the metric becomes manifestly conformally flat \begin{equation} ds^2_{ty} \approx a^{2\over 1 + \alpha^2_{5}} F(x) \left[- dT^2 + dX^2 \right] . \end{equation} This shows that the $ty$ metric has a nonsingular horizon at $y=-\infty$. It is clear that the other terms can also be continuously extended and we obtain a nonsingular near-horizon geometry \begin{equation} ds^2 \approx a^{2\over 1 + \alpha^2_{5}} F(x) \left[- dT^2 + dX^2 + {d\psi^2\over A^2 } \right] + a^{2\over 1 + \alpha^2_{5}} {\mu^2 \over A^2} \left[{dx^2 \over 1-x^2} + {1-x^2\over F(x)}d\phi^2 \right]. \end{equation} The constant-time slices through the horizon have metric \begin{equation} ds_{h}^2 ={a^{2\over 1 + \alpha^2_{5} }\over A^2} \left[F(x)d\psi^2 + \mu^2\left({dx^2 \over 1-x^2} + {1-x^2\over F(x)}d\phi^2 \right) \right]. \end{equation} The remaining analysis of the near-horizon geometry closely parallels that for the neutral black ring solution; as can be seen, the two near-horizon geometries are related by a constant conformal factor. In order for the metric to be regular at $x=-1$ the period of $\phi$ must be $\Delta \phi=2\pi \sqrt{1+ \mu}$. The regularity at $x=1$ requires $\Delta \phi=2\pi \sqrt{1-\mu}$. Therefore, it is not possible for the metric to be regular at both $x=-1$ and $x=1$. The regularity at $x=-1$ means that there is a conical singularity at $x=1$ and vice versa. The deficit angle is \begin{equation} \delta_{\pm 1} = 2\pi \left(1 - \sqrt{ 1 \pm \mu \over 1 \mp \mu } \right). \end{equation} The $x\phi$ part of the metric describes a two-dimensional surface with $S^2$ topology and with a conical singularity at one of the poles. In order to analyze the $y\psi$ part of the metric in the limit $y\to -1$, as in the neutral case, we set $y = - \cosh(\eta/\sqrt{1+\mu})$. We then find that the $y\psi$ part is conformal to \begin{equation} ds^2_{y\psi} \approx d\eta^2 + {\eta^2\over 1 +\mu}d\psi^2. \end{equation} In order for the metric to be regular at $\eta=0$ the coordinate $\psi$ must be identified with a period $\Delta \psi= 2\pi \sqrt{1+\mu}$. The above analysis shows that the topology of the horizon is $S^2\times S^1$. The area of the horizon is \begin{equation} {\cal S}_{h\pm} = 8\pi^2 \left( {a^{1\over 1+\alpha^2_{5} }\over A }\right)^3 \mu^2 \sqrt{(1+\mu)(1\pm \mu)} \end{equation} where the sign $\pm$ corresponds to taking the conical singularity at $x=\pm 1$. The asymptotic infinity corresponds to $x=y=-1$.
One can show that near $x=y=-1$ the solution behaves as \begin{eqnarray} ds^2 &\approx& - \left[{A^2 \over 2a^2\mu(1+\mu) } (\rho_{1}^2 + \rho_{2}^2) \right]^{2\over 1+\alpha^2_{5} } dt^2 \\ &+& \left[{A^2 \over 2a^2\mu(1+\mu) } (\rho_{1}^2 + \rho_{2}^2) \right]^{-1\over 1+\alpha^2_{5} } \left[d\rho^2_{1} + d\rho^2_{2} + \rho^2_{1}d{\tilde \psi}^2 + \rho^2_{2}d{\tilde \phi}^2 \right] ,\nonumber \\ e^{2\alpha\varphi} &=& \left[{A^2 \over 2a^2\mu(1+\mu) } (\rho_{1}^2 + \rho_{2}^2) \right]^{2\alpha^2_{5}\over 1+\alpha^2_{5} } ,\\ \Phi &=&- {\sqrt{3}\over 2 \sqrt{1+ \alpha^2_{5}} } \left[{A^2 \over 2a^2\mu(1+\mu) } (\rho_{1}^2 + \rho_{2}^2) \right] , \end{eqnarray} where the coordinates $\rho_{1}$, $\rho_{2}$, ${\tilde \psi}$ and ${\tilde \phi}$ are defined in (\ref{COOR}). It is clear that the solution is not asymptotically flat. In order to compute the mass of the non-asymptotically flat solution we need more sophisticated techniques than in the asymptotically flat case. We use a naturally modified five dimensional version of the quasilocal formalism in four dimensions \cite{BY}. Here we will not discuss the quasilocal formalism but present the final result for the mass: \begin{equation} M_{\pm} = {\alpha^2_{5} \over 1 + \alpha^2_{5} } {3\pi\mu\sqrt{(1+\mu)(1\pm \mu)}\over 4A^2 } . \end{equation} The electric charge is defined by \begin{equation} Q= {1\over 8\pi} \int_{\infty} e^{-2\alpha\varphi} F^{\mu\nu}d\Sigma_{\mu\nu} \end{equation} and we find \begin{equation} Q_{\pm} = -{\sqrt{3}\pi \over \sqrt{1 + \alpha^2_{5}} } {a^2 \mu\sqrt{(1+\mu)(1\pm \mu)}\over A^2} . \end{equation} The temperature can be found in the standard way by Euclideanizing the near-horizon metric ($t=-i\tau$) and periodically identifying $\tau$ to avoid new conical singularities. Doing so, we obtain \begin{equation} T = {A\over 4\pi\mu a^{3\over 1 + \alpha_{5}^2} } . \end{equation} The electric potential is defined up to an arbitrary additive constant. In the asymptotically flat case there is a preferred gauge in which $\Phi(\infty)=0$. In the non-asymptotically flat case the electric potential diverges at spatial infinity and there is no preferred gauge. The arbitrary constant, however, can be fixed so that the Smarr-type relation is satisfied: \begin{equation} M_{\pm} = {3\over 8} TS_{h\pm} + \Phi_{h}Q_{\pm}. \end{equation} \section{Conclusion} In the present paper we considered $D$-dimensional EMd gravity in static spacetimes. Performing dimensional reduction along the timelike Killing vector we obtain a $GL(2,R)/SO(1,1)$ sigma model coupled to $(D-1)$-dimensional Euclidean gravity. Applying $GL(2,R)$ transformations to the five dimensional neutral static black ring solution we were able to find both asymptotically and non-asymptotically flat five dimensional dilatonic black rings. The non-asymptotically flat dilatonic black ring solutions were analyzed. These solutions suffer from the presence of conical singularities, as do the neutral static rings. In the neutral case, rotation removes the conical singularities, which is why we expect that rotating non-asymptotically flat EMd black rings will be free of conical singularities. The construction of non-asymptotically flat EMd solutions with rotation is now in progress and the results will be presented elsewhere. \section*{Acknowledgements} This work was partially supported by the Bulgarian National Science Fund under Grant MU 408. The author would like to thank the Bogoliubov Laboratory of Theoretical Physics (JINR) for their kind hospitality.
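As a simple consistency check on the solution-generating technique used above, the following sketch (in Python with sympy; entirely our own illustration, not part of the derivation) verifies numerically that the charged potentials quoted in Section 3 reproduce $P=BP_{0}B^{T}$ for the seed matrix $P_{0}$:
\begin{verbatim}
import sympy as sp

U0, g, aD = sp.symbols('U0 gamma alpha_D', positive=True)

# seed matrix P_0 of the neutral solution and the SO(1,1) element B
P0 = sp.exp((aD - 1)*U0) * sp.Matrix([[sp.exp(2*U0), 0], [0, -1]])
B = sp.Matrix([[sp.cosh(g), sp.sinh(g)], [sp.sinh(g), sp.cosh(g)]])

# potentials of the charged solution as quoted in Section 3
W = sp.cosh(g)**2 - sp.exp(2*U0)*sp.sinh(g)**2
U = U0 - sp.log(W)/(1 + aD**2)
phiD = -aD*sp.log(W)/(1 + aD**2)
PhiD = (sp.tanh(g)/sp.sqrt(1 + aD**2)
        * (1 - sp.exp(2*U0))/(1 - sp.exp(2*U0)*sp.tanh(g)**2))

# matrix P built from the sigma-model parameterization of Section 2
pref = sp.exp((aD - 1)*U) * sp.exp(-(aD + 1)*phiD)
P = pref * sp.Matrix(
    [[sp.exp(2*U + 2*aD*phiD) - (1 + aD**2)*PhiD**2,
      -sp.sqrt(1 + aD**2)*PhiD],
     [-sp.sqrt(1 + aD**2)*PhiD, -1]])

# the difference should vanish; evaluate at generic parameter values
diff = P - B*P0*B.T
print(diff.subs({U0: 0.3, g: 0.7, aD: 1.2}).evalf())  # ~ zero matrix
\end{verbatim}
The same check applies to the non-asymptotically flat branch with $B$ replaced by the matrix $N$ of Section 4.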
\section{Introduction} \label{sec:intro} The exploration of extragalactic objects, in particular Active Galactic Nuclei (AGNs), with long-baseline interferometers in the near-infrared (near-IR) has been very limited. While the brightest Type 1 AGN NGC4151 and Type 2 AGN NGC1068 have been observed by \cite{Swain03} and \cite{Wittkowski04}, respectively, further exploration has been hampered mainly by technical difficulties. Here we report successful observations of four Type 1 AGNs with the Keck interferometer (KI) in the near-IR (K-band, 2.2 $\mu$m). Type 1 AGNs are thought to give us a direct view of the innermost region of the putative dust torus as well as of the central accretion disk, where the contribution of the latter should also be evaluated carefully. \section{Keck interferometry} \subsection{Observations and data reduction}\label{sec_kiobs} We observed the four AGNs listed in Table~\ref{tab_obs} and associated calibrators with the Keck Interferometer (KI; \citealt{Colavita03}) on 2009 May 15 (UT). These four targets were chosen based on their bright optical magnitudes measured from the pre-imaging data obtained in April 2009 at Tiki Observatory (French Polynesia) and Silver Spring Observatory (USA) by N.~Teamo, J.~C.~Pelle, and K.~Levin. The KI combines the two beams from the two Keck 10~m telescopes, which are separated by 85~m along the direction 38\degr\ east of north. Adaptive Optics correction was implemented at each telescope, locking on the nucleus at visible wavelengths. The data were obtained with a fringe tracker rate of 200 Hz operated at K-band, while the angle-tracking was performed at H-band. The data were first reduced with {\sf Kvis}\footnotemark[1] to produce raw squared visibility ($V^2$) data averaged over blocks of 5 sec each. Then those blocks with a phase RMS jitter larger than 0.8 radian were excluded. For a given visit of each object, the blocks with an AO wavefront sensor flux smaller than the median by 20\% or more were also excluded. The rejected blocks amounted to $\sim$6\% of all the blocks; they were generally outliers in the visibility measurements for a given visit and quite often had a low fractional fringe-lock time. Then the wide-band side of the data was further reduced using {\sf wbCalib}\footnotemark[2] with the correction for the flux ratio between the two telescope beams and the correction for a flux bias (a slight systematic decrease of the KI's system visibility for lower injected flux\footnotemark[3]). The jitter correction was applied with a coefficient of 0.04 \citep{Colavita99pasp}. Then the blocks were averaged into scans of a few minutes each, with the error estimated as the standard deviation within a scan. \footnotetext[1]{\tiny{http://nexsci.caltech.edu/software/KISupport/v2/V2reductionGuide.pdf}} \footnotetext[2]{\tiny{http://nexsci.caltech.edu/software/V2calib/wbCalib/index.html}} \footnotetext[3]{\tiny{http://nexsci.caltech.edu/software/KISupport/dataMemos/index.shtml}} Fig.\ref{fig_rawvis} shows the observed visibilities of all the targets and calibrators (after the corrections above), plotted against observing time. All six calibrators used are expected to be unresolved by the KI at K-band ($V^2 \ge 0.999$). Overall, the system visibility ($V^2_{\rm sys}$), as measured by these calibrators, was quite stable over the night. The calibrators span a relatively wide range of brightness (see the legend in Fig.\ref{fig_rawvis}), and one of them (HD111422) had approximately the same injected flux counts as those of NGC4151 and Mrk231.
Based on the corrected visibilities of these calibrators shown in Fig.\ref{fig_rawvis}, the flux bias seems to have been taken out quite well, although there might still be some systematics left. The visibilities of the three faint calibrators ($K>9.1$) tend to be slightly smaller than those of the brighter ones, the difference between the two mean values being $\sim$0.9\%. Therefore we assign 0.01 as a possible systematic uncertainty in system visibility estimations. Note that for the faintest target NGC4051, the flux bias correction was effectively used with a slight extrapolation (by $\sim$1 magnitude in K-band\footnotemark[3]). This should be checked with future observations of fainter calibrators. For each target measurement, $V^2_{\rm sys}$ was estimated from these calibrator observations with {\sf wbCalib} using its time and sky proximity weighting scheme, yielding the values indicated as gray circles in Fig.\ref{fig_rawvis}. The final calibrated visibilities are shown in Fig.\ref{fig_calvis_uv}, with the sampled $uv$ points shown in the inset of Fig.\ref{fig_calvis_uv}. \begin{table*} \input{tab_obs} \label{tab_obs} \end{table*} \subsection{Results} Fig.\ref{fig_ngc4151} shows the final calibrated visibilities for NGC4151 as a function of projected baseline length, enlarged in the left panel of the insets. We confirm the visibility level observed by \cite{Swain03}. The covered range of projected baselines is very limited (note also that the shortest possible baseline for NGC4151 is $\sim$70 m due to the KI's delay line restriction). We see, however, a marginal decrease of visibility with increasing baseline. A Spearman rank correlation analysis puts the confidence level of this trend at 98.4\%, or 2.4$\sigma$. The decrease and absolute level of visibilities are well fitted with a simple thin ring model (i.e. inner radius equal to outer) of radius $0.45\pm0.04$ mas ($0.039\pm0.003$ pc; the error accounts for the systematic uncertainty in $V^2_{\rm sys}$; Sec.\ref{sec_kiobs}). If we convert each of the visibility measurements into a ring radius and plot it as a function of the PA of the projected baseline, we obtain the right panel of the insets in Fig.\ref{fig_ngc4151}. Over the PA range covered, from $\sim$10\degr\ to $\sim$50\degr, we do not seem to see any PA dependence of the radius. For all the other targets, the calibrated visibilities are shown in Table~\ref{tab_obs} together with baseline information and deduced ring radii. \begin{figure} \centering \includegraphics[width=9cm]{fig_rawvis.eps} \caption{Observed $V^2$ plotted against observing time. Gray dots are individual measurements for blocks of 5 sec each. Gray circles are the estimated system $V^2$ at the time of target observations.} \label{fig_rawvis} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{fig_calvis_uv.eps} \caption{Calibrated $V^2$ plotted against observing time. The inset shows the sampled uv points for each target (north to the top, east to the left).} \label{fig_calvis_uv} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{fig_ngc4151_obs.eps} \caption{Calibrated $V^2$ for NGC4151 as a function of projected baselines, enlarged in the left inset. The dotted line shows the best-fit visibility curve with a ring model of radius 0.45 mas. In the right inset, ring radii are plotted along the PA of each projected baseline.
Note that the correction for the accretion disk and host galaxy contributions is not incorporated in this figure, and it does not change the ring radius significantly (see Table~\ref{tab_corr}).} \label{fig_ngc4151} \end{figure} \section{UKIRT imaging and KI data corrections}\label{sec_ukirt} In order to obtain quasi-simultaneous flux measurements for the nuclear point-source in the four Type 1 AGNs (which are variable), their images were obtained with WFCAM on UKIRT in five broad-band filters (Table~\ref{tab_corr}) on 2009 June 17 (UT) under the UKIRT service programme. The seeing was $\sim$1 arcsec. The pipeline-reduced data were obtained through the WFCAM Science Archive. The wide field of view of the WFCAM gave simultaneous measurements of PSF stars in each AGN field, while a 2$\times$2 microstepping gave good image sampling with an effective pixel size of 0.2 arcsec. We implemented two-dimensional (2D) fits for each image to accurately separate the PSF component from the underlying host galaxy, following the same procedure as described by \cite{Kishimoto07}. Table~\ref{tab_corr} lists the measured flux of the nuclear PSF component. The uncertainty of our nuclear PSF flux measurements is estimated as $\sim$5\%, based on the residual fluxes after the fits and the flux calibration uncertainty. In the K-band images of the two brighter objects, namely NGC4151 and Mrk231, the central few pixels appeared to be affected by non-linearity or saturation, so we implemented the 2D fits by masking the central $\sim$0.4 arcsec radius region. We estimate that the nuclear PSF flux is recovered within the same uncertainty of 5\%, based on the results from the same masked fits on the other unsaturated images. Using the results of the PSF--host decomposition, we also estimated the host galaxy flux fraction within the field-of-view of the KI, which is $\sim$50 mas at K-band (FWHM; set by a single-mode fiber for the fringe tracker). The results are stated in Table~\ref{tab_corr}. The small values obtained show that the host galaxy contribution is only a very small part of the observed visibility departure from unity. Fig.\ref{fig_sed} shows the resulting spectral energy distribution (SED) of the PSF component in each target, after the correction for Galactic reddening. We also corrected for the reddening in the host galaxy for the objects which show large Balmer decrements in broad emission lines (Table~\ref{tab_corr}). Assuming that the PSF flux originates from the hot dust thermal emission nearly at the sublimation temperature and from the central accretion disk (AD; thought to be directly seen at Type 1 inclinations), we estimate the flux fraction at K-band from the latter AD component. Here we fit the SED with a power-law spectrum of the form $f_{\nu} \propto \nu^{+1/3}$ for the AD, plus a spectrum of a black-body form for the dust (the best-fit temperature was $\sim$1300$-$1500 K). The AD flux fraction at K-band is estimated to be as small as $\sim$0.2 (Table~\ref{tab_corr}), in agreement with the results by \cite{Kishimoto07}. This suggests that the high visibilities observed are not due to the unresolved AD, in contrast to the interpretation preferred by \cite{Swain03} for NGC4151. The assumed near-IR AD spectral form is based on the recent study of near-IR polarized flux spectra \citep{Kishimoto08}, but also on various studies of AD spectral shapes in the optical/UV (summarized in Fig.2 of \citealt{Kishimoto08}).
Assigning an uncertainty of 0.3 to the AD near-IR spectral index, we also estimated the uncertainty of the K-band AD flux fraction (Table~\ref{tab_corr}). Finally, we corrected the observed visibilities for the host galaxy and the AD contributions, where the latter is assumed to remain unresolved by the KI. The corrected $V^2$ as well as the corresponding thin ring radii are listed in Table~\ref{tab_corr}. \begin{table*} \input{tab_corr} \label{tab_corr} \end{table*} \begin{figure} \centering \includegraphics[width=8cm]{fig_sed.eps} \caption{Flux of the nuclear PSF component in WFCAM images derived from 2D fits. Fitted SEDs are shown in dotted lines (see text).} \label{fig_sed} \end{figure} \section{Interpretations and discussions} \begin{figure} \centering \includegraphics[width=8cm]{fig_radius_pc.eps} \caption{Corrected ring radius derived for each KI target (squares), plotted against UV luminosity, or a scaled V-band luminosity (extrapolated from the WFCAM Z-band flux; see text). Also shown in gray plus signs are the reverberation radii against the same scaled V-band luminosity \citep[][and references therein]{Suganuma06} and their fit (dotted line).} \label{fig_radius_pc} \end{figure} We interpret here the high visibilities observed for all four objects as an indication that we are partially resolving the inner brightness distribution of the dust thermal emission. As we discussed in Sec.\ref{sec_ukirt}, the AD flux fraction at K-band is estimated to be small, as long as the assumed power-law AD spectrum in the near-IR, smoothly continuing from the optical, is at least roughly correct. In this case, the K-band emission is dominated by the dust emission, and it is reasonable to convert the observed visibility to a thin ring radius to obtain an approximate effective radius of the dust brightness distribution for each object. (We have corrected $V^2$ for the unresolved AD contribution, but the correction is quite small; Table~\ref{tab_corr}.) The derived ring radii are plotted in parsecs in Fig.\ref{fig_radius_pc} against UV luminosity $L$, here defined as a scaled V-band luminosity of $6\ \nu f_{\nu} (V)$ \citep{Kishimoto07}. The V-band flux is extrapolated from the fitted flux at 0.8 $\mu$m (Fig.\ref{fig_sed}) assuming an AD spectral shape of $f_{\nu} \propto \nu^{0}$ (based on the spectral index studies referred to above). We can directly compare these ring radii with another type of independent radius measurement, $R_{\rm \tau_K}$, namely the light-travel distance corresponding to the time lag of the K-band flux variation behind the UV/optical variation \citep{Suganuma06}. These reverberation radii are also plotted against the same scaled V-band luminosity in Fig.\ref{fig_radius_pc}. They are known to be approximately proportional to $L^{1/2}$ (\citealt{Suganuma06}; the dotted line in Fig.\ref{fig_radius_pc} shows their fit), and are likely to be probing the dust sublimation radius in each object. We first see that $R_{\rm ring}$ is roughly comparable to $R_{\rm \tau_K}$ for all four objects, and thus $R_{\rm ring}$ also scales roughly with $L^{1/2}$. This approximate match suggests that the KI data are indeed partially resolving the dust sublimation region. A closer look at Fig.\ref{fig_radius_pc} shows that $R_{\rm ring}$ is either roughly equal to or slightly larger than $R_{\rm \tau_K}$ (i.e. $R_{\rm ring}/R_{\rm \tau_K} \gtrsim 1$, up to a factor of a few), though we have only four objects.
This could be understood if $R_{\rm \tau_K}$ is tracing a radius close to the innermost boundary radius of the dust distribution. It is known that the cross-correlation lag tends to trace an inner radius of the responding particles' distribution when the lag is determined from the peak in the cross-correlation function \citep[e.g.][and references therein]{Koratkar91}, as is the case for the data used in the fit by Suganuma et al. On the other hand, $R_{\rm ring}$ is an effective, average radius over the radial dust brightness distribution in the K-band. When the radial distribution is very steep and compact, the ratio $R_{\rm ring}/R_{\rm \tau_K}$ would become very close to unity (as seen in NGC4151 and Mrk231), while for a flatter, more extended distribution, $R_{\rm ring} /R_{\rm \tau_K}$ would show a larger departure from unity. If our interpretation above is correct, the KI data would conversely support a dust sublimation radius, as probed by the reverberation measurements, that is smaller by a factor of about three than the radius inferred for typical ISM-size graphite grains \citep[][0.05 $\mu$m radius]{Barvainis87} for a given $L$ \citep{Kishimoto07}. The small sublimation radius could be due to the possible dominance of large grains in the innermost region, since they can survive much closer to the illuminating source for a given sublimation temperature. Alternatively, it could be due to an anisotropy of the AD radiation (see \citealt{Kishimoto07} for more details). In Fig.\ref{fig_radius_pc}, the reverberation and ring radii are shown essentially as a function of an instantaneous $L$ at the time of each corresponding radius measurement. \cite{Koshida09}, however, have recently shown that $R_{\rm \tau_K}$ does not exactly scale with the instantaneous $L$ as it varies in a given object. It might be that $R_{\rm \tau_K}$ tends to give a dust sublimation radius that corresponds to a relatively long-term average of $L$. On the other hand, $R_{\rm \tau_K}$ does show the $L^{1/2}$ proportionality over a {\it sample} of objects. Thus, when we compare $R_{\rm \tau_K}$ and $R_{\rm ring}$, unless simultaneous measurements exist, we would have to allow for an uncertainty in $R_{\rm \tau_K}$, as a function of the instantaneous $L$, equal to the scatter in the $L^{1/2}$ fit ($\sim$0.17 dex). If the AD spectrum does not have a power-law shape but rather shows some red turn-over in the near-IR (though the near-IR polarized flux spectrum argues against this; \citealt{Kishimoto08}), the AD flux fraction at K-band would be higher than we estimated here. However, even if the flux fraction is as large as 0.5, the corrected ring radius would become larger than stated in Table~\ref{tab_corr} by a factor of only $\sim$1.3, resulting in no qualitative change in our discussion. Future near-IR interferometry with much longer baselines could conclusively confirm the visibility decrease inferred from the present KI data. We plan to advance our exploration with further interferometric measurements in the infrared. \begin{acknowledgements} The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. We are grateful to all the staff members whose huge efforts made these Keck interferometer observations possible.
The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the U.K. We thank N.~Teamo, J.~C.~Pelle and K.~Levin for kindly providing the pre-imaging data, and F. Millour for helpful discussions. This work has made use of services produced by the NASA Exoplanet Science Institute at the California Institute of Technology. \end{acknowledgements}
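To make explicit the conversion used above from a single $V^2$ measurement to a thin-ring radius, here is a minimal sketch (in Python; the function names are ours). It assumes only the standard result that a circularly symmetric, infinitesimally thin ring of angular radius $\theta$ has visibility $V=J_{0}(2\pi\theta B/\lambda)$ at projected baseline $B$:
\begin{verbatim}
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

MAS = np.pi / (180.0 * 3600.0 * 1000.0)  # one milliarcsecond in radians

def ring_v2(theta_mas, baseline_m, lam_m=2.2e-6):
    # squared visibility of a thin ring of angular radius theta_mas (mas)
    # at projected baseline baseline_m (m), K-band wavelength by default
    return j0(2.0*np.pi * theta_mas*MAS * baseline_m / lam_m)**2

def ring_radius_mas(v2, baseline_m, lam_m=2.2e-6):
    # invert one V^2 measurement to a ring radius, restricting the search
    # to the first lobe of the Bessel function (below its first null)
    theta_null = 2.405 * lam_m / (2.0*np.pi*baseline_m) / MAS
    return brentq(lambda th: ring_v2(th, baseline_m, lam_m) - v2,
                  1e-6, theta_null)

# e.g., V^2 ~ 0.87 at an 85 m baseline gives ~0.45 mas, consistent
# with the thin-ring fit for NGC4151 quoted in the text
print(ring_radius_mas(0.87, 85.0))
\end{verbatim}
The host-galaxy and AD corrections of Section 3 would be applied to $V^2$ before such an inversion.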
\section{Introduction} \label{intro} Sparse approximations have over the last decade gained a great deal of popularity in numerous areas. For example, in compressed sensing, a large sparse signal is decoded by finding a sparse solution to a system of linear equalities and/or inequalities. Our particular interest in this paper is to find a sparse approximation to a convex cone programming problem of the form \beq \label{cone} \ba{ll} \min & f(x) \\ \mbox{s.t.} & A x - b \in \cK^*, \\ & l \le x \le u \ea \eeq for some $l \in \bar\Re^n_{-}$, $u \in \bar\Re^n_{+}$, $A \in \Re^{m \times n}$ and $b \in \Re^m$, where $\cK^*$ denotes the dual cone of a closed convex cone $\cK \subseteq \Re^m$, i.e., $\cK^* = \{s\in \Re^m: s^Tx \ge 0, \forall x \in \cK\}$, and $\bar\Re^n_-=\{x:-\infty \le x_i \le 0, \ 1 \le i \le n\}$ and $\bar\Re^n_+=\{x\in \Re^n: 0 \le x_i \le \infty, \ 1 \le i \le n\}$. A sparse solution to \eqref{cone} can be sought by solving the following $l_0$ regularized convex cone programming problem: \beq \label{l0-cone} \ba{ll} \min & f(x) + \lambda \|x\|_0 \\ \mbox{s.t.} & A x - b \in \cK^*, \\ & l \le x \le u \ea \eeq for some $\lambda >0$, where $\|x\|_0$ denotes the cardinality of $x$. One special case of \eqref{l0-cone}, that is, the $l_0$-regularized unconstrained least squares problem, has been well studied in the literature (e.g., \cite{Nik11,LuZh12}), and some methods have been developed for solving it. For example, the iterative hard thresholding (IHT) methods \cite{HeGiTr06,BlDa08,BlDa09} and matching pursuit algorithms \cite{MaZh93,Tr04} were proposed to solve this type of problem. Recently, Lu and Zhang \cite{LuZh12} proposed a penalty decomposition method for solving a more general class of $l_0$ minimization problems. As shown by the extensive experiments in \cite{BlDa08,BlDa09}, the IHT method performs very well in finding sparse solutions to unconstrained least squares problems. In addition, similar types of methods \cite{CaCaSh10,JaMeDh10} were successfully applied to find low-rank solutions in the context of matrix completion. Inspired by these works, in this paper we study IHT methods for solving the $l_0$ regularized convex cone programming problem \eqref{l0-cone}. In particular, we first propose an IHT method and a variant of it for solving $l_0$ regularized box constrained convex programming. We show that the sequence generated by these methods converges to a local minimizer. Also, we establish the iteration complexity of the IHT method for finding an $\eps$-local-optimal solution. We then propose a method for solving $l_0$ regularized convex cone programming by applying the IHT method to its quadratic penalty relaxation and establish its iteration complexity for finding an $\eps$-approximate local minimizer of the problem. We also propose a variant of the method in which the associated penalty parameter is dynamically updated, and show that every accumulation point is a local minimizer of the problem. The outline of this paper is as follows. In Subsection \ref{notation} we introduce some notation that is used in the paper. In Section \ref{tech} we present some technical results about a projected gradient method for convex programming. In Section \ref{l0-box} we propose IHT methods for solving $l_0$ regularized box constrained convex programming and study their convergence. In Section \ref{l0-cp} we develop IHT methods for solving $l_0$ regularized convex cone programming and study their convergence.
Finally, in Section \ref{conclude} we present some concluding remarks. \subsection{Notation} \label{notation} Given a nonempty closed convex set $\Omega \subseteq \Re^n$ and an arbitrary point $x \in \Omega$, $\cN_\Omega(x)$ denotes the normal cone of $\Omega$ at $x$. In addition, $d_\Omega(y)$ denotes the Euclidean distance between $y \in \Re^n$ and $\Omega$. All norms used in the paper are the Euclidean norm, denoted by $\|\cdot\|$. We use $\cU(r)$ to denote the ball centered at the origin with radius $r \ge 0$, that is, $\cU(r): = \{x\in\Re^n: \|x\| \le r\}$. \section{Technical preliminaries} \label{tech} In this section we present some technical results about a projected gradient method for convex programming that will be used subsequently in this paper. Consider the convex programming problem \beq \label{cp-phi} \phi^* := \min\limits_{x\in X} \phi(x), \eeq where $X \subseteq \Re^n$ is a closed convex set and $\phi: X \to \Re$ is a smooth convex function whose gradient is Lipschitz continuous with constant $L_\phi>0$. Assume that the set of optimal solutions of \eqref{cp-phi}, denoted by $X^*$, is nonempty. Let $L \ge L_\phi$ be arbitrarily given. A projected gradient of $\phi$ at any $x\in X$ with respect to $X$ is defined as \beq \label{gx} g(x) := L\left[x - \Pi_X\left(x-\nabla \phi(x)/L\right)\right], \eeq where $\Pi_X(\cdot)$ is the projection map onto $X$ (see, for example, \cite{Nes04}). The following properties of the projected gradient are essentially shown in Proposition 3 and Lemma 4 of \cite{LaMo12} (see also \cite{Nes04}). \begin{lemma} \label{proj-grad} Let $x\in X$ be given and define $x^+: = \Pi_X(x-\nabla \phi(x)/L)$. Then, for any given $\epsilon \ge 0$, the following statements hold: \bi \item[a)] $\|g(x)\| \le \epsilon$ if and only if $\nabla \phi(x) \in -\cN_X(x^+) + \cU(\epsilon)$. \item[b)] $\|g(x)\| \le \epsilon$ implies that $\nabla \phi(x^+) \in -\cN_X(x^+) + \cU(2\epsilon)$. \item[c)] $\phi(x^+) -\phi(x) \le -\|g(x)\|^2/(2L)$. \item[d)] $\phi(x) - \phi(x^*) \ge \|g(x)\|^2/(2L)$, where $x^* \in \Arg\min\{\phi(y): y\in X\}$. \ei \end{lemma} \vgap We next study a projected gradient method for solving \eqnok{cp-phi}. \gap \noindent {\bf Projected gradient method for \eqnok{cp-phi}:} \\ [5pt] Choose an arbitrary $x^0\in X$. Set $k=0$. \begin{itemize} \item[1)] Solve the subproblem \beq \label{pg-subprob} x^{k+1} = \arg\min\limits_{x\in X}\{\phi(x^k)+\nabla \phi(x^k)^T(x-x^k)+\frac{L}{2}\|x-x^k\|^2\}. \eeq \item[2)] Set $k \leftarrow k+1$ and go to step 1). \end{itemize} \noindent {\bf end} \vgap Some properties of the above projected gradient method are established in the following two theorems, which will be used in the subsequent sections of this paper. \begin{theorem} \label{sd-lemma} Let $\{x^k\}$ be generated by the above projected gradient method. Then the following statements hold: \bi \item[\rm (i)] For every $k \ge 0$ and $l \ge 1$, \beq \label{suff-reduct} \phi(x^{k+l}) - \phi^* \le \frac{L}{2l} \|x^k-x^*\|^2. \eeq \item[\rm (ii)] $\{x^k\}$ converges to some optimal solution $x^*$ of \eqref{cp-phi}. \ei \end{theorem} \begin{proof} (i) Since the objective function of \eqref{pg-subprob} is strongly convex with modulus $L$, it follows that for every $x\in X$, \[ \phi(x^k)+\nabla \phi(x^k)^T(x-x^k)+\frac{L}{2}\|x-x^k\|^2 \ge \phi(x^k)+ \nabla \phi(x^k)^T(x^{k+1}-x^k)+\frac{L}{2}\|x^{k+1}-x^k\|^2 + \frac{L}{2}\|x-x^{k+1}\|^2.
\] By the convexity of $\phi$, Lipschitz continuity of $\nabla \phi$ and $L \ge L_\phi$, we have \[ \ba{lcl} \phi(x) &\ge & \phi(x^k)+\nabla \phi(x^k)^T(x-x^k), \\ [5pt] \phi(x^{k+1}) &\le &\phi(x^k)+ \nabla \phi(x^k)^T(x^{k+1}-x^k)+\frac{L}{2}\|x^{k+1}-x^k\|^2, \ea \] which together with the above inequality imply that \beq \label{1st-opt} \phi(x) +\frac{L}{2}\|x-x^k\|^2 \ \ge \ \phi(x^{k+1}) + \frac{L}{2}\|x-x^{k+1}\|^2, \ \ \ \forall x\in X. \eeq Letting $x = x^k$ in \eqref{1st-opt}, we obtain that \[ \phi(x^k) - \phi(x^{k+1}) \ \ge \ L \|x^{k+1}-x^k\|^2/2. \] Hence, $\{\phi(x^k)\}$ is nonincreasing. Letting $x=x^* \in X^*$ in \eqref{1st-opt}, we have \[ \phi(x^{k+1}) - \phi^* \ \le \ \frac{L}{2} \left(\|x^k-x^*\|^2-\|x^{k+1}-x^*\|^2\right), \ \ \ \forall k \ge 0. \] Using this inequality and the monotonicity of $\{\phi(x^k)\}$, we obtain that \beq \label{sum-ineq} l (\phi(x^{k+l})-\phi^*) \ \le \ \sum^{k+l-1}\limits_{i=k}[\phi(x^{i+1})-\phi^*] \ \le \ \frac{L}{2} \left(\|x^k-x^*\|^2-\|x^{k+l}-x^*\|^2\right), \eeq which immediately yields \eqref{suff-reduct}. (ii) It follows from \eqref{sum-ineq} that \beq \label{dist} \|x^{k+l} -x^*\| \ \le \ \|x^k-x^*\|, \ \ \ \forall k \ge 0, l \ge 1. \eeq Hence, $\|x^k-x^*\| \le \|x^0-x^*\|$ for every $k$. This implies that $\{x^k\}$ is bounded. Then, there exists a subsequence $K$ such that $\{x^k\}_K \to \hat x^* \in X$. It can be seen from \eqref{suff-reduct} that $\{\phi(x^k)\}_K \to \phi^*$. Hence, $\phi(\hat x^*) = \lim_{k\in K \to \infty} \phi(x^k) = \phi^*$, which implies that $\hat x^* \in X^*$. Since \eqref{dist} holds for any $x^*\in X^*$, we also have $\|x^{k+l} -\hat x^*\| \ \le \ \|x^k-\hat x^*\|$ for every $k \ge 0$ and $l \ge 1$. This together with the fact $\{x^k\}_K \to \hat x^*$ implies that $\{x^k\} \to \hat x^*$ and hence statement (ii) holds. \end{proof} \gap \begin{theorem} \label{sd-thm} Suppose that $\phi$ is strongly convex with modulus $\sigma >0$. Let $\{x^k\}$ be generated by the above projected gradient method. Then, for any given $\epsilon >0$, the following statements hold: \bi \item[\rm (i)] $\phi(x^k)-\phi^* \le \epsilon$ whenever \[ k \ge 2\lceil L/\sigma\rceil \left\lceil \log\frac{\phi(x^0)-\phi^*}{\epsilon} \right\rceil. \] \item[\rm (ii)] $\phi(x^k)-\phi^* < \epsilon$ whenever \[ k \ge 2 \lceil L/\sigma\rceil \left\lceil \log\frac{\phi(x^0)-\phi^*}{\epsilon} \right\rceil + 1. \] \ei \end{theorem} \begin{proof} (i) Let $M= \lceil L/\sigma\rceil$. It follows from Theorem \ref{sd-lemma} that \[ \phi(x^{k+l}) - \phi^* \le \frac{L}{2l} \|x^k-x^*\|^2 \le \frac{L}{\sigma l} (\phi(x^k)-\phi^*), \] where $x^*$ is the optimal solution of \eqref{cp-phi}. Hence, we have \[ \phi(x^{k+2M}) - \phi^* \le \frac{L}{2\sigma M} (\phi(x^k)-\phi^*) \ \le \ \frac12 (\phi(x^k)-\phi^*), \] which implies that \[ \phi(x^{2jM}) - \phi^* \le \frac{1}{2^j} (\phi(x^0)-\phi^*). \] Let $K=\lceil \log((\phi(x^0)-\phi^*)/\epsilon)\rceil$. Hence, when $k \ge 2KM$, we have \[ \phi(x^k) - \phi^* \ \le \ \phi(x^{2KM})-\phi^* \ \le \ \frac{1}{2^K}(\phi(x^0)-\phi^*) \ \le \ \epsilon, \] which immediately implies that statement (i) holds. (ii) Let $K$ and $M$ be defined as above. If $\phi(x^{2KM})=\phi^*$, by monotonicity of $\{\phi(x^k)\}$ we have $\phi(x^k)=\phi^*$ when $k > 2KM$, and hence the conclusion holds. We now suppose that $\phi(x^{2KM}) >\phi^*$. This implies that $g(x^{2KM}) \neq 0$, where $g$ is defined in \eqref{gx}.
Using this relation, Lemma \ref{proj-grad} (c) and statement (i), we obtain that $\phi(x^{2KM+1}) - \phi^* < \phi(x^{2KM}) - \phi^* \le \eps$, which together with the monotonicity of $\{\phi(x^k)\}$ implies that the conclusion holds. \end{proof} \gap Finally, we consider the convex programming problem \beq \label{cp-cone} f^*:= \min\{f(x): Ax-b \in \cK^*, x\in X\}, \eeq for some $A\in \Re^{m \times n}$ and $b\in\Re^m$, where $f:X \to \Re$ is a smooth convex function whose gradient is Lipschitz continuous with constant $L_f>0$, $X \subseteq \Re^n$ is a closed convex set, and $\cK^*$ is the dual cone of a closed convex cone $\cK$. The Lagrangian dual function associated with \eqref{cp-cone} is given by \[ d(\mu) := \inf\{f(x)+\mu^T(Ax-b): x\in X\}, \ \ \ \forall \mu \in -\cK. \] Assume that there exists a Lagrange multiplier for \eqref{cp-cone}, that is, a vector $\mu^* \in -\cK$ such that $d(\mu^*)=f^*$. Under this assumption, the following results are established in Corollary 2 and Proposition 10 of \cite{LaMo12}, respectively. \begin{lemma} \label{gap-infeas} Let $\mu^*$ be a Lagrange multiplier for \eqref{cp-cone}. There holds: \[ f(x)-f^* \ge -\|\mu^*\|d_{\cK^*}(Ax-b), \ \ \ \forall x\in X. \] \end{lemma} \begin{lemma} \label{approx-soln} Let $\rho>0$ be given and $L_\rho = L_f+\rho\|A\|^2$. Consider the problem \beq \label{cp-pensub} \Phi^*_\rho := \min\limits_{x\in X} \{\Phi_\rho(x) := f(x) + \frac{\rho}{2} [d_{\cK^*}(Ax-b)]^2\}. \eeq If $x\in X$ is a $\xi$-approximate solution of \eqref{cp-pensub}, i.e., $\Phi_\rho(x)-\Phi^*_\rho \le \xi$, then the pair $(x^+,\mu)$ defined as \[ \ba{lcl} x^+ &:=& \Pi_X(x-\nabla \Phi_\rho(x)/{L_\rho}), \\ [5pt] \mu &:=& \rho[Ax^+ - b - \Pi_{\cK^*}(Ax^+ - b)] \ea \] is in $X \times (-\cK)$ and satisfies $\mu^T\Pi_{\cK^*}(Ax^+ - b) = 0$ and the relations \[ \ba{lcl} d_{\cK^*}(Ax^+-b) &\le& \frac{1}{\rho} \|\mu^*\| + \sqrt{\frac{\xi}{\rho}}, \\ [6pt] \nabla f(x^+) + A^T \mu &\in& -\cN_X(x^+) + \cU(2\sqrt{2L_\rho \xi}), \ea \] where $\mu^*$ is an arbitrary Lagrange multiplier for \eqref{cp-cone}. \end{lemma} \section{$l_0$ regularized box constrained convex programming} \label{l0-box} In this section we consider a special case of \eqref{l0-cone}, that is, the $l_0$ regularized box constrained convex programming problem in the form of: \beq \label{l0-min} \ba{rl} \underline{F}^* := \min & F(x) := f(x) + \lambda \|x\|_0 \\ \mbox{s.t.} & l \le x \le u \ea \eeq for some $\lambda >0$, $l \in \bar\Re^n_{-}$ and $u \in \bar\Re^n_{+}$. Recently, Blumensath and Davies \cite{BlDa08,BlDa09} proposed an iterative hard thresholding (IHT) method for solving a special case of \eqref{l0-min} with $f(x)=\|Ax-b\|^2$, $l_i=-\infty$ and $u_i=\infty$ for all $i$. Our aim is to extend their IHT method to solve \eqref{l0-min} and study its convergence. In addition, we establish its iteration complexity for finding an $\epsilon$-local-optimal solution of \eqref{l0-min}. Finally, we propose a variant of the IHT method in which only a ``local'' Lipschitz constant of $\nabla f$ is used. Throughout this section we assume that $f$ is a smooth convex function in $\cB$ whose gradient is Lipschitz continuous with constant $L_f>0$, and also that $f$ is bounded below on the set $\cB$, where \beq \label{cB} \cB := \{x\in \Re^n:l \le x \le u \}. \eeq We now present an IHT method for solving problem \eqref{l0-min}. \gap \noindent {\bf Iterative hard thresholding method for \eqnok{l0-min}:} \\ [5pt] Choose an arbitrary $x^0\in \cB$. Set $k=0$.
\begin{itemize} \item[1)] Solve the subproblem \beq \label{subprob} x^{k+1} \in \Arg\min\limits_{x\in \cB}\{f(x^k)+\nabla f(x^k)^T(x-x^k)+\frac{L}{2}\|x-x^k\|^2+\lambda \|x\|_0\}. \eeq \item[2)] Set $k \leftarrow k+1$ and go to step 1). \end{itemize} \noindent {\bf end} \vgap \begin{remark} The subproblem \eqref{subprob} has a closed form solution given in \eqref{IHT}. \end{remark} \vgap In what follows, we study the convergence of the above IHT method for \eqref{l0-min}. Before proceeding, we introduce some notation that will be used subsequently. Define \beqa \cB_I &:=& \{x\in \cB: x_I = 0\}, \ \ \ \forall I \subseteq \{1,\ldots,n\}, \label{BI} \\ [6pt] \Pi_\cB(x) &:=& \arg\min \{\|y-x\|: y \in \cB\}, \ \ \ \forall x \in \Re^n, \nn \\ [6pt] s_L(x) &:=& x - \frac{1}{L} \nabla f(x), \ \ \ \forall x \in \cB, \label{sL} \\ [6pt] I(x) &:=& \{i: x_i = 0\}, \ \ \ \forall x \in \Re^n \label{Ix} \eeqa for some constant $L > L_f$. The following lemma establishes some properties of the operators $s_L(\cdot)$ and $\Pi_\cB(s_L(\cdot))$, which will be used subsequently. \begin{lemma} \label{diff-xy} For any $x$, $y \in \Re^n$, there hold: \bi \item[\rm (1)] $|[s_L(x)]^2_i-[s_L(y)]^2_i| \le 4(\|x-y\|+|[s_L(y)]_i|)\|x-y\|$; \item[\rm (2)] $|[\Pi_{\cB}(s_L(x))-s_L(x)]^2_i - [\Pi_{\cB}(s_L(y))-s_L(y)]^2_i| \le 4(\|x - y\| + |[\Pi_{\cB}(s_L(y))-s_L(y)]_i|)\|x - y\|$. \ei \end{lemma} \begin{proof} (1) We observe that \beqa \|s_L(x)-s_L(y)\| &=& \|x-y-\frac1L(\nabla f(x)-\nabla f(y))\| \ \le \ \|x-y\| + \frac1L\|\nabla f(x)-\nabla f(y)\|, \nn \\ &\le & (1+\frac{L_f}{L}) \|x-y\| \ \le \ 2\|x-y\|. \label{diff-s} \eeqa It follows from \eqref{diff-s} that \[ \ba{lcl} |[s_L(x)]^2_i-[s_L(y)]^2_i| & = & |[s_L(x)]_i+[s_L(y)]_i| \cdot |[s_L(x)]_i-[s_L(y)]_i|, \\ [7pt] &\le & (|[s_L(x)]_i-[s_L(y)]_i| + 2|[s_L(y)]_i|) \cdot |[s_L(x)]_i-[s_L(y)]_i|, \\ [7pt] & \le & 4(\|x-y\|+|[s_L(y)]_i|)\|x-y\|. \ea \] (2) It can be shown that \[ \|\Pi_{\cB}(x)-x +y - \Pi_{\cB}(y)\| \ \le \ \|x-y\|. \] Using this inequality and \eqref{diff-s}, we then have \[ \ba{l} |[\Pi_{\cB}(s_L(x))-s_L(x)]^2_i - [\Pi_{\cB}(s_L(y))-s_L(y)]^2_i| \\ [5pt] \le (|[\Pi_{\cB}(s_L(x))-s_L(x)]_i - [\Pi_{\cB}(s_L(y))-s_L(y)]_i| +2 |[\Pi_{\cB}(s_L(y))-s_L(y)]_i|) \\ \ \ \ \cdot |[\Pi_{\cB}(s_L(x))-s_L(x)]_i - [\Pi_{\cB}(s_L(y))-s_L(y)]_i|, \\ [5pt] \le (\|s_L(x) - s_L(y)\| +2 |[\Pi_{\cB}(s_L(y))-s_L(y)]_i|)\cdot\|s_L(x) - s_L(y)\|, \\ [5pt] \le 4(\|x - y\| + |[\Pi_{\cB}(s_L(y))-s_L(y)]_i|)\|x - y\|. \ea \] \end{proof} \gap The following lemma shows that for the sequence $\{x^k\}$, the magnitude of any nonzero component $x^k_i$ cannot be too small for $k \ge 1$. \begin{lemma} Let $\{x^k\}$ be generated by the above IHT method. Then, for all $k \ge 0$, \beq \label{lower-bdd} |x^{k+1}_j| \ \ge \ \delta := \min\limits_{i \notin I_0} \delta_i \ > \ 0, \ \ \ \mbox{if} \ \ x^{k+1}_j \neq 0, \eeq where $I_0 = \{i: l_i=u_i=0\}$ and \beq \label{deltai} \delta_i = \left\{\ba{ll} \min(u_i,\sqrt{2\lambda/L}), & \mbox{if} \ l_i=0, \\ [4pt] \min(-l_i,\sqrt{2\lambda/L}), & \mbox{if} \ u_i=0, \\ [4pt] \min(-l_i,u_i,\sqrt{2\lambda/L}), & \mbox{otherwise}, \ea\right. \quad\quad\quad\forall i \notin I_0.
\eeq \end{lemma} \begin{proof} One can observe from \eqref{subprob} that for $i=1, \ldots, n$, \beq \label{IHT} x^{k+1}_i = \left\{\ba{ll} [\Pi_\cB(s_L(x^k))]_i, & \ \mbox{if} \ [s_L(x^k)]^2_i-[\Pi_{\cB}(s_L(x^k))-s_L(x^k)]^2_i > \frac{2\lambda}{L}, \\ [7pt] 0, & \ \mbox{if} \ [s_L(x^k)]^2_i-[\Pi_{\cB}(s_L(x^k))-s_L(x^k)]^2_i < \frac{2\lambda}{L}, \\ [7pt] [\Pi_\cB(s_L(x^k))]_i \ \mbox{or} \ 0, & \ \mbox{otherwise} \ea\right. \eeq (see, for example, \cite{LuZh12}). Suppose that $j$ is an index such that $x^{k+1}_j \neq 0$. Clearly, $j \notin I_0$, where $I_0$ is defined above. It follows from \eqref{IHT} that \beq \label{xki} x^{k+1}_j = [\Pi_\cB(s_L(x^k))]_j \neq 0, \quad\quad [s_L(x^k)]^2_j-[\Pi_{\cB}(s_L(x^k))-s_L(x^k)]^2_j \ge \frac{2\lambda}{L}. \eeq The second relation of \eqref{xki} implies that $|[s_L(x^k)]_j| \geq \sqrt{2\lambda/L} $. In addition, by the first relation of \eqref{xki} and the definition of $\Pi_\cB$, we have \beq \label{xj} x^{k+1}_j=[\Pi_\cB(s_L(x^k))]_j = \min(\max([s_L(x^k)]_j,l_j),u_j) \neq 0. \eeq Recall that $j \notin I_0$. We next show that $|x^{k+1}_j| \ge \delta_j$ by considering three separate cases: i) $l_j=0$; ii) $u_j=0$; and iii) $l_j u_j \neq 0$. For case i), it follows from \eqref{xj} that $[s_L(x^k)]_j \ge 0$ and $x^{k+1}_j=\min([s_L(x^k)]_j,u_j)$. This together with the relation $|[s_L(x^k)]_j| \geq \sqrt{2\lambda/L} $ and the definition of $\delta_j$ implies that $|x^{k+1}_j| \ge \delta_j$. By similar arguments, we can show that $|x^{k+1}_j| \ge \delta_j$ also holds for the other two cases. Then, it is easy to see that the conclusion of this lemma holds. \end{proof} \gap We next establish that the sequence $\{x^k\}$ converges to a local minimizer of \eqref{l0-min}, and moreover, $F(x^k)$ converges to a local minimum value of \eqref{l0-min}. \begin{theorem} \label{limit-thm} Let $\{x^k\}$ be generated by the above IHT method. Then, $x^k$ converges to a local minimizer $x^*$ of problem \eqref{l0-min} and moreover, $I(x^k) \to I(x^*)$, $\|x^k\|_0 \to \|x^*\|_0$ and $F(x^k) \to F(x^*)$. \end{theorem} \begin{proof} Since $\nabla f$ is Lipschitz continuous with constant $L_f$, we have \[ f(x^{k+1}) \ \le \ f(x^k)+\nabla f(x^k)^T(x^{k+1}-x^k)+\frac{L_f}{2}\|x^{k+1}-x^k\|^2. \] Using this inequality, the fact that $L > L_f$, and \eqref{subprob}, we obtain that \[ \ba{lcl} F(x^{k+1}) &=& f(x^{k+1})+ \lambda \|x^{k+1}\|_0 \ \le \ \overbrace{ f(x^k)+\nabla f(x^k)^T(x^{k+1}-x^k)+\frac{L_f}{2}\|x^{k+1}-x^k\|^2 + \lambda \|x^{k+1}\|_0}^a, \nn \\ [14pt] &\le & \underbrace{f(x^k)+\nabla f(x^k)^T(x^{k+1}-x^k)+\frac{L}{2}\|x^{k+1}-x^k\|^2 + \lambda \|x^{k+1}\|_0 }_b \nn \\ [14pt] &\le & f(x^k)+\lambda \|x^k\|_0 \ = \ F(x^k), \ea \] where the last inequality follows from \eqref{subprob}. The above inequality implies that $\{F(x^k)\}$ is nonincreasing and moreover, \beq \label{diff-seq} F(x^k) - F(x^{k+1}) \ \ge \ b-a \ = \ \frac{L-L_f}{2} \|x^{k+1}-x^k\|^2. \eeq By the assumption, we know that $f$ is bounded below in $\cB$. It then follows that $\{F(x^k)\}$ is bounded below. Hence, $\{F(x^k)\}$ converges to a finite value as $k \to \infty$, which together with \eqref{diff-seq} implies that \beq \label{diff-lim} \lim_{k \to \infty}\|x^{k+1}-x^k\| = 0. \eeq Let $I_k = I(x^k)$, where $I(\cdot)$ is defined in \eqref{Ix}. In view of \eqref{lower-bdd}, we observe that \beq \label{x-change} \|x^{k+1}-x^k\| \ge \delta \ \ \mbox{if} \ \ I_k \neq I_{k+1}. \eeq This together with \eqref{diff-lim} implies that $I_k$ does not change when $k$ is sufficiently large.
Hence, there exist some $K \ge 0$ and $I \subseteq \{1,\ldots,n\}$ such that $I_k = I$ for all $k \geq K$. Then one can observe from \eqref{subprob} that \[ x^{k+1} = \arg\min\limits_{x\in \cB_I}\{f(x^k)+\nabla f(x^k)^T(x-x^k)+\frac{L}{2}\|x-x^k\|^2\}, \ \ \ \forall k > K, \] where $\cB_I$ is defined in \eqref{BI}. It follows from Lemma \ref{sd-lemma} that $x^k \to x^*$, where \beq \label{loc-min} x^* \in \Arg\min \{f(x): x\in \cB_I\}. \eeq It is not hard to see from \eqref{loc-min} that $x^*$ is a local minimizer of \eqref{l0-min}. In addition, we know from \eqref{lower-bdd} that $|x^k_i| \ge \delta$ for $k > K$ and $i \notin I$. It yields $|x^*_i| \ge \delta$ for $i \notin I$ and $x^*_i = 0$ for $i \in I$. Hence, $I(x^k) = I(x^*)=I$ for all $k > K$, which clearly implies that $\|x^k\|_0 = \|x^*\|_0$ for every $k > K$. By continuity of $f$, we have $f(x^k) \to f(x^*)$. It then follows that \[ F(x^k) = f(x^k) + \lambda \|x^k\|_0 \to f(x^*)+\lambda \|x^*\|_0 = F(x^*). \] \end{proof} \gap As shown in Theorem \ref{limit-thm}, $x^k \to x^*$ for some local minimizer $x^*$ of \eqref{l0-min} and $F(x^k) \to F(x^*)$. Our next aim is to establish the iteration complexity of the IHT method for finding an $\epsilon$-local-optimal solution $x_\eps \in \cB$ of \eqref{l0-min} satisfying $F(x_\eps) \le F(x^*)+\epsilon$ and $I(x_\eps)=I(x^*)$. Before proceeding, we define \beqa \alpha &=& \min\limits_{I \subseteq \{1,\ldots,n\}} \left\{\min\limits_{i} \left|[s_L(x^*)]^2_i -[\Pi_{\cB}(s_L(x^*))-s_L(x^*)]^2_i -\frac{2\lambda}{L}\right|: \ x^* \in \Arg\min \{f(x): \ x\in \cB_I\}\right\}, \label{alpha} \\ [6pt] \beta &=& \max\limits_{I \subseteq \{1,\ldots,n\}} \left\{\max\limits_{i} |[s_L(x^*)]_i|+|[\Pi_{\cB}(s_L(x^*))-s_L(x^*)]_i|: \ x^* \in \Arg\min \{f(x): \ x\in \cB_I\}\right\}. \label{beta} \eeqa \begin{theorem} \label{complexity} Assume that $f$ is a smooth strongly convex function with modulus $\sigma>0$. Suppose that $L>L_f$ is chosen such that $\alpha >0$. Let $\{x^k\}$ be generated by the above IHT method, $I_k = I(x^k)$ for all $k$, $x^* =\lim_{k \to \infty} x^k$, and $F^* = F(x^*)$. Then, for any given $\eps>0$, the following statements hold: \bi \item[\rm (i)] The number of changes of $I_k$ is at most $\left\lfloor \frac{2(F(x^0)-F^*)}{(L-L_f)\delta^2} \right\rfloor$. \item[\rm (ii)] The total number of iterations by the IHT method for finding an $\eps$-local-optimal solution $x_\eps \in \cB$ satisfying $I(x_\eps)=I(x^*)$ and $F(x_\eps) \le F^*+\eps$ is at most $2\lceil L/\sigma\rceil \log \frac{\theta}{\eps}$, where \beqa &\theta = (F(x^0) - F^*)2^{\frac{\omega+3}{2}}, \quad \omega= \max\limits_t \left\{(d- 2 c) t -ct^2: 0 \le t \le \left\lfloor \frac{2(F(x^0)-F^*)}{(L-L_f)\delta^2} \right\rfloor\right\}, \label{thetaw}\\ [6pt] &c=\frac{(L-L_f)\delta^2}{2(F(x^0)-\underline{F}^*)}, \quad\quad\ \gamma = \sigma(\sqrt{2\alpha+\beta^2}-\beta)^2/32, \label{gammac}\\ [6pt] &d = 2 \log(F(x^0) - \underline{F}^*) +4- 2 \log \gamma+ c. \nn \eeqa \ei \end{theorem} \begin{proof} (i) As shown in Theorem \ref{limit-thm}, $I_k$ only changes a finite number of times. Assume that $I_k$ only changes at $k=n_1+1, \ldots, n_J+1$, that is, \beq \label{chg-I} I_{n_{j-1}+1} = \cdots = I_{n_j} \neq I_{n_j+1} = \cdots = I_{n_{j+1}}, \ j=1, \ldots, J-1, \eeq where $n_0=0$. We next bound $J$, i.e., the total number of changes of $I_k$.
In view of \eqref{x-change} and \eqref{chg-I}, one can observe that \[ \|x^{n_j+1}-x^{n_j}\| \ge \delta, \ \ \ j=1, \ldots, J, \] which together with \eqref{diff-seq} implies that \beq \label{diff-Fx} F(x^{n_j}) - F(x^{n_j+1}) \ge \frac12(L-L_f)\delta^2, \ \ \ j=1,\ldots, J. \eeq Summing up these inequalities and using the monotonicity of $\{F(x^k)\}$, we have \beq \label{ineq-J} \frac12(L-L_f)\delta^2 J \ \le \ F(x^{n_1}) - F(x^{n_J+1}) \ \le \ F(x^0)-F^*, \eeq and hence \beq \label{bound-J} J \ \le \ \left\lfloor \frac{2(F(x^0)-F^*)}{(L-L_f)\delta^2} \right\rfloor. \eeq (ii) Let $n_j$ be defined as above for $j=1, \ldots, J$. We first show that \beq \label{diff-nj} n_j - n_{j-1} \ \le \ 2+ 2\lceil L/\sigma\rceil \left\lceil \log\left(F(x^0) - (j-1)(L-L_f) \delta^2/2 - \underline{F}^*\right) - \log \gamma \right\rceil, \ \ \ j=1,\ldots, J, \eeq where $\underline F^*$ and $\gamma$ are defined in \eqref{l0-min} and \eqref{gammac}, respectively. Indeed, one can observe from \eqref{subprob} that \[ x^{k+1} = \arg\min\limits_{x\in \cB}\{f(x^k)+\nabla f(x^k)^T(x-x^k)+\frac{L}{2}\|x-x^k\|^2: x_{I_{k+1}}=0\}. \] Therefore, for $j=1, \ldots, J$ and $k=n_{j-1}, \ldots, n_j-1$, \[ x^{k+1} = \arg\min\limits_{x\in \cB}\{f(x^k)+\nabla f(x^k)^T(x-x^k)+\frac{L}{2}\|x-x^k\|^2: x_{I_{n_j}}=0\}. \] We arbitrarily choose $1 \le j \le J$. Let $\bx^*$ (depending on $j$) denote the optimal solution of \beq \label{aux-prob} \min\limits_{x\in \cB}\{f(x): x_{I_{n_j}}=0\}. \eeq One can observe that \[ \|\bx^*\|_0 \ \le \ \|x^{n_{j-1}+1}\|_0. \] Also, it follows from \eqref{diff-Fx} and the monotonicity of $\{F(x^k)\}$ that \beq \label{bdd-Fx} F(x^{n_j+1}) \ \le \ F(x^0) - \frac{j}{2}(L-L_f) \delta^2, \ \ \ j=1,\ldots, J. \eeq Using these relations and the fact that $F(\bx^*) \ge \underline{F}^*$, we have \beqa f(x^{n_{j-1}+1}) - f(\bx^*) &=& F(x^{n_{j-1}+1}) - \lambda \|x^{n_{j-1}+1}\|_0 - F(\bx^*)+\lambda \|\bx^*\|_0, \nn \\ [6pt] & \le & F(x^0) - \frac{j-1}{2}(L-L_f) \delta^2 - \underline{F}^*. \label{bdd-Fxj} \eeqa Suppose for a contradiction that \eqref{diff-nj} does not hold for some $1 \le j \le J$. Then we have \[ n_j - n_{j-1} > 2+ 2\lceil L/\sigma\rceil \left\lceil \log\left(F(x^0) - (j-1)(L-L_f) \delta^2/2 - \underline{F}^*\right) - \log \gamma \right\rceil. \] This inequality and \eqref{bdd-Fxj} yield \[ n_j - n_{j-1} > 2+ 2\lceil L/\sigma\rceil \left\lceil \log\frac{f(x^{n_{j-1}+1})-f(\bx^*)}{\gamma}\right\rceil. \] Using the strong convexity of $f$ and applying Theorem \ref{sd-thm} (ii) to \eqref{aux-prob} with $\epsilon=\gamma$, we obtain that \[ \frac{\sigma}{2} \|x^{n_j}-\bx^*\|^2 \le f(x^{n_j}) - f(\bx^*) < \frac{\sigma}{32}(\sqrt{2\alpha+\beta^2}-\beta)^2. \] It implies that \beq \label{xdiff-bdd} \|x^{n_j}-\bx^*\| < \frac{\sqrt{2\alpha+\beta^2}-\beta}{4}. \eeq Using \eqref{xdiff-bdd}, Lemma \ref{diff-xy} and the definition of $\beta$, we have \beqa && |[s_L(x^{n_j})]^2_i-[s_L(\bx^*)]^2_i - [\Pi_{\cB}(s_L(x^{n_j}))-s_L(x^{n_j})]^2_i + [\Pi_{\cB}(s_L(\bx^*))-s_L(\bx^*)]^2_i| \nn \\ [5pt] && \le |[s_L(x^{n_j})]^2_i-[s_L(\bx^*)]^2_i| + |[\Pi_{\cB}(s_L(x^{n_j}))-s_L(x^{n_j})]^2_i - [\Pi_{\cB}(s_L(\bx^*))-s_L(\bx^*)]^2_i| \nn \\ [5pt] && \le \ 4(2\|x^{n_j}-\bx^*\|+\beta)\|x^{n_j}-\bx^*\| \ < \ \alpha, \label{diff-sp} \eeqa where the last inequality is due to \eqref{xdiff-bdd}. Let \[ I^* = \left\{i: [s_L(\bx^*)]^2_i-[\Pi_{\cB}(s_L(\bx^*))-s_L(\bx^*)]^2_i < \frac{2\lambda}{L}\right\} \] and let $\bar I^* = \{1,\ldots,n\}\setminus I^*$.
Since $\alpha>0$, we know that \[ [s_L(\bx^*)]^2_i-[\Pi_{\cB}(s_L(\bx^*))-s_L(\bx^*)]^2_i > \frac{2\lambda}{L}, \ \ \ \forall i \in \bar I^*. \] It then follows from \eqref{diff-sp} and the definition of $\alpha$ that \[ \ba{l} [s_L(x^{n_j})]^2_i- [\Pi_{\cB}(s_L(x^{n_j}))-s_L(x^{n_j})]^2_i < \frac{2\lambda}{L}, \ \ \ \forall i \in I^*, \\ [6pt] [s_L(x^{n_j})]^2_i- [\Pi_{\cB}(s_L(x^{n_j}))-s_L(x^{n_j})]^2_i > \frac{2\lambda}{L}, \ \ \ \forall i \in \bar I^*. \ea \] Observe that $[\Pi_\cB(s_L(x^{n_j}))]_i \neq 0$ for all $i \in \bar I^*$. This fact together with \eqref{IHT} implies that \[ x^{n_j+1}_i =0, \ i \in I^* \ \ \mbox{and} \ \ x^{n_j+1}_i \neq 0, \ i \in \bar I^*. \] By a similar argument, one can show that \[ x^{n_j}_i =0, \ i \in I^* \ \ \mbox{and} \ \ x^{n_j}_i \neq 0, \ i \in \bar I^*. \] Hence, $I_{n_j} = I_{n_j+1} = I^*$, which is a contradiction to \eqref{chg-I}. We thus conclude that \eqref{diff-nj} holds. Let $N_\eps$ denote the total number of iterations for finding an $\eps$-local-optimal solution $x_\eps \in \cB$ by the IHT method satisfying $I(x_\eps)=I(x^*)$ and $F(x_\eps) \le F^*+\eps$. We next establish an upper bound for $N_\eps$. Summing up the inequality \eqref{diff-nj} for $j=1,\ldots, J$, we obtain that \[ n_J \le \sum^J\limits_{j=1} \left\{2+ 2\lceil L/\sigma\rceil \left\lceil \log(F(x^0) - \frac{j-1}{2}(L-L_f) \delta^2 - \underline{F}^*) - \log \gamma \right\rceil\right\}. \] Using this inequality, \eqref{ineq-J}, and the facts that $L \ge \sigma$ and $\log(1-t) \le -t $ for all $t\in (0,1)$, we have \beqa n_J &\le& \sum^J\limits_{j=1} \left[2+ 2\lceil L/\sigma \rceil \left(\log(F(x^0) - \frac{j-1}{2}(L-L_f) \delta^2 - \underline{F}^*) - \log \gamma +1\right)\right], \nn \\ & \le & \sum^J\limits_{j=1} \left[2+ 2\lceil L/\sigma\rceil \left(\log(F(x^0) - \underline{F}^*) - \frac{(L-L_f)\delta^2}{2(F(x^0)-\underline{F}^*)}(j-1) - \log \gamma +1\right)\right], \nn \\ &\le& \lceil L/\sigma \rceil\left[\underbrace{\left(2 \log(F(x^0) - \underline{F}^*) +4- 2 \log \gamma + \frac{(L-L_f)\delta^2}{2(F(x^0)-\underline{F}^*)}\right)}_d J - \underbrace{\frac{(L-L_f)\delta^2}{2(F(x^0)-\underline{F}^*)}}_c J^2\right]. \label{n_J} \eeqa By the definition of $n_J$, we observe that after $n_J+1$ iterations, the IHT method becomes the projected gradient method applied to the problem \[ x^* = \arg\min\limits_{x\in \cB}\{f(x): x_{I_{n_J+1}}=0\}. \] In addition, we know from Theorem \ref{limit-thm} that $I(x^k) = I(x^*)$ for all $k > n_J$. Hence, $f(x^k)-f(x^*)=F(x^k)-F^*$ when $k > n_J$. Using these facts and Theorem \ref{sd-thm} (ii), we have \[ N_\eps \le n_J + 1+2\lceil L/\sigma\rceil \left\lceil \log\frac{F(x^{n_J+1})-F^*}{\epsilon}\right\rceil. \] Using this inequality, \eqref{bdd-Fx}, \eqref{n_J} and the facts that $F^* \ge \underline{F}^*$, $L \ge \sigma$ and $\log(1-t) \le -t $ for all $t\in (0,1)$, we obtain that \beqas N_\eps &\le& n_J + 1+2\lceil L/\sigma\rceil \left(\log(F(x^0) - \frac{J}{2}(L-L_f) \delta^2 - F^*) +1 - \log \eps\right), \\ &\le& n_J + \lceil L/\sigma\rceil \left(2 \log(F(x^0) - F^*) - \frac{(L-L_f)\delta^2J}{F(x^0)-F^*} +3 -2 \log \eps\right) \\ &\le&\lceil L/\sigma\rceil \left[ (d- 2 c) J -cJ^2 +2 \log(F(x^0) - F^*)+3 -2 \log \eps\right], \eeqas which together with \eqref{bound-J} and \eqref{thetaw} implies that \[ N_\eps \ \le \ 2\lceil L/\sigma\rceil \log \frac{\theta}{\eps}. \] \end{proof} \gap The iteration complexity given in Theorem \ref{complexity} is based on the assumption that $f$ is strongly convex in $\cB$. 
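Before turning to the case where this assumption is dropped, we pause to illustrate the method computationally. The following Python sketch is ours and is given for illustration only: it implements one IHT step directly from the closed-form solution \eqref{IHT}, resolving ties toward zero, and the toy objective, box and parameter values are arbitrary choices rather than part of the analysis.
\begin{verbatim}
import numpy as np

# One IHT step for min f(x) + lam*||x||_0 over the box l <= x <= u,
# written out from the closed-form solution of the subproblem.
# Ties (equality in the threshold test) are resolved toward zero here.
def iht_step(x, grad_f, L, lam, l, u):
    s = x - grad_f(x) / L                     # s_L(x) = x - grad f(x)/L
    p = np.clip(s, l, u)                      # Pi_B(s_L(x)), componentwise
    keep = s**2 - (p - s)**2 > 2.0 * lam / L  # hard-thresholding test
    return np.where(keep, p, 0.0)

# Toy instance: f(x) = 0.5*||x - a||^2, so grad f(x) = x - a and L_f = 1;
# any L > L_f is admissible, e.g. L = 1.5.
a = np.array([2.0, 0.05, -1.5])
l, u = -10.0 * np.ones(3), 10.0 * np.ones(3)
x = np.zeros(3)
for _ in range(50):
    x = iht_step(x, lambda z: z - a, L=1.5, lam=0.1, l=l, u=u)
print(x)  # the small component of a is thresholded to exactly zero
\end{verbatim}
On this toy instance the support $I(x^k)$ stabilizes after a few steps and the small component of $a$ is set exactly to zero, in accordance with Theorem \ref{limit-thm}.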
We next consider a case where $\cB$ is bounded and $f$ is convex but not strongly convex. We will establish the iteration complexity of finding an $\epsilon$-local-optimal solution of \eqref{l0-min} by the IHT method applied to a perturbation of \eqref{l0-min} obtained by adding a ``small'' strongly convex regularization term to $f$. Consider a perturbation of \eqref{l0-min} in the form of \beq \label{pert-l0} {\underline{F}}^*_\nu := \min\limits_{x\in\cB} \{F_\nu(x) := f_\nu(x) + \lambda \|x\|_0\}, \eeq where $\nu >0$ and \[ f_\nu(x) := f(x) + \frac{\nu}{2}\|x\|^2. \] One can easily see that $f_\nu$ is strongly convex in $\cB$ with modulus $\nu$ and moreover $\nabla f_\nu$ is Lipschitz continuous with constant $L_\nu$, where \beq \label{Lnu} L_\nu = L_f + \nu. \eeq We next establish the iteration complexity of finding an $\epsilon$-local-optimal solution of \eqref{l0-min} by the IHT method applied to \eqref{pert-l0}. Given any $L>0$, let $s_L$, $\alpha$ and $\beta$ be defined according to \eqref{sL}, \eqref{alpha} and \eqref{beta}, respectively, by replacing $f$ by $f_\nu$, and let $\delta$ be defined in \eqref{lower-bdd}. \begin{theorem} Suppose that $\cB$ is bounded and $f$ is convex but not strongly convex. Let $\eps >0$ be arbitrarily given, $D = \max\{\|x\|: x \in \cB\}$, $\nu = \epsilon/D^2$, and $L > L_\nu$ be chosen such that $\alpha >0$. Let $\{x^k\}$ be generated by the IHT method applied to \eqref{pert-l0}, and let $x^* =\lim_{k \to \infty} x^k$, $F^*_\nu = F_\nu(x^*)$ and $F^* = \min\{F(x): x\in \cB_{I^*}\}$, where $I^*=\{i: x^*_i=0\}$. Then, the total number of iterations by the IHT method for finding an $\eps$-local-optimal solution $x_\eps \in \cB$ satisfying $F(x_\eps) \le F^*+\eps$ is at most $2\left\lceil \frac{D^2L_f}{\eps}+1\right\rceil \log \frac{2\theta}{\eps}$, where \beqas &\theta = (F_\nu(x^0) - F_\nu^*)2^{\frac{\omega+3}{2}}, \quad \omega = \max\limits_t \left\{(d- 2 c) t -ct^2: 0 \le t \le \left\lfloor \frac{2(F_\nu(x^0)-F_\nu^*)}{(L-L_\nu)\delta^2} \right\rfloor\right\}, \\ [6pt] & c=\frac{(L-L_\nu)\delta^2}{2(F_\nu(x^0)-{\underline{F}}^*_\nu)}, \quad\quad \gamma = \nu(\sqrt{2\alpha+\beta^2}-\beta)^2/32, \\ [6pt] & d = 2 \log(F_\nu(x^0) - {\underline{F}}^*_\nu) +4- 2 \log \gamma+ c. \eeqas \end{theorem} \begin{proof} By Theorem \ref{complexity} (ii), we see that the IHT method applied to \eqref{pert-l0} finds an $\eps/2$-local-optimal solution $x_\eps \in \cB$ of \eqref{pert-l0} satisfying $I(x_\eps)=I(x^*)$ and $F_\nu(x_\eps) \le F^*_\nu+\eps/2$ within $2\lceil L_\nu/\nu\rceil \log \frac{2\theta}{\eps}$ iterations. From the proof of Theorem \ref{limit-thm}, we observe that \[ F_\nu(x^*) = \min \{F_\nu(x): x \in \cB_{I^*}\}. \] Hence, we have \[ F^*_\nu \ = \ F_\nu(x^*) \ \le \ \min\limits_{x\in\cB_{I^*}} F(x) + \frac{\nu D^2}{2} \ \le \ F^* + \frac{\eps}{2}. \] In addition, we observe that $F(x_\eps) \le F_\nu(x_\eps)$. Hence, it follows that \[ F(x_\eps) \ \le \ F_\nu(x_\eps) \ \le \ F^*_\nu+\frac{\eps}{2} \ \le \ F^* + \eps. \] Note that $F^*$ is a local optimal value of \eqref{l0-min}. Hence, $x_\eps$ is an $\eps$-local-optimal solution of \eqref{l0-min}. The conclusion of this theorem then follows from \eqref{Lnu} and $\nu=\eps/D^2$. \end{proof} \gap For the above IHT method, a fixed $L$ is used throughout all iterations, which may be too conservative. To improve its practical performance, we can use a ``local'' $L$ that is updated dynamically. The resulting variant of the method is presented as follows.
\gap \noindent {\bf A variant of IHT method for \eqref{l0-min}:} \\ [5pt] Let $0< L_{\min} < L_{\max}$, $\tau>1$ and $\eta>0$ be given. Choose an arbitrary $x^0 \in \cB$ and set $k=0$. \begin{itemize} \item[1)] Choose $L^0_k \in [L_{\min}, L_{\max}]$ arbitrarily. Set $L_k = L^0_k$. \bi \item[1a)] Solve the subproblem \beq \label{v-subprob} x^{k+1} \in \Arg\min\limits_{x\in \cB} \{f(x^k)+\nabla f(x^k)^T(x-x^k) +\frac{L_k}{2}\|x-x^k\|^2+\lambda \|x\|_0\}. \eeq \item[1b)] If \beq \label{descent} F(x^k) - F(x^{k+1}) \ge \frac{\eta}{2} \|x^{k+1}-x^k\|^2 \eeq is satisfied, then go to step 2). \item[1c)] Set $L_k \leftarrow \tau L_k$ and go to step 1a). \ei \item[2)] Set $k \leftarrow k+1$ and go to step 1). \end{itemize} \noindent {\bf end} \gap \begin{remark} $L^0_k$ can be chosen by a scheme similar to that used in \cite{BaBo88,BiMaRa00}, that is, \[ L^0_k = \max\left\{L_{\min},\min\left\{L_{\max},\frac{\Delta f^T \Delta x}{\|\Delta x\|^2}\right\}\right\}, \] where $\Delta x = x^k -x^{k-1}$ and $\Delta f= \nabla f(x^k) - \nabla f(x^{k-1})$. \end{remark} \gap At each iteration, the IHT method solves a single subproblem in step 1). Nevertheless, its variant needs to solve a sequence of subproblems. We next show that for each outer iteration, its number of inner iterations is finite. \begin{theorem} \label{inner-iter} For each $k \ge 0$, the inner termination criterion \eqref{descent} is satisfied after at most $\left\lceil \frac{\log(L_f+\eta)-\log(L_{\min})} {\log \tau} +2\right\rceil$ inner iterations. \end{theorem} \begin{proof} Let $\bar L_k$ denote the final value of $L_k$ at the $k$th outer iteration. By \eqref{v-subprob} and similar arguments to those used for deriving \eqref{diff-seq}, one can show that \[ F(x^k) - F(x^{k+1}) \ \ge \ \frac{L_k-L_f}{2} \|x^{k+1}-x^k\|^2. \] Hence, \eqref{descent} holds whenever $L_k \ge L_f+\eta$, which together with the definition of $\bar L_k$ implies that $\bar L_k/\tau < L_f+\eta$, that is, $\bar L_k <\tau(L_f+\eta)$. Let $n_k$ denote the number of inner iterations for the $k$th outer iteration. Then, we have \[ L_{\min} \tau^{n_k-1} \le L^0_k \tau^{n_k-1} = \bar L_k < \tau(L_f+\eta). \] Hence, $n_k \le \left\lceil \frac{\log(L_f+\eta)-\log(L_{\min})}{\log \tau} +2\right\rceil$ and the conclusion holds. \end{proof} \gap We next establish that the sequence $\{x^k\}$ generated by the above variant of the IHT method converges to a local minimizer of \eqref{l0-min} and moreover, $F(x^k)$ converges to a local minimum value of \eqref{l0-min}. \begin{theorem} \label{outer-iter} Let $\{x^k\}$ be generated by the above variant of the IHT method. Then, $x^k$ converges to a local minimizer $x^*$ of problem \eqref{l0-min}, and moreover, $I(x^k) \to I(x^*)$, $\|x^k\|_0 \to \|x^*\|_0$ and $F(x^k) \to F(x^*)$. \end{theorem} \begin{proof} Let $\bar L_k$ denote the final value of $L_k$ at the $k$th outer iteration. From the proof of Theorem \ref{inner-iter}, we know that $\bar L_k \in [L_{\min}, \tau(L_f+\eta))$. Using this fact and a similar argument as used to prove \eqref{lower-bdd}, one can obtain that \[ |x^{k+1}_j| \ \ge \ \bar\delta := \min\limits_{i \notin I_0} \bar\delta_i \ > \ 0, \ \ \ \mbox{if} \ \ x^{k+1}_j \neq 0, \] where $I_0 = \{i: l_i=u_i=0\}$ and $\bar\delta_i$ is defined according to \eqref{deltai} by replacing $L$ by $\tau(L_f+\eta)$ for all $i \notin I_0$. It implies that \[ \|x^{k+1}-x^k\| \ge \bar \delta \ \ \mbox{if} \ \ I(x^k) \neq I(x^{k+1}).
\] The conclusion then follows from this inequality and similar arguments to those used in the proof of Theorem \ref{limit-thm}. \end{proof} \section{$l_0$-regularized convex cone programming} \label{l0-cp} In this section we consider the $l_0$-regularized convex cone programming problem \eqref{l0-cone} and propose IHT methods for solving it. In particular, we apply the IHT method proposed in Section \ref{l0-box} to a quadratic penalty relaxation of \eqref{l0-cone} and establish the iteration complexity for finding an $\eps$-approximate local minimizer of \eqref{l0-cone}. We also propose a variant of the method in which the associated penalty parameter is dynamically updated, and show that every accumulation point is a local minimizer of \eqref{l0-cone}. Let $\cB$ be defined in \eqref{cB}. We assume that $f$ is a smooth convex function in $\cB$, $\nabla f$ is Lipschitz continuous with constant $L_f$ and that $f$ is bounded below on $\cB$. In addition, we make the following assumption throughout this section. \begin{assumption} \label{assump-cone} For each $I \subseteq \{1,\ldots,n\}$, there exists a Lagrange multiplier for \beq \label{convex-subprob} f^*_I = \min\{f(x): A x - b \in \cK^*, x \in \cB_I\}, \eeq provided that \eqref{convex-subprob} is feasible, that is, there exists $\mu^* \in -\cK$ such that $f^*_I = d_I(\mu^*)$, where \[ d_I(\mu) := \inf\{f(x)+ \mu^T(Ax-b): x\in \cB_I\}, \ \forall \mu \in -\cK. \] \end{assumption} Let $x^*$ be a point in $\cB$, and let $I^*=\{i: x^*_i=0\}$. One can observe that $x^*$ is a local minimizer of \eqref{l0-cone} if and only if $x^*$ is a minimizer of \eqref{convex-subprob} with $I = I^*$. Then, in view of Assumption \ref{assump-cone}, we see that $x^*$ is a local minimizer of \eqref{l0-cone} if and only if $x^* \in \cB$ and there exists $\mu^* \in -\cK$ such that \beq \label{KKT-cond} \ba{l} Ax^* - b \in \cK^*, \ \ \ (\mu^*)^T(Ax^*-b)=0, \\ [5pt] \nabla f(x^*) + A^T \mu^* \in -\cN_{\cB_{I^*}}(x^*). \ea \eeq Based on the above observation, we can define an approximate local minimizer of \eqref{l0-cone} to be the one that nearly satisfies \eqref{KKT-cond}. \begin{definition} Let $x^*$ be a point in $\cB$, and let $I^*=\{i: x^*_i=0\}$. $x^*$ is an $\epsilon$-approximate local minimizer of \eqref{l0-cone} if there exists $\mu^* \in -\cK$ such that \[ \ba{l} d_{\cK^*}(A x^* - b) \ \le \ \epsilon, \ \ \ (\mu^*)^T \Pi_{\cK^*}(Ax^*-b) = 0, \\ [6pt] \nabla f(x^*) + A^T \mu^* \in -\cN_{\cB_{I^*}}(x^*) + \cU(\epsilon). \ea \] \end{definition} \gap In what follows, we propose an IHT method for finding an approximate local minimizer of \eqref{l0-cone}. In particular, we apply the IHT method or its variant to a quadratic penalty relaxation of \eqref{l0-cone} which is in the form of \beq \label{l0-penalty} {\underline\Psi}^*_\rho := \min\limits_{x \in \cB} \{\Psi_\rho(x) := \Phi_\rho(x) + \lambda \|x\|_0\}, \eeq where \beq \label{Phi-rho} \Phi_\rho(x) := f(x) + \frac{\rho}{2} [d_{\cK^*}(Ax-b)]^2. \eeq It is not hard to show that the function $\Phi_\rho$ is convex differentiable and moreover $\nabla \Phi_{\rho}$ is Lipschitz continuous with constant \beq \label{L-rho} L_\rho = L_f + \rho\|A\|^2 \eeq (see, for example, Proposition 8 and Corollary 9 of \cite{LaMo12}). Therefore, problem \eqref{l0-penalty} can be suitably solved by the IHT method or its variant proposed in Section \ref{l0-box}.
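To make the penalized subproblem concrete, suppose that $\cK^*$ is the nonnegative orthant $\Re^m_+$, so that the constraint reads $Ax-b\ge 0$, $\Pi_{\cK^*}(y)=\max(y,0)$ componentwise, and $\nabla \Phi_\rho(x)=\nabla f(x)+\rho A^T[Ax-b-\Pi_{\cK^*}(Ax-b)]$. The following Python sketch is ours and serves only as an illustration; other cones require their own projection $\Pi_{\cK^*}$, and the toy instance at the end is an arbitrary choice.
\begin{verbatim}
import numpy as np

# Quadratic penalty Phi_rho and its gradient for K* = R^m_+,
# i.e. constraints Ax - b >= 0.  Here y - Pi_{K*}(y) = min(y, 0)
# and d_{K*}(y) = ||min(y, 0)||.
def phi_rho(x, f, grad_f, A, b, rho):
    r = A @ x - b
    viol = np.minimum(r, 0.0)               # r - Pi_{K*}(r)
    val = f(x) + 0.5 * rho * (viol @ viol)  # Phi_rho(x)
    grad = grad_f(x) + rho * (A.T @ viol)   # grad Phi_rho(x)
    return val, grad

# Toy check: f(x) = 0.5*||x||^2 with constraint x >= 1 (A = I, b = 1).
val, grad = phi_rho(np.zeros(2), lambda x: 0.5 * (x @ x),
                    lambda x: x, np.eye(2), np.ones(2), rho=10.0)
\end{verbatim}
Feeding this gradient to an IHT step, with any $L>L_\rho$ and $L_\rho$ as in \eqref{L-rho}, yields the penalized iteration analyzed below.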
Under the assumption that $f$ is strongly convex in $\cB$, we next establish the iteration complexity of the IHT method applied to \eqref{l0-penalty} for finding an approximate local minimizer of \eqref{l0-cone}. Given any $L>0$, let $s_L$, $\alpha$ and $\beta$ be defined according to \eqref{sL}, \eqref{alpha} and \eqref{beta}, respectively, by replacing $f$ by $\Phi_\rho$, and let $\delta$ be defined in \eqref{lower-bdd}. \begin{theorem} Assume that $f$ is a smooth strongly convex function with modulus $\sigma>0$. Given any $\eps >0$, let \beq \label{rho} \rho = \frac{t}{\epsilon}+\frac{1}{\sqrt{8}\|A\|} \eeq for any $t \ge \max\limits_{I \subseteq \{1,\ldots,n\}}\min\limits_{\mu \in \Lambda_I}\|\mu\|$, where $\Lambda_I$ is the set of Lagrange multipliers of \eqref{convex-subprob}. Let $L > L_\rho$ be chosen such that $\alpha >0$. Let $\{x^k\}$ be generated by the IHT method applied to \eqref{l0-penalty}, and let $x^* =\lim_{k \to \infty} x^k$ and $\Psi^*_\rho = \Psi_\rho(x^*)$. Then the IHT method finds an $\epsilon$-approximate local minimizer of \eqref{l0-cone} in at most \[ N :=2\left\lceil \frac{L_\rho}{\sigma} \right\rceil \log \frac{8L_\rho\theta}{\eps^2} \] iterations, where \beqas &\theta = (\Psi_\rho(x^0) - \Psi_\rho^*)2^{\frac{\omega+3}{2}}, \quad \omega = \max\limits_t \left\{(d- 2 c) t -ct^2: 0 \le t \le \left\lfloor \frac{2(\Psi_\rho(x^0)-\Psi_\rho^*)}{(L-L_\rho)\delta^2} \right\rfloor\right\}, \\ [6pt] & c=\frac{(L-L_\rho)\delta^2}{2(\Psi_\rho(x^0)-{\underline\Psi}^*_\rho)}, \quad\quad \gamma = \sigma(\sqrt{2\alpha+\beta^2}-\beta)^2/32, \\ [6pt] &d = 2 \log(\Psi_\rho(x^0) - {\underline\Psi}^*_\rho) +4- 2 \log \gamma+ c. \eeqas \end{theorem} \begin{proof} We know from Theorem \ref{limit-thm} that $x^k \to x^*$ for some local minimizer $x^*$ of \eqref{l0-penalty}, $I(x^k) \to I(x^*)$ and $\Psi_\rho(x^k) \to \Psi_\rho(x^*) = \Psi^*_\rho$. By Theorem \ref{complexity}, after at most $N$ iterations, the IHT method generates $\tx \in \cB$ such that $I(\tx)=I(x^*)$ and $\Psi_\rho(\tx)-\Psi_\rho(x^*) \le \xi := \epsilon^2/(8L_\rho)$. It then follows that $\Phi_\rho(\tx)-\Phi_\rho(x^*) \le \xi$. Since $x^*$ is a local minimizer of \eqref{l0-penalty}, we observe that \beq \label{penalty-subprob} x^* = \arg\min_{x\in \cB_{I^*}} \Phi_\rho(x), \eeq where $I^*=I(x^*)$. Hence, $\tx$ is a $\xi$-approximate solution of \eqref{penalty-subprob}. Let $\mu^* \in \Arg\min\{\|\mu\|: \mu \in \Lambda_{I^*}\}$, where $\Lambda_{I^*}$ is the set of Lagrange multipliers of \eqref{convex-subprob} with $I=I^*$. In view of Lemma \ref{approx-soln}, we see that the pair $(\tx^+,\mu)$ defined as $\tx^+ := \Pi_{\cB_{I^*}}(\tx-\nabla \Phi_\rho(\tx)/{L_\rho})$ and $\mu := \rho[A\tx^+ - b - \Pi_{\cK^*}(A\tx^+ - b)]$ satisfies \[ \ba{l} \nabla f(\tx^+) + A^T \mu \ \in \ -\cN_{\cB_{I^*}}(\tx^+) + \cU(2\sqrt{2L_{\rho}\xi}) \ = \ -\cN_{\cB_{I^*}}(\tx^+) + \cU(\epsilon), \\ [6pt] d_{\cK^*}(A\tx^+-b) \ \le \ \frac{1}{\rho}\|\mu^*\| + \sqrt{\frac{\xi}{\rho}} \ \le \ \frac{1}{\rho}\left(\|\mu^*\|+\frac{\epsilon}{\sqrt8 \|A\|}\right) \ \le \ \epsilon, \ea \] where the last inequality is due to \eqref{rho} and the choice of $t$, which ensures $t \ge \|\mu^*\|$. Hence, $\tx^+$ is an $\epsilon$-approximate local minimizer of \eqref{l0-cone}. \end{proof} \gap We next consider finding an $\epsilon$-approximate local minimizer of \eqref{l0-cone} for the case where $\cB$ is bounded and $f$ is convex but not strongly convex.
In particular, we apply the IHT method or its variant to a quadratic penalty relaxation of a perturbation of \eqref{l0-cone} obtained by adding a ``small'' strongly convex regularization term to $f$. Consider a perturbation of \eqref{l0-cone} in the form of \beq \label{pert-l0-cone} \min\limits_{x\in\cB}\{f(x) + \frac{\nu}{2}\|x\|^2 +\lambda \|x\|_0: \ A x - b \in \cK^*\}. \eeq The associated quadratic penalty problem for \eqref{pert-l0-cone} is given by \beq \label{qp-pert} {\underline\Psi}^*_{\rho,\nu} := \min\limits_{x\in\cB} \{\Psi_{\rho,\nu}(x) := \Phi_{\rho,\nu}(x) + \lambda \|x\|_0\}, \eeq where \[ \Phi_{\rho,\nu}(x) := f(x) + \frac{\nu}{2}\|x\|^2 + \frac{\rho}{2} [d_{\cK^*}(Ax-b)]^2. \] One can easily see that $\Phi_{\rho,\nu}$ is strongly convex in $\cB$ with modulus $\nu$ and moreover $\nabla \Phi_{\rho,\nu}$ is Lipschitz continuous with constant \[ L_{\rho,\nu} := L_f + \rho \|A\|^2 + \nu. \] Clearly, the IHT method or its variant can be suitably applied to \eqref{qp-pert}. We next establish the iteration complexity of the IHT method applied to \eqref{qp-pert} for finding an approximate local minimizer of \eqref{l0-cone}. Given any $L>0$, let $s_L$, $\alpha$ and $\beta$ be defined according to \eqref{sL}, \eqref{alpha} and \eqref{beta}, respectively, by replacing $f$ by $\Phi_{\rho,\nu}$, and let $\delta$ be defined in \eqref{lower-bdd}. \begin{theorem} Suppose that $\cB$ is bounded and $f$ is convex but not strongly convex. Let $\eps >0$ be arbitrarily given, $D = \max\{\|x\|: x \in \cB\}$, \beq \label{rho-nu} \rho = \frac{\left(\sqrt{D}+\sqrt{D+16t + \frac{2\sqrt{2}\epsilon} {\|A\|}}\right)^2}{16\epsilon}, \ \ \ \nu = \frac{\epsilon}{2D} \eeq for any $t \ge \max\limits_{I \subseteq \{1,\ldots,n\}}\min\limits_{\mu \in \Lambda_I}\|\mu\|$, where $\Lambda_I$ is the set of Lagrange multipliers of \eqref{convex-subprob}. Let $L > L_{\rho,\nu}$ be chosen such that $\alpha >0$. Let $\{x^k\}$ be generated by the IHT method applied to \eqref{qp-pert}, and let $x^* =\lim_{k \to \infty} x^k$ and $\Psi^*_{\rho,\nu} = \Psi_{\rho,\nu}(x^*)$. Then the IHT method finds an $\epsilon$-approximate local minimizer of \eqref{l0-cone} in at most \[ N:= 2\left\lceil \frac{2DL_{\rho,\nu}}{\eps} \right\rceil \log \frac{32L_{\rho,\nu}\theta}{\eps^2} \] iterations, where \beqas &\theta = (\Psi_{\rho,\nu}(x^0) - \Psi_{\rho,\nu}^*)2^{\frac{\omega+3}{2}}, \quad \omega = \max\limits_t \left\{(d- 2 c) t -ct^2: 0 \le t \le \left\lfloor \frac{2(\Psi_{\rho,\nu}(x^0)-\Psi_{\rho,\nu}^*)}{(L-L_{\rho,\nu})\delta^2} \right\rfloor\right\}, \\ [6pt] & c=\frac{(L-L_{\rho,\nu})\delta^2}{2(\Psi_{\rho,\nu}(x^0)-{\underline\Psi}^*_{\rho,\nu})}, \quad\quad \gamma = \nu(\sqrt{2\alpha+\beta^2}-\beta)^2/32, \\ [6pt] & d = 2 \log(\Psi_{\rho,\nu}(x^0) - {\underline\Psi}^*_{\rho,\nu}) +4- 2 \log \gamma+ c. \eeqas \end{theorem} \begin{proof} From Theorem \ref{limit-thm}, we know that $x^k \to x^*$ for some local minimizer $x^*$ of \eqref{qp-pert}, $I(x^k) \to I(x^*)$ and $\Psi_{\rho,\nu}(x^k) \to \Psi_{\rho,\nu}(x^*) = \Psi^*_{\rho,\nu}$. By Theorem \ref{complexity}, after at most $N$ iterations, the IHT method applied to \eqref{qp-pert} generates $\tx \in \cB$ such that $I(\tx)=I(x^*)$ and $\Psi_{\rho,\nu}(\tx)-\Psi_{\rho,\nu}(x^*) \le \xi := \epsilon^2/(32L_{\rho,\nu})$. It then follows that $\Phi_{\rho,\nu}(\tx)-\Phi_{\rho,\nu}(x^*) \le \xi$.
Since $x^*$ is a local minimizer of \eqref{qp-pert}, we see that \beq \label{pert-subprob} x^* = \arg\min_{x\in \cB_{I^*}} \Phi_{\rho,\nu}(x), \eeq where $I^*=I(x^*)$. Hence, $\tx$ is a $\xi$-approximate solution of \eqref{pert-subprob}. In view of Lemma \ref{approx-soln}, we see that the pair $(\tx^+,\mu)$ defined as $\tx^+ := \Pi_{\cB_{I^*}}(\tx-\nabla \Phi_{\rho,\nu}(\tx)/{L_{\rho,\nu}})$ and $\mu := \rho[A\tx^+ - b - \Pi_{\cK^*}(A\tx^+ - b)]$ satisfies \[ \nabla f(\tx^+) + \nu \tx^+ + A^T \mu \ \in \ -\cN_{\cB_{I^*}}(\tx^+) + \cU(2\sqrt{2L_{\rho,\nu}\xi}) \ = \ -\cN_{\cB_{I^*}}(\tx^+) + \cU(\epsilon/2), \] which together with the fact that $\nu\|\tx^+\| \le \nu D \le \epsilon/2$ implies that \[ \nabla f(\tx^+) + A^T \mu \ \in \ -\nu \tx^+ -\cN_{\cB_{I^*}}(\tx^+) + \cU(\epsilon/2) \ \subseteq \ -\cN_{\cB_{I^*}}(\tx^+) + \cU(\epsilon). \] In addition, it follows from Lemma \ref{proj-grad} (c) that $\Phi_{\rho,\nu}(\tx^+) \le \Phi_{\rho,\nu}(\tx)$, and hence \[ \Phi_{\rho,\nu}(\tx^+) - \Phi_{\rho,\nu}(x^*) \ \le \ \Phi_{\rho,\nu}(\tx)-\Phi_{\rho,\nu}(x^*) \ \le \ \xi. \] Let $\Phi^*_\rho = \min\{\Phi_\rho(x): x\in \cB_{I^*}\}$, where $\Phi_\rho$ is defined in \eqref{Phi-rho}. Notice that $\Phi_{\rho,\nu}(x^*) \le \Phi^*_\rho + \nu D^2/2$. It then follows that \[ \Phi_{\rho}(\tx^+) - \Phi^*_{\rho} \ \le \ \Phi_{\rho,\nu}(\tx^+) - \Phi_{\rho,\nu}(x^*) + \frac{\nu D^2}{2} \ \le \ \xi + \frac{\epsilon D}{4} \ \le \ \frac{\epsilon^2}{32\rho\|A\|^2} + \frac{\epsilon D}{4}. \] Let $\mu^* \in \Arg\min\{\|\mu\|: \mu \in \Lambda_{I^*}\}$, where $\Lambda_{I^*}$ is the set of Lagrange multipliers of \eqref{convex-subprob} with $I=I^*$. In view of Lemma \ref{approx-soln} and the choice of $t$, which ensures $t \ge \|\mu^*\|$, we obtain that \[ d_{\cK^*}(A\tx^+-b) \ \le \ \frac{1}{\rho}\|\mu^*\| + \sqrt{\frac{\epsilon^2}{32\rho^2\|A\|^2}+\frac{\epsilon D}{4\rho}} \ \le \ \frac{1}{\rho}\left(t+\frac{\epsilon}{\sqrt{32}\|A\|}\right)+\sqrt{\frac{\epsilon D}{4\rho}} \ = \ \epsilon, \] where the last equality is due to \eqref{rho-nu}. Hence, $\tx^+$ is an $\epsilon$-approximate local minimizer of \eqref{l0-cone}. \end{proof} \gap For the above method, a fixed penalty parameter $\rho$ is used throughout all iterations, which may be too conservative. To improve its practical performance, we can update $\rho$ dynamically. The resulting variant of the method is presented as follows. Before proceeding, we define the projected gradient of $\Phi_\rho$ at $x \in \cB_I$ with respect to $\cB_I$ as \beq \label{g-rho} g(x;\rho,I) = L_\rho[x-\Pi_{\cB_I}(x-\frac{1}{L_\rho}\nabla \Phi_\rho(x))], \eeq where $I \subseteq \{1,\ldots,n\}$, and $\Phi_\rho$ and $L_\rho$ are defined in \eqref{Phi-rho} and \eqref{L-rho}, respectively. \gap \noindent {\bf A variant of IHT method for \eqnok{l0-cone}:} \\ [5pt] Let $\{\epsilon_k\}$ be a positive decreasing sequence. Let $\rho_0 >0$, $\tau > 1$, $t > \max\limits_{I \subseteq \{1,\ldots,n\}}\min\limits_{\mu \in \Lambda_I}\|\mu\|$, where $\Lambda_I$ is the set of Lagrange multipliers of \eqref{convex-subprob}. Choose an arbitrary $x^0\in \cB$. Set $k=0$. \begin{itemize} \item[1)] Start from $x^{k-1}$ and apply the IHT method or its variant to problem \eqref{l0-penalty} with $\rho=\rho_k$ until finding some $x^k \in \cB$ such that \beq \label{inner-cond-c} d_{\cK^*}(Ax^k-b) \le \frac{t}{\rho_k}, \ \ \ \ \ \|g(x^k;\rho_k,I_k)\| \le \min\{1,L_{\rho_k}\}\epsilon_k, \eeq where $I_k = I(x^k)$. \item[2)] Set $\rho_{k+1} := \tau\rho_k$. \item[3)] Set $k \leftarrow k+1$ and go to step 1).
\end{itemize} \noindent {\bf end} \vgap The following theorem shows that $x^k$ satisfying \eqref{inner-cond-c} can be found within a finite number of iterations by the IHT method or its variant applied to problem \eqref{l0-penalty} with $\rho=\rho_k$. Without loss of generality, we consider the IHT method or its variant applied to problem \eqnok{l0-penalty} with any given $\rho >0$. \begin{theorem} \label{inner} Let $x_0 \in \cB$ be an arbitrary point and the sequence $\{x_l\}$ be generated by the IHT method or its variant applied to problem \eqnok{l0-penalty}. Then, the following statements hold: \bi \item[\rm(i)] $\lim\limits_{l\to\infty} g(x_l;\rho,I_l) = 0$, where $I_l = I(x_l)$ for all $l$. \item[\rm(ii)] $\lim\limits_{l\to\infty} d_{\cK^*}(Ax_l-b) \le \frac{\hat t}{\rho}$, where $\hat t := \max\limits_{I \subseteq \{1,\ldots,n\}} \min\limits_{\mu \in \Lambda_I}\|\mu\|$ and $\Lambda_I$ is the set of Lagrange multipliers of \eqref{convex-subprob}. \ei \end{theorem} \begin{proof} (i) It follows from Theorems \ref{limit-thm} and \ref{outer-iter} that $x_l \to x^*$ for some local minimizer $x^*$ of \eqref{l0-penalty} and moreover, $\Phi_\rho(x_l) \to \Phi_\rho(x^*)$ and $I_l \to I^*$, where $I_l=I(x_l)$ and $I^*=I(x^*)$; in particular, there exists $N \ge 0$ such that $I_l = I^*$ for all $l \ge N$. We also know that \[ x^* \in \Arg\min\limits_{x\in\cB_{I^*}} \Phi_\rho(x). \] It then follows from Lemma \ref{proj-grad} (d) that \[ \Phi_\rho(x_l) -\Phi_\rho(x^*) \ \ge \ \frac{1}{2L_\rho} \|g(x_l;\rho,I^*)\|^2, \ \ \ l \ge N. \] Using this inequality and $\Phi_\rho(x_l) \to \Phi_\rho(x^*)$, we thus have $g(x_l;\rho,I^*) \to 0$. Since $I_l=I^*$ for $l \ge N$, we also have $g(x_l;\rho,I_l) \to 0$. (ii) Let $f^*_I$ be defined in \eqref{convex-subprob}. Applying Lemma \ref{gap-infeas} to problem \eqref{convex-subprob}, we know that \beq \label{feas-bdd} f(x_l) - f^*_{I_l} \ \ge \ -\hat t d_{\cK^*}(Ax_l-b), \ \ \ \forall l, \eeq where $\hat t$ is defined above. Let $x^*$ and $I^*$ be defined in the proof of statement (i). We observe that $f^*_{I^*} \ge \Phi_\rho(x^*)$. Using this relation and \eqref{feas-bdd}, we have that for sufficiently large $l$, \[ \ba{lcl} \Phi_\rho(x_l) - \Phi_\rho(x^*) &=& f(x_l) + \frac{\rho}{2}[d_{\cK^*}(Ax_l-b)]^2 - \Phi_\rho(x^*) \ \ge \ f(x_l) - f^*_{I^*} + \frac{\rho}{2}[d_{\cK^*}(Ax_l-b)]^2 \\ [6pt] &=& f(x_l) - f^*_{I_l} + \frac{\rho}{2}[d_{\cK^*}(Ax_l-b)]^2 \ge \ -\hat t d_{\cK^*}(Ax_l-b) + \frac{\rho}{2}[d_{\cK^*}(Ax_l-b)]^2, \ea \] which implies that \[ d_{\cK^*}(Ax_l-b) \ \le \ \frac{\hat t}{\rho} + \sqrt{\frac{\Phi_\rho(x_l) - \Phi_\rho(x^*)}{\rho}}. \] This inequality together with the fact $\lim_{l\to\infty} \Phi_\rho(x_l) = \Phi_\rho(x^*)$ yields statement (ii). \end{proof} \gap \begin{remark} From Theorem \ref{inner}, we can see that the inner loop of the above method terminates after finitely many iterations. \end{remark} \gap We next establish convergence of the outer iterations of the above variant of the IHT method for \eqref{l0-cone}. In particular, we show that every accumulation point of $\{x^k\}$ is a local minimizer of \eqref{l0-cone}. \begin{theorem} Let $\{x^k\}$ be the sequence generated by the above variant of the IHT method for solving \eqref{l0-cone}. Then any accumulation point of $\{x^k\}$ is a local minimizer of \eqref{l0-cone}. \end{theorem} \begin{proof} Let \[ \tx^k = \Pi_{\cB_{I_k}}(x^k-\frac{1}{L_{\rho_k}} \nabla \Phi_{\rho_k}(x^k)).
\] Since $\{x^k\}$ satisfies \eqref{inner-cond-c}, it follows from Lemma \ref{proj-grad} (a) that \beq \label{opt-cond-seq} \nabla \Phi_{\rho_k}(x^k) \ \in \ -\cN_{\cB_{I_k}}(\tx^k) + \cU(\epsilon_k), \eeq where $I_k=I(x^k)$. Let $x^*$ be any accumulation point of $\{x^k\}$. Then there exists a subsequence $K$ such that $\{x^k\}_K \to x^*$. By passing to a subsequence if necessary, we can assume that $I_k = I$ for all $k\in K$. Let \[ \mu^k = \rho_k [Ax^k-b-\Pi_{\cK^*}(Ax^k-b)]. \] We clearly see that \beq \label{orth} (\mu^k)^T\Pi_{\cK^*}(Ax^k-b)=0. \eeq Using \eqref{opt-cond-seq} and the definitions of $\Phi_\rho$ and $\mu^k$, we have \beq \label{approx-opt} \nabla f(x^k) + A^T\mu^k \in -\cN_{\cB_{I}}(\tx^k) + \cU(\epsilon_k), \ \ \ \forall k\in K. \eeq By \eqref{g-rho}, \eqref{inner-cond-c} and the definition of $\tx^k$, one can observe that \beq \label{diff-x} \|\tx^k - x^k\| \ = \ \frac{1}{L_{\rho_k}} \|g(x^k;\rho_k,I_k)\| \ \le \ \epsilon_k. \eeq In addition, notice that $\|\mu^k\| = \rho_k d_{\cK^*}(Ax^k-b)$, which together with \eqref{inner-cond-c} implies that $\|\mu^k\| \le t$ for all $k$. Hence, $\{\mu^k\}$ is bounded. By passing to a subsequence if necessary, we can assume that $\{\mu^k\}_K \to \mu^*$. Using \eqref{diff-x} and taking limits on both sides of \eqnok{orth} and \eqnok{approx-opt} as $k\in K \to \infty$, we have \[ (\mu^*)^T\Pi_{\cK^*}(Ax^*-b)=0, \ \ \ \nabla f(x^*) + A^T\mu^* \in -\cN_{\cB_{I}}(x^*). \] In addition, since $x^k_I =0$ for $k\in K$, we know that $x^*_I=0$. Also, since $\rho_k \to \infty$, it follows from \eqref{inner-cond-c} that $d_{\cK^*}(Ax^*-b)=0$, which implies that $Ax^*-b \in \cK^*$. These relations yield that \[ x^* \in \Arg\min\limits_{x\in \cB_I}\{f(x): Ax-b \in \cK^*\}, \] and hence, $x^*$ is a local minimizer of \eqref{l0-cone}. \end{proof} \section{Concluding remarks} \label{conclude} In this paper we studied iterative hard thresholding (IHT) methods for solving $l_0$ regularized convex cone programming problems. In particular, we first proposed an IHT method and its variant for solving $l_0$ regularized box constrained convex programming. We showed that the sequence generated by these methods converges to a local minimizer. Also, we established the iteration complexity of the IHT method for finding an $\eps$-local-optimal solution. We then proposed a method for solving $l_0$ regularized convex cone programming by applying the IHT method to its quadratic penalty relaxation and established its iteration complexity for finding an $\eps$-approximate local minimizer. Finally, we proposed a variant of this method in which the associated penalty parameter is dynamically updated, and showed that every accumulation point is a local minimizer of the problem. Some of the methods studied in this paper can be extended to solve some $l_0$ regularized nonconvex optimization problems. For example, the IHT method and its variant can be applied to problem \eqref{l0-min} in which $f$ is nonconvex and $\nabla f$ is Lipschitz continuous. In addition, the numerical study of the IHT methods will be presented in the working paper \cite{HuLiLu12}. Finally, it would be interesting to extend the methods of this paper to solve rank minimization problems and compare them with the methods studied in \cite{CaCaSh10,JaMeDh10}. This is left as future research. \section*{Acknowledgment} The author would like to thank Ting Kei Pong for proofreading and suggestions which substantially improved the presentation of the paper.
\section{Introduction} According to the most general definition of a system (see e.g. \cite{KaFaArbib}), by a discrete system (further --- a system) we understand a stationary dynamical system with a discrete time $\N_0=\{0,1,2,\ldots\}$; that is, a 5-tuple $\mathfrak A=\langle\Cal I,\Cal S,\Cal O,S,O\rangle$ where $\mathcal I$ is a non-empty finite set, the \emph{input alphabet}; $\Cal O$ is a non-empty finite set, the \emph{output alphabet}; $\Cal S$ is a non-empty (possibly, infinite) set of \emph{states}; $S\colon\Cal I\times\Cal S\to \Cal S$ is a \emph{state transition function}; $O\colon\Cal I\times\Cal S\to \Cal O$ is an \emph{output function}. Note that in the literature systems are also called (synchronous) automata; however, in order to avoid misunderstanding, in the paper only \emph{initial} automata are referred to in this way. Recall that the \emph{initial automaton} $\mathfrak A(s_0)=\langle\Cal I,\Cal S,\Cal O,S,O, s_0\rangle$ is a discrete system $\mathfrak A$ where one state $s_0\in\Cal S$ is fixed; $s_0$ is called the \emph{initial state}. At the moment $n=0$ the system $\mathfrak A(s_0)$ is at the state $s_0$; once fed by the input symbol $\chi_0\in\Cal I$, the system outputs the symbol $\xi_0=O(\chi_0,s_0)\in\Cal O$ and reaches the state $s_1=S(\chi_0,s_0)\in\Cal S$; then the system is fed by the next input symbol $\chi_1\in\Cal I$ and repeats the routine. We stress that the definition of the automaton $\mathfrak A(s_0)$ is nearly the same as the one of \emph{Mealy automaton} (see e.g. \cite{Bra,VHDL}) (or of a `letter' \emph{transducer}, see e.g. \cite{Allouche-Shall,Grigorch_auto2_eng}), with the only important difference: the automata $\mathfrak A(s_0)$ we consider in the paper are \emph{not necessarily finite}; that is, the set of states $\Cal S$ of $\mathfrak A(s_0)$ may be infinite. Furthermore, throughout the paper we assume that there exists a state $s_0\in \Cal S$ such that all the states of the system $\mathfrak A$ are \emph{reachable} from $s_0$; that is, given $s\in\Cal S$, there exists an input word $w$ over alphabet $\Cal I$ such that after the word $w$ has been fed to the automaton $\mathfrak A(s_0)$, the automaton reaches the state $s$. To the system $\mathfrak A$ we put into correspondence the family $\Cal F(\mathfrak A)$ of all automata $\mathfrak A(s)=\langle\Cal I,\Cal S,\Cal O,S,O, s\rangle$, $s\in\Cal S$. For better exposition, throughout the paper we assume that both alphabets $\Cal I$ and $\Cal O$ are $p$-element alphabets with $p$ prime: $\Cal I=\Cal O=\{0,1,\ldots,p-1\}=\F_p$; so further the word `automaton' stands for an initial automaton with input/output alphabets $\F_p$. A typical example of an automaton of that sort is the \emph{2-adic adding machine} $\mathfrak O(1)=\langle\F_2,\F_2,\F_2,S,O,1\rangle$, where $S(\chi,s)\equiv \chi s\pmod 2$, $O(\chi,s)\equiv\chi+s\pmod 2$ for $s\in\Cal S=\F_2$, $\chi\in\Cal I=\F_2$. Automata are often represented by \emph{Moore diagrams}. The Moore diagram of the automaton $\mathfrak A(s_0)=\langle\Cal I,\Cal S,\Cal O,S,O,s_0\rangle$ is a directed labeled graph whose vertices are identified with the states of the automaton and for every $s\in\Cal S$ and $r\in\Cal I$ the diagram has an arrow from $s$ to $S(r,s)$ labeled by the pair $(r,O(r,s))$. Figure \ref{fig:Moore} is an example of Moore diagram.
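The adding machine is also easy to simulate. The following Python sketch is ours and serves only to illustrate the definitions; it feeds the input symbols $\chi_0,\chi_1,\ldots$ to the automaton one by one.
\begin{verbatim}
# The 2-adic adding machine O(1): states, inputs and outputs in {0,1},
# S(chi, s) = chi*s mod 2 (the carry), O(chi, s) = chi + s mod 2.
def run_adding_machine(bits, s0=1):
    """Feed the input symbols chi_0, chi_1, ...; return the outputs."""
    s, out = s0, []
    for chi in bits:
        out.append((chi + s) % 2)  # output symbol O(chi, s)
        s = (chi * s) % 2          # next state S(chi, s)
    return out

# The machine adds 1 modulo 2^n: the word 011 (chi_0=1, chi_1=1,
# chi_2=0, i.e. the base-2 digits of 3) is mapped to 100, i.e. to 4.
assert run_adding_machine([1, 1, 0]) == [0, 0, 1]
\end{verbatim}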
\begin{figure}[h] \begin{quote}\psset{unit=0.5cm} \begin{pspicture}(0,1.5)(24,7) \pnode(8.7,3.8){S1-o} \pnode(8.6,4.3){S1-i} \pnode(15.4,4.4){S0-o} \pnode(14.8,4.5){S0-i} \pnode(15.4,3.6){S00-o} \pnode(14.8,3.5){S00-i} \nccurve[angleA=150,angleB=90,ncurv=-12]{<-}{S00-o}{S00-i} \nccurve[angleA=210,angleB=270,ncurv=-12]{<-}{S0-o}{S0-i} \nccurve[angleA=210,angleB=150,ncurv=15]{->}{S1-o}{S1-i} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](9,4){.5} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](15,4){.5} \psline{->}(9.5,4)(14.5,4) \uput{1}[90](9,2.8){$1$} \uput{1}[90](15,2.8){$0$} \uput{1}[90](12,3.3){$(0,1)$} \uput{1}[90](17.5,.3){$(1,1)$} \uput{1}[90](17.5,5.3){$(0,0)$} \uput{1}[90](5.2,2.6){$(1,0)$} \end{pspicture} \end{quote} \caption{Moore diagram of the 2-adic adding machine.} \label{fig:Moore} \end{figure} Given an automaton $\mathfrak A(s)=\langle\F_p,\Cal S,\F_p,S,O, s\rangle\in\Cal F(\mathfrak A)$, the automaton transforms input words of length $n$ into output words of length $n$; that is, $\mathfrak A(s)$ maps the set $W_n$ of all words of length $n$ into $W_n$; we denote the corresponding mapping via $f_{n,\mathfrak A(s)}$. It is clear now that the behaviour of the system $\mathfrak A$ can be described in terms of the mappings $f_{n,\mathfrak A(s)}$ for all $s\in\Cal S$ and all $n\in\N=\{1,2,3,\ldots\}$. As all states of the system $\mathfrak A$ are reachable from the state $s_0$, it suffices to study only the mappings $f_{n,\mathfrak A(s_0)}$ for all $n\in\N$. Now we remind the notion of transitivity: \begin{defn}[Transitivity of a family of mappings] \label{def:trans} A \emph{family $\Cal F$ of mappings} of a finite non-empty set $M$ into $M$ is called \emph{transitive} whenever given a pair $(a,b)\in M\times M$, there exists $f\in\Cal F$ such that $f(a)=b$. \end{defn} It is clear that once $M$ contains more than one element, a family that consists of a single mapping $f\:M\>M$ cannot be transitive in the meaning of Definition \ref{def:trans}; that is why the transitivity of a single mapping is defined as follows: \begin{defn}[Transitivity of a single mapping] \label{def:trans-1} A \emph{mapping} $f\:M\>M$, where $M$ is a finite non-empty set, is called \emph{transitive} if $f$ cyclically permutes the elements of $M$. \end{defn} In other words, a single mapping $f\:M\>M$ is transitive if and only if the family $\{e,f, f^{2}=f\ast f, f^3=f\ast f\ast f,\ldots\}$ is transitive in the meaning of Definition \ref{def:trans} (here $e$ stands for the identity mapping, $\ast$ for composition of mappings). Note that a transitive mapping is necessarily bijective but generally not vice versa. Now we introduce the main notions of the paper.
\begin{defn}[Automata transitivity] \label{def:auto-trans} The automaton $\mathfrak A(s_0)$ (equivalently, the system $\mathfrak A$) is said to be \begin{itemize} \item \emph{$n$-word transitive}, if the mapping $f_{n,\mathfrak A(s_0)}$ is transitive on the set $W_n$ of all words of length $n$; \item\emph{word transitive}, if $\mathfrak A(s_0)$ is $n$-word transitive for all $n\in\N=\{1,2,3,\ldots\}$; \item\emph{completely transitive}, if for every $n\in\N$, the family $f_{n,{\mathfrak A(s)}}$, $s\in \Cal S$, is transitive on $W_n$; \item\emph{absolutely transitive}, if for every $s\in \Cal S$ the automaton ${\mathfrak A}(s)$ is completely transitive; that is, if for every $n\in\N$ the family $f_{n,{\mathfrak A(t)}}$, $t\in \Cal S_{\mathfrak A(s)}$, is transitive on $W_n$, where $\Cal S_{\mathfrak A(s)}$ is the set of all reachable states of the automaton $\mathfrak A(s)$. \end{itemize} \end{defn} The transitivity properties may be defined in an equivalent way, in terms of words; this way is more common in automata theory. We remind some notions related to words beforehand. Given a non-empty alphabet $\Cal A$, its elements are called \emph{symbols}, or \emph{letters}. By the definition, a \emph{word of length $n$ over alphabet $\Cal A$}\index{word (over an alphabet)}\index{word (over an alphabet)!-- of length $n$} is a finite string (stretching from right to left) $\alpha_{n-1}\cdots\alpha_1\alpha_0$, where $\alpha_{n-1},\ldots,\alpha_1,\alpha_0\in\Cal A$. The \emph{empty word} is a sequence of length 0, that is, the one that contains no symbols. Hereinafter the length of the word $w$ is denoted via $\Lambda(w)$. Given a word $w=\alpha_{n-1}\cdots\alpha_1\alpha_0$, any word $v=\alpha_{k-1}\cdots\alpha_1\alpha_0$, $n\ge k\ge 1$, is called a \emph{prefix} (or, an \emph{initial subword}) of the word $w$, any word $u=\alpha_{n-1}\cdots\alpha_{i+1}\alpha_i$, $0\le i\le n-1$ is called a \emph{suffix} of the word $w$, and any word $\alpha_{k}\cdots\alpha_{i+1}\alpha_i$, $n-1\ge k\ge i\ge 0$, is called a \emph{subword} of the word $w$. Given words $a=\alpha_{n-1}\cdots\alpha_1\alpha_0$ and $b=\beta_{k-1}\cdots\beta_1\beta_0$, the \emph{concatenation} $a\circ b$ is the following word (of length $n+k$): \[ a\circ b=\alpha_{n-1}\cdots\alpha_1\alpha_0\beta_{k-1}\cdots\beta_1\beta_0.
\] \begin{defn}[Automata transitivity, equivalent] \label{def:auto-trans-e} ~ \begin{enumerate} \item The \emph{word transitivity} means that given two finite words $w$, $w^\prime$ whose lengths are equal one to another, $\Lambda(w)=\Lambda(w^\prime)=n$, the word $w$ can be transformed into $w^\prime$ by a sequential composition of a sufficient number of copies of $\mathfrak A(s_0)$: \begin{figure}[h] \begin{quote}\psset{unit=0.5cm} \begin{pspicture}(0,10)(24,12) \psline[linecolor=black]{<-}(20,11)(18,11) \psline[linecolor=black]{<-}(6,11)(4,11) \psline[linecolor=black](4,11.3)(4,10.7) \psframe[fillstyle=solid,fillcolor=yellow,linecolor=black,linewidth=2pt](6,10)(10,12) \psframe[fillstyle=solid,fillcolor=yellow,linecolor=black,linewidth=2pt](14,10)(18,12) \uput{0}[90](8,10.7){$\mathfrak A$} \uput{0}[90](16,10.7){$\mathfrak A$} \uput{0}[90](5,9){$\underbrace{}_{ w}$} \uput{0}[90](19,9){$\underbrace{}_{ w^\prime}$} \uput{0}[90](12,10.7){$\cdots\cdots\cdots$} \end{pspicture} \end{quote} \end{figure} \item The \emph{complete transitivity} means that given finite words $w$, $w^\prime$ such that $\Lambda(w)=\Lambda(w^\prime)$, there exists a finite word $y$ (may be of length other than that of $w$ and $w^\prime$) such that the automaton $\mathfrak A(s_0)$ transforms the input word $w\circ y$ (with the prefix $y$) to the output word $w^\prime \circ y^\prime$ that has a suffix $w^\prime$: \begin{figure}[h] \begin{quote}\psset{unit=0.5cm} \begin{pspicture}(0,10)(24,12) \psline[linecolor=black]{<-}(20,11)(14,11) \psline[linecolor=black]{<-}(10,11)(4,11) \psline[linecolor=black](6,11.3)(6,10.7) \psline[linecolor=black](4,11.3)(4,10.7) \psline[linecolor=black](16,11.3)(16,10.7) \psframe[fillstyle=solid,fillcolor=yellow,linecolor=black,linewidth=2pt](10,10)(14,12) \uput{0}[90](12,10.7){$\mathfrak A$} \uput{0}[90](8,9){$\underbrace{\ast\cdots\cdots\cdots\ast}_y$} \uput{0}[90](5,9){$\underbrace{}_{ w}$} \uput{0}[90](15,9){$\underbrace{}_{ w^\prime}$} \end{pspicture} \end{quote} \end{figure} \item The \emph{absolute transitivity} means that given finite words $x$, $w$, $w^\prime$ such that $\Lambda(w)=\Lambda(w^\prime)$ (may be $\Lambda(x)\ne\Lambda(w)$), there exists a finite word $y$ such that the automaton $\mathfrak A(s_0)$ transforms the input word $w\circ y\circ x$ to the output word $w^\prime\circ y^\prime\circ x^\prime$: \begin{figure}[h] \begin{quote}\psset{unit=0.5cm} \begin{pspicture}(0,10)(24,12) \psline[linecolor=black]{<-}(20,11)(14,11) \psline[linecolor=black]{<-}(10,11)(4,11) \psline[linecolor=black](8,11.3)(8,10.7) \psline[linecolor=black](6,11.3)(6,10.7) \psline[linecolor=black](4,11.3)(4,10.7) \psline[linecolor=black](16,11.3)(16,10.7) \psframe[fillstyle=solid,fillcolor=yellow,linecolor=black,linewidth=2pt](10,10)(14,12) \uput{0}[90](12,10.7){$\mathfrak A$} \uput{0}[90](9,9){$\underbrace{}_{ x}$} \uput{0}[90](7,9){$\underbrace{\ast\cdots\ast}_y$} \uput{0}[90](5,9){$\underbrace{}_{ w}$} \uput{0}[90](15,9){$\underbrace{}_{ w^\prime}$} \end{pspicture} \end{quote} \end{figure} \end{enumerate} \end{defn} \begin{exmp}[Word transitive automaton] \label{exm:odo} The 2-adic adding machine $\mathfrak O(1)$, which was introduced above, is word transitive: It is clear that if one treats an $n$-bit word as a base-2 expansion of a non-negative integer $w$ then $f_{n,\mathfrak O(1)}(w)\equiv w+1\pmod{2^n}$, $n=1,2,3,\ldots$; therefore $f_{n,\mathfrak O(1)}^i(w)\equiv w+i\pmod{2^n}$ for all $i\in\N_0=\{0,1,2,\ldots\}$ which means that $f$ is transitive on the set $W_n$ of all $n$-bit words, cf. 
Definition \ref{def:auto-trans}, Definition \ref{def:trans-1} and Definition \ref{def:auto-trans-e}(i). \end{exmp} Note that the 2-adic adding machine $\mathfrak O(1)$ is not completely transitive as given $n\in\N=\{1,2,3,\ldots\}$, the corresponding family consists of the following two mappings: $f_{n,\mathfrak O(1)}(w)\equiv w+1\pmod{2^n}$ and $f_{n,\mathfrak O(0)}(w)\equiv w\pmod{2^n}$; so none of the mappings maps the two-bit word 00 (which is a base-2 expansion of 0) to the two-bit word 10 (which is a base-2 expansion of 2). \begin{exmp}[Absolutely transitive automaton] \label{exm:const} Let $(\alpha_i)_{i=0}^\infty=\alpha_0,\alpha_1,\ldots$ be an infinite binary sequence such that every binary pattern $\beta_1\cdots\beta_n$ occurs in the sequence $(\alpha_i)_{i=0}^\infty$ (whence, occurs infinitely many times); that is, given $n\in\N=\{1,2,\ldots\}$ and $\beta_1,\ldots,\beta_n\in\F_2$, the following equalities $\alpha_i=\beta_1,\alpha_{i+1}=\beta_2,\ldots,\alpha_{i+n-1}=\beta_n$ hold simultaneously for some (equivalently, for infinitely many) $i\in\N_0=\{0,1,2,\ldots\}$. Then the following automaton $\mathfrak C(0)$ is absolutely transitive: $\mathfrak C(0)=\langle\F_2,\N_0,\F_2,S,O,0\rangle$, where $S(\chi,s)=s+1$, $O(\chi,s)=\alpha_s$ for $s\in\Cal S=\N_0$, $\chi\in\Cal I=\F_2$. Indeed, given an $n$-bit word $w$, we see that $f_{n,\mathfrak C(s)}(w)=\alpha_{s+n-1}\cdots\alpha_s$ for every $s\in\Cal S=\N_0$ which by Definition \ref{def:auto-trans-e}(iii) (or, equivalently, by Definition \ref{def:auto-trans}) implies absolute transitivity of the automaton $\mathfrak C(0)$ due to the choice of the sequence $(\alpha_i)_{i=0}^\infty$. \end{exmp} Note also that the automaton $\mathfrak C(0)$ is $n$-word transitive for no $n\in\N=\{1,2,3,\ldots\}$ as $f_{n,\mathfrak C(0)}$ is not bijective on $W_n$, cf. Definitions \ref{def:trans-1} and \ref{def:auto-trans}. The goal of the paper is to present techniques to determine whether a system $\mathfrak A$ is word transitive, or completely transitive, or absolutely transitive. For this purpose, we study how the automaton $\mathfrak A(s_0)$ acts on \emph{infinite} words over alphabet $\F_p$. The latter words are considered as $p$-adic integers, and the corresponding transformation turns out to be a continuous transformation on the space of $p$-adic integers $\Z_p$. We remind the main notions of $p$-adic analysis in the next section where we describe our approach, first formally and then less formally. We note that the $p$-adic approach (and, more broadly, the non-Archimedean one) has already been successfully applied to automata theory. Seemingly the paper \cite{Lunts} is the first one where $p$-adic techniques are applied to study automata functions; the paper deals with linearity conditions of automata mappings. For application of the non-Archimedean methods to automata and formal languages see the expository paper \cite{Pin_p-adic_auto} and references therein; for applications to automata and group theory see \cite{Grigorch_auto,Grigorch_auto2_eng}. In \cite{Vuillem_circ,Vuillem_DigNum,Vuillem_fin} 2-adic methods are used to study binary automata, in particular, to obtain the finiteness criterion for these automata.
In the monograph \cite{AnKhr}, $p$-adic ergodic theory is studied (see the numerous references therein), aiming at applications to computer science and cryptography (in particular, to automata theory, pseudorandom number generation and stream cipher design) as well as at applications in other areas like quantum theory, cognitive sciences and genetics. As for the mathematical techniques used in the paper, these are somewhat involved: to study ergodic properties of families of automata functions related to a given discrete system, we combine $p$-adic methods, methods of real analysis and methods from automata theory. The paper is organized as follows: \begin{itemize} \item In Section \ref{sec:p-adic} we recall basic notions of $p$-adic analysis and show that automata functions (the transformations of infinite words performed by automata) are continuous (actually, 1-Lipschitz) functions w.r.t. the $p$-adic metric. In particular, we mention that basic computer instructions, both arithmetic (like addition, subtraction and multiplication of integers) and bitwise logical (like bitwise conjunction, disjunction, negation and exclusive `or'), as well as some others (like shifts towards higher-order bits and masking), are continuous w.r.t. the 2-adic metric. \item In Section \ref{sec:p-erg} we recall basics of the $p$-adic ergodic theory in connection with automata functions. \item Section \ref{sec:p-real} contains the main results of the paper: by plotting an automaton function in the real unit square, we establish the automata 0-1 law and find sufficient conditions for a system to be completely transitive or absolutely transitive. \item We conclude in Section \ref{sec:Concl}. \end{itemize} \section{The $p$-adic representation of automata functions} \label{sec:p-adic} Every (left-)infinite word $\ldots\chi_2\chi_1\chi_0$ over the alphabet $\F_p$ can be associated with the $p$-adic integer $\chi_0+\chi_1p+\chi_2p^2+\cdots$, which is an element of the ring $\Z_p$ of $p$-adic integers; the ring $\Z_p$ is a complete compact metric space w.r.t. the $p$-adic metric (we recall the notion below). The automaton $\mathfrak A(s_0)$ maps infinite words to infinite words. Denote the corresponding mapping via $f=f_{\mathfrak A(s_0)}$; then $f$ is a function defined on $\Z_p$ and taking values in $\Z_p$. The function $f=f_{\mathfrak A(s_0)}\:\Z_p\>\Z_p$ is called the \emph{automaton function} of the automaton $\mathfrak A(s_0)$. For instance, the automaton function $f_{\mathfrak O(1)}$ of the 2-adic adding machine $\mathfrak O(1)$ is the \emph{2-adic odometer}, the transformation $f_{\mathfrak O(1)}(x)=x+1$ of the ring $\Z_2$ of 2-adic integers; whereas the automaton function $f_{\mathfrak C(0)}$ of the automaton $\mathfrak C(0)$ from Example \ref{exm:const} is a constant function on $\Z_2$: $f_{\mathfrak C(0)}(x)= \sum_{i=0}^\infty\alpha_i2^i\in\Z_2$. Since at every moment $n=0,1,2,\ldots$ the $n$-th output symbol may depend only on the input symbols $\chi_0,\chi_1,\ldots,\chi_n$ that have been fed to the automaton at the moments $0,1,\ldots,n$, respectively, the \emph{automaton function is a $p$-adic 1-Lipschitz function}; that is, $f$ satisfies the $p$-adic Lipschitz condition with the constant 1 w.r.t. the $p$-adic metric, and thus $f$ is a $p$-adic continuous function. Vice versa, given a 1-Lipschitz function $f\:\Z_p\>\Z_p$, there exists an automaton $\mathfrak A(s_0)$ such that $f=f_{\mathfrak A(s_0)}$, see further Theorem \ref{thm:auto-1L}.
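At a finite level, the 1-Lipschitz property reads: if two inputs agree in their first $k$ letters, i.e., are congruent modulo $p^k$, then the outputs agree in their first $k$ letters as well. For $p=2$ this is easy to test empirically for the basic computer instructions mentioned in the overview above; the following randomized Python sketch (ours, with ad hoc bit-width and sample-size choices) does so for six bivariate instructions:
\begin{verbatim}
# Sketch: a randomized finite-level test of the 1-Lipschitz property:
# x = y (mod 2^k) and x' = y' (mod 2^k) imply op(x,x') = op(y,y') (mod 2^k).
import random

ops = [lambda x, y: x + y, lambda x, y: x - y, lambda x, y: x * y,
       lambda x, y: x & y, lambda x, y: x | y, lambda x, y: x ^ y]

random.seed(1)
for op in ops:
    for _ in range(10 ** 4):
        k = random.randrange(1, 48)
        x, x2 = random.randrange(2 ** 64), random.randrange(2 ** 64)
        y = x + random.randrange(2 ** 16) * 2 ** k     # y  = x  (mod 2^k)
        y2 = x2 + random.randrange(2 ** 16) * 2 ** k   # y2 = x2 (mod 2^k)
        assert (op(x, x2) - op(y, y2)) % 2 ** k == 0
\end{verbatim}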
Therefore, to study the behaviour of the system $\mathfrak A$ we may (and will) study the corresponding automata functions rather than the automata themselves; and to study the behaviour of these functions we may apply techniques from $p$-adic analysis and $p$-adic dynamics, see \cite{AnKhr}. This is the key point of our approach. Recall that the space $\Z_p$ is the completion of the ring $\Z=\{0,\pm1,\pm 2,\ldots\}$ of (rational) integers w.r.t. the $p$-adic metric $d_p$, which is defined as follows: given $a,b\in\Z$ with $a\ne b$, denote by $p^{\ord_p(a-b)}$ the largest power of $p$ that divides $a-b$ and put $d_p(a,b)=\|a-b\|_p=p^{-\ord_p(a-b)}$; put $\|a-b\|_p=0$ if $a=b$. The $p$-adic metric violates the Archimedean Axiom and thus is called a non-Archimedean metric (or an ultrametric). Now we describe our approach less formally. Multiplication and addition of infinite words over the alphabet $\F_p$ can be defined via school-textbook-like algorithms for multiplication/addition of integers represented by base-$p$ expansions. For instance, in the case of 2-adic integers (i.e., when $p=2$) the following example shows that $-1=\ldots 11111$ in $\Z_2$ (as $\ldots0001=1$): {\footnotesize \begin{align*} &\mbox{}&{}&\ldots 1&{}& 1&{}&1&{}&1&{}&\\ &\mbox{$+$}&{}&{}\\ &\mbox{}&{}& \ldots 0&{}&0&{}&0&{}&1&{}&\\ \intertext{\hbox to 3.5cm{}\hbox to 7cm{\hrulefill}} &\mbox{}&{}&\ldots 0&{}&0&{}&0&{}&0&{}& \end{align*} } The next example shows that $\ldots1010101=-\frac{1}{3}$ in $\Z_2$ (as $\ldots00011=3$): {\footnotesize \begin{align*} &\mbox{}&{}&\ldots 0&{}&1&{}& 0&{}&1&{}&0&{}&1\\ &\mbox{$\times$}&{}&{}\\ &\mbox{}&{}& \ldots 0&{}&0&{}&0&{}&0&{}&1&{}&1\\ \intertext{\hbox to 3.2cm{}\hbox to 9.3cm{\hrulefill}} &\mbox{}&{}&\ldots 0&{}&1&{}& 0&{}&1&{}&0&{}&1\\ &\mbox{$+$}&{}&{}\\ &\mbox{}&{}& \ldots 1&{}&0&{}&1&{}&0&{}&1&{}&\\ \intertext{\hbox to 3.2cm{}\hbox to 9.3cm{\hrulefill}} &\mbox{}&{}&\ldots 1&{}&1&{}&1&{}&1&{}&1&{}&1 \end{align*} } The set of all infinite words over the alphabet $\F_p$ with the operations (and distance) so defined constitutes the ring (and the metric space) $\Z_p$. Note that $\Z_p$ contains the ring $\Z$ of all (rational) integers as well as some other elements from the field $\Q$ of rational numbers; so $\Z_p\cap\Q\varsupsetneqq\Z$. For instance, in $\Z_2$ the sequences that contain only a finite number of 1s correspond to non-negative rational integers represented by their base-2 expansions (e.g., $\ldots 00011=3$); the sequences that contain only a finite number of 0s correspond to negative rational integers (e.g., $\ldots 111100=-4$); the sequences that are (eventually) periodic correspond to rational numbers that can be represented by irreducible fractions with odd denominators (e.g., $\ldots1010101=-\frac{1}{3}$); and non-periodic sequences correspond to no rational number. It is also worth noting that when $p=2$, the 2-adic integers representing negative rational integers may be regarded as \emph{2's complements} of the latter (cf. e.g. \cite{VHDL,Knuth}). In computer science, 2-adic representations of rational integers are also known as \emph{Hensel codes}, cf. \cite{Knuth}, after the German mathematician Kurt Hensel, who discovered $p$-adic numbers more than a century ago. By definition, given two infinite words $\ldots\chi_2\chi_1\chi_0$ and $\ldots\xi_2\xi_1\xi_0$ over the alphabet $\F_p$, the distance $d_p$ between these words is $p^{-n}$, where $n=\min\{i=0,1,2,\ldots : \chi_i\ne\xi_i\}$, and the distance is 0 if no such $n$ exists.
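The two hand computations above can be checked mechanically: reducing modulo $2^n$ keeps the $n$ rightmost letters of a left-infinite word, and for a fraction with an odd denominator this reduction is obtained from the inverse of the denominator modulo $2^n$. A short Python sketch of such a check (ours; \verb|pow(3, -1, m)| requires Python 3.8 or newer) follows:
\begin{verbatim}
# Sketch: -1 = ...111 and -1/3 = ...0101 in Z_2, at finite precision.
for n in range(1, 13):
    m = 2 ** n
    assert (-1) % m == m - 1              # the n rightmost letters: all 1s
    third = (-pow(3, -1, m)) % m          # (-1/3) reduced modulo 2^n
    assert (3 * third) % m == (-1) % m    # consistency: 3 * (-1/3) = -1
    print(format(third, "b").zfill(n))    # prints the ...0101...01 pattern
# -1/3 = 5 (mod 16) but not (mod 32), so d_2(-1/3, 5) = 1/16,
# in agreement with the worked example below.
assert (-pow(3, -1, 16)) % 16 == 5 and (-pow(3, -1, 32)) % 32 != 5
\end{verbatim}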
For instance, in the case $p=2$ we have that \begin{equation*} \left. \begin{aligned} \ldots1010\underline{1}{0101}&={\footnotesize{-\frac{1}{3}}}\\ \ldots0000\underline{0}{0101}&=5\\ \end{aligned} \right\} \Rightarrow d_2\left(-\frac{1}{3},5\right)=\left\|\left(-\frac{1}{3}\right)-5\right\|_2=\frac{1}{2^4}=\frac{1}{16}. \end{equation*} In other words, $-\frac{1}{3}\equiv 5\pmod{16}$ while $-\frac{1}{3}\not\equiv 5\pmod{32}$. Note that actually $\md p^k$, the \emph{reduction modulo $p^k$}, is an epimorphism of $\Z_p$ onto the residue ring $\Z/p^k\Z$ modulo $p^k$ (we identify the elements of the latter ring with $0,1,\ldots, p^k-1$): \begin{equation} \label{eq:md} \md p^k\colon\Z_p\>\Z/p^k\Z;\ \left(\sum_{i=0}^\infty\alpha_ip^i\right) \md p^k=\sum_{i=0}^{k-1}\alpha_ip^i, \end{equation} where $\alpha_i\in\F_p$. Thus, for $a,b\in\Z_p$, the following equivalences hold: \begin{equation} \label{eq:md=ineq} \|a-b\|_p\le p^{-k} \ \text{if and only if} \ a\md p^k=b\md p^k; \ \text{that is, if and only if}\ a\equiv b\pmod{p^k}. \end{equation} Due to equivalence \eqref{eq:md=ineq}, one may use congruences between $p$-adic numbers rather than inequalities for $p$-adic absolute values, which is sometimes more convenient in proofs. The advantage of using congruences rather than inequalities in $p$-adic analysis over $\Z_p$ is that one may work with congruences by applying standard number-theoretic techniques, e.g., adding or multiplying congruences side by side, etc.; see \cite{AnKhr} for more on this. Metrics on Cartesian powers $\Z_p^n$ can be defined in a manner similar to that of the case $n=1$: $$ \|(a_1,\ldots,a_n)-(b_1,\ldots,b_n)\|_p=\max\{\|a_j-b_j\|_p\colon j=1,2,\ldots,n\} $$ for every $(a_1,\ldots,a_n),(b_1,\ldots,b_n)\in\Z_p^n$; so continuous multivariate functions defined on $\Z_p^n$ and taking values in $\Z_p$ can be considered as well. Once the metric is defined, one can speak of limits, of continuous functions, of derivatives, of convergent series, etc.; that is, of $p$-adic calculus. We refer to the numerous books on $p$-adic analysis (e.g., \cite{Gouvea:1997,Kat,Kobl,Sch}) for further details. Important examples of continuous 2-adic functions are the basic computer instructions, both arithmetic (addition, multiplication, subtraction) and bitwise logical ($\AND$, the bitwise conjunction; $\OR$, the bitwise disjunction; $\XOR$, the bitwise exclusive `or'; $\NOT$, the bitwise negation), and some others (shifts towards higher-order bits, masking). All these instructions can be regarded as (univariate or bivariate) 1-Lipschitz functions defined on and taking values in the space of 2-adic integers $\Z_2$, \cite{AnKhr}. That is why the theory we develop finds numerous applications in computer science and cryptology: straight-line programs (and more complicated ones) combined from the mentioned instructions can also be regarded as continuous 2-adic mappings; so the behaviour of these programs can be analysed by techniques of non-Archimedean dynamics, see e.g. \cite{me-NATO,me-CJ,AnKhr,me:1,me:ex,me:2}. It is worth noting here that similar approaches work effectively also in genetics, cognitive sciences, image processing, quantum theory, etc., see the comprehensive monograph \cite{AnKhr} and the references therein. To conclude the section, we now give a formal proof that the class of all automata functions $f_{\mathfrak A(s_0)}$ of automata of the form $\mathfrak A(s_0)=\langle\F_p,\Cal S,\F_p,S,O,s_0\rangle$ coincides with the class of all 1-Lipschitz functions $f\:\Z_p\>\Z_p$.
The result is not new: it can be derived from a general result on asynchronous automata \cite[Theorem 2.4, Proposition 3.7]{Grigorch_auto}; in the special case $p=2$ the result was proved in \cite{Vuillem_circ}. We take the opportunity to give a direct `$p$-adic' proof here, as we consider only synchronous automata, and for arbitrary $p$. \begin{thm}[Automata functions are 1-Lipschitz functions and vice versa] \label{thm:auto-1L} The automaton function $f_{\mathfrak A(s_0)}\:\Z_p\>\Z_p$ of the automaton $\mathfrak A(s_0)=\langle\F_p,\Cal S,\F_p,S,O,s_0\rangle$ is 1-Lipschitz. Conversely, for every 1-Lipschitz function $f\:\Z_p\>\Z_p$ there exists an automaton $\mathfrak A(s_0)=\langle\F_p,\Cal S,\F_p,S,O,s_0\rangle$ such that $f=f_{\mathfrak A(s_0)}$. \end{thm} \begin{proof} Given a $p$-adic integer $z\in\Z_p$, denote via $\delta_i(z)\in\F_p$ the $i$-th `$p$-adic digit' of $z$; that is, the coefficient of the $i$-th term in the $p$-adic representation $ z=\sum_{i=0}^\infty\delta_i(z)p^i. $ As $s_i=S(\delta_{i-1}(z),s_{i-1})$ for every $i=1,2,\ldots$, the $i$-th output symbol $\xi_i=\delta_i(f_{\mathfrak A}(z))$ depends only on the input symbols $\chi_0,\chi_1,\ldots,\chi_i$; that is, \[ \delta_i(f_{\mathfrak A}(z))=\psi_i(\delta_0(z),\delta_1(z),\ldots,\delta_i(z)) \] for all $i=0,1,2,\ldots$ and for suitable mappings $\psi_i\:\F_p^{i+1}\>\F_p$. That is, $f=f_{\mathfrak A(s_0)}\:\Z_p\>\Z_p$ is of the form \begin{equation} \label{eq:tri} f\colon x=\sum_{i=0}^\infty\chi_ip^i\mapsto f(x)=\sum_{i=0}^\infty\psi_i(\chi_0,\ldots,\chi_i)p^i. \end{equation} This means that the function $f_{\mathfrak A(s_0)}$ is 1-Lipschitz by \cite[Proposition 3.35]{AnKhr} since, applied to the mappings considered here, that proposition can be restated as follows: a mapping $f\colon\Z_p\>\Z_p$ is 1-Lipschitz if and only if $f$ can be represented in the form \eqref{eq:tri} for suitable mappings $\psi_i\:\F_p^{i+1}\>\F_p$, $i=0,1,2,\ldots$. Conversely, let $f\:\Z_p\>\Z_p$ be a 1-Lipschitz mapping; then by \cite[Proposition 3.35]{AnKhr} $f$ can be represented in the form \eqref{eq:tri} for suitable mappings $\psi_i\:\F_p^{i+1}\>\F_p$, $i=0,1,2,\ldots$. We now construct an automaton $\mathfrak A(s_0)=\langle\F_p,\Cal S,\F_p,S,O,s_0\rangle$ such that $f_{\mathfrak A(s_0)}=f$. Let $\F_p^\star$ be the set of all non-empty finite words over the alphabet $\F_p$. We enumerate all these words by the integers $1,2,3,\ldots$ in radix (shortlex) order, in accordance with the natural order $0<1<2<\cdots<p-1$ on $\F_p$: $$ 0<1<2<\ldots<p-1<00<01<02<\ldots<0(p-1)<10<11<12<\ldots; $$ so that $\nu(0)=1,\nu(1)=2,\nu(2)=3,\ldots,\nu(p-1)=p,\nu(00)=p+1,\nu(01)=p+2,\ldots$. This way we establish a one-to-one correspondence between the words $w\in\F_p^\star$ and the integers $i\in\N$: $w \leftrightarrow \nu(w)$, $i\leftrightarrow \omega(i)$ ($\nu(w)\in\N$, $\omega(i)\in\F_p^\star$). Note that $\nu(\omega(i))=i$, $\omega(\nu(w))=w$ for all $i\in\N$ and all non-empty words $w\in\F_p^\star$. Define $\omega(0)$ to be the empty word. Now put $\Cal S=\N_0=\{0,1,2,3,\ldots\}$, the set of all states of the automaton $\mathfrak A(s_0)$ under construction, and take the initial state $s_0=0$. The state transition function $S$ is defined as follows: \begin{equation} \label{eq:auto-st} S(r,i)=\nu(r\circ\omega(i)), \end{equation} where $i=0,1,2,\ldots$ and $r\in\F_p$.
That is, $S(r,i)$ is the number of the word $r\circ\omega(i)$, which is the concatenation of the word $\omega(i)$ (the word whose number is $i$), the prefix, with the single-letter word $r$, the suffix. Now consider a one-to-one mapping $\theta_{n}(\chi_{n-1}\cdots\chi_1\chi_0)=(\chi_{0},\chi_1,\ldots,\chi_{n-1})$ from the $n$-letter words onto $\F_p^n$ and define the output function of the automaton $\mathfrak A(0)$ as follows: \begin{equation} \label{eq:auto-out} O(r,i)=\psi_{\Lambda(\omega(i))}(\theta_{\Lambda(\omega(i))+1}(r\circ\omega(i))), \end{equation} where $i=0,1,2,\ldots$ and $r\in\F_p$. Recall that we denote via $\Lambda(w)$ the length of the word $w$. The idea of the construction is illustrated by Figure \ref{fig:constr}, which depicts the Moore diagram of the automaton $\mathfrak A(0)$ for the case $p=2$: \begin{figure}[h] \begin{quote}\psset{unit=0.5cm} \begin{pspicture}(-1.2,-2)(24,10) \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](4,4){.5} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](10,6){.5} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](10,2){.5} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](16,8){.5} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](16,5.3){.5} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](16,2.6){.5} \pscircle[fillstyle=solid,fillcolor=yellow,linewidth=1pt](16,0){.5} \psline{->}(4.5,4)(9.5,6) \psline{->}(4.5,4)(9.5,2) \psline{->}(10.5,6)(15.5,8) \psline{->}(10.5,6)(15.5,5.3) \psline{->}(10.5,2)(15.5,2.6) \psline{->}(10.5,2)(15.5,0) \psline[linestyle=dotted,linewidth=1pt](16.5,0)(18.5,-.7) \psline[linestyle=dotted,linewidth=1pt](16.5,0)(18.5,.7) \psline[linestyle=dotted,linewidth=1pt](16.5,2.6)(18.5,1.8) \psline[linestyle=dotted,linewidth=1pt](16.5,2.6)(18.5,3.2) \psline[linestyle=dotted,linewidth=1pt](16.5,5.3)(18.5,4.6) \psline[linestyle=dotted,linewidth=1pt](16.5,5.3)(18.5,6) \psline[linestyle=dotted,linewidth=1pt](16.5,8)(18.5,7.2) \psline[linestyle=dotted,linewidth=1pt](16.5,8)(18.5,8.6) \uput{1}[90](4,2.75){$0$} \uput{1}[90](10,4.8){$1$} \uput{1}[90](10,.8){$2$} \uput{1}[90](16,6.7){$3$} \uput{1}[90](16,4.1){$5$} \uput{1}[90](16,1.35){$4$} \uput{1}[90](16,-1.25){$6$} \uput{1}[90](6.5,4.5){$(0,\psi_0(0))$} \uput{1}[90](6.5,.6){$(1,\psi_0(1))$} \uput{1}[90](13,6.8){$(0,\psi_1(0,0))$} \uput{1}[90](13,3.5){$(1,\psi_1(0,1))$} \uput{1}[90](13,1.7){$(0,\psi_1(1,0))$} \uput{1}[90](13,-1.6){$(1,\psi_1(1,1))$} \end{pspicture} \end{quote} \caption{Moore diagram of the automaton $\mathfrak A(0)$, $p=2$; so $\omega(0)$ is the empty word, $\omega(1)=0$, $\omega(2)=1$, $\omega(3)=00$, $\omega(4)=01$, $\omega(5)=10$, $\omega(6)=11$,\ldots} \label{fig:constr} \end{figure} Now, as both $f$ and $f_{\mathfrak A(s_0)}$ are 1-Lipschitz, thus continuous with respect to the $p$-adic metric, and as $\N_0$ is dense in $\Z_p$, to prove that $f=f_{\mathfrak A(s_0)}$ it suffices to show that \begin{equation} \label{eq:auto-1} f_{\mathfrak A(s_0)}(\tilde w)\equiv f(\tilde w)\pmod{p^{\Lambda(w)}} \end{equation} for all finite non-empty words $w\in\F_p^\star$, where $\tilde w\in\N_0$ stands for the integer whose base-$p$ expansion is $w$. We prove that \eqref{eq:auto-1} holds for all $w\in\F_p^\star$ with $\Lambda(w)=n>0$ by induction on $n$. If $n=1$ then $\tilde w\in\F_p$; so once $w$ is fed to $\mathfrak A$, the automaton reaches the state $S(w,0)=\nu(w)$ (cf. \eqref{eq:auto-st}) and outputs $O(w,0)=\psi_0(\theta_1(w)) \equiv f(\tilde w)\pmod p$ (cf. \eqref{eq:auto-out}), see \eqref{eq:tri}.
Thus, \eqref{eq:auto-1} holds in this case. Now assume that \eqref{eq:auto-1} holds for all $w\in\F_p^\star$ such that $\Lambda(w)=n<k$ and prove that \eqref{eq:auto-1} also holds when $\Lambda(w)=n=k$. Represent $w=r\circ v$, where $r\in\F_p$ and $\Lambda(v)=n-1$. By the induction hypothesis, after the word $v$ has been fed to $\mathfrak A$, the automaton reaches the state $\nu(v)$ and outputs the word $v_1$ of length $n-1$ such that $\tilde v_1\equiv f(\tilde v)\pmod{p^{n-1}}$. Next, when fed the letter $r$, the automaton (which is now in the state $\nu(v)$) outputs the letter $O(r,\nu(v))=\psi_{\Lambda(\omega(\nu(v)))}(\theta_{\Lambda(\omega(\nu(v)))+1}(r\circ\omega(\nu(v))))= \psi_{\Lambda(v)}(\theta_{\Lambda(v)+1}(r\circ v))$. This means that once fed the word $w$, the automaton $\mathfrak A(s_0)$ outputs the word $v_2=(\psi_{\Lambda(v)}(\theta_{\Lambda(v)+1}(r\circ v)))\circ v_1$. Now note that $\tilde v_2\equiv f(\tilde w)\pmod{p^n}$ by \eqref{eq:tri}. \end{proof} \begin{note} From the proof of Theorem \ref{thm:auto-1L} it is clear that the mapping $f_{n,\mathfrak A(s_0)}\:\Z/p^n\Z\>\Z/p^n\Z$ is just the reduction modulo $p^n$ of the automaton function $f_{\mathfrak A(s_0)}$: $f_{n,\mathfrak A(s_0)}=f_{\mathfrak A(s_0)}\md p^n$ for all $n=1,2,3,\ldots$. \end{note} \begin{note} In automata theory, \emph{word transducers} (or \emph{asynchronous automata}) are also considered; the latter are automata that allow (possibly empty) words as output for each transition. Although the automata we consider are all synchronous (i.e., letter transducers rather than word transducers), it is worth mentioning here that the automaton function of a word transducer whose input/output alphabets are $\F_p$ can also be considered as a continuous (though no longer necessarily 1-Lipschitz) mapping from $\Z_p$ to $\Z_p$ once the transducer is non-degenerate, see \cite[Theorem 2.4]{Grigorch_auto}. \end{note} Further in the paper, given a 1-Lipschitz function $f\:\Z_p\>\Z_p$, we denote via $\mathfrak A_f(s_0)$ an automaton $\langle\F_p,\Cal S,\F_p,S,O,s_0\rangle$ whose automaton function is $f$; that is, $f_{\mathfrak A_f(s_0)}=f$. Note that given $f$, the automaton $\mathfrak A_f(s_0)$ is \emph{not} unique: there are numerous automata that have the same automaton function $f$. Nonetheless, this non-uniqueness will not cause misunderstanding since in the paper we are mostly interested in automata functions rather than in the `internal structure' (e.g., state sets, state transition and output functions, etc.) of the automata themselves. \section{The $p$-adic ergodic theory and transitivity of automata} \label{sec:p-erg} The ring $\Z_p$ can be endowed with a probability measure $\mu_p$: elementary $\mu_p$-measurable sets are balls $B_{p^{-r}}(a)=a+p^r\Z_p=\{z\in\Z_p\colon z\equiv a\pmod{p^r}\}$ of radii $p^{-r}$, $r=1,2,\ldots$, centered at $a\in\Z_p$. In other words, the ball $B_{p^{-r}}(a)$ is the set of all infinite words over the alphabet $\F_p=\{0,1,\ldots,p-1\}$ that share a common prefix of length $r$. We put $\mu_p(B_{p^{-r}}(a))=p^{-r}$, thus endowing $\Z_p$ with a probability measure $\mu_p$ (which is actually the normalized Haar measure). Note that all 1-Lipschitz mappings $f\:\Z_p\>\Z_p$ are $\mu_p$-measurable (i.e., $f^{-1}(S)$ is $\mu_p$-measurable once $S\subset\Z_p$ is $\mu_p$-measurable).
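At any finite resolution the measure $\mu_p$ can be emulated directly: model $\Z_2$ by $\Z/2^N\Z$ with the uniform distribution; a ball $B_{2^{-r}}(a)$ then becomes the set of residues congruent to $a$ modulo $2^r$, of relative size $2^{-r}$. The following small Python sketch (ours, for illustration only) checks this and also observes, ahead of the formal definitions below, that the odometer preserves the measure of balls:
\begin{verbatim}
# Sketch: balls and mu_2 at resolution N; Z_2 is modelled by Z/2^N Z.
N = 16
space = range(2 ** N)

def ball(a, r):                      # B_{2^-r}(a), truncated to N letters
    return {z for z in space if z % 2 ** r == a % 2 ** r}

f = lambda z: (z + 1) % 2 ** N       # the truncated 2-adic odometer

for a, r in [(0, 1), (3, 2), (5, 4)]:
    B = ball(a, r)
    assert len(B) / 2 ** N == 2 ** -r          # mu_2(B) = 2^-r
    pre = [z for z in space if f(z) in B]      # the preimage f^{-1}(B)
    assert len(pre) == len(B)                  # f preserves mu_2 on balls
\end{verbatim}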
A $\mu_p$-measurable mapping $f\colon \Z_p\>\Z_p$ is called \emph{ergodic} if the following two conditions hold simultaneously: \begin{enumerate} \item $f$ \emph{preserves the measure} $\mu_p$; i.e., $\mu_p(f^{-1}(S))=\mu_p(S)$ for each $\mu_p$-measurable subset $S\subset \Z_p$, and \item $f$ has no proper invariant $\mu_p$-measurable subsets: $f^{-1}(S)= S$ implies either $\mu_p(S)=0$ or $\mu_p(S)=1$. \end{enumerate} A family $\Cal F=\{f_i\:i\in I\}$ of $\mu_p$-measurable mappings $f_i\:\mathbb Z_p\>\mathbb Z_p$ (which are not necessarily measure-preserving) is called \emph{ergodic} if the mappings $f_i$, $i\in I$, have no common $\mu_p$-measurable invariant subset other than sets of measure 0 or 1; that is, if there exists a $\mu_p$-measurable subset $S\subset\mathbb Z_p$ such that $f^{-1}_i(S)= S$ for all $i\in I$, then necessarily either $\mu_p(S)=0$ or $\mu_p(S)=1$. Note that when speaking of ergodicity of a \emph{single} mapping in this paper we always mean that the mapping is measure-preserving, whereas in general ergodic theory non-measure-preserving ergodic mappings (the ones that satisfy only the second condition (ii) of the above two) are sometimes also considered. To illustrate the notion of ergodicity we use, consider a finite set $M$ endowed with the natural probability measure $\mu(A)=\#A/\#M$ for all $A\subset M$ (where $\#A$ stands for the number of elements in $A$). The measure-preservation of a mapping $f\:M\>M$ is equivalent to the bijectivity of $f$, whereas the ergodicity of $f$ (when the respective conditions (i) and (ii) hold simultaneously) is equivalent to the transitivity of the mapping $f$ in the sense of Definition \ref{def:trans-1}; and the ergodicity of a family $\Cal F$ of mappings $f_i\:M\> M$, $i\in I$, is equivalent to the transitivity of the family $\Cal F$ in the sense of Definition \ref{def:trans}. As we deal with only the measure $\mu_p$ in this paper, in what follows we omit mention of the measure when speaking of measure-preservation (as well as of measurability and ergodicity). From the $p$-adic ergodic theory (see \cite{AnKhr}) the following theorem can be deduced: \begin{thm} \label{thm:word-trans=erg} A system $\mathfrak A=\langle\F_p,\Cal S,\F_p,S,O\rangle$ is word transitive if and only if the automaton function $f_{\mathfrak A(s_0)}$ on $\Z_p$ is ergodic. If the system $\mathfrak A$ is completely transitive, the family $f_{\mathfrak A(s)}$, $s\in\Cal S$, of automata functions is ergodic. \end{thm} Recall that, under the conventions from the beginning of the paper, $s_0$ is a state of the system $\mathfrak A$ such that all other states are reachable from $s_0$. Theorem \ref{thm:word-trans=erg} implies a number of methods to determine the word transitivity of automata: for instance, a \emph{binary} automaton $\mathfrak P$ (that is, an automaton with binary input and output; i.e., with $p=2$) whose automaton function $f_{\mathfrak P}$ is a polynomial with integer coefficients (i.e., $f_\mathfrak P=g$ where $g(x)\in\Z[x]$) is word transitive if and only if it is 3-word transitive; that is, the transitivity of $\mathfrak P$ on the set $W_3$ of all binary words of length 3 is equivalent to the transitivity of $\mathfrak P$ on the set $W_n$ of all binary words of length $n$, for all $n=1,2,3,\ldots$.
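As a concrete illustration, the polynomial $f(x)=2x^2+3x+1$ (the function whose plots appear in Section \ref{sec:p-real}) passes this test. The following Python sketch (ours) checks transitivity on $W_3$ directly and then confirms empirically that, as the criterion predicts, transitivity persists on $W_n$ for larger $n$:
\begin{verbatim}
# Sketch: word transitivity of the binary automaton with automaton
# function f(x) = 2x^2 + 3x + 1, via the polynomial criterion above.
f = lambda x: 2 * x * x + 3 * x + 1

def transitive(n):
    # True iff x -> f(x) mod 2^n acts as a single 2^n-cycle on W_n.
    m, x, seen = 2 ** n, 0, set()
    for _ in range(m):
        if x in seen:
            return False
        seen.add(x)
        x = f(x) % m
    return x == 0              # the orbit of 0 closes after 2^n steps

assert transitive(3)                                 # 3-word transitive,
assert all(transitive(n) for n in range(1, 15))      # hence word transitive
\end{verbatim}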
Moreover, a binary automaton $\mathfrak F$ is word transitive if and only if its automaton function is of the form $f_{\mathfrak F}(x)=1+x+2(g(x+1)-g(x))$, where $g=g_{\mathfrak G}$ is the automaton function of some binary automaton $\mathfrak G$. For other results of this sort and for the whole $p$-adic ergodic theory see \cite{AnKhr}. Complete transitivity of the system $\mathfrak A=\langle\F_p,\Cal S,\F_p,S,O\rangle$ is also related to ergodicity; however, to the ergodicity of the family of automata functions $f_{\mathfrak A(s)}$, $s\in\Cal S$, cf. Definition \ref{def:auto-trans} and Theorem \ref{thm:word-trans=erg}, rather than to the ergodicity of a single automaton function $f_{\mathfrak A(s_0)}$. This is why, to determine complete/absolute transitivity rather than just word transitivity, we need more sophisticated techniques, which are discussed below. \section{Plots of automata functions on the real plane} \label{sec:p-real} Recall that, under the conventions from the beginning of the paper, there exists a state $s_0$ of the system $\mathfrak A$ such that all other states are reachable from $s_0$; so although further results of the paper are stated mostly for automata, they hold for systems as well. Given an automaton $\mathfrak A(s_0)$, consider the corresponding automaton function $f=f_{\mathfrak A(s_0)}\colon\Z_p\rightarrow\Z_p$. Denote via $E_k(f)$ the set of all the following points $e_k^f(x)$ of the closed Euclidean unit square $\mathbb I^2=[0;1]\times[0;1]\subset\R^2$: $$e_k^f(x)=\left(\frac{x\md p^k}{p^k},\frac{f(x)\md p^k}{p^k} \right),$$ where $x\in\Z_p$ and $\md p^k$ is the reduction modulo $p^k$, cf. \eqref{eq:md}. Note that $x\md p^k$ corresponds to the prefix of length $k$ of the infinite word $x\in\Z_p$, i.e., to the input word of length $k$ of the automaton $\mathfrak A(s_0)$; while $f(x)\md p^k$ corresponds to the respective output word of length $k$. That is, given an input word $w=\chi_{k-1}\cdots\chi_1\chi_0$ and the corresponding output word $w^\prime=\xi_{k-1}\cdots\xi_1\xi_0$, we consider in $\mathbb I^2$ the set of all points $$(\chi_{k-1}p^{-1}+\cdots+\chi_1p^{-k+1}+\chi_0p^{-k}, \xi_{k-1}p^{-1}+\cdots+\xi_1p^{-k+1}+\xi_0p^{-k}),$$ for all pairs $(w,w^\prime)$ of input/output words of length $k$. \begin{figure}[h] \begin{quote}\psset{unit=0.5cm} \begin{pspicture}(0,8)(24,12) \psline[linecolor=black]{<-}(20,11)(14,11) \psline[linecolor=black]{<-}(10,11)(3.7,11) \psline[linecolor=black](3.7,11.3)(3.7,10.7) \psframe[fillstyle=solid,fillcolor=yellow,linecolor=black,linewidth=2pt](10,10)(14,12) \uput{0}[90](12,10.7){$\mathfrak A$} \uput{0}[90](1.9,9.2){$x\md p^k=$} \uput{0}[90](22.6,9.2){$=f(x)\md p^k$} \uput{0}[90](11.9,8.7){${\underbrace{{\black \chi_{k-1}\cdots\cdots\cdots\chi_1\chi_0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \xi_{k-1}\cdots\cdots\cdots\xi_1\xi_0}}} $} \uput{0}[90](12,7.5){{$e_k^f(x)=(0.\chi_{k-1}\ldots\chi_1\chi_0,0.\xi_{k-1}\ldots\xi_1\xi_0)$}} \end{pspicture} \end{quote} \end{figure} The set $E_k(f)$ may be considered as a sort of plot of the automaton function $f$ on the real plane $\R^2$. The plot characterizes the behaviour of the automaton; namely, one observes that the behaviour is basically of two types only: \begin{enumerate} \item as $k\to\infty$, the point set $E_k(f)$ is getting more and more dense (cf. Fig. \ref{fig:Quad-16}--\ref{fig:Quad-23}, $p=2$), or \item $E_k(f)$ is getting less and less dense as $k\to\infty$, cf. Fig. \ref{fig:KlSh-lac-16}--\ref{fig:KlSh-lac-22} ($p=2$).
\end{enumerate} \begin{figure}[h] \begin{minipage}[b]{.495\linewidth} \epsfig{file=Quad-16bw.eps,width=\linewidth} \caption{$f(x)=2x^2+3x+1$, $k=16$} \label{fig:Quad-16} \end{minipage}\hfill \begin{minipage}[b]{.495\linewidth} \epsfig{file=Quad-18bw.eps,width=\linewidth} \caption{Same function, $k=18$} \label{fig:Quad-18} \end{minipage} \begin{minipage}[b]{.495\linewidth} \epsfig{file=Quad-20bw.eps,width=\linewidth} \caption{Same function, $k=20$} \label{fig:Quad-20} \end{minipage}\hfill \begin{minipage}[b]{.495\linewidth} \epsfig{file=Quad-23bw.eps,width=\linewidth} \caption{Same function, $k=23$} \label{fig:Quad-23} \end{minipage} \end{figure} \begin{figure}[th] \begin{minipage}[b]{.495\linewidth} \epsfig{file=KlSh-lacuna-16bw.eps,width=\linewidth} \caption{$f(x)= x+x^2\OR (-131065)$, $k=16$} \label{fig:KlSh-lac-16} \end{minipage}\hfill \begin{minipage}[b]{.495\linewidth} \epsfig{file=KlSh-lacuna-17bw.eps,width=\linewidth} \caption{Same function, $k=17$} \label{fig:KlSh-lac-17} \end{minipage} \begin{minipage}[b]{.495\linewidth} \epsfig{file=KlSh-lacuna-18bw.eps,width=\linewidth} \caption{Same function, $k=18$} \label{fig:KlSh-lac-18} \end{minipage}\hfill \begin{minipage}[b]{.495\linewidth} \epsfig{file=KlSh-lacuna-22bw.eps,width=\linewidth} \caption{Same function, $k=22$} \label{fig:KlSh-lac-22} \end{minipage} \end{figure} It is intuitively clear that, say, for pseudorandom number generation, automata of type (i) are preferable\footnote{For deeper mathematical reasoning see \cite{AnKhr}.}; so we need to explain/prove the phenomenon and to develop techniques to determine/construct automata of type (i). \subsection{The automata 0-1 law} Denote via $\Cal E(f)$ the closure of the set $E(f)=\bigcup_{k=1}^\infty E_k(f)$ in the topology of the real plane $\R^2$. As $\Cal E(f)$ is closed, it is measurable with respect to the Lebesgue measure on the real plane $\R^2$. Let $\alpha(f)$ be the Lebesgue measure of $\Cal E(f)$. It is clear that $0\le\alpha(f)\le1$; but it turns out that in fact only two extreme cases occur: $\alpha(f)=0$ or $\alpha(f)=1$. This is the first of the main results of the paper: \begin{thm}[The automata 0-1 law] \label{thm:Auto_0-1} For $f$, the following alternative holds: Either $\alpha(f)=0$ \textup{(equivalently, $\Cal E(f)$ is nowhere dense in $\mathbb I^2$)}, or $\alpha(f)=1$ \textup{(equivalently, $\Cal E(f)=\mathbb I^2$)}. \end{thm} We note that although Theorem \ref{thm:Auto_0-1} has already been announced, see \cite[Proposition 11.15]{AnKhr}, only part of the statement is actually proved in \cite{AnKhr} (the part that concerns the density of $\Cal E(f)$), whereas the part that concerns the value of the Lebesgue measure is not. Recall that nowhere dense sets can nevertheless have positive Lebesgue measure; cf. fat Cantor sets (e.g., the Smith-Volterra-Cantor set), also known as \emph{$\epsilon$-Cantor sets}, see e.g. \cite{RealAnalys}. Nonetheless, Theorem \ref{thm:Auto_0-1} is true; a complete proof follows. \begin{proof}[Proof of Theorem \ref{thm:Auto_0-1}] Let $\alpha(f)>0$; we are going to prove that then $\alpha(f)=1$ and $\Cal E(f)=\mathbb I^2$. One of the two following cases holds: 1) some point of $\Cal E(f)$ has an open neighbourhood (in the unit square $\mathbb I^2$) that lies completely in $\Cal E(f)$, or, on the contrary, 2) no such point of $\Cal E(f)$ exists (and thus $\Cal E(f)$ is nowhere dense in $\mathbb I^2$).
We consider the two cases separately and prove that within the first one necessarily $\alpha(f)=1$ while the second one is impossible (that is, if $\Cal E(f)$ is nowhere dense in $\mathbb I^2$ then necessarily $\alpha(f)=0$). Given $a,b\in\R$, $a\le b$, during the proof we denote via $(a;b)$ (respectively, via $[a;b]$) the corresponding open interval (respectively, closed segment) of the real line $\R$; while for $c,d\in\R$ we denote via $(c,d)$ the corresponding point on the real plane $\R^2$. \textbf{Case 1}: In this case, there exist $u,v,u',v'$, $0\le u<v\le 1$, $0\le u'<v'\le 1$ such that the closed square $[u;v]\times [u';v']\subset\mathbb I^2$ lies completely in $\Cal E(f)$, and every point from the open real interval $(u';v')$ is a limit (with respect to the standard Archimedean metric in $\R$) of some sequence of fractions $u'<\frac{f(a_m)\md{p^m}}{p^{m}}<v'$, where $u<\frac{a_m}{p^{m}}<v$, $m=1,2,\ldots$. Thus, we can take $n\in\N$ and $w=\omega_0+\omega_1\cdot p+\cdots+\omega_{n-1}\cdot p^{n-1}$, where $\omega_i\in\{0,1,\ldots,p-1\}$, $i=0,1,\ldots,n-1$, so that the square $$ S=\left[\frac{w}{p^{n}};\frac{w}{p^{n}}+\frac{1}{p^{n}}\right]\times \left[\frac{f(w)\md{p^n}}{p^{n}};\frac{f(w)\md{p^n}}{p^{n}}+\frac{1}{p^{n}}\right] $$ lies completely in $\Cal E(f)$, and every inner point $(x,y)$ of the square $S$ \footnote{that is, $(x,y)$ has an open neighborhood that lies completely in $S$} is a limit as $j\to\infty$ (with respect to the standard Archimedean metric in $\R^2$) of a sequence of inner points $$ (r_j,t_j)=\left(\frac{z_j+p^{N_j}\cdot w}{p^{N_j+n}},\frac{f(z_j+p^{N_j}\cdot w)\md{p^{N_j+n}}}{p^{N_j+n}}\right)\in S, $$ where $N_j\in\N$, $z_j\in\{0,1,\ldots,p^{N_j}-1\}$. Now, as $f$ is a 1-Lipschitz mapping from $\Z_p$ to $\Z_p$, for every $z\in\{0,1,\ldots,p^N-1\}$ we have that $f(z+p^N\cdot w)\equiv (f(z)\md{p^N})+p^N\cdot\xi_N(z)\pmod{p^{N+n}}$ for a suitable $\xi_N(z)\in\{0,1,\ldots,p^n-1\}$; thus, $$ \frac{f(z+p^N\cdot w)\md{p^{N+n}}}{p^{N+n}}=\frac{f(z)\md{p^N}}{p^{N+n}}+ \frac{\xi_N(z)}{p^n}. $$ Hence, $\xi_{N_j}(z_j)=f(w)\md{p^n}$ for all $j=1,2,\ldots$ as all $(r_j,t_j)$ are inner points of $S$. Therefore, every inner point $(x,y)\in S$, which can be represented as $$ (x,y)=\left(\frac{w}{p^n}+\frac{\chi}{p^n}, \frac{f(w)\md{p^n}}{p^n}+\frac{\gamma}{p^n}\right), $$ where $\chi$ and $\gamma$ are real numbers, $0<\chi<1$, $0<\gamma<1$, is a limit (as $j\to\infty)$ of the point sequence $$ (r_j,t_j)=\left(\frac{w}{p^n}+\frac{z_j}{p^{N_j}}\cdot\frac{1}{p^n}, \frac{f(w)\md{p^n}}{p^n}+\frac{f(z_j)\md{p^{N_j}}}{p^{N_j}}\cdot\frac{1}{p^n}\right)\in S. $$ From here it follows that every inner point $(\chi,\gamma)\in\mathbb I^2$ is a limit point of the corresponding sequence of points $\left(\frac{z_j}{p^{N_j}},\frac{f(z_j)\md{p^{N_j}}}{p^{N_j}}\right)$ as $j\to\infty$. This means that $\Cal E(f)=\mathbb I^2$ and thus $\alpha(f)=1$. \textbf{Case 2}: No point from $\Cal E(f)$ has an open neighbourhood that lies completely in $\Cal E(f)$; i.e., any open neighbourhood $U$ of any point from $\Cal E(f)$ contains points from the subset $\mathbb I^2\setminus\Cal E(f)$, which is open in $\mathbb I^2$. Hence, $U$ contains an open subset that lies completely in $\mathbb I^2\setminus\Cal E(f)$ (we assume that $\mathbb I^2\setminus\Cal E(f)\ne\emptyset$ since otherwise $\alpha(f)=1$ and there is nothing to prove). 
Then there exists an open square $$ T_m(a,b)=\left(\frac{a}{p^{m}};\frac{a}{p^{m}}+\frac{1}{p^{m}}\right)\times \left(\frac{b}{p^{m}};\frac{b}{p^{m}}+\frac{1}{p^{m}}\right), $$ where $a,b\in\{0,1,\ldots,p^m-1\}$, that lies completely in $\mathbb I^2\setminus\Cal E(f)$. That is, $T_m(a,b)$ contains no points of the form $$ \left(\frac{x\md p^k}{p^{k}},\frac{f(x)\md p^k}{p^{k}}\right), $$ where $x\in\Z_p$ and $k\in\N$. In other words, this means that there exist words $\tilde a,\tilde b$ of length $m$ in the alphabet $\F_p$ (which are just base-$p$ representations of $a$ and $b$, respectively) such that, whenever the automaton $\mathfrak A=\mathfrak A_f$ is fed any input word $\tilde w$ with the suffix $\tilde a$, i.e., $w=p^{\ell}a+u$ where $u\in\{0,1,\ldots,p^{\ell}-1\}$, the corresponding output word $f(w)\md{p^{\ell+m}}=p^{\ell}t+v$, $t\in\{0,1,\ldots,p^{m}-1\}$, $v\in\{0,1,\ldots,p^{\ell}-1\}$, never has the suffix $\tilde b$, i.e., $t\ne b$ for all $\ell\in\N_0$ and all $u\in\{0,1,\ldots,p^{\ell}-1\}$ ($u$ is the empty word if $\ell=0$). It is clear now that given any numbers $a^\prime, b^\prime\in\{0,1,\ldots, p^{m^\prime}-1\}$, $m^\prime\ge m$, such that $a^\prime\equiv a\pmod{p^m}$, $b^\prime\equiv b\pmod{p^m}$, the corresponding open square $T_{m^\prime}(a^\prime,b^\prime)$ lies completely outside of $\Cal E(f)$, i.e., contains no points of the form $\left(\frac{x\md p^k}{p^{k}},\frac{f(x)\md p^k}{p^{k}}\right)$, where $x\in\Z_p$ and $k\in\N$. Indeed, otherwise some input word $w^\prime$ with the suffix $a^\prime$ would result in an output word with the suffix $b^\prime$; but this means that the corresponding initial subword (whose suffix is $a$) of the word $w^\prime$ results in an output word whose suffix is $b$. The latter contradicts our choice of $a,b$. Now take $m^\prime=im$ for $i=1,2,\ldots$ and construct inductively a collection $\Cal T_i$ that consists of $(p^{2m}-1)^{i-1}$ disjoint open squares $T_{m^\prime}(a^\prime,b^\prime)$. The collection $\Cal T_1$ consists of the single square $T_m(a,b)$. Given the collection $\Cal T_{i-1}$, the collection $\Cal T_{i}$ consists of all open squares $T_{im}(a^\prime,b^\prime)$, where $a^\prime,b^\prime\in\{0,1,\ldots,p^{im}-1\}$, $a^\prime\equiv a\pmod{p^m}$, $b^\prime\equiv b\pmod{p^m}$, that are disjoint from all squares from the collections $\Cal T_1,\ldots,\Cal T_{i-1}$. That is, at the first step we obtain the collection $\Cal T_1$, which consists of the single $p^{-m}\times p^{-m}$ square $T_m(a,b)$; at the second step we obtain the collection $\Cal T_2$, which consists of $p^{2m}-1$ disjoint $p^{-2m}\times p^{-2m}$ squares; at the third step we obtain the collection $\Cal T_3$, which consists of $(p^{2m}-1)p^{2m}-(p^{2m}-1)=(p^{2m}-1)^2$ disjoint $p^{-3m}\times p^{-3m}$ squares, etc. The union $T$ of all these open squares from $\Cal T_1, \Cal T_2, \ldots$ is open, hence measurable, and the Lebesgue measure of $T$ is $$ \frac{1}{p^{2m}}+ (p^{2m}-1)\cdot\frac{1}{p^{4m}}+(p^{2m}-1)^2\cdot\frac{1}{p^{6m}}+\cdots=1 $$ since all these open squares are disjoint by construction. On the other hand, by construction $T$ contains no points of the form $\left(\frac{x\md p^k}{p^{k}},\frac{f(x)\md p^k}{p^{k}}\right)$, where $x\in\Z_p$ and $k\in\N$. Consequently, $T\cap\Cal E(f)=\emptyset$; in turn, this implies that the Lebesgue measure of $\Cal E(f)$ must be 0, i.e., that $\alpha(f)=0$. The latter contradicts the assumption from the beginning of the proof. This proves the theorem.
\end{proof} \subsection{Completely transitive automata} From Theorem \ref{thm:Auto_0-1} we immediately derive the second main result of the paper: \begin{thm}[Criterion of complete transitivity] \label{thm:comp-trans} A system $\mathfrak A$ is completely transitive if and only if $\alpha(f_{\mathfrak A(s_0)})=1$. \end{thm} \begin{proof} This follows from Theorem \ref{thm:Auto_0-1}; cf. the equivalent definition of complete transitivity in terms of words. \end{proof} \begin{note} Nowhere in the proofs of Theorems \ref{thm:Auto_0-1} and \ref{thm:comp-trans} did we use that $p$ is a prime; so both theorems remain true without this restriction. \end{note} A finite system (i.e., one whose set of states is finite) can be word transitive; the odometer $ x\mapsto x+1$ on $\Z_2$ serves as an example. On the other hand, by \cite[Theorem 11.10]{AnKhr}, given a finite system $\mathfrak A$, the set $\Cal E(f_{\mathfrak A})$ is nowhere dense; so from Theorems \ref{thm:Auto_0-1} and \ref{thm:comp-trans} it follows that a finite system cannot be completely transitive. Thus, $\alpha(f_{\mathfrak A(s_0)})=0$ if $\mathfrak A(s_0)$ is a finite automaton. To construct automata $\mathfrak A$ of measure 1 (i.e., such that $\alpha(f_{\mathfrak A})=1$) the following theorem (which is the third main result of the paper) may be applied: \begin{thm}[Sufficient conditions for complete transitivity] \label{thm:comp-trans-der} Let $f=f_{\mathfrak A}\colon\Z_p\>\Z_p$ be the automaton function of an automaton $\mathfrak A$, and let $f$ be differentiable everywhere in a ball $B\subset\Z_p$ of non-zero radius. The function $f$ is of measure 1 whenever the following two conditions hold simultaneously: \begin{enumerate} \item $f(B\cap\N_0)\subset\N_0$; \item $f$ is twice differentiable at some point $v\in B\cap\N_0$, and $f^{\prime\prime}(v)\ne 0$. \end{enumerate} \end{thm} \begin{proof} We will show that for every sufficiently large $k$ and every $z,u\in\{0,1,\ldots, p^k-1\}$ there exist $M=M(k)$ and $a\in\{0,1,\ldots,p^M-1\}$ such that \begin{equation} \label{eq:no_lac_00} \left|\frac{a}{p^M}-\frac{u}{p^k}\right|<\frac{1}{p^k}\ \text{and}\ \left|\frac{f(a)\md p^M}{p^M}-\frac{z}{p^k}\right|<\frac{1}{p^k}. \end{equation} This will prove the theorem as every point of the unit square $\mathbb I^2$ can be approximated by points of the form $\left(\frac{u}{p^k},\frac{z}{p^k}\right)$. Briefly, the idea of the proof is as follows: as $v\in\N_0$, there exists $k\in\N_0$ such that all terms $\nu_i\in\{0,1,\ldots,p-1\}$ with $i\ge k$ in the $p$-adic expansion $v=\sum_{i=0}^\infty\nu_i\cdot p^i$ are zero. We then tweak $v$: namely, we replace the zeros in the $p$-adic expansion at positions starting from the $\ell$-th, $\ell>k$, by certain other letters from $\{0,1,\ldots,p-1\}$ so that the tweaked $v$, the natural number $a=v+p^\ell t$, satisfies inequalities \eqref{eq:no_lac_00} for some $M$. As $f$ is differentiable everywhere in $B$, for $x\in B$ we have that, given arbitrary $K\in\N$, the following congruence holds for all $h\in\Z_p$ and all sufficiently large $L\in\N$: \begin{equation} \label{eq:der} f(x+p^L h)\equiv f(x)+p^L h\cdot f^\prime(x)\pmod{p^{K+L}}. \end{equation} Indeed, given $a,b\in\Z_p$, the condition $\|a-b\|_p\le p^{-d}$ is equivalent to the condition $a\md{p^d}=b\md{p^d}$, where $\md{p^d}$ is the reduction modulo $p^d$, cf.
\eqref{eq:md=ineq}; so \eqref{eq:der} is just a restatement of the differentiability of a function at a point in terms of congruences rather than inequalities for $p$-adic absolute values: we just write $a\equiv b\pmod{p^d}$ instead of $\|a-b\|_p\le p^{-d}$. Let $\|f^{\prime\prime}(v)\|_p=p^{-s}$; that is, $f^{\prime\prime}(v)=p^s\cdot\xi$, where $s\in\Z$ and $\xi$ is a unit of $\Z_p$ (in other words, $\xi$ has a multiplicative inverse in $\Z_p$). Note that $s$ is not necessarily non-negative since $f^{\prime\prime}(v)$ is in $\Q_p$, and not necessarily in $\Z_p$; nonetheless, further in the proof we assume that $k+s>0$ as we may take $k$ large enough. Recall that $\|f^\prime(x)\|_p \le 1$ as $f$ is 1-Lipschitz; so $f^\prime(x)\in\Z_p$. Now let $r\in\N$ be an arbitrary number such that $r>s$, $p^r>v$, and $p^{-r}$ is less than the radius of the ball $B$ (it is clear that there are infinitely many choices of $r$). Given $r$, consider $n\in\N$ such that $n>\max\{\log_p f(v+p^{k+r}t)\colon t=0,1,2,\ldots,p^k-1\}$ and $n>2k+2r+2s$ (recall that, in view of condition 1 of the theorem, all $f(v+p^{k+r}t)$ are in $\N_0$; the choice of $n$ then guarantees that $f(v+p^{k+r}t)<p^n$). Put \begin{align} \label{eq:no_lac_1} \tilde u&=1+p^{k+r+s}u\\ \label{eq:no_lac_2} \tilde z&=f^\prime(v)+p^{k+r+s}\hat z, \end{align} where $\hat z\in\{0,1,\ldots,p^k-1\}$ is such that $\lfloor\frac{\tilde z}{p^{k+r+s}}\rfloor\md p^k=z$. In other words, we choose $\hat z$ in such a way that the number whose base-$p$ expansion stands in positions from the $(k+r+s)$-th to the $(2k+r+s-1)$-th in the canonical $p$-adic expansion of $\tilde z$ is equal to $z$. Obviously, given $f^\prime(v)$ and $z$, there exists a unique $\hat z$ that satisfies this condition: $\hat z\equiv z-\lfloor\frac{f^\prime(v)}{p^{k+r+s}}\rfloor\pmod{p^k}$; so \begin{equation} \label{eq:no_lac_more} \tilde z\md p^{2k+r+s}=(f^\prime(v)\md p^{k+r+s}) +p^{k+r+s}\cdot z. \end{equation} As $f$ is twice differentiable at $v$, for every $\zeta\in\{0,1,\ldots, p^k-1\}$ we conclude that \begin{equation} \label{eq:no_lac_0} f^\prime(v+p^{r+k}\zeta)\equiv f^\prime(v)+p^{r+k}\zeta\cdot f^{\prime\prime}(v) \pmod{p^{2k+r+s}} \end{equation} for all sufficiently large $r$ (formally, we just substitute $f^\prime$ for $f$, $v$ for $x$, $\zeta$ for $h$, $k+s$ for $K$, and $r+k$ for $L$ in \eqref{eq:der}). From here we deduce that, as $f$ is differentiable in $B$, the following congruence holds for all sufficiently large $n$: \begin{multline} \label{eq:no_lac_3} f(v+p^{r+k}\zeta+p^n\tilde u)\equiv f(v+p^{r+k}\zeta)+\\ p^n\tilde u\cdot(f^\prime(v)+p^{r+k}\zeta\cdot f^{\prime\prime}(v))\pmod{p^{n+2k+r+s}}. \end{multline} Note that the latter congruence is obtained by combining congruence \eqref{eq:der}, where $K=2k+r+s$, $x=v+p^{r+k}\zeta$, $h=\tilde u$ and $L=n$, with congruence \eqref{eq:no_lac_0}. We claim that there exists $\zeta\in\{0,1,\ldots, p^k-1\}$ such that \begin{equation} \label{eq:no_lac} \tilde u\cdot(f^\prime(v)+p^{r+k}\zeta\cdot f^{\prime\prime}(v)) \equiv \tilde z\pmod{p^{2k+r+s}}. \end{equation} Indeed, in view of \eqref{eq:no_lac_1}--\eqref{eq:no_lac_2} this congruence is equivalent to the congruence $(1+p^{k+r+s}u)\cdot (f^\prime(v)+p^{r+k}\zeta\cdot f^{\prime\prime}(v)) \equiv f^\prime(v)+p^{k+r+s}\hat z\pmod{p^{2k+r+s}}$, and the latter congruence is equivalent to the congruence $f^\prime(v)+p^{r+k}\zeta\cdot f^{\prime\prime}(v)\equiv (1-p^{k+r+s}u)\cdot(f^\prime(v)+p^{k+r+s}\hat z)\pmod{p^{2k+r+s}}$ as $(1+p^{k+r+s}u)^{-1}\equiv 1-p^{k+r+s}u\pmod{p^{2k+r+s}}$.
That is, congruence \eqref{eq:no_lac} is equivalent to the congruence $p^{k+r}\zeta\cdot f^{\prime\prime}(v)\equiv p^{k+r+s}\hat z-p^{k+r+s}u\cdot f^\prime(v)\pmod{p^{2k+r+s}}$. Further, as $f^{\prime\prime}(v)=p^s\xi$, the latter congruence is equivalent to the congruence $\zeta\xi\equiv \hat z-u\cdot f^\prime(v)\pmod{p^k}$. From here we find $\zeta\equiv\xi^{-1}\cdot(\hat z-u\cdot f^\prime(v))\pmod{p^k}$, thus proving our claim (recall that $\xi$ is a unit of $\Z_p$; hence, $\xi$ has a multiplicative inverse $\xi^{-1}$ modulo $p^k$). Now we put $M=n+2k+r+s$ and $a=v+p^{r+k}\zeta +p^n\cdot(1+p^{k+r+s}u)$; then $$ \frac{a}{p^M}=\frac{u}{p^k}+\frac{v+p^{r+k}\zeta+p^n}{p^{n+2k+r+s}}, $$ so $\left|\frac{a}{p^M}-\frac{u}{p^k}\right|<\frac{1}{p^k}$, since $v<p^r$, $\zeta<p^k$, and $n>2r+2s+2k$. Also, combining \eqref{eq:no_lac}, \eqref{eq:no_lac_2}, \eqref{eq:no_lac_more}, and \eqref{eq:no_lac_3}, we see that \begin{equation} \label{eq:no_lac_4} \frac{f(a)\md p^M}{p^M}=\frac{z}{p^k}+\frac{f(v+p^{r+k}\zeta)}{p^n}\cdot \frac{1}{p^{2k+r+s}}+\frac{f^\prime(v)\md p^{k+r+s}}{p^{k+r+s}}\cdot\frac{1}{p^{k}}, \end{equation} since $f(a)\md p^M=f(v+p^{r+k}\zeta)+p^n\cdot (f^\prime(v)\md p^{k+r+s})+p^{n+k+r+s}z$ (the number on the right-hand side is less than $p^M$ due to our choice of $n$). Now from \eqref{eq:no_lac_4} it follows that $\left|\frac{f(a)\md p^M}{p^M}-\frac{z}{p^k}\right|<\frac{1}{p^k}$ since $0\le f(v+p^{r+k}\zeta)\le p^n-1$ due to our choice of $n$. \end{proof} \begin{note} \label{note:minus} We note that $\alpha(f(x))=\alpha(-f(x))=\alpha(f(-x))$ for every 1-Lipschitz function $f\colon\Z_p\>\Z_p$ of the variable $x$; so we may replace condition 1 of Theorem \ref{thm:comp-trans-der} by any of the conditions $f(B\cap-\N_0)\subset\N_0$, $f(B\cap\N_0)\subset-\N_0$, or $f(B\cap-\N_0)\subset-\N_0$, where $-\N_0=\{0,-1,-2,\ldots\}$. Indeed, for every $c\in\N$ and every $n\in\N$ we have that $\frac{-c\md p^n}{p^n}=\frac{p^n-(c\md p^n)}{p^n}= 1-\frac{c\md p^n}{p^n}$. Thus, the symmetry with respect to the line $y=\frac{1}{2}$ of the unit square $\mathbb I^2\subset\R^2$ maps the subset $$ E(f)=\left\{\left(\frac{x\md p^n}{p^n},\frac{f(x)\md p^n}{p^n}\right)\colon x\in\Z_p, n\in\N\right\}\subset\mathbb I^2 $$ onto the subset $E(-f)$ and vice versa; so $\alpha(f(x))=\alpha(-f(x))$. A similar argument proves that $\alpha(f(x))=\alpha(f(-x))$. \end{note} By using Theorem \ref{thm:comp-trans-der}, one may construct numerous automata (and systems) that are completely transitive. For instance, given $c\in\{2,3,4,\ldots\}$, listed below are examples of automata functions $f_{\mathfrak A(s_0)}=f$ that satisfy Theorem \ref{thm:comp-trans-der}; so the corresponding automata $\mathfrak A(s_0)$ are completely transitive: \begin{itemize} \item $f(x)=cx+c^x$ if $c\ne 1$, $c\equiv 1\pmod p$; \item $f(x)=(x\AND c)+((x^2)\OR c)$ if $p=2$. \end{itemize} Note that the first of these automata is word transitive while the second one is not. With the use of Theorem \ref{thm:comp-trans-der}, new types of absolutely transitive automata can be constructed as well. The following corollary of Theorem \ref{thm:comp-trans-der} is a key tool in their construction: \begin{cor} \label{cor:abs-t-diff} Let an automaton function $f=f_{\mathfrak A(s_0)}$ map $\N_0$ into $\N_0$, let $f$ be twice differentiable on $\Z_p$, and let $f^{\prime\prime}$ have at most finitely many zeros in $\N_0$. Then the automaton $\mathfrak A(s_0)$ is absolutely transitive.
\end{cor} \begin{proof} Given a finite non-empty word $\tilde g$ (say, of length $m>0$) over the alphabet $\F_p$, take a finite word $\tilde v$ whose prefix is $\tilde g$ and such that the corresponding non-negative rational integer $v$~\footnote{The one whose base-$p$ expansion is $\tilde v$; recall that, according to our conventions, words are read from right to left; that is, the rightmost letters of $\tilde v$ correspond to the low-order digits in the base-$p$ expansion of $v$.} is not a zero of $f^{\prime\prime}$: $f^{\prime\prime}(v)\ne 0$. A word $\tilde v$ that satisfies these conditions simultaneously exists, as $f^{\prime\prime}$ has at most finitely many zeros in $\N_0$ (fixing an arbitrary $\tilde g$ means that only some less significant digits in the base-$p$ expansion of $v$ are fixed); so by taking $v$ whose base-$p$ expansion is sufficiently long (thus making $v$ large enough), we find $v\in\N_0$ such that $f^{\prime\prime}(v)\ne 0$ and the $m$-letter prefix of the word $\tilde v$ is $\tilde g$. In other words, given an arbitrary finite word $\tilde g$ over the alphabet $\F_p$, by properly choosing $r\in\N_0$ we find a positive rational integer $v=g+p^m r$ (where $g\in\{0,1,\ldots,p^m-1\}$ and $\tilde g$ is the base-$p$ expansion of $g$) such that $f^{\prime\prime}(v)\ne 0$. This is possible due to the finiteness of the number of zeros of $f^{\prime\prime}$ in $\N_0$. We see that both $f$ and the so-constructed $v$ satisfy the conditions of Theorem \ref{thm:comp-trans-der}: just take the ball $B$ from the conditions of that theorem to be the whole space $\Z_p$. Now note that the claim stated at the very beginning of the proof of Theorem \ref{thm:comp-trans-der} is just a restatement of (ii) from Definition \ref{def:auto-trans-e}: indeed, under the notation of Definition \ref{def:auto-trans-e} and that from the beginning of the proof of Theorem \ref{thm:comp-trans-der}, the concatenation $w\circ y$ corresponds to $a$, $w$ corresponds to $u$, $w^\prime$ corresponds to $z$, and $w^\prime$ is a $k$-letter suffix of the output word, which is a base-$p$ expansion of $f(a)\md{p^M}$, whereas $M$ is the length of the word $w\circ y$. Up to these correspondences, condition \eqref{eq:no_lac_00} is equivalent to (ii) from Definition \ref{def:auto-trans-e}. Furthermore, as the word $\tilde v$ has an arbitrarily chosen prefix $\tilde g$, and as condition \eqref{eq:no_lac_00} holds for $a=v+p^\ell t$ from the proof of Theorem \ref{thm:comp-trans-der} (as the whole of Theorem \ref{thm:comp-trans-der} holds for $f$ and $v$), (ii) from Definition \ref{def:auto-trans-e} holds for an input word with arbitrarily chosen prefix $\tilde g$, up to all the mentioned correspondences. This means that (iii) from Definition \ref{def:auto-trans-e} also holds, for $x=\tilde g$, in the case under consideration. The latter finally proves Corollary \ref{cor:abs-t-diff}. \end{proof} We remark that Note \ref{note:minus} can be applied to Corollary \ref{cor:abs-t-diff} as well. Note also that only one type of absolutely transitive automata $\mathfrak A(s_0)$ was known earlier: the ones whose automaton functions are polynomials over $\Z$ of degree greater than 1, see \cite[Theorem 11.11]{AnKhr}. The latter assertion follows from Corollary \ref{cor:abs-t-diff}. Yet many other types of automata can be proved to be absolutely transitive as well by using the corollary.
For instance, an automaton whose input and output alphabets are both $\F_2$ and whose automaton function is $f(x)=a+bx+((x^2)\OR c)$, where $a,b,c\in\N_0$, is absolutely transitive: this easily follows from Corollary \ref{cor:abs-t-diff} as $f(\N_0)\subset\N_0$ and $f^{\prime\prime}(x)=2$ for all $x\in\Z_2$. \section{Discussion} \label{sec:Concl} In this paper, by combining tools from $p$-adic analysis, real analysis and automata theory, we have shown that, with respect to the transitivity of their actions on finite words, discrete systems (automata) constitute two classes: the systems whose real plots have Lebesgue measure 1 (equivalently, the completely transitive systems; i.e., systems that, given two arbitrary words $w$, $w^\prime$ of equal length, transform $w$ into $w^\prime$) and the systems whose real plots have Lebesgue measure 0. We have also found conditions for complete transitivity of a system; the conditions yield a method to construct numerous completely transitive automata and the respective automata functions, especially ones combined from standard computer instructions and thus easily programmable. Ergodic, completely transitive automata are preferable in constructions of various pseudorandom number generators aimed at cryptographic and/or simulation usage, e.g., in stream ciphers and quasi-Monte Carlo methods.
\section{Introduction} The nonconforming finite element methods (FEMs) have recently been rehabilitated by the medius analysis, which combines arguments from traditional a~priori and a~posteriori error analysis \cite{Gudi10}. In particular, nonconforming finite element schemes can be equivalent \cite{CCDGNN15,CC_DP_MS12} or superior to conforming finite element schemes \cite{CCKKDPMS15}. The conforming FEMs for fourth-order problems require $C^1$ conformity and lead to cumbersome implementations, while the nonconforming Morley FEM is as simple as quadratic Lagrange finite elements; the reader may consider the finite element program in \cite[Sec. 6.5]{CCDGJH14} with fewer than 30 lines of Matlab for a proof of its simplicity. \medskip The Morley FEM offers other benefits as well: it leads, for instance, to guaranteed lower eigenvalue bounds \cite{CCDG14_eigenvalues}, has quasi-optimal convergence and allows for optimal adaptive mesh-refining \cite{DG_Morley_Eigen}. Given the relevance of nonconforming approximations, the literature on the attractive application of the Morley FEM to semilinear problems with the linear biharmonic operator as the leading term plus quadratic lower-order contributions is surprisingly sparse. There are important model applications of this problem in the stream-function formulation of the incompressible 2D Navier-Stokes equations \cite{BrezziRappazRaviart80,CN86,CN89} and in the von K\'{a}rm\'{a}n equations \cite{CiarletPlates,GMNN_NCFEM} for nonlinear plates in solid mechanics. There is an overall smallness assumption (on the load $f$) in \cite{CN89}, and none of the above references treats a~posteriori error control, although the latter is included in \cite{CCGMNN18} for a dG discretisation. \medskip This paper considers the local approximation of a general regular solution $u$ to $N(u)=0$ for a nonlinear function $N$ without any extra conditions like a small load or an extra-regularity assumption on the exact solution. The invertible Fr\'echet derivative $DN(u)$ of the nonlinear function $N:X\to Y^*$ at a regular solution $u$ is by definition a linear bijection between the Banach spaces $X$ and $Y^*$; this is equivalent to an $\inf$-$\sup$ condition on the associated bilinear form $DN(u;\bullet,\bullet)=a+b: X\times Y\to\bR$ (split into two contributions $a$ and $b$ in Section~2). For a nonconforming finite element discretisation with some finite element space $X_h\times Y_h \not\subset X\times Y$, in the absence of further conditions, the $\inf$-$\sup$ condition for $a+b: X\times Y\to\bR$ does {\em not} imply an $\inf$-$\sup$ condition for the discrete bilinear form $a_h+b_h:X_h\times Y_h\to\bR$. Section~2 studies two general bilinear forms $\widehat{a}$ and $\widehat{b}$ defined on a superspace $\widehat{X}\times \widehat{Y}$ of $X_h\times Y_h$ and $X\times Y$ and introduces four parameters in {\bf (H1)}-{\bf (H4)}, with a sufficient condition for an $\inf$-$\sup$ condition to hold for $a_h+b_h: X_h\times Y_h\to\bR$, which enables a Petrov-Galerkin scheme; this is the first contribution of this paper. \medskip There will be three applications of this abstract framework in this paper.
The first concerns former results in \cite{CCADNNAKP15} on a nonconforming Crouzeix-Raviart FEM for well-posed second-order linear self-adjoint and indefinite elliptic problems: since the framework applies the medius analysis tools, there are no smoothness assumptions, and the feasibility and the best-approximation property for sufficiently small mesh-sizes are newly established for the Crouzeix-Raviart FEM for $L^\infty$ coefficients in this paper (compared to piecewise Lipschitz continuous coefficients in \cite{CCADNNAKP15}). \medskip The second and third applications of the discrete stability framework of Section~2 concern semilinear problems with a trilinear nonlinearity: the stream function formulation of the incompressible 2D Navier-Stokes problem \cite{BrezziRappazRaviart80} in Section~4 and the von K\'{a}rm\'{a}n equations \cite{CiarletPlates,Brezzi} in Section~5 with conforming and Morley FEM. The abstract stability result (a) overcomes the high regularity assumption $u \in H^2_0(\Omega) \cap H^3(\Omega)$ and (b) is not restricted to small data as in \cite{CN86,CN89}. Section~3 studies these semilinear problems with a trilinear nonlinearity in an abstract framework. Given the regular solution $u$ to $N(u)=0$, some condition on the parameters in {\bf (H1)}-{\bf (H4)} leads to the $\inf$-$\sup$ condition of the discrete bilinear form $D\widehat{N}(u; \bullet,\bullet): X_h\times Y_h\to\bR$ of the extended nonlinear function $\widehat{N}:\widehat{X}\to\widehat{Y}^*$. Two further conditions {\bf (H5)}-{\bf (H6)} guarantee in particular that some discrete function $x_h\in X_h\cap \overline{B(u,\delta_6)}$ that solves the discrete problem exists in the closed ball $\overline{B(u,\delta_6)}$ with centre $u$ and radius $\delta_6>0$ in the normed linear space $\widehat{X}$. For sufficiently small $\delta_6$, the $\inf$-$\sup$ condition of the bilinear form $D\widehat{N}(u; \bullet,\bullet)$ at $u$ implies that of $D\widehat{N}(x_h; \bullet,\bullet)$, and this enables the Newton-Kantorovich theorem. It implies the quadratic convergence of the Newton scheme with the initial iterate $x_h$ towards a solution $u_h$ of the discrete equation $N_h(u_h)=0$; in particular, it implies the unique existence of a discrete solution in $X_h\cap \overline{B(u,\epsilon)}$. The other contributions of the paper are conditions on the parameters in {\bf (H1)}-{\bf (H6)} sufficient for {\bf (A)}--{\bf (D)} below for a class of semilinear problems with a trilinear nonlinearity. It turns out that six parameters describe the stability and approximation property in the abstract framework of {\bf (H1)}-{\bf (H6)} in Sections~2 and 3 of this paper. They provide formulas for $\epsilon$ and $\delta$ in {\bf (A)} as well as computability and best-approximation. \\ {\bf (A).} {\em There exist $\epsilon,\delta>0$ such that, for all $\displaystyle\cT\in\bT(\delta)$, there exists a unique discrete solution $u_h\in V_h(\cT) \cap \overline{B(u,\epsilon)}$ to $N_h(u_h)=0$.} \medskip Here and throughout the paper, the discrete space is a piecewise polynomial space (of polynomials of degree at most $k$) based on a shape-regular triangulation $\cT$ of the bounded simply-connected polyhedral Lipschitz domain $\Omega\subset\mathbb{R}^n$ into simplices. The set $\mathbb{T}$ of all those triangulations is generated by newest-vertex bisection from some initial triangulation that matches the domain exactly; it is merely required that the simplices are uniformly shape-regular in $\mathbb{T}$.
Given any $\delta>0$, the subset $\bT(\delta)$ denotes the set of all triangulations in $\mathbb{T}$ with maximal mesh-size $\max h_\cT=h_{\max}\le \delta$. The (conforming or nonconforming) finite element space $V_h\equiv V_h(\cT)\subset P_k(\cT;\bR^m)$ is associated with some $\cT\in\bT$ even if this is suppressed in the notation; the simplified notation also applies to the discrete nonlinear function $N_h\equiv N_h(\cT)$. \medskip \noindent {\bf (B).} {\em There exist $\epsilon,\delta,\rho>0$ such that, for all $\displaystyle\cT\in\bT(\delta)$ and for any initial iterate $u_h^{(0)}\in V_h(\cT)\cap \overline{B(u_h,\rho)}$, the Newton scheme converges $R$-quadratically to the unique discrete solution $u_h\in V_h(\cT) \cap \overline{B(u,\epsilon)}$ to $N_h(u_h)=0$.} \\ {\bf (C).} {\em There exist $\epsilon,\delta,C_{\text{\rm qo}}>0$ such that {\bf (A)} holds and, for all $\cT\in\bT(\delta)$, $$ \| u-u_h\|_{\widehat{V}}\leq C_{\text{\rm qo}}\left(\min_{v_h\in V_h(\cT)}\| u-v_h\|_{\widehat{V}}+{\rm apx}(\cT)\right) $$ with some approximation term ${\rm apx}(\cT)$ to be specified in the particular application.} \\ Locally reliable and efficient a~posteriori error control holds even for an inexact solve (owing to the termination of an iterative solver) in the sense of \\ {\bf (D).} {\em There exist $\epsilon,\delta,C_{\text{\rm rel}},C_{\text{\rm eff}}>0$ such that any approximation $v_h\in V_h(\cT) $ with $\| u-v_h\|_{\widehat{V}}\leq\epsilon$ and $\displaystyle\cT\in\bT(\delta)$ satisfies $$ C_{\text{\rm rel}}^{-1}\| u-v_h\|_{\widehat{V} } \le \| N(v_h)\|_{\widehat V^*} + \min_{v\in V} \| v_h-v \|_{\widehat{V}} \le C_{\text{\rm eff}}\| u-v_h\|_{\widehat{V} }. $$ } \\ It is part of the abstract results in Sections~2 and 3 to identify the reliability and efficiency constants in the above displayed estimate and to prove that the positive constants $\epsilon$, $\delta$, $\rho$, $C_{\rm qo}$, $C_{\text{\rm rel}}$, and $C_{\text{\rm eff}}$ are mesh-independent. The abstract error control in {\bf (D)} is the point of departure in the applications to the stream function formulation of the incompressible 2D Navier-Stokes problem \cite{BrezziRappazRaviart80} in Section~4 and the von K\'{a}rm\'{a}n equations \cite{CiarletPlates,Brezzi} in Section~5. This paper establishes the first reliable estimate of $\| N(v_h)\|_{\widehat V^*} $ and $\min_{v\in V} \| v_h-v \|_{\widehat{V}}$ in terms of an explicit residual-based error estimator for the conforming and Morley FEM and discusses its efficiency. For a simple outline, this presentation is restricted to quadratic problems, in which the weak formulation involves a trilinear form; this suffices to cover two semilinear fourth-order problems that are important in applications. The generalisation to more general and stronger nonlinearities, however, requires appropriate growth conditions in various norms and involves a more technical framework. The presentation matches exactly the nonconforming applications (Crouzeix-Raviart and Morley finite elements); other schemes like the discontinuous Galerkin schemes \cite{CCGMNN18} with their discrete norms and various jump conditions could be included at the cost of additional technicalities. 
Standard notation on Lebesgue and Sobolev spaces applies throughout the paper: $\|\bullet\|$ abbreviates $\|\bullet\|_{L^2(\Omega)}$ with the $L^2$ scalar product $(\bullet,\bullet)_{L^2(\Omega)}$, while the duality brackets $\langle\bullet,\bullet\rangle_{V^*\times V}$ are reserved for a dual pairing in $V^*\times V$; $\|\bullet\|_{\infty}$ abbreviates the norm in $L^\infty(\Omega)$; $H^m(\Omega)$ denotes the Sobolev space of order $m$ with norm $\|\bullet\|_{H^m(\Omega)}$; $H^{-1}(\Omega)$ (resp. $H^{-2}(\Omega)$) is the dual space of $H^1_0(\Omega):=\{v\in H^1(\Omega): v|_{\partial \Omega}=0\}$ (resp. $H^2_0(\Omega):=\{v\in H^2(\Omega): v|_{\partial \Omega}=\frac{\partial v}{\partial \nu}|_{\partial \Omega}=0\}$). With a regular triangulation $\cT$ of the polygonal Lipschitz domain $\Omega\subset\bR^n$ into simplices, associate its piecewise constant mesh-size $h_\cT\in P_0(\cT)$ with $h_\cT|_T:=h_T:=\text{diam}(T) \approx |T|^{1/n}$ for all $T\in\cT$ and its maximal mesh-size $h_{\max}:=\max h_\cT$. Here and throughout, $$ P_k(\cT):=\left\{v\in L^2(\Omega):\,\forall T\in\cT,\, v|_{T}\in P_k(T)\right\} $$ denotes the piecewise polynomials of degree at most $k\in\mathbb{N}_0$ and $\Pi_k$ denotes the $L^2(\Omega)$ (resp. $L^2(\Omega;\bR^n)$ or $L^2(\Omega;\bR^{n\times n})$) orthogonal projection onto $P_k(\cT)$ (resp. $P_k(\cT;\bR^n)$ or $P_k(\cT;\bR^{n \times n})$). Oscillations of degree $k$ read \[ {\rm osc}_k(\bullet,\cT):=\|h_{\cT}^{p}(I-\Pi_k)\bullet\|_{L^2(\Omega)} \] with the square ${\rm osc}_k^2(\bullet,\cT):={\rm osc}_k(\bullet,\cT)^2$; here $p=1$ for the second-order problems in Section~2 and $p=2$ for the fourth-order problems in Sections~4 and 5. The notation $A\lesssim B$ means that there exists a generic $h_{\cT}$-independent constant $C$ such that $A\leq CB$; $A\approx B$ abbreviates $A\lesssim B\lesssim A$. In the sequel, $C_{\text {rel}}$ and $C_{\text {eff}}$ denote generic reliability and efficiency constants. The set of all $n\times n$ real symmetric matrices is $\bS:=\bR^{n\times n}_{sym}$. \section{Well-posedness of the discrete problem}\label{infsup} This section presents sufficient conditions for the stability of nonconforming discretizations of a well-posed linear problem. Subsection 2.1 introduces four parameters {\bf (H1)}-{\bf (H4)} and a condition on them sufficient for a discrete inf-sup condition for the sum $a+b$ of two bilinear forms $a, b: X\times Y\to \bR$ extended to superspaces $\widehat{X}\supset X+X_h$ and $\widehat{Y}\supset Y+Y_h$. Subsection 2.2 discusses a first application to second-order non-selfadjoint and indefinite elliptic problems \cite{CCADNNAKP15}. \subsection{Abstract discrete inf-sup condition}\label{sec:abs_result} Let $\widehat{X}$ (resp. $\widehat{Y}$) be a real Banach space with norm $\|\bullet\|_{\widehat{X}}$ (resp. $\|\bullet\|_{\widehat{Y}}$) and suppose $X$ and $X_h$ (resp. $Y$ and $Y_h$) are two complete linear subspaces of $\widehat{X}$ (resp. $\widehat{Y}$) with inherited norms $\|\bullet\|_{X}:=\left(\|\bullet\|_{\widehat{X}}\right)|_{X}$ and $\|\bullet\|_{X_h}:=\left(\|\bullet\|_{\widehat{X}}\right)|_{X_h}$ (resp. $\|\bullet\|_Y:=\left(\|\bullet\|_{\widehat{Y}}\right)|_{Y}$ and $\|\bullet\|_{Y_h}:=\left(\|\bullet\|_{\widehat{Y}}\right)|_{Y_h}$). Let $\widehat{a},\widehat{b}:\widehat{X}\times \widehat{Y}\to \bR$ be bounded bilinear forms and abbreviate \begin{align}\label{defn_ab} &a:=\widehat{a}|_{X\times Y},\: a_h:=\widehat{a}|_{X_h\times Y_h}\text{ and } b:=\widehat{b}|_{X\times Y},\: b_h:=\widehat{b}|_{X_h\times Y_h}. 
\end{align} Let the bilinear forms $a$ and $b$ be associated to the linear operators $A$ and $B\in L(X;Y^*)$, e.g., $Ax:=a(x,\bullet)\in Y^*$ for all $x\in X$. Suppose that the linear operator $\widehat{A}\in L(\widehat{X}; \widehat{Y}^*)$ (resp. $A+B\in L(X;Y^*)$) associated to the bilinear form $\widehat{a}$ (resp. $a+b$) is invertible and \begin{align} 0<\widehat{\alpha}&:=\inf_{\substack{\widehat{x}\in \widehat{X} \\ \|\widehat{x}\|_{\widehat{X}}=1}} \sup_{\substack{\widehat{y}\in \widehat{Y}\\ \|\widehat{y}\|_{\widehat{Y}}=1}}\widehat{a}(\widehat{x},\widehat{y});\label{dis_Ah_infsup}\\ 0<\beta&:=\inf_{\substack{x\in X\\ \|x\|_{X}=1}} \sup_{\substack{y\in Y\\ \|y\|_{Y}=1}}(a+b)(x,y).\label{cts_infsup} \end{align} Suppose that three linear operators $P\in L(\widehat{Y};Y_h)$, $Q\in L(X_h; X)$, $\cC\in L(Y_h;Y)$ exist and lead to parameters $\delta_1,\delta_2,\delta_3,\Lambda_4\ge 0$ in \begin{itemize} \item[{\bf (H1)}] $\displaystyle\delta_1:=\sup_{\substack{x_h\in X_h\\ \|x_h\|_{X_h}=1}} \sup_{\substack{y_h\in Y_h\\ \|y_h\|_{Y_h}=1}}\widehat{a}\left(A^{-1}\left(\widehat{b}(x_h,\bullet)|_{Y}\right),y_h-\cC y_h\right);$ \item[{\bf (H2)}] $\displaystyle\delta_2:=\sup_{\substack{x_h\in X_h\\ \|x_h\|_{X_h}=1}} \sup_{\substack{\widehat{y}\in \widehat{Y}\\ \|\widehat{y}\|_{\widehat{Y}}=1}}\widehat{a}\left(x_h+A^{-1}\left(\widehat{b}(x_h,\bullet)|_{Y}\right),\widehat{y}-P\widehat{y}\right);$ \item[{\bf (H3)}] $\displaystyle\delta_3:=\sup_{\substack{x_h\in X_h\\ \|x_h\|_{X_h}=1}}\left\|\widehat{b}\big(x_h,(1-\cC)\bullet\big)\right\|_{ Y_h^*};$ \item[{\bf (H4)}] $\exists\Lambda_4<\infty\,\forall x_h\in X_h\;\; \|(1-Q)x_h\|_{\widehat{X}}\leq \Lambda_4\;{\rm dist}_{\|\bullet\|_{\widehat{X}}}\left(x_h,X\right).$ \end{itemize} Abbreviate the bound $\| \widehat{b}\|_{\widehat{X}\times Y}$ of the bilinear form $ \widehat{b}|_{\widehat{X}\times Y}$ simply by $\| \widehat{b}\|$ and set $\|a\|:=\|A\|_{L(X;Y^*)}$ as well as $\|A^{-1}\|:=\|A^{-1}\|_{L(Y^*;X)}$ --- whenever there is no risk of confusion (e.g. with the $L^2$ norm $\|\bullet\|$ of a Lebesgue function). If {\bf (H4)} holds with $0\leq\Lambda_4<\infty$, set \begin{equation}\label{AsmpCondn} \widehat{\beta}:=\frac{\beta}{\Lambda_4\beta+\|a\|\left(1+\Lambda_4\left(1+\|A^{-1}\|\|\widehat{b}\|\right)\right)}>0. \end{equation} In the applications discussed in this paper, $\delta_1+\delta_2+\delta_3$ from {\bf (H1)}-{\bf (H3)} will be smaller than $\widehat{\alpha}\widehat{\beta}$ so that the subsequent result provides a discrete inf-sup condition with $\beta_h>0$. \begin{thm}[discrete inf-sup condition]\label{dis_inf_sup_thm} Under the aforementioned notation, \eqref{dis_Ah_infsup}-\eqref{AsmpCondn} and {\bf (H1)}-{\bf (H4)} imply \begin{equation}\label{eqdis_inf_sup_defbetah} \widehat{\alpha}\widehat{\beta}-\left(\delta_1+\delta_2+\delta_3\right)\leq \beta_h:=\inf_{\substack{x_h\in X_h\\ \| x_h\|_{X_h}=1}}\sup_{\substack{y_h\in Y_h\\ \| y_h\|_{Y_h}=1}} (a_h+b_h)(x_h,y_h). \end{equation} \end{thm} \begin{proof} Given any $x_h\in X_h$ with $\| x_h\|_{X_h}=1$, define $$x:=Qx_h,\: \xi:=A^{-1}\left(\widehat{b}(x_h,\bullet)|_Y\right)\in X,\text{ and }\eta:=A^{-1}\left(b(x,\bullet)|_Y\right)\in X .$$ \noindent The inf-sup condition \eqref{cts_infsup} and $A\eta=Bx$ lead to \begin{equation*} \beta\|x\|_X\leq \|Ax+Bx\|_{Y^*}=\|A(x+\eta)\|_{Y^*}\leq \|a\|\|x+\eta\|_{X}. 
\end{equation*} This and triangle inequalities imply \begin{equation}\label{xbound} {\beta}/{\|a\|}\,\|x\|_X\leq\|x+\eta\|_{X}\leq \|x-x_h\|_{\widehat{X}}+\|x_h+\xi\|_{\widehat{X}}+\|\eta-\xi\|_X. \end{equation} The definition of $\xi$ and $\eta$, the boundedness of the operator $A^{-1}$ and of the bilinear form $\widehat{b}|_{\widehat{X}\times Y}$ show \begin{equation}\label{xi_eta_bdd} \|\xi-\eta\|_X=\|A^{-1}\left(\widehat{b}(x-x_h,\bullet)|_Y\right)\|_{X}\leq \|A^{-1}\|\|\widehat{b}\|\|x-x_h\|_{\widehat{X}}. \end{equation} The combination of \eqref{xbound}-\eqref{xi_eta_bdd} reads \begin{align}\label{xfirst_bdd} {\beta}/{\|a\|}\,\|x\|_{X}\leq \|x_h+\xi\|_{\widehat{X}}+\left(1+\|A^{-1}\|\|\widehat{b}\|\right)\|x-x_h\|_{\widehat{X}}. \end{align} Since {\bf (H4)} implies \begin{equation}\label{A4app} \| x-x_h\|_{\widehat{X}}\leq\Lambda_4\| x_h+\xi\|_{\widehat{X}}, \end{equation} the estimate \eqref{xfirst_bdd} results in \begin{align} \|x\|_X\leq {\|a\|}/{\beta}\left(1+\Lambda_4\left(1+\|A^{-1}\|\|\widehat{b}\|\right)\right)\|x_h+\xi\|_{\widehat{X}}\label{xnew_bdd}. \end{align} The triangle inequality and \eqref{A4app}-\eqref{xnew_bdd} lead to \begin{align*} 1=\|x_h\|_{X_h}&\leq\|x-x_h\|_{\widehat{X}}+\|x\|_X\\ &\leq \left(\Lambda_4+{\|a\|}/{\beta}\left(1+\Lambda_4\left(1+\|A^{-1}\|\|\widehat{b}\|\right)\right)\right)\|x_h+\xi\|_{\widehat{X}}. \end{align*} With the definition of $\widehat{\beta}$ in \eqref{AsmpCondn}, this reads \begin{equation}\label{beta_bdd} \widehat{\beta}\leq \|x_h+\xi\|_{\widehat{X}}. \end{equation} For given $x_h+\xi\in\widehat{X}$ and for any $0<\epsilon<\widehat{\alpha}$, the inf-sup condition \eqref{dis_Ah_infsup} implies the existence of some $\widehat{y}\in \widehat{Y}$ with $\|\widehat{y}\|_{\widehat{Y}}=1$ and \begin{equation}\label{alpha_bdd} (\widehat{\alpha}-\epsilon)\|x_h+\xi\|_{\widehat{X}}\leq \widehat{a}(x_h+\xi,\widehat{y})=\widehat{a}(x_h+\xi,\widehat{y}-P\widehat{y}) +\widehat{a}(x_h+\xi,P\widehat{y}). \end{equation} Since $\widehat{a}(\xi,\cC y_h)=\widehat{b}(x_h,\cC y_h)$ for $y_h:=P\widehat{y}$, the latter term is equal to \begin{equation*} \widehat{a}(x_h+\xi,y_h)=a_h(x_h,y_h)+b_h(x_h,y_h)+\widehat{a}(\xi,y_h-\cC y_h)+\widehat{b}(x_h,\cC y_h-y_h). \end{equation*} Let $\gamma_h:=a_h(x_h,y_h)+b_h(x_h,y_h)$; then {\bf (H1)}-{\bf (H3)} and \eqref{alpha_bdd} lead to \begin{align}\label{bdd_deltas} \widehat{a}(x_h+\xi,\widehat{y})\leq \gamma_h+\delta_1+\delta_2+\delta_3. \end{align} The combination of \eqref{beta_bdd}-\eqref{bdd_deltas} with $\epsilon\searrow0$ in the end results in \begin{equation*} \widehat{\alpha}\widehat{\beta}-(\delta_1+\delta_2+\delta_3)\leq \gamma_h\leq \|a_h(x_h,\bullet)+b_h(x_h,\bullet)\|_{Y_h^*}. \end{equation*} The last estimate holds for an arbitrary $x_h$ with $\|x_h\|_{X_h}=1$ and so proves the discrete inf-sup condition $\widehat{\alpha}\widehat{\beta}-\left(\delta_1+\delta_2+\delta_3\right)\leq \beta_h$. \end{proof} It is well known that $\beta_h>0$ in \eqref{eqdis_inf_sup_defbetah} implies the best-approximation property of the Petrov-Galerkin scheme \cite{MR3097958,MR2373954,Braess,DiPetroErn12} in the following sense. \begin{cor}[best-approximation]\label{rembestapproximation} Suppose $(\widehat{X},\widehat{a})$ is a Hilbert space and $u\in X$, $u_h\in X_h$, and $\widehat{F}\in \widehat{Y}^*$ satisfy $(a+b)(u,\bullet)=F:=\widehat{F}|_Y\in Y^*$ and $(a_h+b_h)(u_h,\bullet)=F_h:=\widehat{F}|_{Y_h}\in Y_h^*$. 
Then \[ \beta_h \| u- u_h \|_{\widehat{X}} \le M \min_{x_h\in X_h} \| u - x_h \|_{\widehat{X}} + \sup_{\substack{y_h\in Y_h\\ \| y_h\|_{Y_h}=1}} \left(F_h(y_h)- (\widehat{a}+\widehat{b})( u ,y_h)\right) \] with the bound $M:= \| \widehat{a}+\widehat{b} \|_{ \widehat{X} \times Y_h } \le \| \widehat{a}+\widehat{b} \|$ of the bilinear form $ (\widehat{a}+\widehat{b})|_{\widehat{X} \times Y_h}$. \end{cor} The proof of quasi-optimal convergence for a stable discretisation is nowadays standard in finite element textbooks in the context of the Strang-Fix lemmas. Since this particular form is seemingly not explicitly available, the proof is outlined in an appendix. \subsection{Second-order linear non-selfadjoint \\ and indefinite elliptic problems}\label{sec:SelfAdjoint} This subsection applies {\bf (H1)}-{\bf (H4)} to second-order linear non-selfadjoint and indefinite elliptic problems and establishes a~priori estimates for conforming and nonconforming FEMs under more general conditions on the smoothness of the coefficients of the elliptic operator and for $\Omega \subset {\mathbb R}^n$ vis-\`{a}-vis \cite{CCADNNAKP15}. \subsubsection{Mathematical model}\label{subsectMathematicalmodelinSection2} The strong form of a second-order problem with $L^\infty$ coefficients ${\bf A}$, ${\mathbf b}$, and $\gamma$ reads: Given $f\in L^2(\Omega)$, seek $u\in V:=H^1_0(\Omega)$ such that \begin{eqnarray}\label{eq1} \mathcal{L}u := -\nabla \cdot ({\bf A}\nabla u+u {\mathbf b})+ \gamma\, u=f. \end{eqnarray} The coefficients ${\bf A}\in L^\infty(\Omega;\bS),\; {\mathbf b}\in L^\infty(\Omega;\bR^n),\gamma\in L^\infty(\Omega)$ satisfy $0<\underline{\lambda}\leq \lambda_1({\bf A}(x))\leq \cdots \leq \lambda_n({\bf A}(x)) \leq \overline{\lambda}<\infty$ for the eigenvalues $\lambda_j({\bf A}(x))$ of the SPD matrix ${\bf A}(x)$ for a.e. $x\in \Omega$. \noindent For $u,v\in V,$ the expression \begin{equation}\label{defn_a} a(u,v):=\int_{\Omega}({\bf A}\nabla u)\cdot\nabla v\dx \end{equation} defines a scalar product on $V$ (and $V$ is endowed with this scalar product in the sequel) equivalent to the standard scalar product in the sense that the $H^1$-seminorm $|\bullet|_{H^1(\Omega)}:=\|\nabla\bullet\|$ in $V$ satisfies \begin{equation}\label{lambda_bdd} \underline{\lambda}^{1/2}| \bullet|_{H^1(\Omega)} \leq \|\bullet\|_{a} :=a(\bullet,\bullet)^{1/2}\leq \overline{\lambda}^{1/2}| \bullet|_{H^1(\Omega)}. \end{equation} Given the bilinear form $b:V\times V\to \bR$ with \begin{align}\label{defn_b} b(u,v)&:=\int_{\Omega}\left(u {\mathbf b}\cdot\nabla v+\gamma uv\right)\dx \; {\rm for \; all \; } u,v \in V \end{align} and the linear form $F\in L^2(\Omega)^*\subset H^{-1}(\Omega)=: V^*$ with $F(v):=\int_{\Omega}fv\dx$ for all $v\in V$, the weak formulation of \eqref{eq1} seeks the solution $u\in V$ to \begin{align}\label{WeakFrmSec} (a+b)(u,v):=a(u,v)+b(u,v)=F(v)\quad\text{ for all } v\in V. \end{align} In the absence of further conditions on the smoothness of the coefficients, any higher regularity of the weak solution $u\in H^1_0(\Omega)$ to \eqref{eq1} in the form $u\in H^s(\Omega)$ for any $s>1$ is not guaranteed even for $f\in C^\infty(\Omega)$ \cite[p. 20]{SchatzWang96}. \subsubsection{Triangulations} Throughout this paper, $\bT$ is a set of shape-regular triangulations of the polyhedral bounded Lipschitz domain $\Omega\subset \bR^n$ into simplices. Given an initial triangulation $\cT_0$ of $\Omega$, newest-vertex bisection defines the local mesh-refining that leads to the set of shape-regular triangulations $\cT\in\bT$. 
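The following lines sketch one step of this newest-vertex bisection in two dimensions. This is merely an illustration under one common ordering convention (an assumption of this sketch: a triangle stores its refinement edge as the first two vertices and the last vertex acts as the newest vertex) and not the mesh-refining routine of any of the quoted references; the closure step that removes hanging nodes by further bisections is omitted.

\begin{verbatim}
# Minimal sketch of one newest-vertex bisection (NVB) step in 2D.
# Convention (an assumption of this sketch): a triangle (a, b, c) has
# refinement edge (a, b); its last vertex c is the newest vertex.

def bisect(tri, coords, midpoints):
    a, b, c = tri
    edge = (min(a, b), max(a, b))
    if edge not in midpoints:        # create the edge midpoint only once
        coords.append([(coords[a][j] + coords[b][j]) / 2.0 for j in (0, 1)])
        midpoints[edge] = len(coords) - 1
    m = midpoints[edge]
    # both children have m as their newest vertex; the refinement edge
    # of each child is the edge opposite m
    return (c, a, m), (b, c, m)

coords = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # reference triangle
print(bisect((0, 1, 2), coords, {}))             # two children
\end{verbatim}

Successive bisections of this kind produce only finitely many similarity classes of triangles, which explains the uniform shape-regularity of all $\cT\in\bT$.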
Shape-regularity means that there exists a universal constant $\kappa>0$ such that the maximal diameter ${\rm diam}(B)$ of a ball $B\subset K$ satisfies $\kappa\, h_K\leq {\rm diam}(B)\leq {\rm diam}(K)=:h_K$ for any $K\in\cT \in \bT$. Given $\cT\in\bT$, let $h_{\cT}\in P_0(\cT)$ be piecewise constant with $h_{\cT}|_K=h_K ={\rm diam} (K)$ for $K\in\cT$ and let $h_{\max} :=h_{\max}(\cT):=\max h_{\cT}$; recall $\bT(\delta):=\left\{\cT\in\bT : h_{\max}(\cT)\le \delta\right\}$ for any $\delta >0$. The set of all sides of the shape-regular triangulation $\cT$ of $\Omega$ into simplices is denoted by $\cE$. The sets of all interior vertices (resp. boundary vertices) and interior sides (resp. boundary sides) of $\cT$ are denoted by $\cN (\Omega)$ (resp. $\cN(\partial\Omega)$) and $\cE (\Omega)$ (resp. $\cE (\partial\Omega)$). \subsubsection{Conforming FEM} Let $P_1(\cT)$ denote the piecewise affine functions in $L^\infty(\Omega)$ with respect to the triangulation $\cT$ so that the associated $P_1$ conforming finite element function spaces without and with (homogeneous) boundary conditions read \begin{align*} S^1(\cT):=P_1(\cT)\cap C(\bar\Omega)\text{ and } S^1_0(\cT):=\left\{v_C\in S^1(\cT):v_C=0 \text{ on }\partial\Omega\right\}. \end{align*} The interior nodes $\cN(\Omega)$ label the nodal basis functions $\varphi_z$ with patch $\omega_z=\{\varphi_z>0\}={\rm int}( {\rm supp}\,\varphi_z)$ around $z\in\cN(\Omega)$. Given some finite-dimensional finite element space $V_h$ with $S_0^1(\cT)\subseteq V_h\subset V\equiv H^1_0(\Omega)$, the discrete formulation of \eqref{WeakFrmSec} seeks $u_h\in V_h$ with \begin{equation}\label{weakdisSec} a(u_h,v_h)+b(u_h,v_h)=F(v_h)\text{ for all }v_h\in V_h . \end{equation} The arguments of \cite{SchatzWang96} are rephrased in the following lemma (proven in the appendix) that allows the application of Theorem~\ref{dis_inf_sup_thm} in the subsequent theorem. \begin{lem}\label{SpecLem} For any $\epsilon>0$ there exists some $\delta>0$ such that the solution $z\in V\equiv H^1_0(\Omega)$ to $a(z,\bullet)=g$ for any $g\in \lt\subset H^{-1}(\Omega)$ satisfies, for all $\cT\in\bT(\delta)$, that \[ \min_{z_C\in S^1_0(\cT)} \| z-z_C\|_a + \min_{Q_0\in P_0(\cT;\bR^n)}\| {\bf A}\nabla z - Q_0 \| \leq\epsilon \|g\|.\] \end{lem} \begin{thm} Adopt the aforementioned assumptions on $a$ and $b$ in \eqref{defn_a}-\eqref{defn_b} and suppose that \eqref{eq1} is well-posed in the sense that it allows for a unique solution $u$ for all right-hand sides $f\in L^2(\Omega)$. Then \begin{equation*} 0<\beta:=\inf_{\substack{x\in V\\ \| x\|_{a}=1}} \sup_{\substack{y\in V \\ \| y\|_{a}=1}}(a+b)(x,y) \end{equation*} and for any positive $\beta_0<\beta$, there exists $\delta>0$ such that \begin{equation*} \beta_0\leq\beta_h:=\inf_{\substack{x_h\in V_h\\ \| x_h\|_{a}=1}} \sup_{\substack{y_h\in V_h \\ \| y_h\|_{a}=1}}(a+b)(x_h,y_h) \end{equation*} holds for all $S_0^1(\cT)\subset V_h:=X_h=Y_h\subset V$ with respect to $\cT\in\bT(\delta)$. Moreover, the solution $u$ to \eqref{eq1} and $u_h$ to \eqref{weakdisSec} satisfy \begin{align}\label{Kato_arg} \| u-u_h\| _a\leq \frac{\|a+b\|}{\beta_0}\min_{v_h\in V_h}\| u-v_h\|_a. \end{align} \end{thm} \begin{proof} The invertibility of a linear operator from one Banach space into the dual of another is equivalent to an $\inf$-$\sup$ condition \cite{MR3097958,MR2373954,Braess,DiPetroErn12}; in particular, the well-posedness assumption of the theorem implies $\beta>0$. 
The remaining assertions follow from Theorem~\ref{dis_inf_sup_thm} with $\widehat{a}=a$, $\widehat{b}=b$, $S_0^1(\cT)\subset V_h=X_h=Y_h \subset X=Y=V=H^1_0(\Omega) $ endowed with the norm $\| \bullet \|_a$. Then $\widehat{\alpha}=1=\| \widehat{a}\| $ and $\beta$ is the constant in \eqref{cts_infsup}. To conclude the discrete inf-sup condition, it is sufficient to verify that the parameters involved in {\bf (H1)}-{\bf (H4)} can be chosen such that the discrete inf-sup constant in Theorem~\ref{dis_inf_sup_thm} is positive. Moreover, the discrete inf-sup constants of $a+b$ are equal to those of the dual problem $a+ b^*$ with $b^*(u,v)=b(v,u)$. Therefore, Theorem~\ref{dis_inf_sup_thm} is applied to $a$ and $b^*$ (rather than $a$ and $b$). Let $Q$ and $\cC$ be the identity, while $P\in L(V;V_h)$ denotes the Galerkin projection onto $V_h$ with respect to $a$, i.e. $a(v-Pv,\bullet)=0$ in $V_h$ for all $v\in V$. Then the parameters in {\bf (H1)}, {\bf (H3)}, and {\bf (H4)} are $\delta_1=\delta_3=\Lambda_4=0$. The choice of the parameter $\delta_2$ in {\bf (H2)} concerns $v\in V$ and $u_h\in V_h$ with $\| v\|_a=1=\| u_h\|_a$ and the solution $z:=A^{-1}(b^*(u_h,\bullet))\in V$ to $a(z,\bullet)=b(\bullet, u_h)$. Notice that $g:={\bf b}\cdot\nabla u_h+\gamma u_h\in\lt$ satisfies \[ b(\varphi, u_h)=\int_{\Omega}\left({\bf b}\cdot\nabla u_h+\gamma u_h\right)\varphi\dx=\integ g\varphi\dx \quad {\rm for \; all \;} \varphi \in V \] and (with the Friedrichs constant $C_{F} $ in $\|\bullet\|\leq C_F\|\nabla\bullet\| \le \underline{\lambda}^{-1/2}C_F\| \bullet\|_a$ in $V$) \begin{align} \|g\|\leq \| u_h\|_a \,\|{\bf A}^{-1/2} {\bf b}\|_{\infty}+\|\gamma\|_{\infty}\|u_h\| \leq (\|{\bf b}\|_{\infty}+C_F\|\gamma\|_{\infty})\underline{\lambda}^{-1/2} =: C .\label{Frid} \end{align} The Galerkin orthogonality with $P$, the definition of $z$, and a Cauchy inequality with $\| v\|_a=1$ in the end show \[ a\left(u_h+A^{-1}(b^*(u_h,\bullet)),v- P v\right)=a(z,v- P v)=a(z- P z,v) \le \|z-Pz\|_a. \] Given any $\epsilon>0$, Lemma~\ref{SpecLem} leads to $\delta >0$ such that for all $\cT\in\bT(\delta)$ there exists some $z_C \in S_0^1(\cT)$ with \[ \|z-Pz\|_a\le \| z-z_C\|_a \leq \epsilon \|g\|\le \epsilon C \] with \eqref{Frid} in the last step. The combination of the previous inequalities proves {\bf (H2)} with $\delta_2 := \epsilon C$. Theorem~\ref{dis_inf_sup_thm} applies with $\beta=\widehat{\beta}$ and $\beta_h\ge \beta- \epsilon C$. This proves the assertion $\beta_h\ge\beta_0$ for sufficiently small $\epsilon$ and $\delta$. The quasi-optimal convergence \eqref{Kato_arg} follows from Corollary~\ref{rembestapproximation}; the second term therein vanishes for the conforming discretisation. \end{proof} \begin{rem} The proof merely requires the discrete space $V_h$ to satisfy $S_0^1(\cT)\subset V_h\subset H^1_0(\Omega)$ and so allows for conforming $hp$ finite element spaces. The condition $\cT\in\bT(\delta)$ allows for local mesh-refining as long as the maximal mesh-size $h_{\max}$ is sufficiently small. \end{rem} \subsubsection{Nonconforming FEM} This subsection establishes the {\it first} best-approximation-type a~priori error estimate for the lowest-order nonconforming FEM in any space dimension $\ge 2$ under the assumptions on the coefficients of Subsubsection~\ref{subsectMathematicalmodelinSection2} as an application of Theorem~\ref{dis_inf_sup_thm}. This generalises \cite[Thm 3.3]{CCADNNAKP15} from piecewise Lipschitz continuous to $L^\infty$ coefficients. 
The nonconforming Crouzeix-Raviart (CR) finite element spaces read \begin{eqnarray*} && CR^1(\mathcal T): =\{v \in P_1(\mathcal{T}): \forall E \in \mathcal E,~ v~ \text{ is continuous at mid($E$) }\},\\ && CR^1_0(\mathcal T):=\{v \in CR^1(\mathcal T):v(\text{mid} (E)) =0 ~~\text{for all} ~E \in \mathcal E ({\partial\Omega}) \}. \end{eqnarray*} Here $\text{mid} (E)$ denotes the midpoint (barycentre) of a side $E$, i.e., the arithmetic mean of its vertices. The CR finite element spaces give rise to the bilinear forms $a_{\text{pw}},b_{\text{pw}}:CR_0^1(\cT)\times CR_0^1(\cT)\to \bR$ defined, for all $u_{\text{CR}},v_{\text{CR}}\in CR_0^1(\cT)$, by \begin{align} &a_{\text{pw}}(u_{\text{CR}},v_{\text{CR}}):=\sum_{T\in\cT}\int_T(\mathbf A \nabla u_{\text{CR}})\cdot\nabla v_{\text{CR}}\dx,\\ &b_{\text{pw}}(u_{\text{CR}},v_{\text{CR}}):=\sum_{T\in\cT}\int_T\left(u_{\text{CR}} {\mathbf b}\cdot\nabla v_{\text{CR}}+\gamma u_{\text{CR}}v_{\text{CR}}\right)\dx. \end{align} The nonconforming FEM seeks the discrete solution $u_{\text{CR}}\in CR_0^1(\cT)$ to \begin{align}\label{dis_weak_Se} a_{\text{pw}}(u_{\text{CR}},v_{\text{CR}})+b_{\text{pw}}(u_{\text{CR}},v_{\text{CR}})=F(v_{\text{CR}})\fl v_{\text{CR}}\in CR_0^1(\cT). \end{align} Notice that $\|\nabla_{\text{pw}} \bullet \|$ with the piecewise action $\nabla_{\text{pw}}$ of the gradient $\nabla $ is a norm on $CR_0^1(\cT)$ and so is $\trinl \bullet\trinr_{\text{pw}}:=\| {\bf A}^{1/2} \nabla_{\text{pw}}\bullet\|$. The subsequent theorem implies the unique solvability and boundedness of discrete solutions for sufficiently fine meshes. \begin{thm}\label{dis_inf_sup_Se} Suppose that $\cL$ is a bijection, so that $\cL^{-1}$ is bounded and \eqref{cts_infsup} holds with $\beta=1/\|\cL^{-1}\|>0$. Then there exist positive $\delta$ and $\beta_0$ such that any $\cT\in\bT(\delta)$ satisfies \begin{align}\label{eqnewlabelccforinfsupbeta0} \beta_{0}\leq \beta_h:=\inf_{\substack{w_{\text{CR}}\in CR_0^1(\cT)\\ \trinl w_{\text{CR}}\trinr_{\text{pw}}=1}} \sup_{\substack{ v_{\text{CR}}\in CR_0^1(\cT)\\ \trinl v_{\text{CR}}\trinr_{\text{pw}}=1}}(a_{\text{pw}}+b_{\text{pw}})(w_{\text{CR}},v_{\text{CR}}). \end{align} \end{thm} \begin{proof} Let $\displaystyle H^1(\cT):=\left\{v\in L^2(\Omega)\,|\,\forall\,T\in\cT,\, v|_{T}\in H^1(T)\right\}$ and endow the vector space $$\widehat{V}:=\widehat{X}:=\widehat{Y}:=\left\{\widehat{v}\in H^1(\cT)\,|\,\forall\,E\in\cE, \int_E[\widehat{v}]_E\ds=0\right\}\supset V+CR_0^1(\cT)$$ with the norm $\trinl\bullet\trinr_{\text{pw}}:=\|{\bf A}^{1/2}\nabla_{\text{pw}}\bullet\|$. Here and throughout the paper, the jump of $\widehat{v}\in \widehat{V}$ across any interior face $E=\partial K_+\cap \partial K_-\in \cE(\Omega)$ shared by two simplices $K_+$ and $K_-$ reads \begin{align*} [\widehat{v}]_E:= \widehat{v}|_{K_+}-\widehat{v}|_{K_-}\quad \text{on } E=\partial K_+\cap \partial K_- \end{align*} (then $\omega_E:=\text{int}(K_+\cup K_-)$), while $[\widehat{v}]_E:=\widehat{v}|_E$ along any boundary face $E\in \cE(\partial\Omega)$ according to the homogeneous boundary condition on $\partial\Omega$ (and then $\omega_E:=\text{int}(K)$ for $K\in\cT$ with $E\in \cE(K)$). 
\noindent The boundedness of $\widehat{a}+\widehat{b}$ follows from a piecewise Friedrichs inequality \[ \| \hat v \| \le C_\text{pwF} \left( \sum _{E\in\cE} | \omega_E|^{-1} \, \Big|\int_E [\hat v]\ds \Big|^2 + \| \nabla_{\text{pw}} \hat v \|^2 \right)^{1/2} \] known for all $\hat v\in \widehat{V}$ with the volume $|\omega_E|\approx h_E^n$ of the side-patch $\omega_E$. For $\hat v\in \widehat{V}$ and $E\in\cE$, the integral $\int_E [\hat v]\ds$ vanishes; hence the piecewise Friedrichs inequality reduces to $\| \hat v \| \le C_\text{pwF} \| \nabla_{\text{pw}} \hat v \|$. This enables a proof that $(\widehat{V},\widehat{a})$ is a Hilbert space and that $\widehat{b}$ is a bounded bilinear form with respect to those norms. Consequently, $\widehat{\alpha}=1=\|\widehat{a}\|$ and \eqref{cts_infsup} holds with $\beta=1/\|\cL^{-1}\|>0$. Define the nonconforming interpolation operator $I_{\text{CR}}\in L(\widehat{V};CR_0^1(\cT))$ by \begin{align}\label{integ_mean} I_{\text{CR}}v &:= \sum_{E \in {\mathcal E}} \left(\fint_E v~ds\right) \psi_E \quad\text{for all }v \in \widehat{V} \end{align} with the side-oriented basis functions $\psi_E$ of $CR_0^1(\cT)$ with $\psi_E({\rm mid}(F))=\delta_{EF}$, the Kronecker symbol, for all sides $E,F \in {\mathcal E}.$ For any $v_{\text{CR}}\in CR_0^1(\cT)$, the conforming companion operator $Q:=\cC:= J \in L\left(CR_0^1(\cT);V\right) $ with $J v_{\text{CR}}\in P_4(\cT)\cap C^0(\bar\Omega)$ from \cite[p. 1065]{CCDGMS15} satisfies (a) that $w:=v_{\text{CR}}-J v_{\text{CR}}\perp P_1(\cT)$ is $L^2$ orthogonal to the space $P_1(\cT)$ of piecewise first-order polynomials, (b) the integral mean property of the gradient \begin{equation}\label{integ_mean_ortho} \Pi_0\left(\nabla_{\text{pw}}(v_{\text{CR}}-J v_{\text{CR}})\right)=0, \end{equation} and (c) the approximation and stability property (with a universal constant $\Lambda_{\text{CR}}$) \begin{equation}\label{J4_enrich} \|h_{\cT}^{-1}(v_{\text{CR}}-J v_{\text{CR}})\| +\| \nabla_{\text{pw}}( v_{\text{CR}}-J v_{\text{CR}}) \| \le \Lambda_{\text{CR}} \min_{v\in H^1_0(\Omega)}\|\nabla_{\text{pw}} ( v_{\text{CR}}-v) \|. \end{equation} (The proofs in \cite{CCDGMS15} are given in 2D but generalise to any space dimension.) Note that $J $ is a right inverse of $I_{\text{CR}}$ in the sense that $I_{\text{CR}}J v_{\text{CR}}=v_{\text{CR}}$ holds for all $v_{\text{CR}}\in CR_0^1(\cT)$. The inequality \eqref{J4_enrich} implies {\bf (H4)} with $\Lambda_4=(\overline{\lambda}/\underline{\lambda})^{1/2}\Lambda_{\text{CR}}$. The bilinear forms $\widehat{a}\equiv a_{\text{pw}}, \widehat{b}\equiv b_{\text{pw}}:\widehat{V}\times \widehat{V}\to\bR$ read, for all $\widehat{u},\widehat{v}\in\widehat{V}$, as \begin{equation} \widehat{a}(\widehat{u},\widehat{v}):=\sum_{T\in\cT}\int_T(\mathbf A \nabla \widehat{u})\cdot\nabla \widehat{v}\dx\quad \text{and} \quad \widehat{b}(\widehat{u},\widehat{v}):=\sum_{T\in\cT}\int_T(\widehat{u} {\mathbf b}\cdot\nabla\widehat{v} +\gamma \widehat{u}\widehat{v})\dx. \end{equation} As in the stability proof of the conforming FEM, Theorem~\ref{dis_inf_sup_thm} applies to $\widehat{a}$ and $\widehat{b}^*$ (rather than to $\widehat{a}$ and $\widehat{b}$). The proof of {\bf (H1)} concerns $u_{\text{CR}},v_{\text{CR}}\in CR_0^1(\cT)$ with $\trinl u_{\text{CR}}\trinr_{\text{pw}}=1=\trinl v_{\text{CR}}\trinr_{\text{pw}}$ and the solution $z=A^{-1}(\widehat{b}(\bullet, u_{\text{CR}}) |_{V})\in V$ to $a(z,\bullet)= \widehat{b}(\bullet, u_{\text{CR}})$ in $V$. 
The right-hand side is the $L^2$ scalar product of the test function in $V$ with $g:={\bf b}\cdot\nabla_{\text{pw}} u_{\text{CR}}+\gamma u_{\text{CR}} \in\lt$, which the discrete Friedrichs inequality $\|\bullet\| \leq C_{dF}\|\nabla_{\text{pw}}\bullet\|$ in $CR_0^1(\cT)$ \cite[p. 301]{Brenner} bounds by $\|g\|\leq (\|{\bf b}\|_{\infty} +C_{dF} \|{\gamma}\|_{\infty})\underline{\lambda}^{-1/2}=: C_0$. Since $\nabla_{\text{pw}} w\perp P_0(\cT;\bR^n)$ in $L^2(\Omega;\bR^n)$ for $w:= v_{\text{CR}}-Jv_{\text{CR}}$, Lemma~\ref{SpecLem} applies for any $\epsilon>0$ and leads to $\delta>0$ so that, for $\cT\in\bT(\delta)$ with the $L^2$ projection $\Pi_0$, \begin{align} &\widehat{a}\left(A^{-1}\left(\widehat{b}^*(u_{\text{CR}},\bullet)|_V\right),v_{\text{CR}}-J v_{\text{CR}}\right)= a_{\text{pw}}(z,w) =\int_{\Omega} ((1-\Pi_0){\bf A}\nabla z )\cdot\nabla_{\text{pw}}w \dx \notag\\ &\quad\leq \| (1-\Pi_0){\bf A}\nabla z \|\,\| \nabla_{\text{pw}}w \| \le \epsilon \|g \| \Lambda_{\text{CR}} \underline{\lambda}^{-1/2} \le C_1\epsilon =:\delta_1 \label{H1assp} \end{align} with \eqref{J4_enrich} for $v=0$ and $\| \nabla_{\text{pw}} v_{\text{CR}}\|\le \underline{\lambda}^{-1/2} \trinl v_{\text{CR}} \trinr_{\text{pw}} =\underline{\lambda}^{-1/2}$ in the end for $C_1:= C_0 \Lambda_{\text{CR}} \underline{\lambda}^{-1/2}$. This concludes the proof of {\bf (H1)}. The proof of {\bf (H2)} concerns $u_{\text{CR}}\in CR_0^1(\cT)$, $\widehat{v}\in\widehat{V}$ with $\trinl u_{\text{CR}}\trinr_{\text{pw}}=1 =\trinl \widehat{v} \trinr_{\text{pw}}$, and the solution $z\in V$ to $a(z,\bullet)= \widehat{b}(\bullet, u_{\text{CR}})$ in $V$ as before. The operator $P:\widehat{V}\to CR^1_0(\cT)$, however, is not $I_{\text{CR}}$, because the oscillating coefficients ${\bf A}$ prevent the immediate cancellation property $\widehat{a}(u_{\text{CR}},\widehat{v}-P\widehat{v})=0$ for that choice. The latter holds for the best-approximation operator $P$ in the Hilbert space $\widehat{V}$ onto its closed linear subspace $ CR_0^1(\cT)$; so let $P\widehat{v}\in CR_0^1(\cT) $ be the unique minimiser in \[ \trinl \widehat{v} -P\widehat{v} \trinr_{\text{pw}}=\min_{v_{\text{CR}}\in CR_0^1(\cT)} \trinl \widehat{v} - v_{\text{CR}}\trinr_{\text{pw}}\le \trinl \widehat{v} \trinr_{\text{pw}} =1. \] Lemma~\ref{SpecLem} applies for any $\epsilon>0$ and leads to $\delta>0$ so that, for each $\cT\in\bT(\delta)$, there exists some $z_C\in S^1_0(\cT)\subset CR_0^1(\cT)$ with $ \trinl z-z_C\trinr_{\text{pw}} \le \epsilon C_0$. This, $\widehat{a}(u_{\text{CR}}+z_C,\widehat{v}-P\widehat{v} )=0$, and $ \trinl \widehat{v}- P \widehat{v} \trinr_{\text{pw}} \le 1$ in the end provide {\bf (H2)} (with $\widehat{b}^*$ replacing $\widehat{b}$): \begin{align*} &\widehat{a}(u_{\text{CR}}+ A^{-1}\left(\widehat{b}^*(u_{\text{CR}},\bullet)|_V\right), \widehat{v}-P\widehat{v}) \\ &=\int_{\Omega}{\bf A}\nabla_{\text{pw}}(z-z_C)\cdot\nabla_{\text{pw}}(\widehat{v}-P\widehat{v})\dx\notag\\ &\leq \trinl z-z_C\trinr_{\text{pw}} \trinl \widehat{v}- P \widehat{v} \trinr_{\text{pw}} \le C_0 \epsilon =:\delta_2. \end{align*} The proof of {\bf (H3)} concerns $u_{\text{CR}},v_{\text{CR}}\in CR_0^1(\cT)$ with $\trinl u_{\text{CR}}\trinr_{\text{pw}}=1=\trinl v_{\text{CR}}\trinr_{\text{pw}}$ and $w:= v_{\text{CR}}- J v_{\text{CR}}$. 
This and \eqref{J4_enrich} (with $v=0$) prove \begin{align*} &\widehat{b}^*\left(u_{\text{CR}},v_{\text{CR}}- J v_{\text{CR}}\right) =\int_{\Omega}\left({\bf b}\cdot\nabla_{\text{pw}}u_{\text{CR}}+\gamma u_{\text{CR}}\right) w\dx \\ &= \int_{\Omega} g w\dx \le h_{\max} \|g\|\Lambda_{\text{CR}} \|\nabla_{\text{pw}} v_{\text{CR}}\| \le C_0\Lambda_{\text{CR}} \underline{\lambda}^{-1/2}\delta. \end{align*} Without loss of generality, assume $\delta\le\epsilon$. Then {\bf (H3)} follows with $\delta_3:= C_3\epsilon$ for $C_3:=C_0\Lambda_{\text{CR}} \underline{\lambda}^{-1/2}$. (It is remarkable that, in the last inequalities, the extra property $w:=v_{\text{CR}}-J v_{\text{CR}}\perp P_1(\cT)$ leads to the bound $ \underline{\lambda}^{-1/2}\Lambda_{\text{CR}}{\rm osc}_1(g,\cT)$; but this can be exploited only for piecewise smooth or at least piecewise continuous ${\bf b}$ and $\gamma$.) Since {\bf (H1)}-{\bf (H4)} hold for $\widehat{a}$ and $\widehat{b}^*$, Theorem~\ref{dis_inf_sup_thm} proves $\beta_h\ge \widehat{\beta} - (C_1+C_2+C_3)\epsilon$ with $C_2:=C_0$ from the proof of {\bf (H2)} and with positive $\widehat{\beta}<\beta$ defined in \eqref{AsmpCondn}. Any positive $\epsilon< \widehat{\beta}/(C_1+C_2+C_3) $ concludes the proof; in fact, any constant $\beta_0$ with $0<\beta_0< \widehat{\beta}$ can be realised in \eqref{eqnewlabelccforinfsupbeta0} by small $\delta>0$. \end{proof} The following best-approximation-type error estimate generalises a result in \cite{CCADNNAKP15}. \begin{thm}\label{SA_err_est} Let $u\in H^1_0(\Omega)$ solve \eqref{WeakFrmSec} and set $p:={\bf A}\nabla u+u{\bf b}\in H({\rm div},\Omega)$. There exists $\delta>0$ such that for all $\cT\in\bT(\delta)$, the discrete problem \eqref{dis_weak_Se} has a unique solution $u_{\text{CR}}\in CR_0^1(\cT)$ and $u, \: u_{\text{CR}}, \:p$ and its piecewise integral mean $\Pi_0 p$ satisfy \begin{align} \trinl u-u_{\text{CR}}\trinr_{\text{pw}}&\lesssim \trinl u-I_{\text{CR}}u\trinr_{\text{pw}}+\|p-\Pi_0p\|+{\rm osc}_1(f-\gamma u,\cT). \end{align} \end{thm} \begin{proof} Given $e_{\text{CR}}:=I_{\text{CR}}u-u_{\text{CR}}$, the discrete inf-sup condition of Theorem~\ref{dis_inf_sup_Se} implies the existence of $v_{\text{CR}}\in CR_0^1(\cT)$ with $\trinl v_{\text{CR}}\trinr_{\text{pw}}\le 1/\beta_0$ and \begin{align}\label{bhapp3} \trinl e_{\text{CR}}\trinr_{\text{pw}} = a_{\text{pw}}(e_{\text{CR}},v_{\text{CR}})+b_{\text{pw}}(e_{\text{CR}},v_{\text{CR}}). \end{align} Recall from (a)-(b) in the proof of Theorem \ref{dis_inf_sup_Se} that $v:= J v_{\text{CR}}$ satisfies $I_{\text{CR}}v=v_{\text{CR}}$ and $\Pi_1v=\Pi_1v_{\text{CR}}$. Since $a(u,v)=-b(u,v)+F(v)$ and $u_{\text{CR}}$ solves \eqref{dis_weak_Se}, $w:=v-v_{\text{CR}}$ satisfies \begin{align*} a_{\text{pw}}(e_{\text{CR}},v_{\text{CR}})&=a_{\text{pw}}(u,v_{\text{CR}})-a_{\text{pw}}(u_{\text{CR}},v_{\text{CR}})\\ &=F(w) -a_{\text{pw}}(u,w)-b(u,v)+b_{\text{pw}}(u_{\text{CR}},v_{\text{CR}}). \end{align*} This leads in \eqref{bhapp3} to \begin{align*} \trinl e_{\text{CR}}\trinr_{\text{pw}}&= F(w) -a_{\text{pw}}(u,w)-b_{\text{pw}}(u,w) -b_{\text{pw}}(u-I_{\text{CR}}u, v_{\text{CR}})\notag\\ &=\int_{\Omega}(f-\gamma u) w\dx - \int_{\Omega} p\cdot\nabla_{\text{pw}}w\dx -b_{\text{pw}}(u-I_{\text{CR}}u, v_{\text{CR}}). 
\end{align*} Since $\nabla_{\text{pw}}w\,\bot\, P_0(\cT;\bR^n)$ in $L^2(\Omega;\bR^n)$ and $w\,\bot\, P_1(\cT)$ in $L^2(\Omega)$, an upper bound for the first two terms on the right-hand side is \begin{align} &\integ(I-\Pi_1)(f-\gamma u)w\dx - \integ(p-\Pi_0 p)\cdot \nabla_{\text{pw}} w\dx\notag\\ &\leq \left(\|p-\Pi_0 p\|+{\rm osc}_1(f-\gamma u,\cT)\right) \underline{\lambda}^{-1/2} \trinl w\trinr_{\text{pw}}\notag\\ &\leq \Lambda_{\text{CR}}\underline{\lambda}^{-1/2}\beta_0^{-1} \left(\|p-\Pi_0 p\|+{\rm osc}_1(f-\gamma u,\cT)\right). \end{align} This and a triangle inequality conclude the proof. \end{proof} \section[Semilinear problems with trilinear nonlinearity]{A class of semilinear problems with\\ trilinear nonlinearity}\label{error} This section is devoted to an abstract framework for the a~priori and a~posteriori analysis of a class of semilinear problems that includes the applications in Sections~4 and 5. \subsection{A priori error control} Suppose $X$ and $Y$ are real Banach spaces and let the quadratic function $N:X\to Y^*$ be of the form \begin{equation}\label{eqccdefN} N(x):=\cL x+\Gamma(x,x,\bullet) \end{equation} with a leading linear operator $A\in L(X;Y^*)$ and $F\in Y^*$ for the affine operator $\cL x:=Ax-F$ for all $x\in X$ and a bounded trilinear form $\Gamma: X\times X\times Y\to \bR$. To approximate a regular solution $u$ to $N(u)=0$, the discrete version involves some discrete spaces $X_h$ and $Y_h$ plus a discrete function $F_h\in Y_h^*$, $\cL_hx_h:=A_hx_h-F_h$, and a bounded trilinear form $\Gamma_h:X_h\times X_h\times Y_h\to \bR$ with $N_h(x_h)=\cL_h x_h+\Gamma_h(x_h,x_h,\bullet)$. The discrete problem seeks $u_h\in X_h$ such that \[\cL_hu_h+\Gamma_h(u_h,u_h,\bullet)=0\quad \text{ in } Y_h^*.\] The a~priori error analysis is based on the Newton-Kantorovich theorem and adapts the abstract discrete inf-sup results of Subsection~\ref{sec:abs_result}. Some further straightforward notation is required for this. Suppose that there exists some invertible bounded linear operator $\widehat{A}\in L(\widehat{X};\widehat{Y}^*)$ (i.e. $\widehat{A}v=\widehat{a}(v,\bullet)\in\widehat{Y}^*$ for all $v\in\widehat{X}$) on extended Banach spaces $\widehat{X}$ and $\widehat{Y}$ and suppose that there exists a bounded extension \[ \widehat{\Gamma}:\widehat{X}\times \widehat{X}\times \widehat{Y}\to \bR \quad\text{with}\quad \| \widehat{\Gamma}\| :=\| \widehat{\Gamma}\|_{\widehat{X}\times \widehat{X}\times \widehat{Y}}:= \sup_{\substack{\widehat{x}\in \widehat{X}\\ \|\widehat{x}\|_{\widehat{X}}=1}} \sup_{\substack{\widehat{\xi}\in \widehat{X}\\ \|\widehat{\xi}\|_{\widehat{X}}=1}} \sup_{\substack{\widehat{y}\in \widehat{Y}\\ \|\widehat{y}\|_{\widehat{Y}}=1}} \widehat{\Gamma}(\widehat{x},\widehat{\xi},\widehat{y})<\infty \] of $\Gamma=\widehat{\Gamma}|_{X\times X\times Y}$ with $\Gamma_h=\widehat{\Gamma}|_{X_h\times X_h\times Y_h}$. Given the regular solution $u\in X$ to $N(u)=0$ in \eqref{eqccdefN}, let the bilinear form $\widehat{b}:\widehat{X}\times \widehat{Y}\to \bR$ be the linearisation of $\widehat{\Gamma}$ at the solution $u$, i.e., \[ \widehat{b}(\bullet,\bullet):= \widehat{\Gamma}(u,\bullet, \bullet)+\widehat{\Gamma}(\bullet,u,\bullet), \] which is bounded by $\|\widehat{b}\|:=\|\widehat{b}\|_{ \widehat{X}\times \widehat{Y} } \le 2 \|u\|_X \| \widehat{\Gamma}\|$. Adopt the notation \eqref{defn_ab} for the bilinear forms $a,a_h,b$, and $b_h$ as respective restrictions of $\widehat{a}$ and $\widehat{b}$ and suppose $\widehat{F}\in \widehat{Y}^*$ exists with $F:=\widehat{F}|_Y$ and $F_h:=\widehat{F}|_{Y_h}$. 
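Before the analysis continues, a finite-dimensional toy model may illustrate the Newton scheme for such quadratic functions $N$ and the R-quadratic decay of the Newton corrections behind {\bf (B)}. All data in the following sketch (the matrix $A$, the $3$-tensor $G$ that encodes the trilinear form, and the right-hand side $F$) are hypothetical and merely serve this illustration; the sketch is not the finite element realisation of Sections~4 and 5.

\begin{verbatim}
import numpy as np

# Toy model of N(x) = A x - F + Gamma(x, x) in R^n with the trilinear
# form Gamma(x, y)_k = sum_{i,j} G[i, j, k] x_i y_j (hypothetical data).
rng = np.random.default_rng(1)
n = 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
G = 0.01 * rng.standard_normal((n, n, n))
F = rng.standard_normal(n)

def N(x):
    return A @ x - F + np.einsum('ijk,i,j->k', G, x, x)

def DN(x):   # Jacobian: DN(x) h = A h + Gamma(x, h) + Gamma(h, x)
    return A + np.einsum('imk,i->km', G, x) + np.einsum('mjk,j->km', G, x)

x = np.zeros(n)                            # initial iterate x_h
for it in range(10):
    dx = np.linalg.solve(DN(x), N(x))      # Newton correction
    x = x - dx
    print(it, np.linalg.norm(dx))          # corrections decay quadratically
    if np.linalg.norm(dx) < 1e-12:
        break
\end{verbatim}

The printed norms of the Newton corrections decay quadratically after the first step, and the number of iterations is independent of any discretisation parameter, in agreement with the remarks after the proof of Theorem~\ref{thm3.1} below.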
Recall that the bounded linear operator $\widehat{A}$ is invertible and so the associated bilinear form $\widehat{a}$ is bounded and satisfies \eqref{dis_Ah_infsup} with some positive $\widehat{\alpha}$. Recall that $u$ is a regular solution to $N(u)=0$ in the sense that $N(u)=0$ and $DN(u)\in L(X;Y^*)$ with $DN(u)=(a+b)(\bullet,\bullet)$ satisfies the inf-sup condition \eqref{cts_infsup}. Suppose all the aforementioned bilinear forms satisfy {\bf (H1)}-{\bf (H4)} with some operators $P\in L(\widehat{Y};Y_h)$, $ Q\in L(X_h; X)$, and $ \cC\in L(Y_h;Y)$. In addition to {\bf (H1)}-{\bf (H4)} suppose that $\delta_5,\delta_6\ge 0$ satisfy \\ ${\bf (H5)}\quad\displaystyle \delta_5:=\left\|\left(\widehat{F}-\widehat{A}u\right)(1-\cC)\bullet\right\|_{Y_h^*}$; \\ ${\bf (H6)} \quad \displaystyle \exists\,x_h\in X_h $ such that $\delta_6:=\|u-x_h\|_{\widehat{X}}.$ \\ The non-negative parameters $\delta_1,\delta_2,\delta_3,\delta_5,\delta_6$ and $\widehat{\alpha}$, $\beta$, $\|\widehat{b}\|$ all depend on the fixed regular solution $u$ to $N(u)=0$ and this dependence is suppressed in the notation for simplicity. Under the present assumptions and with the additional smallness assumption $4 {\delta} \|\widehat{\Gamma}\| < \beta_0$ (in the notation of \eqref{ccdefn_beta0}-\eqref{defn_delta}), the properties {\bf (A)}-{\bf (B)} hold for the fixed discretisation at hand in the following sense. Suppose throughout that $\|\widehat{\Gamma}\|>0$, for otherwise $N(u)=0$ is a linear equation with a unique solution and the results of Section~2 apply. \begin{thm}[existence and uniqueness of a discrete solution]\label{thm3.1} Given a regular solution $u\in X$ to $N(u)=0$, assume the existence of extended bilinear forms $\widehat a$ and $\widehat{b}$ with \eqref{defn_ab}-\eqref{dis_Ah_infsup} and $\widehat{\alpha}>0$ (resp. $\beta>0$ in \eqref{cts_infsup} and $\widehat{\beta}>0$ in \eqref{AsmpCondn}). Suppose that {\bf (H1)}-{\bf (H6)} hold with parameters $\delta_1,\ldots,\delta_6\ge 0$ and that $x_h\in X_h$ satisfies {\bf (H6)}. Suppose that \begin{align} \label{ccdefn_beta0} \beta_0&:=\widehat{\alpha}\widehat{\beta}-(\delta_1+\delta_2+\delta_3+2\|\widehat{\Gamma}\|\delta_6) >0\quad\text{and } \\ \label{defn_delta} {\delta}&:=\beta_0^{-1} \left(\delta_5+ \|\widehat{a}\| \delta_6+\delta_6 \big(\|x_h\|_{X_h}+ \|\cC\|\, \|u\|_{X} \big)\|\widehat{\Gamma}\|\,+\delta_3/2 \right)\ge 0 \end{align} satisfy $4 {\delta} \|\widehat{\Gamma}\| < \beta_0$. Then $ \epsilon:=\delta_6 +\delta + r_- $ with $m:=2 \|\widehat{\Gamma}\|/\beta_0 >0$, $h:= \delta m\ge 0$, \begin{equation}\label{rminus} r_-:= (1-\sqrt{1-2h })/m - {\delta}\ge 0 , \quad\text{and}\quad \rho:=(1+\sqrt{1-2h})/ m>0 \end{equation} satisfy the following: (i) there exists a solution $u_h\in X_h$ to $N_h(u_h)=0$ with $\| u-u_h \|_{\widehat{X}}\le \epsilon$; and (ii) given any $v_h\in X_h$ with $\|v_h-u_h\|_{{X_h}}\le \rho$, the Newton scheme with initial iterate $v_h$ converges R-quadratically to the discrete solution $u_h$ in (i). If even $4 \epsilon \|\widehat{\Gamma}\| \le \beta_0$, then (iii) there is at most one solution $u_h\in X_h$ to $N_h(u_h)=0$ with $\| u-u_h \|_{\widehat{X}}\le \epsilon$. \end{thm} The proof is based on the Newton-Kantorovich convergence theorem found, e.g., in \cite[Subsection~5.5]{MR1344684} for \( X= Y=\bR^n\) and in \cite[Subsection~5.2]{MR816732} for Banach spaces. The notation is adapted to the setting of Theorem~\ref{thm3.1}. 
\begin{thm}[Kantorovich (1948)] \label{kantorovich} Assume the Fr\'echet derivative $DN_h(x_h)$ of $N_h$ at some \(x_h\in X_h\) satisfies \begin{equation}\label{Kanto_Condn} \|D N_h(x_h)^{-1}\|_{L( Y_h^*; X_h)} \leq 1/\beta_0 \quad\text{and}\quad \|D N_h(x_h)^{-1}N_h(x_h)\|_{X_h} \leq {\delta}. \end{equation} Suppose that \(D N_h\) is Lipschitz continuous with Lipschitz constant $2 \|\widehat{\Gamma}\|$ and that \(4 {\delta} \|\widehat{\Gamma}\| \leq \beta_0\). Then there exists a unique root \(u_h\in \overline{ B(x_1,r_-)} \) of \(N_h\) in the ball around the first iterate \(x_1 := x_h - D N_h(x_h)^{-1}N_h(x_h)\) and this is the only root in \(\overline{B(x_h,\rho)}\) with $r_-, \rho$ from \eqref{rminus}. If even \(4 {\delta} \|\widehat{\Gamma}\| < \beta_0\), then the Newton scheme with initial iterate \(x_h\) leads to a sequence in \(B(x_h,\rho)\) that converges R-quadratically to \(u_h\). \qed \end{thm} \noindent{\it Proof of Theorem \ref{thm3.1}.} Suppose that $\delta\ge 0$ and $\|\widehat{\Gamma}\|>0 $ so that $r_-\ge 0$ in \eqref{rminus} is well defined. The bounded trilinear form $\Gamma_h=\widehat{\Gamma}|_{ X_h\times X_h\times Y_h}$ leads to the Fr\'echet derivative $DN_h( x_h)\in L(X_h;Y_h^*)$ with \[ DN_h( x_h;\xi_h,\eta_h)=a_h(\xi_h,\eta_h)+\Gamma_h(x_h, \xi_h,\eta_h) +\Gamma_h( \xi_h,x_h,\eta_h)\quad\text{for all } x_h, \xi_h\in X_h,\; \eta_h\in Y_h. \] The definitions of $a$ and $b$ and their extensions and discrete versions with {\bf (H1)}-{\bf (H4)} allow in Theorem~\ref{dis_inf_sup_thm} for a positive inf-sup constant $\beta_1:=\widehat{\alpha}\widehat{\beta}-(\delta_1+\delta_2+\delta_3)$ in \eqref{eqdis_inf_sup_defbetah} for the bilinear form \[ D\widehat{N}(u)|_{X_h\times Y_h}= a_h+\widehat{\Gamma}(u,\bullet,\bullet)+\widehat{\Gamma}(\bullet,u,\bullet) = a_h+b_h \] for the extended nonlinear function $\widehat{N}(\widehat{x})=\widehat{A}\widehat{x} -\widehat{F}+ \widehat{\Gamma}(\widehat{x},\widehat{x},\bullet) \in \widehat{Y}^*$ for $\widehat{x}\in \widehat{X}$ and its derivative $D\widehat{N}(u)$ at $u$. This discrete inf-sup condition \eqref{eqdis_inf_sup_defbetah} and a triangle inequality with $x_h$ from {\bf (H6)} lead to the inf-sup constant \begin{align*} 0< \beta_0= \beta_1 - 2\|\widehat{\Gamma}\|\delta_6 \leq \beta_h:=\inf_{\substack{\xi_h\in X_h\\ \|\xi_h\|_{X_h}=1}} \sup_{\substack{\eta_h\in Y_h\\ \|\eta_h\|_{Y_h}=1}}DN_h(x_h;\xi_h,\eta_h) \end{align*} for the bilinear form $DN_h(x_h;\bullet,\bullet)=a_h+\Gamma_h(x_h,\bullet,\bullet) +\Gamma_h(\bullet,x_h,\bullet)$. The reciprocal of the discrete inf-sup constant (a singular value in the finite-dimensional setting) equals the norm of the inverse operator; hence $1/\beta_0$ is an upper bound of the operator norm of the discrete inverse. This proves the first estimate of \eqref{Kanto_Condn}. Towards the second estimate of \eqref{Kanto_Condn}, this also proves that \begin{equation}\label{Kant_cond1} \|DN_h(x_h)^{-1}N_h(x_h)\|_{X_h}\leq \beta_0^{-1}\|N_h(x_h)\|_{Y_h^*} \end{equation} and it remains to estimate $N_h(x_h)$ in the norm of $Y_h^*$. 
Given any $y_h\in Y_h$ with $\|y_h\|_{Y_h}=1$ and $y:=\cC y_h\in Y$, an exact Taylor expansion with $N(u;y)=0$ shows \begin{align} &N_h(x_h;y_h)= N_h(x_h;y_h)-N(u;y)\notag\\ &= \widehat{F}(y-y_h) +a_h( x_h,y_h)-a(u,y) +\Gamma_h( x_h, x_h,y_h)-\Gamma(u,u,y)\notag\\ &=\widehat{F}(y-y_h) -\widehat{a}( u,y-y_h) +\widehat{a}(x_h-u,y_h)+\Gamma_h( x_h, x_h,y_h) -\Gamma(u,u,y).\label{nonlin_exp} \end{align} In abbreviated duality brackets, the first two terms in \eqref{nonlin_exp} are equal to \begin{align*} \widehat{F}(y-y_h) -\widehat{a}( u,y-y_h) =\langle \widehat{F}-\widehat{A}u,(\cC-I)y_h\rangle\leq \delta_5 \end{align*} with {\bf (H5)}. The definition of $\delta_6$ in {\bf (H6)} proves \begin{align*} \widehat{a}(x_h-u,y_h)\leq\|\widehat{a}\|\delta_6. \end{align*} Up to the factor $2$, the last two terms in \eqref{nonlin_exp} are equal to \begin{align*} &2\Gamma_h( x_h, x_h,y_h)-2\Gamma(u,u,y) =\widehat{\Gamma}( x_h-u, x_h,y_h)+\widehat{\Gamma}( x_h,x_h-u,y_h)\notag\\ &\quad+\widehat{\Gamma}(u,x_h-u,y)+\widehat{\Gamma}(x_h-u,u,y)-\widehat{b}(x_h,y-y_h)\notag\\ &\leq2\delta_6\left(\|x_h\|_{X_h}+\|\cC\| \: \|u\|_{X}\right)\|\widehat{\Gamma}\|\,+\delta_3. \end{align*} The combination of the preceding three displayed estimates with \eqref{nonlin_exp} implies \begin{equation}\label{eqproofofKant_cond1} \displaystyle\beta_0^{-1}\|N_h(x_h)\|_{Y_h^*}\leq {\delta} \end{equation} with ${\delta}\ge 0$ from \eqref{defn_delta}. The combination of \eqref{Kant_cond1} and \eqref{eqproofofKant_cond1} shows the second inequality in \eqref{Kanto_Condn}. The smallness assumption reads $h<1/2$ and is stated explicitly in the theorem; hence the Newton-Kantorovich Theorem~\ref{kantorovich} applies. Let us interrupt the proof for a brief discussion of the extreme but possible case $\delta=0$ with the implications $\delta_6=\delta_5=\delta_3=0$ and $x_h=u$ in {\bf (H6)}. The proof of \eqref{eqproofofKant_cond1} remains valid in this case and then $N_h(x_h)=0$ guarantees that $u=x_h$ is the discrete solution $u_h$. In this very particular situation, the Newton scheme converges and leads to the constant sequence $x_h=x_1=x_2=\cdots$ with the limit $x_h=u_h$. Theorem~\ref{kantorovich} applies with $r_-=0=\epsilon$ and provides (i)-(iii). Therefore, throughout the remainder of this proof, suppose that $\delta>0$; then $\rho,\epsilon, r_- >0$, and Theorem~\ref{kantorovich} shows the existence of a discrete solution $u_h$ to $N_h(u_h)=0 $ in $\overline{B(x_1,r_-)}$, which is the only discrete solution in $\overline{B(x_h,\rho)}$. This and triangle inequalities lead to \begin{align*} \|u-u_h\|_{\widehat{X}}\leq \|u-x_h\|_{\widehat{X}}+ \|x_1-x_h\|_{X_h} +\|x_1-u_h\|_{X_h}\leq \delta_6+\delta+r_-= \epsilon \label{Newton_conv} \end{align*} because the Newton correction $x_1-x_h$ is estimated by the second inequality of \eqref{Kanto_Condn}. This proves the existence of a discrete solution $u_h$ in $ X_h\cap \overline{B(u,\epsilon)}$ as asserted in (i). Theorem~\ref{kantorovich} implies (ii) and it remains to prove the uniqueness of discrete solutions in $\overline{B(u,\epsilon)}$ under the additional assumption $4 \epsilon \|\widehat{\Gamma}\| \le \beta_0$, i.e., $2m\epsilon\le 1$. Recall that the limit $u_h\in \overline{B(x_1,r_-)}$ in (i)-(ii) is the only discrete solution in $\overline{B(x_h,\rho)}$. Suppose there exists a second solution $\widetilde{u}_h\in X_h\cap \overline{B(u,\epsilon)}$ to $N_h(\widetilde{u}_h)=0$. 
The uniqueness in $\overline{B(x_h,\rho)}$ and a triangle inequality imply that \[ \rho< \| x_h- \widetilde{u}_h\|_{\widehat{X}}\le \| u- \widetilde{u}_h\|_{\widehat{X}} +\| u- x_h\|_{\widehat{X}}\le \epsilon+\delta_6\le 2\epsilon\le 1/m \] with the smallness assumption on $\epsilon$ in the end. But this contradicts the definition of $\rho\ge 1/m$ in \eqref{rminus} and so concludes the proof of (iii). \qed \begin{rem} In the applications, if $h_{\max}$ is chosen sufficiently small, the parameters $\delta_1,\delta_2,\delta_3$, $\delta_5$, and $\delta_6$ are also small. In particular, ${\delta}$ from \eqref{defn_delta} is small and so is $\epsilon$. This ensures $4 {\delta}\|\widehat{\Gamma}\|\le 4 {\epsilon}\|\widehat{\Gamma}\|< \beta_0$ so that Theorem~\ref{thm3.1} applies. \end{rem} \begin{rem} The convergence speed in the Newton-Kantorovich theorem is controlled by $h=\delta m$ and this parameter is uniformly smaller than one in the applications. Hence the number of iterations in the Newton scheme does not increase as the mesh-size decreases. \end{rem} \subsection{Best-approximation} This subsection discusses the best-approximation result {\bf (C)} for regular solutions of semilinear problems with trilinear nonlinearity under the assumptions {\bf (H1)}-{\bf (H6)} with parameters $\delta_1,\ldots,\delta_6$ and $\widehat{\alpha}$ (resp. $\widehat{\beta}$) from \eqref{dis_Ah_infsup} (resp. \eqref{AsmpCondn}). The extra term $\|\widehat{N} (u)\|_ {Y_h^*}$ in the best-approximation result of Theorem~\ref{err_apriori_thm_vke} will be discussed afterwards and leads to some best- and data-approximation term. \begin{thm}[a priori]\label{err_apriori_thm_vke} If $u$ is a regular solution to $N(u)=0$ and $\delta $ and $ \epsilon:=\delta_6 +\delta + r_- $ from \eqref{defn_delta}-\eqref{rminus} satisfy $2m\epsilon\le 1$, then there exists $ C_\text{\rm qo}>0 $ such that the unique discrete solution $u_h\in X_h\cap\overline{B(u,\epsilon )}$ satisfies the best-approximation property \begin{equation*} \| u-u_h\|_{\widehat{X}}\leq C_\text{\rm qo} \left(\min_{v_h\in X_h}\|u-v_h\|_{\widehat{X}} +\|\widehat{N}(u)\|_{Y_h^*}\right). \end{equation*} \end{thm} \begin{proof} Given the best-approximation $u_h^*$ to $u$ in $X_h$ with respect to the norm in $\widehat{X}$, set $e_h:=u_h^*-u_h\in X_h$ and apply the discrete $\inf$-$\sup$ condition \eqref{eqdis_inf_sup_defbetah} to the bilinear form $D\widehat{N}(u)|_{X_h\times Y_h}$ with the constant $\beta_1:=\widehat{\alpha}\widehat{\beta}-(\delta_1+\delta_2+\delta_3)$ from the proof of Theorem~\ref{thm3.1}. This leads to $y_h\in Y_h$ with $\|y_h\|_{Y_h}\le 1/\beta_1$ and \begin{equation}\label{inf_sup_tp1} \|e_h\|_{X_h}= D\widehat{N}(u;e_h,y_h). \end{equation} Since the quadratic Taylor expansion of $\widehat{N}$ at $u$ is exact and $N_h(u_h;y_h)=0$, the error $e:=u-u_h\in \widehat{X}$ satisfies \begin{equation}\label{inf_sup_tp2} 0=\widehat{N}(u;y_h)-D\widehat{N}(u;e,y_h) + \half D^2\widehat{N}(u;e,e,y_h). \end{equation} The sum of \eqref{inf_sup_tp1} and \eqref{inf_sup_tp2}, $ D^2\widehat{N}(u;e,e,y_h)=2 \Gamma(e,e,y_h)$, and $\|y_h\|_{Y_h}\le 1/\beta_1$ prove \begin{equation*} \beta_1 \|e_h\|_{X_h}\leq \|\widehat{N}(u)\|_{Y_h^*} +\|D\widehat{N}(u)\|\|u-u_h^*\|_{\widehat{X}} +\|\widehat{\Gamma}\|\|e\|_{\widehat{X}}^2. 
\end{equation*} This, a triangle inequality, and $\min_{x_h\in X_h}\|u-x_h\|_{\widehat{X}} = \|u-u_h^* \|_{\widehat{X}}$ show \begin{equation}\label{eqbestapproxalmost} \left(\beta_1-\|\widehat{\Gamma}\| \|e\|_{\widehat{X}}\right)\|e\|_{\widehat{X}} \leq \|\widehat{N}(u)\|_{Y_h^*}+\left(\beta_1+\| D\widehat{N}(u)\|\right)\min_{x_h\in X_h}\|u-x_h\|_{\widehat{X}}. \end{equation} Recall $4\epsilon\|\widehat{\Gamma}\|\le \beta_0\le \beta_1$ and $\|e\|_{\widehat{X}}\le \epsilon$ from Theorem~\ref{thm3.1}, so that $3\beta_1/4 \le \beta_1-\|\widehat{\Gamma}\| \|e\|_{\widehat{X}}$ leads in \eqref{eqbestapproxalmost} to $C_{\text{qo}}=4/3\, \max\{ 1/ \beta_1, 1 +\| D\widehat{N}(u)\|/\beta_1 \} $ and ${\rm apx}(\cT):=\|\widehat{N}(u)\|_{Y_h^*}$ in the asserted best-approximation. This concludes the proof. \end{proof} Two examples for the term ${\rm apx}(\cT):=\|\widehat{N}(u)\|_{Y_h^*}$ conclude this subsection. \begin{example} If $Y_h\subset Y$, then ${\rm apx}(\cT)=\|\widehat{N}(u)\|_{Y_h^*}\leq \|N(u)\|_{Y^*}=0$. Hence, Theorem~\ref{err_apriori_thm_vke} implies the quasi-optimality of the conforming FEM. \end{example} \begin{example} For the second-order linear non-selfadjoint and indefinite elliptic problems of Subsection~2.2, $\|\widehat{\Gamma}\|=0$ and $\beta_0=\beta_1$ etc., so Theorem \ref{err_apriori_thm_vke} applies and the best-approximation estimate holds. The approximation term ${\rm apx}(\cT)$ is the norm of the functional $F-(a_{\text{pw}}+b_{\text{pw}}) (u,\bullet )$ in $V_h^*$. This is exactly the extra term in Corollary~\ref{rembestapproximation} that leads to the additional two terms in Theorem~\ref{SA_err_est}. \end{example} \subsection{A posteriori error control} The regular solution $u$ to $N(u)=0$ is approximated by some $v_h\in X_h$ sufficiently close to $u$ such that Theorem~\ref{abs_res_thm} below asserts reliability \eqref{relib_eqn} and efficiency \eqref{enrich_bdd}-\eqref{eff_temp2}. \begin{thm}\label{abs_res_thm} Any $v_h\in X_h$ with $\|u-Qv_h\|_{X}<\beta/\|\Gamma\|$ satisfies \begin{align} & \|u-v_h\|_{\widehat{X}}\leq \frac{\|N(Qv_h)\|_{Y^*}}{\beta-\|\Gamma\| \|u-Qv_h\|_{X}}+\|Qv_h-v_h\|_{\widehat{X}},\label{relib_eqn}\\ & \|Qv_h-v_h\|_{\widehat{X}}\leq \Lambda_4\|u-v_h\|_{\widehat{X}},\label{enrich_bdd}\\ & \|N(Qv_h)\|_{Y^*} \leq (1+\Lambda_4)\left(\|DN(u)\|+ \beta \right)\|u-v_h\|_{\widehat{X}}.\label{eff_temp2} \end{align} \end{thm} \begin{proof} Abbreviate $\xi:=Qv_h$ and $e:=u-\xi$. Recall that the bilinear form $a+b$ is associated to the derivative $DN(u;\bullet,\bullet)\in L(X;Y^*)$ with an inf-sup constant $\beta>0$. Hence for any $0<\epsilon<\beta$ there exists some $y\in Y$ with $\|y\|_{Y}=1$ and \begin{equation}\label{Der_infsup} (\beta-\epsilon)\|e\|_{X}\leq DN(u;e,y). \end{equation} Since $N(u)=0$ and $N$ is quadratic, the finite Taylor series \begin{align}\label{quadratic_identity} N(\xi;y)=- DN(u;e,y)+\half D^2N(u;e,e,y) \end{align} is exact. This, $D^2N(u;e,e,y)=2\Gamma(e,e,y)$, and \eqref{Der_infsup} imply \begin{align*} (\beta-\epsilon)\|e\|_{X}&\leq -N(\xi;y)+\Gamma(e,e,y) \leq \|N(\xi)\|_{Y^*}+\|\Gamma\| \|e\|_X^2. \end{align*} With $\epsilon\searrow 0$ and $\beta-\|\Gamma\| \|e\|_X>0$, this leads to \begin{equation*} \|e\|_X\leq \frac{\|N(\xi)\|_{Y^*}}{\beta-\|\Gamma\| \|e\|_X}. \end{equation*} The triangle inequality $\displaystyle \|u-v_h\|_{\widehat{X}}\leq \|e\|_X+\|Qv_h-v_h\|_{\widehat{X}}$ concludes the proof of \eqref{relib_eqn}. Recall that {\bf (H4)} implies \eqref{enrich_bdd}. 
This and a triangle inequality show \begin{equation*} \|e\|_X\leq \|u-v_h\|_{\widehat{X}}(1+\Lambda_4). \end{equation*} The identity \eqref{quadratic_identity} results in \begin{align*} \|N(\xi)\|_{Y^*}\leq \|DN(u;e)\|_{Y^*}+\|\Gamma(e,e,\bullet)\|_{Y^*} \leq \left(\|DN(u)\|+\|\Gamma\| \|e\|_X\right)\|e\|_X. \end{align*} The combination of the previous two displayed estimates proves \eqref{eff_temp2}. \end{proof} The discrete function $v_h$ can be estimated in the sense of {\bf (D)} from the introduction. \begin{cor}[a posteriori]\label{coraposteriori} In addition to the assumptions of Theorem~\ref{abs_res_thm} suppose that $\| u- v_h\|_{\widehat{X}} \le \epsilon\le \kappa\beta /(\|\Gamma\| (1+\Lambda_4))$ holds for some positive $\kappa<1$ and $v_h\in X_h$. Then $C_{\text{\rm rel},1}:=1/(\beta(1-\kappa))$ and $C_{\text{\rm rel},2}:= 1+ LC_{\text{\rm rel},1}$ for $L:=\| \widehat{a}\|+2\|\widehat{\Gamma}\|(\| u \|_X+\epsilon(1+\Lambda_4))$ satisfy reliability in the sense that \[ \| u- v_h\|_{\widehat{X}} \le C_{\text{\rm rel},1} \|\widehat{N}(v_h)\|_{Y^*} +C_{\text{\rm rel},2} \|Qv_h-v_h\|_{\widehat{X}} \] and efficiency with \eqref{enrich_bdd} and with $C_{\text{\rm eff},1}:= \left((1+\Lambda_4) (\|DN(u)\|+\beta) + L\Lambda_4\right)$ in \[ \|\widehat{N}(v_h)\|_{Y^*}\le C_{\text{\rm eff},1}\| u- v_h\|_{\widehat{X}} . \] \end{cor} \begin{proof} Recall the abbreviations $\xi=Qv_h$ and $e:=u-\xi$. A triangle inequality and {\bf (H4)} show that $\|e\|_X \le (1+\Lambda_4) \| u- v_h\|_{\widehat{X}}\le \epsilon(1+\Lambda_4)\le \kappa \beta/ \|\Gamma\|$. This and Theorem~\ref{abs_res_thm} imply \[ \|u-v_h\|_{\widehat{X}}\leq \frac{\|N(Qv_h)\|_{Y^*}}{\beta(1-\kappa)} +\|Qv_h-v_h\|_{\widehat{X}}. \] The derivative $D\widehat{N}$ is globally Lipschitz continuous with Lipschitz constant $2\|\widehat{\Gamma}\|$; hence the function $\widehat{N}$ is Lipschitz continuous in the closed ball $\overline{B(u, \epsilon(1+\Lambda_4))}$ in $\widehat{X}$ with Lipschitz constant $L$. Since $ v_h, Qv_h\in \overline{B(u, \epsilon(1+\Lambda_4))}$, \[ \|N(Qv_h)\|_{Y^*} \le \|\widehat{N}(v_h)\|_{Y^*} + L\,\|Qv_h-v_h\|_{\widehat{X}}. \] The combination of the previous displayed estimates proves the asserted reliability. The efficiency employs the Lipschitz continuity as well and then utilizes \eqref{enrich_bdd}-\eqref{eff_temp2} to verify \[ \|\widehat{N}(v_h)\|_{Y^*} \le \|N(Qv_h)\|_{Y^*}+ L\,\|Qv_h-v_h\|_{\widehat{X}} \leq C_{\text{\rm eff},1} \|u-v_h\|_{\widehat{X}}. \] This concludes the proof. \end{proof} \section[Incompressible 2D Navier-Stokes problem]{Stream function vorticity formulation of the\\ incompressible 2D Navier-Stokes problem}\label{sec:NS} This section is devoted to the stream function vorticity formulation of the 2D Navier-Stokes equations with right-hand side $f\in\lt$ in a polygonal bounded Lipschitz domain $\Omega\subset\mathbb{R}^2$: There exists \cite{Lions} at least one distributional solution $u\in V:=\hto$ to \begin{equation}\label{NS} \Delta^2 u+\frac{\partial}{\partial x_1}\left((-\Delta u)\frac{\partial u}{\partial x_2}\right)-\frac{\partial}{\partial x_2}\left((-\Delta u)\frac{\partial u}{\partial x_1}\right)=f\text{ in }\Omega. \end{equation} The analysis of extreme viscosities lies beyond the scope of this paper and the viscosity (the factor in front of the bi-Laplacian in \eqref{NS}) is set to one throughout this paper.
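\begin{rem} For orientation only: in this formulation, $u$ plays the role of a stream function. Under one standard sign convention (recalled here merely for the reader's convenience and not used in the proofs below), the associated velocity field $\mathbf{v}:=(\partial u/\partial x_2,\,-\partial u/\partial x_1)^T$ is divergence-free and the scalar vorticity reads \[ \omega:=\frac{\partial v_2}{\partial x_1}-\frac{\partial v_1}{\partial x_2}=-\Delta u, \] which explains the factor $-\Delta u$ in the convective terms of \eqref{NS}. \end{rem}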
\subsection{Continuous problem}\label{subsectContinuousproblemNS} The weak formulation of \eqref{NS} seeks $u\in V$ such that \begin{equation}\label{NS_weak} a(u,v)+\Gamma(u,u, v)=F( v)\fl v\in V. \end{equation} The associated bilinear form $a: V\times V\to \bR$ and the trilinear form $\Gamma: V\times V\times V\to \bR$ read \begin{align*} & a(\eta, \chi):=\integ \Delta\eta\Delta \chi\dx, \quad \Gamma(\eta,\chi,\phi):=\integ \Delta\eta\left(\frac{\partial \chi}{\partial x_2}\frac{\partial \phi}{\partial x_1}-\frac{\partial \chi}{\partial x_1}\frac{\partial \phi}{\partial x_2}\right)\dx, \end{align*} and $F\in V^*$ is given by $\displaystyle F( \phi):=\integ f \phi\dx$ for all $\eta,\chi,\phi\in V$. The Hilbert space $V\equiv H^2_0(\Omega)$ with the scalar product $a(\bullet,\bullet)$ is endowed with the $H^2$ seminorm $\displaystyle\trinl\bullet\trinr:=|\bullet|_{H^2(\Omega)}$ and $\|\bullet\|_{V^*}$ denotes the dual norm. The bilinear form $a(\bullet,\bullet)$ is equivalent to the scalar product in $V$ and the trilinear form $\Gamma(\bullet,\bullet,\bullet)$ is bounded \ccnew{(owing to the continuous embedding $V\subset H^2(\Omega)\hookrightarrow W^{1,4}(\Omega)$)}. Define \[ \langle N(u), v\rangle=N(u; v):=a(u, v)-F(v)+\Gamma(u,u, v)\quad\text{ for all } u,v\in V. \] The weak stream function vorticity formulation \eqref{NS_weak} of the 2D Navier-Stokes equations then seeks $u\in V$ with $N(u)=0$. The regularity results for the biharmonic operator $\Delta^2$ in \cite{BlumRannacher} ensure that $z\in V$ with $a(z,\bullet)\in H^{-1}(\Omega)\subset V^*$ belongs to $ H^{2+s}(\Omega)$ for some elliptic regularity index {$s \in (1/2,1]$} and $\|z\|_{H^{2+s}(\Omega)}\leq C \|a(z,\bullet)\|_{H^{-1}(\Omega)}$. The regularity results for the Navier-Stokes problem \ccnew{in} \cite[Section 6(b)]{BlumRannacher} ensure that \ccnew{any weak solution $u\in V$ to $N(u)=0$ satisfies $u\in H^{2+s}(\Omega)$. This makes the continuous embeddings $H^{2+s}(\Omega) \hookrightarrow W^{1,\infty}(\Omega)$ (for $s>0$) and $H^{2+s}(\Omega)\hookrightarrow W^{2,4}(\Omega)$ (for $s>1/2$) available throughout this (and the subsequent) section.} The embeddings and \Holder inequalities imply for $u\in H^{2+s}(\Omega)$ and for $\theta,\phi\in V$ that \begin{align*} \Gamma(u,\theta,\phi)&\lesssim \max\left\{ \|u\|_{W^{2,4}(\Omega)}\|\theta\|_{W^{1,4}(\Omega)}, \|u\|_{W^{1,\infty}(\Omega)}\|\theta\|_{H^2(\Omega)}\right\} \|\phi\|_{H^1(\Omega)} \\ & \lesssim \|u\|_{H^{2+s}(\Omega)} \|\theta\|_{H^2(\Omega)}\|\phi\|_{H^1(\Omega)}. \end{align*} Consequently, the bilinear form $b(\bullet,\bullet):=\Gamma(u,\bullet,\bullet)+\Gamma(\bullet,u,\bullet)$ in the derivative $DN(u;\bullet,\bullet)=a(\bullet,\bullet)+b(\bullet,\bullet)$ at the solution $u$ is bounded in $H^2(\Omega)\times H^1(\Omega)$ and will be key in the subsequent analysis. \subsection{Conforming FEM}\label{sec:CFEM_NS} Let $V_C$ be a conforming finite element space contained in $ C^1(\overline\Omega)\cap V$; for example, the spaces associated with Bogner-Fox-Schmit, HCT, or Argyris elements \cite{Ciarlet} and a regular triangulation $\cT$ of $\Omega$ into triangles. The conforming finite element formulation seeks $u_C\in V_C$ with \begin{equation}\label{NS_dis} N_h(u_C;v_C):=N(u_C;v_C):=a(u_C, v_C)-F( v_C)+\Gamma(u_C,u_C, v_C)=0\fl v_C\in V_C. \end{equation} \begin{thm}[a priori]\label{apriori_NS_est_C} If $u$ is a regular solution to $N(u)=0$, then there exist positive $\epsilon$, $\delta$, and $\rho$ such that {\bf (A)}-{\bf (C)} hold with ${\rm apx}(\cT)\equiv 0$ for all $\cT \in \bT(\delta)$.
\end{thm} \begin{proof} Set $X=Y=V$, $X_h=Y_h=V_C$, $\widehat{a}(\bullet,\bullet):=a(\bullet,\bullet)$, and $ \widehat{b}(\bullet,\bullet):=b(\bullet,\bullet) :=\Gamma(u,\bullet,\bullet)+\Gamma(\bullet,u,\bullet)$. For $\cC$ and $Q$ chosen as identity, the parameters in the hypotheses {\bf (H1)} and {\bf (H3)}-{\bf (H5)} are $\delta_1=\delta_3=\Lambda_4=\delta_5=0$. For the proof of {\bf (H2)}, suppose $\theta_h\equiv x_h\in V_C\subset V$ with $ \trinl\theta_h \trinr=1$ and recall from the end of the previous subsection that $\widehat{b}(\theta_h, \bullet)\in H^{-1}(\Omega)$. Hence the solution $z\in V$ to the biharmonic problem \begin{equation*} a(z,\phi)=\widehat{b}(\theta_h,\phi)\fl \phi\in X \end{equation*} satisfies $z\in H^{2+s}(\Omega)$ and $\|z\|_{H^{2+s}(\Omega)}\leq C \trinl\theta_h\trinr= C$. (Note that $z$ is called $A^{-1}(\widehat{b}(x_h,\bullet)|_{Y})$ in Subsection~\ref{sec:abs_result}). This regularity and the Galerkin projection ${P}$ with the Galerkin orthogonality and the approximation property \ccnew{$\trinl z-{P}z\trinr\lesssim h_{\rm max}^s$ \cite{Brenner} lead for any $y\in Y\equiv V$ with $ \trinl y \trinr=1$ to \begin{equation*} a(x_h + z, {y}-{P}{y} ) =a(z,{y}-{P}{y} ) = a(z-Pz,y) \lesssim h_{\max}^{s}. \end{equation*} This proves {\bf (H2)} with $\delta_2\lesssim h_{\rm max}^s$. The choice $x_h={P}u$ implies {\bf (H6)} with $\delta_6\lesssim h_{\rm max}^{s}$ (from the higher regularity of $u$ and $\trinl u-Pu\trinr\lesssim h_{\max}^s$). Consequently, for sufficiently small maximal mesh-size $h_{\max}$, Theorem~\ref{dis_inf_sup_thm} provides the discrete inf-sup condition $1\lesssim \beta_h$ and Theorem~\ref{thm3.1} applies.} Since $V_C$ is a conforming finite element space, Theorem~\ref{err_apriori_thm_vke} holds with ${\rm apx}(\cT):= \|\widehat{N}(u)\|_{Y_h^*} \equiv 0$. This concludes the proof. \end{proof} The explicit residual-based {\it a~posteriori} error estimator for the stream function vorticity formulation of the 2D Navier-Stokes equations requires some notation for the differential operators: For any scalar function $v$, vector field $\Phi=(\phi_1,\phi_2)^T$, and tensor $ {\boldsymbol \sigma}$ with the four entries $\sigma_{11}$, $\sigma_{12}$, $\sigma_{21}$, and $\sigma_{22}$ in the form of a $2\times 2$ matrix, $$\displaystyle \nabla v=\begin{pmatrix} \frac{\partial v}{\partial x_1}\\ \frac{\partial v}{\partial x_2} \end{pmatrix},\; {\rm Curl}\, v=\begin{pmatrix} -\frac{\partial v}{\partial x_2}\\ \frac{\partial v}{\partial x_1} \end{pmatrix},\; {\rm curl} \begin{pmatrix} \phi_1\\ \phi_2 \end{pmatrix} =\frac{\partial \phi_2}{\partial x_1}-\frac{\partial \phi_1}{\partial x_2}, \quad D\Phi= \begin{pmatrix} \frac{\partial \phi_1}{\partial x_1} &\frac{\partial \phi_1}{\partial x_2}\\ \frac{\partial \phi_2}{\partial x_1} &\frac{\partial \phi_2}{\partial x_2} \end{pmatrix}, $$ \noindent $$ \divc \begin{pmatrix} \phi_1\\ \phi_2 \end{pmatrix}=\frac{\partial \phi_1}{\partial x_1}+\frac{\partial \phi_2}{\partial x_2} ,\; {\rm Curl} \begin{pmatrix} \phi_1\\ \phi_2 \end{pmatrix} =\begin{pmatrix} -\frac{\partial \phi_1}{\partial x_2} &\frac{\partial \phi_1}{\partial x_1}\\ -\frac{\partial \phi_2}{\partial x_2} &\frac{\partial \phi_2}{\partial x_1} \end{pmatrix} , \; {\text {and} \;} \divc{\boldsymbol \sigma} =\begin{pmatrix} \frac{\partial \sigma_{11}}{\partial x_1} +\frac{\partial \sigma_{12}}{\partial x_2}\\ \frac{\partial \sigma_{21}}{\partial x_1} +\frac{\partial \sigma_{22}}{\partial x_2} \end{pmatrix}.
$$ For any $K\in\cT$ and $E\in\cE(\Omega)$, define the volume and edge error estimators by \begin{align*} \eta_K^2&:=h_K^4\big{\|} \Delta^2 u_C -{\rm curl}(\Delta u_C\nabla u_C) -f\big{\|}^2_{L^2(K)}, \\ \eta_E^2&:=h_E^{3}\left\| \left [\divc (D^2 u_C)\right]_E \cdot\nu_E\right\|^2_{L^2(E)} +h_E\left\|\left[\Delta u_C \right]_E\right\|^2_{L^2(E)}\\ &\quad+h_E^{3}\left\|[\Delta u_C\nabla u_C]_E\cdot\tau_E\right\|^2_{L^2(E)} \end{align*} with the unit tangential (resp. normal) vector $\tau_E$ (resp. $\nu_E$) along the edge $E\in\cE$. Recall ${\rm osc}_{m}(\bullet,\cT):=\|h_{\cT}^2(I-\Pi_m)\bullet\|_{L^2(\Omega)}$ for $m\in\mathbb{N}_0$ in all fourth-order applications. \begin{thm}[a posteriori] \label{reliability_C_NS} If $u\in V$ is a regular solution to $N(u)=0$ and $m\in\bN_0$, then there exist positive $\epsilon,\delta, C_{\rm rel}$, and $C_{\rm eff}$ such that, for any $\cT \in \bT(\delta)$, the unique discrete solution $u_C\in V_C$ to \eqref{NS_dis} with $\trinl u-u_C\trinr<\epsilon$ satisfies \begin{align} C_{\rm rel}^{-2}\trinl u-u_C\trinr^2&\leq\sum_{K\in\cT}\eta_K^2 +\sum_{E\in\cE (\Omega)}\eta_E^2\leq C_{\rm eff}^2(\trinl u-u_C\trinr^2 +{\rm osc}_{m}^2(f)). \label{reliability_est_C_NS} \end{align} \end{thm} The proof utilizes a quasiinterpolation operator. \begin{lem}[quasiinterpolation] \label{interpolation_BFS} For any $\cT\in\bT$ there exists an interpolation operator $\Pi_h: H^2_0(\Omega)\to V_C$ such that, for $0\le k\le m\le 2 $ and $\varphi\in H^2_0(\Omega)$, \begin{equation*} \|\varphi-\Pi_h\varphi\|_{H^k(K)}\lesssim h_K^{m-k}|\varphi|_{H^m(\omega_K)} \end{equation*} holds for any triangle $K\in\cT$ and the interior $\omega_K$ of the union $\overline{\omega_K}$ of the triangles in $\cT$ sharing a vertex with $K$. \end{lem} \begin{proof} This follows from \cite{Clement75} once the required scaling properties of the degrees of freedom are clarified. The Argyris or the HCT finite element schemes involve some normal derivative and do {\em not} form an affine finite element family, but an {\em almost affine} finite element family \cite{Ciarlet}. It is by now understood that this guarantees the appropriate scaling properties. This is explicitly calculated in \cite{Ciarlet} for the HCT finite elements and also follows for the Argyris finite elements, as employed e.g. in \cite[p. 995]{brennermathcomp}. Since the result is widely accepted \cite{Verfurth}, further details are omitted. \end{proof} \noindent{\it Proof of Theorem~\ref{reliability_C_NS}.} Continue the notation of the proof of Theorem~\ref{apriori_NS_est_C} with $X=Y=V$, $X_h=Y_h=V_C$, $Q=1$, etc. and recall that, for sufficiently small $\delta$, Theorem~\ref{apriori_NS_est_C} guarantees $\trinl u-u_C\trinr < \beta/\|\Gamma\|$. Hence Corollary~\ref{coraposteriori} implies (for $v_h\equiv u_C$) \begin{align}\label{F1_NS} \trinl u-u_C\trinr\le C_{\text{\rm rel},1} \|N(u_C)\|_{V^*}. \end{align} With $\Pi_h$ from Lemma~\ref{interpolation_BFS}, some appropriate $\phi\in V$ with $\trinl \phi\trinr=1$ satisfies \begin{equation}\label{CFEM_NS_res} \|N(u_C)\|_{V^*}= N(u_C;\phi)= N(u_C;\phi-\Pi_h\phi). \end{equation} Two successive integrations by parts result in \begin{align} &a(u_C,\phi-\Pi_h\phi)=\sik (\Delta^2 u_C) (\phi-\Pi_h\phi) \dx+\sie\left[\Delta u_C\right]_E \nabla(\phi-\Pi_h\phi)\cdot \nu_E \ds \notag \\ &\qquad\qquad\qquad\qquad- \sie (\phi-\Pi_h\phi)\left[\divc(D^2 u_C)\right]_E \cdot\nu_E\ds.
\label{1} \end{align} An integration by parts in the nonlinear term $\Gamma(u_C,u_C,\phi-\Pi_h\phi)$ leads to \begin{align} &\Gamma(u_C,u_C,\phi-\Pi_h\phi)=\sum_{K\in\cT} \int_K \Delta u_C\nabla u_C\cdot{\rm Curl}(\phi-\Pi_h\phi)\dx \label{2}\\ &=\sum_{K\in\cT}\int_K (\phi-\Pi_h\phi){\rm curl}(-\Delta u_C\nabla u_C)\dx +\sum_{E\in\cE}\int_{E}(\phi-\Pi_h\phi) [\Delta u_C\nabla u_C]_E\cdot\tau_E\ds.\notag \end{align} Those identities show that \eqref{CFEM_NS_res} is equal to a sum over edges of jump contributions plus a sum over triangles of volume contributions; the latter is \[ \sik\left( \Delta^2 u_C-{\rm curl} (\Delta u_C\nabla u_C)-f \right) (\phi-\Pi_h\phi) \dx \lesssim \sum_{K\in\cT} \eta_K h_K^{-2} \| \phi- \Pi_h \phi \|_{L^2(K) } \] and is controlled by standard manipulations based on Lemma~\ref{interpolation_BFS} (with $k=0$ and $m=2$) and the finite overlap of the patches $(\omega_K:K\in\cT)$. The jump contributions include some trace inequality as well and are otherwise standard as in linear problems that involve the bi-Laplacian. For instance, the nonlinear jump contribution for each edge $E$ reads \[ \int_{E}(\phi-\Pi_h\phi) [\Delta u_C\nabla u_C]_E\cdot\tau_E\ds =\int_{E}(\phi-\Pi_h\phi) [\Delta u_C]_E \, \nabla u_C\cdot\tau_E\ds \] in the case of an interior edge $E$ shared by the two triangles $T_+$ and $T_-$ that form the patch $\omega_E$ and vanishes in the case of a boundary edge $E\subset\partial\Omega$ (with $\phi=\Pi_h\phi=0$ on $\partial\Omega$). The continuity of $\nabla u_C$ leads to the previous equality. This term is controlled by the residual $h_E^{3/2} \| [\Delta u_C]_E \, \nabla u_C\cdot\tau_E\|_{L^2(E)}$ times \[ h_E^{-3/2} \|\phi-\Pi_h\phi\|_{L^2(E)} \lesssim h_E^{-2} \| \phi-\Pi_h\phi\|_{L^2(T_\pm)} +h_E^{-1} \| \phi-\Pi_h\phi \|_{H^1(T_\pm)} \lesssim | \phi |_{H^2(\omega_{T_\pm})} \] with a trace inequality on one of the two triangles $T_\pm$ in the first and Lemma~\ref{interpolation_BFS} (for $k=0,1$) in the second estimate. The remaining terms are controlled in a similar way. Some words are in order about the term $ h_E^{3/2} \| [\Delta u_C]_E \, \nabla u_C\cdot\tau_E\|_{L^2(E)} $, in which an inverse inequality along the interior edge $E=\partial T_+\cap\partial T_-$ (shared by $T_\pm\in\cT$) of the polynomial $ \nabla u_C\cdot\tau_E$ (unique as a trace from $T_\pm$) shows $\| \nabla u_C\cdot\tau_E \|_{L^\infty(E)}\lesssim h_E^{-1} \| u_C \|_{L^\infty(E)}$. This and the global continuous embedding $H^2(\Omega)\hookrightarrow L^{\infty}(\Omega)$ lead to \[ h_E^{3/2} \| [\Delta u_C]_E \, \nabla u_C\cdot\tau_E\|_{L^2(E)} \lesssim h_E^{1/2} \| [\Delta u_C]_E \|_{L^2(E)} \trinl u_C \trinr. \] Since $ \trinl u_C \trinr\lesssim 1$, the nonlinear edge contribution is controlled by another contribution $h_E^{1/2} \| [\Delta u_C]_E \|_{L^2(E)} $ to $\eta_E$; in other words, this nonlinear edge contribution can be omitted. The overall strategy in the efficiency proof follows the bubble-function technique due to Verf\"urth \cite{Verfurth}. The emphasis in this paper is on the nonlinear contributions and on the interaction of the various nonlinear terms with the volume estimator. We give only two examples to illustrate some details and start with the cubic bubble-function $b_K\in W^{1,\infty}_0(K)$ (the product of all three barycentric coordinates times $27$) of the triangle $K\in \cT$ with $0\le b_K\le\max b_K=1$.
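In explicit terms, if $\lambda_1,\lambda_2,\lambda_3\in P_1(K)$ denote the barycentric coordinates of $K$, then \[ b_K=27\,\lambda_1\lambda_2\lambda_3\in P_3(K) \quad\text{with}\quad b_K|_{\partial K}=0, \] and $b_K$ attains its maximum $1$ at the centroid of $K$, where $\lambda_1=\lambda_2=\lambda_3=1/3$. (This explicit formula is recalled merely for illustration and is not needed verbatim below.)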
Let $f_K:=\Pi_m f\in P_m(K)$ be the $L^2(K)$ orthogonal polynomial projection of $f\in L^2(K)$ for degree $m\in \mathbb{N}_0$ so that $ \|f-f_K\|_{L^2(K)}=h_K^{-2}{\rm osc}_m(f,K)$. Since $ g:= \Delta^2 u_C-{\rm curl} (\Delta u_C\nabla u_C)-f_K$ is a polynomial of degree at most $\max\{k-4,(k-2)+(k-1)-1,m\} $ (recall that $k$ is the degree of the finite element functions), an inverse estimate reads $\| g\|_K^2\lesssim \int_K\rho_K g\dx$ for the test function $\rho_K:=b_K^2g\in H^2_0(K)\subset V$. The above integrations by parts \eqref{1}-\eqref{2} with the test function $ \phi-\Pi_h\phi $ replaced by $\rho_K$ may be restricted to $K$ because the supports of $b_K$ and $\nabla b_K$ are contained in $\overline{K}$. This leads to the first equality in \begin{align*} \int_K g \rho_K\dx&=a(u_C, \rho_K)+ \Gamma(u_C,u_C,\rho_K) -\int_K\rho_Kf_K\dx\\ &=a(u_C-u, \rho_K)+ \Gamma(u_C,u_C,\rho_K) - \Gamma(u,u,\rho_K) +\int_K\rho_K(f-f_K)\dx \end{align*} and \eqref{NS_weak} leads to the second. Except for the last term (that leads to oscillations in the end), elementary algebra, $ \Gamma(u,u,\rho_K)- \Gamma(u_C,u_C,\rho_K) =\Gamma(u-u_C,u,\rho_K)+\Gamma(u_C,u-u_C,\rho_K)$, Cauchy, and H\"older inequalities bound the above terms up to a constant by \begin{equation} \label{NS_trilin} \| u-u_C\|_{H^2(K)}\left( (1 + | u |_{W^{1,\infty}(\Omega)}) \| \rho_K\|_{H^2(K)} +| u_C|_{H^2(\Omega)} | \rho_K|_{W^{1,\infty}(K)}\right). \end{equation} The inverse estimates $ \| \rho_K\|_{H^2(K)} + | \rho_K|_{W^{1,\infty}(K)} \lesssim h_K^{-2} \| \rho_K\|_{L^2(K)} \le h_K^{-2} \| g \|_{L^2(K)}$ lead in the preceding estimates (after division by $h_K^{-2} \| g \|_{L^2(K)}$) to \[ h_K^2 \| g\|_{L^2(K)} \lesssim \| u-u_C\|_{H^2(K)}+{\rm osc}_m(f,K). \] This and a triangle inequality prove efficiency $\eta_K\lesssim \| u-u_C\|_{H^2(K)}+{\rm osc}_m (f,K)$ of the volume contribution. The patch $\omega_E$ of an interior edge $E\in \cE$ is the interior of the union of the two neighbouring triangles in $\cT$ sharing the edge $E$ and may be a non-convex quadrilateral. Observe that the shape-regularity of $\cT$ implies the shape-regularity of the largest rhombus $R$ contained in the patch $\omega_E$ that has $E$ as one diagonal. Let $b_R\in H^1_0(R)\subset H^1_0(\omega_E)$ be the (piecewise quadratic) edge-bubble function of $E$ in $R$ (with $0\le b_R\le \max b_R=1$) and let $\Phi_E\in P_1(R)$ be the affine function that vanishes along $E$ and satisfies $\nabla \Phi_E = h_E^{-1}\nu_E$. Then $b_E:= \Phi_E b_R^3\in H^2_0(R)\subset H^2_0(\omega_E)$ satisfies $\nabla b_E \cdot\nu_E= h_E^{-1} b_{R}^3$ along $E$ and $ \| b_E\|_{L^{\infty}(\omega_E)}\lesssim 1$ as in \cite{Georgoulis2011}. Extend $[\Delta u_{C}]_E$ constantly in the normal direction to $E$ and set $\varrho_E:=h_E^2 [\Delta u_{C}]_E\, b_E\in H^2_0(R)\subset H^2_0(\omega_E)$. An inverse estimate in the beginning, $\nabla \varrho_E\cdot \nu_E=h_E b_R^3[\Delta u_C]_E$ on $E$, and piecewise integrations by parts lead to \begin{align*} &h_E \|[\Delta u_{C}]_E\|_{L^2(E)}^2 \lesssim h_E\|b_{R}^{{3/2}}[\Delta u_{C}]_E\|_{L^2(E)}^2 =\int_E \nabla\varrho_E\cdot\nu_E [\Delta u_{C}]_E\ds \\ &\qquad =\int_{\omega_E}(\Delta u_{C}\Delta \varrho_E -\varrho_E\Delta^2_{\text{pw}} u_{C})\dx. \end{align*} The test-function $\varrho_E$ in \eqref{NS_weak} shows that the right-hand side reads \[ a(u_C-u,\varrho_E)+\Gamma(u_C,u_C,\varrho_E)-\Gamma(u,u,\varrho_E) + \int_{\omega_E} \hspace{-2mm}(f-\Delta_{\text{pw}} ^2 u_{C}+{\rm curl}_{\text{pw}} (\Delta u_C\nabla u_C))\varrho_E\dx.
\] A Cauchy inequality in the first term, the arguments for \eqref{NS_trilin} in the second, and the bound $(\eta_{T_+}+\eta_{T_-})h_E^{-2} \| \varrho_E \|_{L^2(\omega_E)}$ for the third term lead to \[ h_E \|[\Delta u_{C}]_E\|_{L^2(E)}^2 \lesssim \left( \| u-u_C\|_{H^2(\omega_E)} +\eta_{T_+}+\eta_{T_-}\right) \left(h_E^{-2} \| \varrho_E \|_{L^2(\omega_E)}+ |\varrho_E |_{H^2(\omega_E)} \right). \] The function $\varrho_E$ is polynomial in each of the two open triangles in $R\setminus (E\cup\partial R)$ and allows for inverse estimates. Since $|b_E|\lesssim 1$ a.e., this proves that the last factor is controlled by \[ h_E^{-2} \|\varrho_E\|_{L^2(\omega_E)}\lesssim \|[\Delta u_{C}]_E\|_{L^2(\omega_E)} \lesssim h_E^{1/2}\|[\Delta u_{C}]_E\|_{L^2(E)} \] for the constant extension of $[\Delta u_{C}]_E$ in the direction of $\nu_E$ in the last step. The combination of the previous two displayed inequalities with the above efficiency of the volume contribution concludes the proof of \[ h_E^{1/2} \|[\Delta u_{C}]_E\|_{L^2(E)}\lesssim \|u-u_{C}\|_{H^2(\omega_E)}+\eta_{T_+}+\eta_{T_-} \lesssim \|u-u_C\|_{H^2(\omega_E)}+{\rm osc}_m (f,\{T_+,T_-\}). \] The efficiency of $h_E^{3}\left\| \left[\divc (D^2 u_C)\right]_E \cdot\nu_E\right\|^2_{L^2(E)}$ is also established by adapting the corresponding arguments in \cite{Georgoulis2011}; hence the straightforward details are omitted. \qed \subsection{Morley FEM} The nonconforming {\it Morley element space} $V_h:=\cM(\cT)$ associated with the triangulation $\cT$ of the polygonal domain $\Omega\subset\mathbb{R}^2$ into triangles reads \[ \cM(\cT):=\left\{ v_M\in P_2(\cT) \; \vrule\;\; \begin{aligned} & v_M \text{ is continuous at } \cN(\Omega) \text{ and vanishes at } \cN(\partial \Omega), \\& \int_{E}\left[\frac{\partial v_M}{\partial \nu}\right]_E\ds=0 \text{ for all } E \in \cE (\Omega), \\ & \int_{E}\frac{\partial v_M}{\partial \nu}\ds=0 \text{ for all }E\in \cE (\partial\Omega) \end{aligned} \right\}. \] The discrete formulation seeks $ u_M\in \cM(\cT)$ such that \begin{equation}\label{NS_dis_NC} N_h(u_M;v_M):=a_{\text{pw}}( u_{M}, v_{M})-F( v_{M}) +\Gamma_{\text{pw}}( u_{M}, u_{M}, v_{M})=0\fl v_{M}\in \cM(\cT). \end{equation} Here and throughout this section, $ \widehat{V}:=V+ \cM(\cT)$ is endowed with the mesh-dependent norm $\displaystyle \trinl\widehat{\varphi}\trinr_{\text{pw}} :=\sqrt{a_{\text{pw}}(\widehat{\varphi},\widehat{\varphi})}$ for $\widehat{\varphi} \in \widehat{V}$ and, for all $\eta,\chi,\phi\in \widehat{V}$, \begin{align}\label{eqccnerwandlast1234a} a_{\text{pw}}(\eta,\chi) &:=\sum_{K\in\cT}\int_K D^2 \eta:D^2\chi\dx,\\ \label{eqccnerwandlast1234b} \Gamma_{\text{pw}}(\eta,\chi,\phi) &:= \hspace{-0.2cm}\sum_{T\in\cT}\int_T \Delta \eta\left(\frac{\partial \chi}{\partial x_2}\frac{\partial \phi}{\partial x_1}-\frac{\partial \chi}{\partial x_1}\frac{\partial \phi}{\partial x_2}\right)\dx. \end{align} The a~priori error estimate means best-approximation up to first-order terms and so refines \cite{CN86,CN89} for the Morley FEM and generalizes them to any regular solution. \begin{thm}[a priori]\label{apriori_NS_est_NCFEM} If $u\in H^2_0(\Omega)$ is a regular solution to $N(u)=0$, then there exist positive $\epsilon$, $\delta$, and $\rho$ such that {\bf (A)}-{\bf (C)} hold for all $\cT \in \bT(\delta)$ with \begin{align}\label{eqnapriori_NS_est_NCFEM} {\rm apx}(\cT) \lesssim \ensuremath{| \!| \! |}u-I_{ M} u\ensuremath{| \!| \! |}_{\text{\rm pw}} + \| h_\cT \Delta u \nabla u \| + \text{\rm osc}_0(f,\cT)\lesssim h_{\rm max}^{s}.
\end{align} \end{thm} The proof requires the following four lemmas. \begin{lem}[Morley interpolation \cite{HuShi_Morley_Apost,CCDGJH14}] \label{Morley_Interpolation} For any $v\in V+ \cM(\cT)$, the Morley interpolation $I_M(v)\in \cM(\cT)$ defined by $$ (I_M v)(z)=v(z) \text{ for any } z\in \cN(\Omega) \text{ and } \int_E\frac{\partial I_M v}{\partial \nu_E}\ds=\int_E\frac{\partial v}{\partial \nu_E}\ds \text{ for any } E\in \cE $$ satisfies (a) $D^2_{\text{\rm pw}} I_M =\Pi_0 D^2$ and (b) \begin{equation*} \|h_K^{-2}(1-I_M)v \|_{L^2(K)}+\|h_K^{-1}\nabla(1-I_M) v\|_{L^2(K)} +{\|D^2 I_Mv\|_{L^2(K)}}\lesssim \|D^2v\|_{L^2(K)}.\qquad\qed \end{equation*} \end{lem} Let $H^2(\cT(\omega_K))$ denote the piecewise $H^2$ functions on the neighbourhood $\omega_K$, piecewise with respect to the triangulation $\cT(\omega_K)$ of all triangles $T$ with zero distance to $K\in\cT$. Let $|\bullet|_{H^2(\cT(\omega_K))}$ be the corresponding seminorm as the local contributions of $\trinl \bullet \trinr_{\text{pw}}$ associated with $\omega_K$. \begin{lem}[enrichment \cite{BSZ2013,DG_Morley_Eigen}] \label{hctenrich} There exists an enrichment operator $E_M:\cM(\cT)\to V$ such that $\varphi_M\in \cM(\cT)$ satisfies \begin{align*} &(a)\quad \sum_{m=0}^2 h_K^{2m}|\varphi_M-E_M\varphi_M|_{H^m(K)}^2 \lesssim \: h_{K}^4|\varphi_M|_{H^2(\cT(\omega_K))}^2\fl K\in\cT; \\ &(b)\quad \| h_{\cT}^{-2}(\varphi_M-E_M\varphi_M)\|_{\lt}^2\lesssim \sum_{E\in\cE} h_E \|[D^2\varphi_M]_E\tau_E\|_{L^2(E)}^2\\ &\hspace*{4cm}\lesssim \trinl \varphi_M-E_M\varphi_M\trinr_{\text{\rm pw}}^2\leq\Lambda \min_{\varphi\in V}\| D_{h}^2(\varphi_M-\varphi)\|_{\lt}^2; \\ &(c) \quad I_ME_M\varphi_M=\varphi_M,\quad\text{and}\quad \varphi_M-E_M\varphi_M\perp P_0(\cT)\text{ in }L^2(\Omega).\qquad\qquad\qed \end{align*} \end{lem} The Sobolev embeddings for conforming functions depend on the domain $\Omega$, while their discrete counterparts for nonconforming functions require particular attention. \begin{lem}[discrete embeddings]\label{lemmadiscreteembeddings} For any $1\le p<\infty$, there exists a constant $C=C(\Omega,p,\sphericalangle \cT)$ (which depends on $p$, $\Omega$, and the shape regularity of $\cT$) with \[ \| \widehat{v} \|_{L^{\infty}(\Omega)}+\| \widehat{v} \|_{W^{1,p}(\cT)}\le C \trinl \widehat{v} \trinr_{\text{\rm pw}}\quad\text{for all }\widehat{v} \in H^2_0(\Omega)+ \cM(\cT). \] \end{lem} \begin{proof}The main observation is that the enrichment operator $E_M$ from Lemma~\ref{hctenrich} maps into the HCT finite element space plus squared bubble-functions \cite{DG_Morley_Eigen}; so $v_M-E_Mv_M$ is a piecewise polynomial of degree at most $6$ for any $v_M\in \cM(\cT)$ (with respect to some refinement of $\cT$, where each triangle $T$ is divided into three sub-triangles by connecting each vertex with its center of inertia). This leads to inverse estimates such as \[ | v_M-E_Mv_M |_{W^{1,\infty}(T)}\lesssim h_T^{-1} | v_M-E_Mv_M |_{H^{1}(T)} \lesssim h_T^{-2} \| v_M-E_Mv_M \|_{L^{2}(T)}. \] Lemma~\ref{hctenrich}.b shows for $v\in H^2_0(\Omega)$ that the right-hand side is controlled by \[ \| h_\cT^{-2} (v_M-E_Mv_M) \|_{L^{2}(\Omega)}\lesssim \trinl v_M-E_Mv_M \trinr_{\text{pw}}\le \Lambda \min_{\varphi\in V}\trinl v_M-\varphi \trinr_{\text{pw}}\le \Lambda \trinl v+v_M \trinr_{\text{pw}}. \] Since $T\in \cT$ is arbitrary, this proves \begin{equation}\label{eqnewccstra1} | v_M-E_Mv_M |_{W^{1,\infty}(\Omega,\cT)}\lesssim\trinl v+v_M \trinr_{\text{pw}}.
\end{equation} Since $v_M-E_Mv_M$ is Lipschitz continuous on each triangle $T\in\cT$ with Lipschitz constant at most $| v_M-E_Mv_M |_{W^{1,\infty}(\Omega,\cT)} $ and vanishes at the vertices of $T$, \[ \| v_M-E_Mv_M \|_{L^{\infty}(\Omega)} \le h_{\max} | v_M-E_Mv_M |_{W^{1,\infty}(\Omega,\cT)} \] holds for the maximal mesh-size $h_{\max} \le \text{diam}(\Omega)$. This and \eqref{eqnewccstra1} imply (with $C_1\approx 1$) \[ \| v_M-E_Mv_M \|_{L^{\infty}(\Omega)}\le C_1 \trinl v+v_M\trinr_{\text{pw}}. \] The boundedness of the continuous 2D Sobolev embedding $ H^2(\Omega)\hookrightarrow L^{\infty} (\Omega) $ leads to $ \| \bullet \|_{ L^{\infty}(\Omega)}\le C_2\, \trinl \bullet \trinr$ in $H^2_0(\Omega)$. Consequently, with a triangle inequality in the beginning, \begin{align*} \|v+v_M \|_{L^{\infty}(\Omega)} &\le \|v_M-E_M v_M \|_{L^{\infty}(\Omega)}+ \| v+ E_Mv_M \|_{L^{\infty}(\Omega)}\\ & \le C_1\trinl v+v_M\trinr_{\text{pw}}+ C_2 \trinl v+ E_M v_M \trinr. \end{align*} The triangle inequality and Lemma~\ref{hctenrich}.b (again with $\varphi=-v$) show \begin{equation}\label{eqnewccstra3} \trinl v+E_M v_M\trinr \le \trinl v+v_M \trinr_{\text{pw}}+ \trinl v_M -E_Mv_M \trinr_{\text{pw}} \lesssim \trinl v+v_M \trinr_{\text{pw}}. \end{equation} The combination of \eqref{eqnewccstra3} with the previously displayed estimate shows the first assertion $\|v+v_M \|_{L^{\infty}(\Omega)} \lesssim \trinl v+v_M \trinr_{\text{pw}}$. The proof of the second assertion is similar with \eqref{eqnewccstra1}-\eqref{eqnewccstra3}. The boundedness of the continuous 2D Sobolev embedding $ H^2(\Omega)\hookrightarrow W^{1,p}(\Omega) $ leads to $ | \bullet |_{W^{1,p}(\Omega)}\le C(p,\Omega)\, \trinl \bullet \trinr$ in $H^2_0(\Omega)$. Consequently, \begin{align*} | v+v_M |_{W^{1,p}(\Omega,\cT)} &\le | v+E_M v_M |_{W^{1,p}(\Omega)} +| v_M -E_Mv_M|_{W^{1,p}(\Omega,\cT)}\\ &\le C(p,\Omega) \trinl v+E_M v_M \trinr+ |\Omega|^{1/p} | v_M -E_Mv_M|_{W^{1,\infty}(\Omega,\cT)} \end{align*} with the area $|\Omega|\approx 1\approx C(p,\Omega)$. Recall \eqref{eqnewccstra1} and \eqref{eqnewccstra3} in the end to control the previous upper bound in terms of $ \trinl v+E_M v_M \trinr+ \trinl v+ v_M \trinr_{\text{pw}}\lesssim \trinl v+ v_M \trinr_{\text{pw}}$. This concludes the proof of the second assertion $| v+v_M |_{W^{1,p}(\Omega,\cT)}\lesssim \trinl v+ v_M \trinr_{\text{pw}}$. \end{proof} \begin{rem}[boundedness]\label{remarkccnew12324boundedness} The bound for $a_{\text{pw}}$ is immediate from \eqref{eqccnerwandlast1234a} for the norm $\trinl\bullet\trinr_{\text{pw}}$ in $ \widehat{V}\equiv V+ \cM(\cT)$. The bound $\| \Gamma_{\text{pw}} \| =\sqrt{2} C(\Omega,4,\sphericalangle \cT)^2 $ in \[ | \Gamma_{\text{pw}}(\widehat \eta,\widehat{ \chi},\widehat{\phi})|\le \| \Gamma_{\text{pw}} \| \trinl\widehat \eta \trinr_{\text{pw}} \trinl\widehat{ \chi} \trinr_{\text{pw}} \trinl\widehat\phi \trinr_{\text{pw}} \quad\text{for all } \widehat \eta , \widehat{ \chi}, \widehat{\phi}\in \widehat{V}\equiv V+ \cM(\cT) \] follows from \eqref{eqccnerwandlast1234b} with H\"older inequalities and Lemma~\ref{lemmadiscreteembeddings}. \end{rem} \begin{lem}[\cite{BSZ2013}]\label{EnrichSmooth} For $1/2<s\le 1$ there exists a positive constant $C$ such that any $\eta\in H^{2+s}(\Omega)$ and $\varphi_M\in \cM(\cT)$ satisfy $\displaystyle a_{\text{pw}}(\eta,\varphi_M-E_M\varphi_M)\leq Ch_{\max}^{s}\|\eta\|_{H^{2+s}(\Omega)}\trinl \varphi_M\trinr_{\text{\rm pw}}. $ \qed \end{lem} {\it Proof of Theorem~\ref{apriori_NS_est_NCFEM}}.
Set $X=Y=V$, $X_h=Y_h=V_h$, $\widehat{X}=V+V_h$, $\widehat{a}(\bullet,\bullet):=a_{\text{pw}}(\bullet, \bullet)$, $ \widehat{b}(\bullet,\bullet):=\Gamma_{\text{pw}}( u,\bullet,\bullet)+\Gamma_{\text{pw}}(\bullet,u,\bullet)$ and $P=I_M$, $Q={\mathcal C}= E_M$. The regularity $u\in H^{2+s}(\Omega)$ of Subsection~\ref{subsectContinuousproblemNS} with $s>1/2$ allows for the bounded global Sobolev embeddings $H^{2+s}(\Omega)\hookrightarrow W^{2,4}(\Omega)\hookrightarrow W^{1,\infty}(\Omega)$. This and Lemma~\ref{lemmadiscreteembeddings} lead for $\widehat{\theta}\in \widehat{X}$ and $\phi\in H^1(\Omega)$ to \begin{align} |\Gamma_{\text{pw}}(u,\widehat{\theta},\phi)|+|\Gamma_{\text{pw}}(\widehat{\theta},u,\phi)| & \lesssim \left( \|u\|_{W^{2,4}(\Omega)}\|\widehat{\theta}\|_{W^{1,4}(\Omega,\cT)} + \| u\|_{W^{1,\infty}(\Omega)}\ensuremath{| \!| \! |}\widehat{\theta}\ensuremath{| \!| \! |}_{\text{pw}} \right)\|\phi\|_{H^1(\Omega)} \notag\\ & \lesssim\|u\|_{H^{2+s}(\Omega) }\ensuremath{| \!| \! |}\widehat{\theta}\ensuremath{| \!| \! |}_{\text{pw}}\|\phi\|_{H^1(\Omega)}. \label{Gamma2bdd} \end{align} For $\theta_M\in \cM(\cT)$ with $\trinl\theta_M\trinr_{\text{pw}}=1$, the aforementioned estimates imply that $\widehat{b}(\theta_M,\bullet)\in H^{-1}(\Omega)$ and so the solution $z\in V$ to the biharmonic problem \begin{equation*} a(z,\phi)=\widehat{b}(\theta_M,\phi)\fl \phi\in V \end{equation*} satisfies $z\in H^{2+s}(\Omega)$ and $\| z \|_{H^{2+s}(\Omega)}\lesssim 1$ \cite{BlumRannacher}. The regularity $z\in H^{2+s}(\Omega)$ and Lemma~\ref{EnrichSmooth} (resp. Lemma~\ref{Morley_Interpolation}) imply {\bf (H1)} (resp. {\bf (H2)}) with $\delta_1\lesssim h_{\rm max}^s$ (resp. $\delta_2\lesssim h_{\rm max}^s$). The estimate \eqref{Gamma2bdd} and Lemma~\ref{hctenrich} verify {\bf (H3)} with $\delta_3\lesssim h_{\rm max}$. Lemma~\ref{hctenrich}.b leads to {\bf (H4)} with $\Lambda_4=\Lambda$. For any $y_M\in \cM(\cT)$ with $\ensuremath{| \!| \! |}y_M\ensuremath{| \!| \! |}_{\text{pw}}=1$, Lemma~\ref{EnrichSmooth} guarantees \[ a_{\text{pw}}(u, y_M-E_My_M)\lesssim h_{\max}^s \| u\|_{H^{2+s}(\Omega)}\approx h_{\max}^s, \] while Lemma~\ref{hctenrich} shows \[ F(y_M-E_My_M)\lesssim h_{\max}^2\| f\|\lesssim h_{\max}^s. \] This implies {\bf (H5)} with $\delta_5\lesssim h_{\max}^s$. Choose $x_h=I_M u$ so that {\bf (H6)} holds with $\delta_6\lesssim h_{\rm max}^{s}$. In conclusion, for sufficiently small mesh-size $h_{\max}$, the discrete inf-sup inequality of Theorem~\ref{dis_inf_sup_thm} holds with $\beta_h\ge \beta_0>0$. Moreover, Theorems~\ref{thm3.1} and \ref{err_apriori_thm_vke} apply and prove {\bf (A)}-{\bf (C)}. To compute ${\rm apx}(\cT)=\| \widehat{N}(u)\|_{\cM(\cT)^*} $, let $\phi_M \in \cM(\cT)$ satisfy $\ensuremath{| \!| \! |}\phi_M\ensuremath{| \!| \! |}_{\text{pw}}=1$ and ${\rm apx}(\cT) = \widehat{N}(u; \phi_M) $. Since $N(u; E_M\phi_M)=0$, the difference $\widehat{\psi} :=\phi_M-E_M \phi_M\in\widehat{V}$ satisfies \begin{align*} {\rm apx}(\cT) &= \widehat{N}(u;\widehat{\psi} ) = {a}_{\text{\rm pw}} (u- I_M u, \widehat{\psi})-F((1-\Pi_0)\widehat{\psi})+ \Gamma_{\text{\rm pw}}(u,u,\widehat{\psi}) \end{align*} with Lemma~\ref{hctenrich}.c for ${a}_{\text{\rm pw}} (I_M u, \widehat{\psi})=0$ and $\Pi_0 \widehat{\psi}=0$ a.e. in the last step. This, the finite overlap of $(\omega_K:K\in\cT)$ in Lemma~\ref{hctenrich}.a, and Lemma~\ref{hctenrich}.b for $\ensuremath{| \!| \! |}\widehat{\psi}\ensuremath{| \!| \! |}_{\text{pw}}\lesssim 1$ lead to \eqref{eqnapriori_NS_est_NCFEM}.
\qed \subsection{A posteriori error estimate} For any $K\in\cT$ and $E\in\cE$, define the volume and edge error estimators by \begin{align*} &\eta_K^2:=h_K^4\|{\rm curl}(-\Delta u_M\nabla u_M)-f\|_{L^2(K)}^2\text{ and }\\ &\eta_E^2 :=h_E\|\left[D^2 u_M\right]_E\tau_E\|_{L^2(E)}^2 +h_E^{3}\|[\Delta u_M\nabla u_M]_E\cdot\tau_E\|_{L^2(E)}^2 +h_E^{3} \|\{\Delta u_M\nabla u_M\}_E\cdot\tau_E\|_{L^2(E)}^2. \end{align*} Here and throughout this section, the average of $\widehat{\phi}\in\widehat{X}$ across the interior edge $E=\partial K_+\cap \partial K_-\in \cE(\Omega)$ shared by two triangles $K_\pm\in\cT$ reads $\{\widehat{\phi}\}_E:=(\widehat{\phi}|_{K_+}+\widehat{\phi}|_{K_-})/2$, while $\{\widehat{\phi}\}_E:=\widehat{\phi}|_E$ along any boundary edge $E\in \cE(\partial\Omega)$. \begin{thm}[a posteriori]\label{reliability_NS_NC} If $u\in V$ is a regular solution to \eqref{NS_weak}, then there exist positive $\delta,\epsilon$, and $C_{\rm rel}$ such that, for any $\cT\in\bT(\delta)$, the discrete solution $u_{M}\in \cM(\cT)$ to \eqref{NS_dis_NC} with $\trinl u-u_M\trinr_{\text{\rm pw}}\le \epsilon$ satisfies \begin{align*} C_{\rm rel}^{-2}\trinl u-u_{M}\trinr_{\text{\rm pw}}^2&\leq \sum_{K\in\cT}\eta_K^2+\sum_{E\in\cE}\eta_E^2. \end{align*} \end{thm} \begin{proof} Let $u_M$ be the solution to \eqref{NS_dis_NC} close to $u$ and apply Theorem~\ref{abs_res_thm} with $X=Y=V,$ $X_h=Y_h=V_h$, $v_h= u_M$, and $Q:=E_M$ from Lemma~\ref{hctenrich}. Suppose that $\epsilon,\delta$ satisfy Theorem~\ref{apriori_NS_est_NCFEM} and, if necessary, are chosen smaller such that, for any $\cT\in\bT(\delta)$, exactly one discrete solution $u_{\rm M}\in X_M$ to \eqref{NS_dis_NC} satisfies $\trinl u-u_{ M} \trinr_{\text{\rm pw}} \le \epsilon \le \beta/( 2(1+\Lambda )\|\Gamma\|)$. Lemma \ref{hctenrich}.b implies $\trinl u_M-E_M u_M \trinr_{\text{\rm pw}} \le \Lambda \trinl u -u_M \trinr_{\text{\rm pw}} \le \Lambda \epsilon$. This and triangle inequalities show \begin{align*} \trinl E_M u _M\trinr + \trinl u_M\trinr_{\text{pw}} &\le \trinl u_M-E_M u_M \trinr_{\text{pw}} + 2 \trinl u_M\trinr_{\text{pw}} \le 2 \trinl u \trinr +(2+ \Lambda)\epsilon=:M; \\ \trinl u- E_Mu_M\trinr &\le \trinl u- u_M\trinr_{\text{pw}}+ \trinl u_M- E_M u_M\trinr_{\text{pw}} \le (1+\Lambda)\epsilon \le \beta/( 2\|\Gamma\|). \end{align*} Consequently, the abstract residual \eqref{relib_eqn} in Theorem~\ref{abs_res_thm} implies \begin{align}\label{eqccforold4.14} \trinl u- u_M\trinr_{\text{pw}}\le 2\beta^{-1} \|N(E_M u_M)\|_{ V^*} +\trinl u_M- E_M u_M\trinr_{\text{pw}}. \end{align} There exists some $\phi\in V$ with $\trinl \phi\trinr=1$ and \begin{align*} &\|N(E_Mu_M)\|_{V^*}= N(E_M u_M;\phi)= a(E_M u_M,\phi)-F(\phi) +\Gamma(E_M u_M,E_M u_M,\phi)\notag\\ &=\widehat{N}( u_M;\phi)+ a_{\text{pw}}(E_M u_M- u_M,\phi)+\Gamma(E_M u_M,E_M u_M,\phi) -\Gamma_{\text{pw}}( u_M, u_M,\phi) \end{align*} with the definition of $N$ and of $\widehat{N}$. This, the bound of $a_{\text{pw}}$, elementary arguments with the trilinear form and its bound $\|\Gamma_{\rm pw}\|$ from Remark~\ref{remarkccnew12324boundedness}, and $M$ prove \begin{align} \label{eqccforold4.14b} \|N(E_M u_M)\|_{ V^*}\le \widehat{N}( u_M;\phi)+(1+ M \|\Gamma_{\text{\rm pw}}\|) \trinl u_M- E_M u_M\trinr_{\text{\rm pw}}. \end{align} Since $u_M$ solves \eqref{NS_dis_NC}, $\widehat{N}( u_M;\phi)=\widehat{N}( u_M;\chi)$ holds for $\chi:=\phi-I_M \phi$ with the Morley interpolation $ I_M \phi$ of $\phi$ (for $I_M\phi\in\cM(\cT)$ is an admissible test function in \eqref{NS_dis_NC}).
Since Lemma~\ref{Morley_Interpolation}.a implies $a_{\text{pw}}( u_{M},\phi-I_M\phi)=0$, an integration by parts in the nonlinear term $\Gamma_{\text{pw}}(\bullet,\bullet,\bullet)$ leads to \begin{align*} \widehat{N}( u_M;\phi)&=\sum_{K\in\cT}\int_K \Delta u_M\nabla u_M\cdot{\rm Curl}\,\chi\dx -F(\chi)\notag\\ &=\sum_{K\in\cT}\int_K ({\rm curl}(-\Delta u_M\nabla u_M)-f)\chi\dx +\sum_{E\in\cE}\int_{E} [\Delta u_M\nabla u_M\cdot\tau_E]_E\{\chi\}_E\ds\notag\\ &\quad+\sum_{E\in\cE}\int_{E} \{\Delta u_M\nabla u_M\}_E\cdot\tau_E \; [\chi]_E\ds. \end{align*} This and standard arguments with Cauchy and trace inequalities plus Lemma~\ref{Morley_Interpolation}.b with $\trinl \phi\trinr=1$ eventually lead to some constant $C_A\approx 1$ with \begin{align}\label{eqccverlateinMayno33extra} C_A ^{-2} \widehat{N}( u_M;\phi)^2 \le \sum_{K\in\cT}\eta_K^2+\sum_{E\in\cE}\eta_E^2. \end{align} Piecewise inverse estimates $\trinl u_M- E_M u_M\trinr_{\text{pw}}\lesssim \| h_\cT^{-2} ( u_M- E_M u_M )\|_{L^2(\Omega)}$ and Lemma~\ref{hctenrich}.b with the tangential jump residuals lead to some constant $C_B\approx 1$ with \begin{align}\label{eqccverlateinMayno33} C_B ^{-2} \trinl u_M- E_M u_M\trinr_{\text{pw}}^2 \le \sum_{E\in\cE} h_E \|[D^2 u_M]_E\tau_E\|_{L^2(E)}^2. \end{align} This is bounded by $\sum_{E\in\cE}\eta_E^2$. The combination of \eqref{eqccforold4.14}-\eqref{eqccverlateinMayno33} concludes the proof with $C_{\rm rel}= 2 \beta^{-1} C_A+ (1+ 2\beta^{-1}(1+M \|\Gamma _{\text{pw}} \|)) C_B $. \end{proof} \begin{rem}[residuals develop the correct convergence rate] The efficiency of the estimator remains an open question owing to the average term $\left\|\{\Delta u_M\nabla u_M\cdot\tau_E\}_E\right\|_{L^2(E)}$ in $\eta_E$ (for the remaining contributions are efficient). The sum of all contributions associated with those terms, however, converges (at least) with linear rate in that \begin{equation}\label{extrahigherorderremarkapostccnew1} S:= (\sum_{E\in\cE}h_E^{3}\left\|\{\Delta u_M\nabla u_M\cdot\tau_E\}_E\right\|_{L^2(E)}^2 )^{1/2} \lesssim h_{\max} \| u \|_{H^{2+s}(\Omega)} \trinl u_M\trinr_{\text{pw}} =O(h_{\max}). \end{equation} Before a sketch of the proof concludes this remark, it should be stressed that \eqref{extrahigherorderremarkapostccnew1} can be a higher-order term: Consider a uniform mesh in a singular situation with re-entrant corners (with an exact solution of reduced regularity $u\notin H^3(\Omega) $ \cite{BlumRannacher}) with a suboptimal convergence rate $s<1$. Then $S$ in \eqref{extrahigherorderremarkapostccnew1} is of higher order. The proof of \eqref{extrahigherorderremarkapostccnew1} starts with a triangle inequality \[ S^2 \le \sum_{T\in \cT}\sum_{E\in \cE(T)} h_E^{3}\left\| (\Delta u_M\nabla u_M) |_T\right\|_{L^2(E)}^2. \] The discrete trace inequality (i.e. a trace inequality followed by an inverse inequality) for each summand shows \[ h_E^{3}\left\| (\Delta u_M\nabla u_M) |_T\right\|_{L^2(E)}^2 \lesssim h_E^{2}\left\| \Delta u_M\nabla u_M\right\|_{L^2(T)}^2. \] Recall the piecewise constant mesh-size $h_{\cT}\in P_0(\cT)$, $h_{\cT}|_T:=h_T:=\text{diam}(T) $ for $T\in\cT$, with maximum $h_{\max}:=\max h_\cT \le\delta$. The shape regularity of $\cT$ shows \[ S\lesssim \left\| h_\cT \Delta_{\text{pw}} u_M\nabla_{\text{pw}} u_M\right\|_{L^2(\Omega)} \le \left\| h_\cT \Delta u \nabla_{\text{pw}} u_M\right\|_{L^2(\Omega)} + \left\| h_\cT \Delta_{\text{pw}} (u-u_M)\nabla_{\text{pw}} u_M\right\|_{L^2(\Omega)} \] with a triangle inequality in the last step.
Recall that $u\in H^{2+s}(\Omega)$ for $s>1/2$ enables the bounded embedding $H^s(\Omega)\hookrightarrow L^{2p}(\Omega)$ for any $p$ with $1<p<1/(1-s)$. This and a H\"older inequality with $1/p+1/p'=1$ lead to \[ \left\| \Delta u \nabla_{\text{pw}} u_M\right\|_{L^2(\Omega)} \le \| \Delta u \|_{L^{2p}(\Omega)} \left\| \nabla_{\text{pw}} u_M\right\|_{L^{2p'}(\Omega)}. \] Lemma~\ref{lemmadiscreteembeddings} shows that the last term is controlled by $\trinl u_M\trinr_{\text{pw}}$. Consequently, \[ \left\| h_\cT \Delta u \nabla_{\text{pw}} u_M\right\|_{L^2(\Omega)} \lesssim h_{\max} \| u \|_{H^{2+s}(\Omega)}\trinl u_M\trinr_{\text{pw}}. \] The analysis of the second term starts with $0<r\le s\le 1$ and the elementary observation \[ \left\| h_\cT \Delta_{\text{pw}} (u-u_M)\nabla_{\text{pw}} u_M\right\|_{L^2(\Omega)} \le \sqrt{2} h_{\max}^{1-r} \trinl u-u_M\trinr_{\text{pw}} | h_\cT^r \, u_M |_{W^{1,\infty}(\Omega,\cT)}. \] The asserted convergence rate follows with $\trinl u-u_M\trinr_{\text{pw}}\lesssim h_{\max}^s \| u \|_{H^{2+s}(\Omega)}$. The maximum of the remaining term $ | h_\cT^r \, u_M |_{W^{1,\infty}(\Omega,\cT)}=h_T^r | u_M |_{W^{1,\infty}(T)}$ is attained for (at least) one $T\in\cT$. An inverse inequality and Lemma~\ref{lemmadiscreteembeddings} in the end show \[ h_T^r | u_M |_{W^{1,\infty}(T)}\lesssim | u_M |_{W^{1,2/r}(T)} \le | u_M |_{W^{1,2/r}(\Omega,\cT)}\lesssim \trinl u_M\trinr_{\text{pw}}. \] Consequently, $\left\| h_\cT \Delta_{\text{pw}} (u-u_M)\nabla_{\text{pw}} u_M\right\|_{L^2(\Omega)} \lesssim h_{\max}^{1+s-r} \trinl u_M\trinr_{\text{pw}}$. The combination of the previous estimates proves \eqref{extrahigherorderremarkapostccnew1}. \qed \end{rem} \begin{rem}[no efficiency analysis] The lack of local efficiency is part of a more general structural difficulty. Whenever volume terms require a piecewise integration by parts with Morley finite element test functions, there arise average terms like $\{\widehat{\phi}\}_E$ in Theorem~\ref{reliability_NS_NC}, which are not residuals. This prevents an efficiency analysis in this section as well as in \cite[Adini FEM]{CarstensenGallistlHu2013} or \cite[Subsect 7.8]{Gallistl2014Adaptive}. It is left as an open problem for future research and may require a modification of the discrete scheme. In the vibration of a biharmonic plate or in the von K\'{a}rm\'{a}n equations of the subsequent section, this difficulty does not arise. \end{rem} \section{Von K\'{a}rm\'{a}n equations}\label{Preli} Given a load function $f\in L^{2}(\Omega)$, the von K\'{a}rm\'{a}n equations model the deflection of a very thin elastic plate with vertical displacement $u\in\hto$ and the Airy stress function $v\in\hto$ such that \begin{equation}\label{vke} \Delta^2 u =[u,v]+f \text{ and }\Delta^2 v =-\half[u,u] \text{ in } \Omega. \end{equation} With the co-factor matrix $\cof(D^2 u)$ of $D^2 u$, the von K\'{a}rm\'{a}n brackets read \begin{equation*} [u,v]:=\frac{\partial^2 u}{\partial x_1^2}\frac{\partial^2 v}{\partial x_2^2} +\frac{\partial^2 u}{\partial x_2^2}\frac{\partial^2 v }{\partial x_1^2} -2\frac{\partial^2 u}{\partial x_1\partial x_2}\frac{\partial^2 v}{\partial x_1\partial x_2}=\cof(D^2 u):D^2 v.
\end{equation*} \subsection{Continuous problem} The weak formulation of the von K\'{a}rm\'{a}n equations \eqref{vke} seeks $u,v\in V:=H^2_0(\Omega)$ with \begin{subequations}\label{wform} \begin{align} a(u,\varphi_1)+ \gamma(u,v,\varphi_1)+\gamma(v,u,\varphi_1)&= (f,\varphi_1)_{L^2(\Omega)} \fl\varphi_1\in V\label{wforma}\\ a(v,\varphi_2)-\gamma(u,u,\varphi_2) &=0 \fl\varphi_2 \in V. \label{wformb} \end{align} \end{subequations} Here and throughout this section abbreviate, for all $ \eta,\chi,\varphi\in V$, \[ a(\eta,\chi):=\integ D^2 \eta:D^2\chi\dx \quad\text{and} \quad \gamma(\eta,\chi,\varphi):=-\half\integ [\eta,\chi]\varphi\dx. \] The abstract theory of Sections~\ref{infsup}-\ref{error} applies for the real Hilbert space $X:=V\times V$ with its dual $X^*$ to the operator $ N: X\to X^*$ defined by \begin{align} N({\boldmath\Psi};{\boldmath\Phi}):=\langle N({\boldmath\Psi}), {\boldmath\Phi}\rangle:= A({\boldmath\Psi},{\boldmath\Phi})-F({\boldmath\Phi})+\Gamma({\boldmath\Psi},{\boldmath\Psi},{\boldmath\Phi}) \end{align} for all ${\boldmath\Xi}=(\xi_{1},\xi_{2})$, ${\boldmath\Theta}=(\theta_{1},\theta_{2})$, ${\boldmath \Phi}=(\varphi_{1},\varphi_{2})\in X$ and the abbreviations \begin{align*} A(\Theta,{\boldmath \Phi})&:=a(\theta_1,\varphi_1)+a(\theta_2,\varphi_2),\\ F({\boldmath\Phi})&:=(f,\varphi_1)_{\lt},\\ \Gamma({\boldmath\Xi},\Theta,{\boldmath\Phi})&:=\gamma(\xi_{1},\theta_{2},\varphi_{1})+\gamma(\xi_{2},\theta_{1},\varphi_{1})-\gamma(\xi_{1},\theta_{1},\varphi_{2}). \end{align*} Note that $A(\bullet,\bullet)$ is a scalar product in $X$ and the trilinear form $\Gamma(\bullet,\bullet,\bullet)$ is bounded \cite{GMNN_CFEM}. It is known \cite{CiarletPlates,Brezzi} that there exists a solution $\Psi \in X$ with $N(\Psi)=0$. Any solution has the regularity $\Psi\in {\bf H}^{2+s}(\Omega):=(H^{2+s}(\Omega))^2$ for $1/2<s\le 1$ depending on the polygonal bounded Lipschitz domain $\Omega$ \cite{BlumRannacher}. This allows for the boundedness \[ \Gamma({\boldmath\Psi},{\boldmath\Theta},{\boldmath\Phi})\leq C \|{\boldmath\Psi}\|_{H^{2+s}(\Omega)} \trinl\Theta\trinr \|{\boldmath\Phi}\|_{H^{1}(\Omega)} \quad\text{for any }{\boldmath\Theta}\in X\text{ and }{\boldmath\Phi} \in H^1_0(\Omega;\bR^2). \] \subsection{Conforming FEM}\label{Sec:CFEM_vke} With the notation of Section~\ref{sec:CFEM_NS} on $V_C\subset H^2_0(\Omega)$, the conforming finite element formulation seeks ${\boldmath\Psi}_C=(u_C,v_C)\in X_{h}:=V_C\times V_C$ such that \begin{align} \label{vformdC} \displaystyle N({\boldmath\Psi}_C;{\boldmath\Phi}_C)=0 \quad\text{for all} \quad {\boldmath\Phi}_C \in X_h. \end{align} \begin{thm}[a priori]\label{apriori_NS_est_NC} If ${\boldmath\Psi}\in X$ is a regular solution to $N({\boldmath\Psi})=0$, then there exist positive $\epsilon$, $\delta$, and $\rho$ such that {\bf (A)}-{\bf (C)} hold with ${\rm apx}(\cT)\equiv 0$ for all $\cT \in \bT(\delta)$. \end{thm} \begin{proof} The proof is analogous to that of Theorem~\ref{apriori_NS_est_C} and hence omitted. The a~priori error analysis is derived in \cite{GMNN_CFEM} with a fixed point iteration (of linear convergence).
\end{proof} For any $K\in\cT$ and $E\in\cE$, define the volume and edge error estimators by \begin{align*} \eta_K^2&:=h_K^4\big{\|}\Delta^2 u_C-[u_C,v_C]-f\big{\|}^2_{L^2(K)} +h_K^4 \big{\|}\Delta^2 v_C+1/2[u_C,u_C]\big{\|}^2_{L^2(K)},\\ \eta_E^2&:=h_E^{3}\left\|\left[\divc (D^2 u_C)\right]_E\cdot\nu_E\right\|^2_{L^2(E)} +h_E^{3} \left\|\left[\divc (D^2 v_C)\right]_E\cdot\nu_E\right\|^2_{L^2(E)}\notag\\ &\qquad +h_E\left\|\left[D^2 u_C\nu_E\right]_E\cdot\nu_E\right\|^2_{L^2(E)} +h_E\left\|\left[D^2 v_C\nu_E\right]_E\cdot\nu_E\right\|^2_{L^2(E)} . \end{align*} \begin{thm}[a posteriori]\label{reliability_C} If ${\boldmath\Psi}\in X$ is a regular solution to $N({\boldmath\Psi})=0$, then there exist positive $\delta, \epsilon$, $C_{\rm rel}$, and $C_{\rm eff}$ such that, for all $\cT \in \bT(\delta)$, the unique discrete solution ${\boldmath\Psi}_{C}=(u_C,v_C)\in X_h$ to \eqref{vformdC} with $ \trinl{\boldmath\Psi}-{\boldmath\Psi}_C\trinr <\epsilon $ satisfies \begin{align} C_{\rm rel}^{-2}\trinl{\boldmath\Psi}-{\boldmath\Psi}_C\trinr^2&\leq\sum_{K\in\cT}\eta_K^2 +\sum_{E\in\cE }\eta_E^2\leq C_{\rm eff}^2(\trinl{\boldmath\Psi}-{\boldmath\Psi}_{C}\trinr^2+{\rm osc}_0^2(f)). \label{reliability_est_C} \end{align} \end{thm} \begin{proof} For $Y=X$, $Y_h=X_h$, we proceed as in the proof of Theorem \ref{reliability_C_NS} and, for sufficiently small $\delta$, derive {\bf (H1)}-{\bf (H6)} and $\trinl {\boldmath\Psi}-{\boldmath\Psi}_C\trinr < \beta/\|\Gamma\|$ from Theorem~\ref{apriori_NS_est_NC}. Hence Corollary~\ref{coraposteriori} implies for $v_h\equiv {\boldmath\Psi}_C= (u_C, v_C)$ that \begin{align*} \trinl{\boldmath\Psi}-{\boldmath\Psi}_C\trinr\lesssim \|N({\boldmath\Psi}_C)\|_{X^*}=N({\boldmath\Psi}_C;{\boldmath\Phi}) \end{align*} for some ${\boldmath\Phi}\in X$ with $\trinl{\boldmath\Phi}\trinr=1$ and its approximation $\Pi_h{\boldmath\Phi}\in X_h$ ($\Pi_h$ from Lemma~\ref{interpolation_BFS} applies componentwise). Abbreviate $(\chi_1,\chi_2):={ \boldmath \chi} :={\boldmath\Phi}-\Pi_h{\boldmath\Phi}$ and deduce from \eqref{vformdC} that $\|N({\boldmath\Psi}_C)\|_{X^*}=N({\boldmath\Psi}_C;(\chi_1,\chi_2))$. Successive integrations by parts show \begin{align*} &A({\boldmath\Psi}_C,{ \boldmath \chi} )=\sik (\Delta^2 u_C) \chi_1 \dx + \sik(\Delta^2 v_C) \chi_2 \dx \notag \\ &\quad +\sie\left[D^2 u_C\right]_E\nu_E\cdot\nabla\chi_1\ds + \sie\left[D^2 v_C\right]_E\nu_E\cdot\nabla\chi_2 \ds \notag \\ &\quad- \sie \chi_1 \left[\divc(D^2 u_C)\right]_E\cdot\nu_E\ds - \sie \chi_2 \left[\divc(D^2 v_C)\right]_E\cdot\nu_E \ds. \end{align*} This and the definition of $\Gamma(\bullet,\bullet,\bullet)$ lead to the residual \begin{align} &A({\boldmath\Psi}_C,{\boldmath\Phi}-\Pi_h{\boldmath\Phi})-F({\boldmath\Phi}-\Pi_h{\boldmath\Phi})+\Gamma({\boldmath\Psi}_C,{\boldmath\Psi}_C,{\boldmath\Phi}-\Pi_h{\boldmath\Phi})\notag\\ &\quad= \sik\left(\Delta^2 u_C-[u_C,v_C]-f\right)\chi_1\dx+\sik \big(\Delta^2 v_C+\half[u_C,u_C]\big)\chi_2\dx\notag\\ &\quad\quad-\sie\left(\left[\divc(D^2 u_C)\right]_E\cdot\nu_E\right)\chi_1\ds +\sie\left[D^2 u_C\right]_E\nu_E\cdot\nabla\chi_1\ds\notag\\ &\quad\quad -\sie\left(\left[\divc(D^2 v_C)\right]_E\cdot\nu_E\right)\chi_2\ds + \sie\left[D^2 v_C\right]_E\nu_E\cdot\nabla\chi_2\ds.
\label{int_estimator} \end{align} The two edge terms in the above expression that involve $\nabla \chi_j $ for $j=1,2$ can be rewritten as \begin{align} &\sie\left[D^2 u_C\right]_E\nu_E\cdot\nabla\chi_1\ds+\sie\left[D^2 v_C\right]_E\nu_E\cdot\nabla\chi_2\ds\notag\\ &\quad=\sie\left[D^2 u_C\nu_E\right]_E\cdot\nu_E \frac{\partial\chi_1}{\partial \nu}\ds+\sie\left[D^2 v_C\nu_E\right]_E\cdot\nu_E \frac{\partial\chi_2}{\partial \nu}\ds \nonumber \\ &\quad\quad +\sie\left[D^2 u_C\nu_E\right]_E\cdot\tau_E \frac{\partial\chi_1}{\partial \tau}\ds+\sie\left[D^2 v_C\nu_E\right]_E\cdot\tau_E \frac{\partial\chi_2}{\partial \tau}\ds. \nonumber \end{align} The last two terms involve tangential derivatives and vanish: along interior edges the jumps $\left[D^2 u_C\nu_E\right]_E\cdot\tau_E$ and $\left[D^2 v_C\nu_E\right]_E\cdot\tau_E$ vanish because $u_C$ and $v_C$ belong to $V_C\subset C^1(\overline{\Omega})\cap\hto$, while $\partial\chi_j/\partial\tau=0$ along boundary edges for $\chi_j\in\hto$. Standard arguments analogous to \cite[(5.12)-(5.14)]{CCGMNN18} with a Cauchy inequality, an inverse inequality, and Lemma \ref{interpolation_BFS} conclude the proof of the reliability. The proof of the efficiency of the volume term $\eta_K$ is directly adapted from that of \cite[Lemma 5.3]{CCGMNN18}. The arguments in the proof of efficiency for the edge terms $h_E\left\|\left[D^2 u_C\nu_E\right]_E\cdot\nu_E\right\|_{L^2(E)}$ and $h_E\left\|\left[D^2 v_C\nu_E\right]_E\cdot\nu_E\right\|_{L^2(E)}$ are the same as for the (linear) biharmonic equation and can be adopted from \cite[Theorem 4.4 ]{Georgoulis2011} or \cite[Theorem 6.2]{CCGMNN18}. Further details are omitted. \end{proof} \subsection{Morley FEM}\label{Sec:NCFEM_vke} The Morley FEM seeks ${\boldmath\Psi}_{M}\in X_M:={\cM(\cT) \times \cM(\cT)}\subset \widehat{X}:=X+X_M$ (endowed with the norm $\trinl\bullet \trinr_{\text{pw}}$) such that \begin{equation}\label{vformdNC} N_h({\boldmath\Psi}_{M};{\boldmath\Phi}_M):=A_{\text{pw}}({\boldmath\Psi}_M,{\boldmath\Phi}_M) +\Gamma_{\text{pw}}({\boldmath\Psi}_M,{\boldmath\Psi}_M,{\boldmath\Phi}_M)-F({\boldmath\Phi}_M)=0 \fl {\boldmath\Phi}_M \in X_M. \end{equation} Here and throughout this subsection, for all $ {\boldmath\Xi}=(\xi_{1},\xi_{2})$, $\Theta=(\theta_{1},\theta_{2})$, ${\boldmath\Phi}=(\varphi_{1},\varphi_{2})\in \widehat{X}$, \begin{align*} & A_{\text{pw}}(\Theta,{\boldmath\Phi}):=a_{\text{pw}}(\theta_1,\varphi_1)+a_{\text{pw}}(\theta_2,\varphi_2), \; F({\boldmath\Phi}):=\sit f\varphi_1\dx, \\ &\Gamma_{\text{pw}}({\boldmath\Xi},\Theta,{\boldmath\Phi}):=b_{\text{pw}}(\xi_{1},\theta_{2},\varphi_{1})+b_{\text{pw}}(\xi_{2},\theta_{1},\varphi_{1})-b_{\text{pw}}(\xi_{1},\theta_{1},\varphi_{2}), \end{align*} and, for all $ \eta,\chi, \varphi \in \widehat{V}:=H^2_0(\Omega)+ \cM(\cT)$, \begin{equation*} a_{\text{pw}}(\eta,\chi):=\sit D^2 \eta:D^2\chi \dx\text{ and } b_{\text{pw}}(\eta,\chi,\varphi):=-\half\sit [\eta,\chi]\varphi \dx. \end{equation*} (The boundedness of $a_{\text{pw}}$ is immediate and that of $\Gamma_{\text{pw}} $ follows from Lemma~\ref{lemmadiscreteembeddings}.) \begin{thm}[a priori]\label{apriori_VKE_est_NC} If ${\boldmath\Psi}\in X$ is a regular solution to $N({\boldmath\Psi})=0$, then there exist positive $\epsilon$, $\delta$, and $\rho$ such that {\bf (A)}-{\bf (C)} hold for any $\cT \in \bT(\delta)$ with \[ {\rm apx(\cT)}\lesssim \trinl {\boldmath\Psi}-I_{M} {\boldmath\Psi} \trinr_{\rm pw} + \text{\rm osc}_0(f+ [u,v],\cT) + \text{\rm osc}_0([u,u],\cT)\lesssim h_{\rm max}^{s}. \] \end{thm} \begin{proof} Set $Y=X$, $Y_h=X_M$, $\widehat{X}= X+X_M$, $\widehat{a}(\bullet,\bullet):=A_{\text{pw}}(\bullet,\bullet),$ $\widehat{b}(\bullet,\bullet):=2\Gamma_{\text{pw}}( {\boldmath\Psi},\bullet,\bullet)$ and $P=I_M$, $Q={\mathcal C}=E_M$.
Given ${\boldmath\Psi}\in {\bf H}^{2+s}(\Omega)$, $\widehat{\Theta},\: \widehat{{\boldmath\Phi}}\in \widehat{X}$, piecewise \Holder inequalities and the bounded global Sobolev embedding $ H^{2+s}(\Omega)\hookrightarrow W^{2,4}(\Omega)$ (for $s>1/2$) show \begin{align} \label{Gamma3bdd} \Gamma_{\text{pw}}({\boldmath\Psi},\widehat{\Theta},\widehat{{\boldmath\Phi}}) \lesssim \|{\boldmath\Psi}\|_{H^{2+s}(\Omega)}\ensuremath{| \!| \! |}\widehat{\Theta}\ensuremath{| \!| \! |}_{\text{pw}}\|\widehat{{\boldmath\Phi}}\|_{L^4(\Omega)}. \end{align} For $\Theta_M \in X_M$ with $\trinl\Theta_M\trinr_{\text{pw}}=1$, the linear functional $\Gamma({\boldmath\Psi},\Theta_M,\bullet)\in {\bf H}^{-1}(\Omega)$ leads to a unique solution ${\boldmath Z} \in X$ to the biharmonic problem $A({\boldmath Z},{\boldmath\Phi})=\Gamma({\boldmath\Psi},\Theta_M,{\boldmath\Phi})$ for all ${\boldmath\Phi}\in X$ with ${\boldmath Z}\in {\bf H}^{2+s}(\Omega)$ \cite{BlumRannacher}. For $\varphi_M\in \cM(\cT)$, the inverse estimate \begin{equation*} \|\varphi_M-E_M\varphi_M\|_{L^4(K)}\leq C h_K^{-1/2}\|\varphi_M-E_M\varphi_M\|_{L^2(K)} \; \text {for all} \; K\in\cT, \end{equation*} the bound for $\Gamma_{\text{pw}}$, and Lemma~\ref{hctenrich}.a imply $\delta_3\lesssim h_{\rm max}^{3/2}$. The remaining conditions {\bf (H1)}-{\bf (H2)} and {\bf (H4)}-{\bf (H6)} are verified as in the proof of Theorem~\ref{apriori_NS_est_NCFEM}. For some ${\boldmath\Phi}_M\in X_M$ with $\trinl{\boldmath\Phi}_M\trinr_{\text{pw}}=1$, ${\rm apx}(\cT)=\|\widehat{N}({\boldmath\Psi})\|_{X_M^*}=\widehat{N}({\boldmath\Psi}; {\boldmath\Phi}_M) $. This, $N({\boldmath\Psi}; E_M{\boldmath\Phi}_M)=0$, \eqref{Gamma3bdd}, and Lemmas \ref{hctenrich}-\ref{EnrichSmooth} lead for $(\chi_1,\chi_2):={\boldmath\chi}:={\boldmath\Phi}_M-E_M{\boldmath\Phi}_M$ to \begin{align*} {\rm apx}(\cT) & = \widehat{N}({\boldmath\Psi}; {\boldmath\Phi}_M-E_M {\boldmath\Phi}_M) = A_{\text{pw}}({\boldmath\Psi}, {\boldmath\chi} )-F({\boldmath\chi}) +\Gamma_{\text{pw}}({\boldmath\Psi},{\boldmath\Psi},{\boldmath\chi}) \\ & = A_{\text{pw}}({\boldmath\Psi}-I_M {\boldmath\Psi},{\boldmath\chi}) - (f+ [u,v], \chi_1)_{L^2(\Omega)}+\frac 12 ([u,u], \chi_2)_{L^2(\Omega)}\\ &\lesssim \trinl {\boldmath\Psi}-I_M {\boldmath\Psi} \trinr_{\rm pw} + \text{osc}_0(f+ [u,v],\cT) + \text{osc}_0([u,u],\cT)\lesssim h_{\rm max}^{s} \end{align*} with arguments as in the final part of the proof of Theorem~\ref{apriori_NS_est_NCFEM}. Hence Theorems~\ref{thm3.1} and \ref{err_apriori_thm_vke} apply and prove {\bf (A)}-{\bf (C)}. \end{proof} For any $K\in\cT$ and $E\in\cE$, define the volume and edge error estimators by \begin{align*} \eta_K^2&:= h_K^4\left\|[u_M,v_M]+f\right\|_{L^2(K)}^2+ h_K^4 \left\|[u_M,u_M]\right\|_{L^2(K)}^2,\\ \eta_E^2 &:= h_E\left\|\left[D^2 u_M\right]_E\tau_E\right\|_{L^2(E)}^2 +h_E\left\|\left[D^2 v_M\right]_E\tau_E\right\|_{L^2(E)}^2.
\end{align*} \begin{thm}[a posteriori]\label{reliability_NC} If ${\boldmath\Psi}=(u,v)\in X$ is a regular solution to $N({\boldmath\Psi})=0$, then there exist $\delta,\epsilon$, $C_{\rm rel}$, and $C_{\rm eff}$ such that, for any $\cT\in\bT(\delta)$, the discrete solution ${\boldmath\Psi}_{M}=(u_{M},v_{M})\in X_M:=\cM(\cT)\times \cM(\cT)$ to \eqref{vformdNC} with $\trinl {\boldmath\Psi}-{\boldmath\Psi}_{ M} \trinr_{\text{\rm pw}}\le \epsilon$ satisfies \begin{align*} C_{\rm rel}^{-2} \trinl{\boldmath\Psi}-{\boldmath\Psi}_{ M}\trinr_{\text{\rm pw}}^2 &\leq \sum_{K\in\cT}\eta_K^2+\sum_{E\in\cE}\eta_E^2\leq C_{\rm eff}^2 (\trinl {\boldmath \Psi}-{\boldmath\Psi}_{M}\trinr_{\text{\rm pw}}^2+{\rm osc}_0^2(f)). \end{align*} \end{thm} \begin{proof} Let ${\boldmath\Psi}_M$ be the solution to \eqref{vformdNC} close to ${\boldmath\Psi}$ and apply Theorem~\ref{abs_res_thm} with $Y=X, Y_h=X_M$, $v_h={\boldmath\Psi}_M$, and $Q=E_M$. Suppose that $\epsilon,\delta$ satisfy Theorem~\ref{apriori_VKE_est_NC} and, if necessary, are chosen smaller such that, for any $\cT\in\bT(\delta)$, exactly one discrete solution ${\boldmath\Psi}_{M}\in X_M$ to \eqref{vformdNC} satisfies $\trinl {\boldmath\Psi}-{\boldmath\Psi}_{M} \trinr_{\text{\rm pw}} \le \epsilon \le \beta/( 2(1+\Lambda )\| \Gamma \|)$. Lemma \ref{hctenrich}.b implies $\trinl {\boldmath\Psi}_{M}- E_{M} {\boldmath\Psi}_{M} \trinr_{\text{\rm pw}} \le \Lambda \trinl {\boldmath\Psi}-{\boldmath\Psi}_{M} \trinr_{\text{\rm pw}} \le \Lambda \epsilon$. This and triangle inequalities show \begin{align*} \trinl E_M{\boldmath\Psi}_M\trinr + \trinl {\boldmath\Psi}_M\trinr_{\text{pw}} &\le \trinl {\boldmath\Psi}_M-E_M{\boldmath\Psi}_M \trinr_{\text{pw}} + 2 \trinl {\boldmath\Psi}_M\trinr_{\text{pw}} \le 2 \trinl {\boldmath\Psi}\trinr +(2+ \Lambda)\epsilon=:M; \\ \trinl {\boldmath\Psi}- E_M{\boldmath\Psi}_M\trinr &\le \trinl {\boldmath\Psi}- {\boldmath\Psi}_M\trinr_{\text{pw}}+ \trinl {\boldmath\Psi}_M- E_M{\boldmath\Psi}_M\trinr_{\text{pw}} \le (1+\Lambda)\epsilon \le \beta/( 2\|\Gamma \|). \end{align*} Consequently, the abstract residual \eqref{relib_eqn} in Theorem~\ref{abs_res_thm} implies \begin{align}\label{eqccverlateinMayno123} \trinl{\boldmath\Psi}-{\boldmath\Psi}_M\trinr_{\text{pw}}\leq 2\beta^{-1} \|N(E_M{\boldmath\Psi}_M)\|_{ X^*}+\trinl {\boldmath\Psi}_M-E_M{\boldmath\Psi}_M\trinr_{\text{pw}}. \end{align} There exists ${\boldmath\Phi}\in X$ with $\trinl {\boldmath\Phi}\trinr=1$ and \[ \|N(E_M{\boldmath\Psi}_M)\|_{ X^*} = N(E_M{\boldmath\Psi}_M;{\boldmath\Phi})=A_{\text{\rm pw}}(E_M{\boldmath\Psi}_M,{\boldmath\Phi})-F({\boldmath\Phi})+\Gamma(E_M{\boldmath\Psi}_M,E_M{\boldmath\Psi}_M,{\boldmath\Phi}) \] with the definition of $N$. This and the definition of $\widehat{N}({\boldmath\Psi}_M;{\boldmath\Phi})$ lead to \begin{align*} \|N(E_M{\boldmath\Psi}_M)\|_{ X^*} &=\widehat{N}({\boldmath\Psi}_M;{\boldmath\Phi})+A_{\text{pw}}(E_M{\boldmath\Psi}_M-{\boldmath\Psi}_M,{\boldmath\Phi}) \\ & \; +\Gamma(E_M{\boldmath\Psi}_M,E_M{\boldmath\Psi}_M,{\boldmath\Phi})-\Gamma_{\text{pw}}({\boldmath\Psi}_M,{\boldmath\Psi}_M,{\boldmath\Phi})\\ &\le \widehat{N}({\boldmath\Psi}_M;{\boldmath\Phi}) + (1+M\|\Gamma_{\rm pw}\| )\trinl {\boldmath\Psi}_M-E_M{\boldmath\Psi}_M\trinr_{\text{pw}} \end{align*} with the bound of $A_{\text{pw}}$, elementary arguments with the trilinear form and its bound $\|\Gamma_{\rm pw}\|$ (deduced from Lemma~\ref{lemmadiscreteembeddings} as in Remark~\ref{remarkccnew12324boundedness}), and $M$ in the second step.
Since ${\boldmath\Psi}_M$ solves \eqref{vformdNC}, \( \widehat{N}({\boldmath\Psi}_M;{\boldmath\Phi})=\widehat{N}({\boldmath\Psi}_M; {\boldmath\chi}) \) holds for ${\boldmath\chi}:=(\chi_1,\chi_2):={\boldmath\Phi}-I_M{\boldmath\Phi}$ with the Morley interpolation $ I_M {\boldmath\Phi}$ of ${\boldmath\Phi}$. As Lemma~\ref{Morley_Interpolation}.a implies $ A_{\text{pw}}({\boldmath\Psi}_{M},{\boldmath\chi})=0$, the definitions of $\Gamma_{\rm pw}(\bullet,\bullet,\bullet)$ and $F(\bullet)$ lead to \begin{align*} \widehat{N}({\boldmath\Psi}_M; {\boldmath\Phi})&= \Gamma_{\text{pw}}({\boldmath\Psi}_M,{\boldmath\Psi}_M,{\boldmath\chi}) - F({\boldmath\chi}) \\ &= 1/2( [u_M,u_M], \chi_2 )_{L^2(\Omega)} - (f+[u_M,v_M], \chi_1)_{L^2(\Omega)} \\ &\le ( \sum_{K\in\cT} \eta_K^2 )^{1/2} \| h_\cT^{-2}({\boldmath\Phi}-I_M{\boldmath\Phi})\|_{L^2(\Omega)} \le C_A ( \sum_{K\in\cT} \eta_K^2 )^{1/2} \end{align*} with weighted Cauchy inequalities in the second-to-last step and the constant $C_A\approx 1$ from Lemma~\ref{Morley_Interpolation}.b with $\trinl {\boldmath\Phi}\trinr=1$ in the end. The combination with \eqref{eqccverlateinMayno123} reads \begin{align*} \trinl{\boldmath\Psi}-{\boldmath\Psi}_M\trinr_{\text{pw}} \leq 2 \beta^{-1} C_A( \sum_{K\in\cT} \eta_K^2 )^{1/2} +(1+ 2\beta^{-1} (1+M\|\Gamma _{\text{pw}} \| ) ) \trinl {\boldmath\Psi}_M-E_M{\boldmath\Psi}_M\trinr_{\text{pw}}. \end{align*} The last term is controlled as in \eqref{eqccverlateinMayno33} and this concludes the proof of the reliability estimate with $C_{\rm rel}= \max\{ 2\beta^{-1} C_A ,(1+ 2\beta^{-1}(1+M \|\Gamma _{\text{pw}} \|))C_B\} $. The proof of the efficiency of the volume term $\eta_K$ is immediately adopted from that of \cite[Lemma 5.3]{CCGMNN18}. The arguments in the proof of efficiency for the edge term $\eta_E$ are the same as for the (linear) biharmonic equation and can be adopted from \cite[p. 322]{CarstensenGallistlHu2013}. Further details are omitted. \end{proof} \section*{Appendix} Some essentially known details are added for completeness. \subsection*{Proof of Corollary~\ref{rembestapproximation}} There is nothing to prove for $\beta_h\le 0$, so suppose $\beta_h>0$ in the sequel. Then the Galerkin projection $\hat G: \widehat{X}\to \widehat{X}$ onto $X_h$ is well defined by \[ \forall \widehat{x}\in \widehat{X}\, \forall y_h\in Y_h\,\exists ! \hat G \widehat{x}\in X_h \quad (\widehat{a}+\widehat{b})(\widehat{x}-\hat G \widehat{x},y_h)=0. \] The oblique projection $\hat G\in L( \widehat{X}; \widehat{X})$ is bounded and satisfies $\|\hat G\|=\|1-\hat G\|$ \cite{Kato60,Szyld06} provided $\{0\}\ne X_h\ne \widehat{X}$. Given any $\widehat{x}\in \widehat{X}$, \eqref{eqdis_inf_sup_defbetah} leads to $y_h\in Y_h$ with $\| y_h\|_{Y_h}\le 1/\beta_h$ and \begin{equation*} \| \hat G \widehat{x} \|_{X_h} =(a_h+b_h)(\hat G \widehat{x} ,y_h) =(\widehat{a}+\widehat{b})( \widehat{x} ,y_h) \leq M \| \widehat{x}\|_{\widehat{X}}/\beta_h. \end{equation*} This proves $\|1-\hat G\|=\| \hat G \|\leq M/\beta_h$. For all $x\in X$ and $x_h\in X_h$, this and $x_h= \hat G x_h$ ($\hat G$ is a projection) implies \[ \| x-\hat G x\|_{ \widehat{X} } = \| (1- \hat G)x\|_{ \widehat{X} }=\|(1- \hat G)(x-x_h)\|_{ \widehat{X} } \leq M \|x-x_h \|_{ \widehat{X} } /\beta_h. \] In other words, for the above solution $u$, \[ \beta_h \| u-\hat G u\|_{ \widehat{X} }/M \le \min_{x_h\in X_h} \| u-x_h \|_{ \widehat{X} }. \] However, $\hat G u$ may differ from the discrete solution $u_h\in X_h$ to $(a_h+b_h)(u_h,\bullet)=F_h:=F|_{Y_h}$ in $Y_h$.
As is well known in the context of the Strang--Fix lemmas, the remaining contribution $\hat G u-u_h\in X_h$ leads to an additional term $\| F_h- (\widehat{a}+\widehat{b})( u ,\bullet) \|_{Y_h^*}$. In fact, given $\hat G u-u_h\in X_h$, \eqref{eqdis_inf_sup_defbetah} leads to $y_h\in Y_h$ with $\| y_h\|_{Y_h}\le 1/\beta_h$ and \begin{align*} \|\hat Gu-u_h\|_{X_h} &=(a_h+b_h)(\hat G u-u_h ,y_h) =(\widehat{a}+\widehat{b})( u,y_h) - F_h(y_h) \\ & \le \| F_h- (\widehat{a}+\widehat{b})( u ,\bullet) \|_{Y_h^*}/\beta_h . \qed \end{align*} \subsection*{Proof of Lemma~\ref{SpecLem}} The first part of the assertion is included in \cite{SchatzWang96} and so is merely outlined for convenient reading of the second. The Rellich compact embedding theorem $H^1_0(\Omega)\stackrel{c}{\hookrightarrow} L^2(\Omega)$ leads to $\lt\stackrel{c}{\hookrightarrow} H^{-1}(\Omega)$ in the sequel. Hence $S:=\left\{ g\in\lt\, |\,\|g\|=1\right\}$ is pre-compact in $V^*$. The operator $A\in L(V;V^*)$ is associated with the scalar product $a$ via $Av=a(v,\bullet)$ for all $v\in V$ (note $A$ in contrast to the coefficients ${\bf A}$); $A$ is invertible and $A^{-1}\in L(V^*;V)$ maps $S$ onto $W:=A^{-1}(S)$, which is pre-compact in $H^1_0(\Omega)$. The open balls $ B(z,\epsilon/6)$ in $V$ around $z\in \overline{W}$ with radius $\epsilon/6$ with respect to the norm $\|\bullet\|_a$ form an open cover of the compact set $ \overline{W}$ and so admit a finite sub-cover: for some $z_1,\dots, z_J\in \overline{W}$, \begin{equation}\label{FE_Cover} W\subset \cup_{j=1}^J B(z_j,\epsilon/6)\subset V. \end{equation} Since $\cD(\Omega)$ is dense in $V$, there exists $\zeta_j\in \cD(\Omega)$ with $\| z_j-\zeta_j\|_a <\epsilon/6$. The smoothness of $\zeta_j$ proves \begin{equation}\label{F1bdd} \| \zeta_j-I_C\zeta_j\|_a \leq C h_{\max} \le C \delta \end{equation} for any triangulation $\cT\in\bT(\delta)$ and the nodal interpolation $I_C$ in $S_0^1(\cT)$; the constant $C$ depends on $\displaystyle\max_{j=1,\,\ldots,\, J}\|D^2\zeta_j\|$, the shape-regularity parameter $\kappa$, and on $\overline{\lambda}$. For any $g\in\lt\setminus \{0\}$ with $z=A^{-1}(g)/\|g\|\in W$ from \eqref{FE_Cover}, there exists at least one index $j\in\{1,\ldots,J\}$ with $z\in B(z_j,\epsilon/6)$. This, the choice of $\zeta_j$, and \eqref{F1bdd} with $ \delta:=\epsilon/(6C) $ prove \[ \| z-I_C\zeta_j\|_a \leq \| z-z_j\|_a+ \| z_j-\zeta_j\|_a+\| \zeta_j-I_C\zeta_j\|_a<{\epsilon}/{3}+C\delta <\epsilon/2. \] A rescaling of this leads to $\displaystyle \| A^{-1}(g)-\|g\| I_C\zeta_j\|_a =\|g\| \| z-I_C\zeta_j\|_a\leq\epsilon \|g\|/2.$ This proves that the first term in the asserted inequality is bounded by the right-hand side. The analysis of the second term considers the pre-compact subset ${\bf A}\nabla W=\{ {\bf A}\nabla z: Az=g\in S\}$ of $L^2(\Omega;\bR^n)$. Since the open balls $B(Q,\epsilon/6)$ around $Q\in \overline{{\bf A}\nabla W}$ in the $L^2$ norm form an open cover of the compact closure $\overline{{\bf A}\nabla W}$ in $L^2(\Omega;\bR^n)$, there exist $Q_1,\dots, Q_K$ in $\overline{{\bf A}\nabla W}$ with \begin{equation}\label{FE_Cover2} {\bf A}\nabla W \subset \cup_{k=1}^K B(Q_k,\epsilon/6)\subset L^2(\Omega;\bR^n). \end{equation} Since $\cD(\Omega;\bR^n)$ is dense in $L^2(\Omega;\bR^n)$, there exists ${\Phi}_k\in \cD(\Omega;\bR^n)$ with $\| Q_k-{\Phi}_k\| <\epsilon/6$.
The smoothness of ${\Phi}_k$ and a Poincar\'e inequality (on simplices with constant $h_T/\pi$) prove \begin{equation}\label{F1bdd2} \| \Phi_k-\Pi_0 \Phi_k\| \leq \|\nabla \Phi_k\|\, h_{\max}/\pi \le C \delta \end{equation} for any triangulation $\cT\in\bT(\delta)$ with the $L^2$ projection $\Pi_0$ onto $P_0(\cT;\bR^n)$. The constant $C=\max\{ \|\nabla \Phi_1\|,\dots, \|\nabla \Phi_K\|\}$ depends on the smoothness of the functions $\Phi_1,\dots, \Phi_K$. For any $g\in\lt\setminus \{0\}$ with $z=A^{-1}(g)/\|g\|\in W$ from \eqref{FE_Cover}, there exists at least one index $k\in\{1,\ldots,K\}$ with ${\bf A}\nabla z\in B(Q_k,\epsilon/6)$. This, the choice of $\Phi_k$, and \eqref{F1bdd2} with $ \delta:=\epsilon/(6C) $ prove \[ \| (1-\Pi_0) {\bf A}\nabla z \|\le \| {\bf A}\nabla z - \Pi_0 \Phi_k \| \le \| {\bf A}\nabla z -Q_k\|+\|Q_k -\Phi_k\|+\|(1-\Pi_0)\Phi_k\|<\epsilon/2. \] A rescaling of this proves $\displaystyle\| (1-\Pi_0) {\bf A}\nabla z \|\le \epsilon\|g\|/2$ for all $Az=g\in L^2(\Omega)$ (with arbitrary norm $\|g\|\ge 0$). This concludes the proof. \qed \bibliographystyle{amsplain}
\section{Introduction} \subsection{Overview} Charmonium suppression is one of the classical probes used in heavy ion collisions. Since charm quark pairs originate during early hard processes, they go through all stages of the evolution of the system. A small fraction of such pairs $\sim O(10^{-2})$ produces bound $\bar c c$ states. By comparing the yield of these states in AA collisions to that in pp collisions (where matter is absent) one can observe their survival probability, giving us important information about the properties of the medium. Many mechanisms of $J/\psi$ suppression in matter were proposed over the years. The first, suggested by one of us in 1978 \cite{Shu_QGP}, is (i) a gluonic analog to the ``photo-effect'' $g J/\psi\rightarrow \bar c c$. Perturbative calculations of its rate \cite{Kharzeev_Satz} predict a rather large excitation rate of the charmonium ground state. Since charmonia are surrounded by many gluons in QGP, this led Kharzeev and Satz \cite{Kharzeev_Satz} to conclude that nearly all charmonium states at RHIC should be rapidly destroyed. If so, the observed $J/\psi$ would come mostly from recombined charm quarks at chemical freeze-out, as advocated in \cite{Andronic:2003zv}. However this argument is only valid in weakly-coupled QGP, in which the charm quarks would fly away from each other as soon as enough energy is available. As we will show below, in $strongly$-coupled QGP (sQGP) the propagation of charmed quarks is in fact very different. Multiple momentum exchanges with matter will lead to rapid equilibration in momentum space, while motion in position space is slow and diffusive in nature. Persistent attraction between the quarks makes the possibility of returning to the $J/\psi$ ground state quite real, leading to a substantial survival probability even after several fm/c in sQGP. Another idea (ii) proposed by Matsui and Satz \cite{MS} focuses on the question of whether charmonium states do or do not survive {\em as a bound state}. They argued that because of the deconfinement and the Debye screening, the effective $\bar c c\,\,\,$ attraction in QGP is simply too small to hold them together as bound states. Quantum-mechanical calculations by Karsch et al \cite{KS} and others have used the free energy, obtained from the lattice, as an effective potential (at $T>T_c$) \begin{eqnarray} F(T,r)\approx -{4 \alpha_s\over 3 r }\exp(-M_D(T)r)+F(T,\infty) \label{eqn_F} \end{eqnarray} They have argued that as the Debye screening radius $M_D^{-1}$ decreases with $T$ and becomes smaller than the root mean square radii of the corresponding states $\chi,\psi',J/\psi,\Upsilon'',\Upsilon',\Upsilon...$, those states should subsequently melt. Furthermore, it was found that for $J/\psi$ the melting point is nearly exactly $T_c$, making it a famous ``QGP signal''. These arguments are correct asymptotically at high enough $T$, but the central issue is what happens at $T=(1-2)T_c$, the range so far experimentally accessible at RHIC. Dedicated lattice studies \cite{quarkonia} extracted quarkonia spectral densities using the so-called maximum entropy method (MEM) to analyze the temporal (Euclidean time) correlators. Contrary to the above-mentioned predictions, the peaks corresponding to the $\eta_c,J/\psi$ states remain basically unchanged with $T$ in this region, indicating a dissolution temperature as high as $T_{\psi}\approx (2.5-3)T_c$.
Mocsy et al \cite{Mocsy:2006qz} have used the Schr\"odinger equation for the Green function in order to find an effective potential which would best describe not only the lowest s-wave states, but the whole spectral density. Recently \cite{Mocsy:2007bk} they have argued that a near-threshold enhancement is hard to distinguish from a true bound state: according to these authors, the above-mentioned MEM dissolution temperature is perhaps too high. Another approach to charmonium in heavy-ion collisions, taken by Grandchamp and Rapp \cite{Grandchamp:2002iy}, does not rely on the perturbative calculation of the excitation cross-section. Charm rescattering is enhanced by the formation of bound states in QGP. $J/\psi$ lifetimes were calculated at various temperatures using heavy quark effective theory \cite{vanHees:2004gq}: the resulting widths are typically a few hundred MeV. If so, the total number of rescatterings of the $J/\psi$ in the fireball during its lifetime ($\sim 10 \, {\rm fm}/c$) is large (10-30). This model still has fairly large cross-sections for $J/\psi$ annihilation, so in their so-called two-component model, many of the final charmonia measured are required to originate from statistical coalescence of single charm in the plasma into charmonium states. There are also other quantum-mechanical studies of the issue since the pioneering paper \cite{KS}. Zahed and Shuryak \cite{SZ_bound} argued that one should $not$ use the free energy $F(T,r)$ as the effective potential, because it corresponds to a static situation in which infinite time is available for a ``heat exchange'' with the medium. In the dynamical real-time situation they proposed to think in terms of level crossing and the Landau-Zener formalism, widely used in various quantum-mechanical applications. In the ``fast'' limit (opposite to the ``adiabatically slow'' one) all level crossings should instead be ignored. This corresponds to retaining pure states (described by a wave function rather than a density matrix) and dropping the {\em entropy term} in $F$, which yields nothing else but the internal energy \begin{eqnarray} V(T,r)=F(T,r)-T{\partial F(T,r)\over \partial T}=F+TS \end{eqnarray} as an alternative effective potential. Such a potential $V(T,r)$ (extracted from the same lattice data) leads to much more stable bound states, putting the charmonium melting temperature at a higher $T\sim 3T_c$. A number of authors \cite{Wong_Alberico} have used effective potentials in between those two limiting cases. However, as will become clear from what follows, we think it is not the bound states themselves which are important, but the kinetics of transitions between them. In a nutshell, the main issue is {\em how small is the separation in the $\bar c c\,\,\,$ pair when the QGP is over}, not in which particular states they have been during this time. The heavy $Q \bar Q$ potential depends not only on the temperature but also on the velocity of the $\bar c c\,\,\,$ pair relative to the medium. This effect has been studied e.g. by means of the AdS/CFT correspondence in \cite{Ejaz:2007hg} and it was found that the bound state should not exist above a certain critical velocity. So, if the existence of a bound state is truly a prerequisite for $J/\psi$ survival, one would expect additional suppression at large $p_t$. This goes contrary to the well-known formation-time argument \cite{Karsch:1987uk} and the experimental evidence, indicating the disappearance of the $J/\psi$ suppression at large $p_t$.
We take this as a good example of how important the real-time dynamics of the $\bar c c\,\,\,$ pair in the medium is: and indeed we are going to follow it below from its birth to its ultimate fate. Let us now briefly review the experimental situation. For a long time it was dominated by the SPS experiments NA38/50/60, which have observed both ``normal'' nuclear absorption and an ``anomalous'' suppression, maximal in central Pb+Pb collisions \cite{NA60}. Since at RHIC the QGP has a longer lifetime and reaches a higher energy density, straightforward extrapolations of the naive $J/\psi$ melting scenarios predicted near-total suppression. And yet, the first RHIC data apparently indicate a survival probability similar to that observed at the SPS. One possible explanation \cite{J/psi_recombination} is that the $J/\psi$ suppression is cancelled by a recombination process from unrelated (or non-diagonal) $\bar c c$ pairs floating in the medium. However this scenario needs quite accurate fine-tuning of two mechanisms. It also would require rapidity and momentum distributions of the $J/\psi$ at RHIC to be completely different from those in a single hard production. Another logical possibility \cite{Karsch:2005nk} is that the $J/\psi$ actually does survive both at SPS and RHIC, while the anomalous suppression observed may simply be due to suppression of the higher charmonium states, $\psi^{'}$ and $\chi$, which feed down about 40\% of the $J/\psi$ in pp collisions. These authors however have not attempted to explain why the $J/\psi$ survival probability can be close to one. This is precisely the goal of the present work, in which we study dynamically $how$ the survival of the $J/\psi$ happens. We will see that it is enhanced by two famous signatures of sQGP, namely (i) a {\em very small charm diffusion constant} and (ii) {\em strong mutual attraction} between charmed quarks in the QGP. We found that $J/\psi$ survival through the duration of the QGP phase $\tau\sim 5 {\rm fm}/c$ is about a half. The sequence of events can be schematically described as a four-stage process \begin{eqnarray} {{\Longrightarrow} \over {\rm{(\bar c c \, production)}}} \,\, f_{\rm{initial}} \,\,{{\Longrightarrow} \over {\rm{(mom. relaxation)}}} \,\,f_{\rm{quasi-equilibrium}}\end{eqnarray} $$ \,\,{{\Longrightarrow} \over {\rm{(leakage)}}} \,\,f_{\rm{final}} {{\Longrightarrow} \over {\rm{(projection)}}} \,\, J/\psi $$ A new element here is a two-time-scale evolution, including rather rapid momentum relaxation to a quasiequilibrium distribution which differs from the equilibrium one at large enough distances. \subsection{Charmonium potentials at $T>T_c$} The interaction of the $\bar c c\,\,\,$ pair will play a significant role in this paper, and thus we briefly review what is known about it. The details can be found in the original lattice results: we will point out only the most important qualitative features. Perturbatively, at high $T$ one expects a Coulomb-like force, attractive in the color singlet and repulsive in the color octet channel, with the relative strengths 8:(-1) (so that the color average produces zero effect). As shown by one of us many years ago \cite{Shu_QGP}, the Coulomb forces are screened by the gluon polarization operator at distances $\sim 1/gT$. Quantitative knowledge of the interaction comes from a large set of lattice measurements of the free energies associated with a pair of heavy quarks in an equilibrium heat bath. These data include both results by the Bielefeld-BNL group and results in dynamical QCD with $N_f=2$ by Aarts et al.
\cite{Aarts:2007pk}. At $T>T_c$ one is in a deconfined phase, so at large quark separations one expects effective potentials to go to a $finite$ $V_\infty(T)$. Yet when the value of this potential significantly exceeds the temperature, the actual probability of quark separation is small $\sim \exp(-V_\infty(T)/T)$. As we already mentioned in the preceding section, the appropriate potential for a dynamical $\bar c c\,\,\,$ pair is not yet definitely determined, with suggestions ranging from the free energy to the potential energy measured on the lattice. The difference between the two -- the entropy associated with the $\bar c c\,\,\,$ pair -- is very large near $T_c$, reaching the value $S\sim 20$ at its peak. This means that a huge number of excited states $\sim \exp(S)\sim \exp(20)$ would be excited in adiabatically slow motion of the pair. We think that in realistic motion of the $\bar c c\,\,\,$ pair many fewer states are actually excited, and thus, following \cite{SZ_bound}, we will use the potential energy instead of the free energy. The cost of this is a much larger potential barrier, reducing $\bar c c\,\,\,$ dissociation. Indeed, $V_\infty(T)$ near $T_c$ is large, reaching about 4 GeV at its peak. For the simulation we need a parametrization of the heavy quark-antiquark potential above deconfinement as a function of temperature and separation. We use the same parametrization as in \cite{Mocsy:2006qz}: \begin{figure} \label{fig_pot} \includegraphics[width=8cm]{pot_lat} \caption{ (Color online.) Parametrization of the potential for lattice data from \protect\cite{Kaczmarek:2005zp}. From top to bottom: the potential at $T/T_c=1.02, 1.07, 1.13, 1.18, 1.64$. } \end{figure} \begin{eqnarray} V(r,T) = -\alpha \frac{e^{-\mu(T)r^2}}{r}+\sigma r e^{-\mu(T)r^2}\\ \nonumber +C(T)(1-e^{-\mu(T)r^2}) \end{eqnarray} and fit to quenched-QCD lattice data in a temperature range of $1.02T_c$ to $1.64T_c$, assuming $\sigma$ is constant in temperature, $\mu$ varies linearly with temperature over this range, and $C(T)\propto (T/T_c-0.98)^{-1}$, so that it peaks sharply at $T_c$ as $V_\infty$ does. The result shown in Fig.\ref{fig_pot} is for \begin{eqnarray} \mu(T) = (0.03+0.006T/T_c)\;{\rm GeV}^2 \end{eqnarray} \begin{eqnarray} \sigma = 0.22\;{\rm GeV}^2 \end{eqnarray} \begin{eqnarray} C(T) = \frac{0.15{\rm GeV}}{T/T_c-0.98} \end{eqnarray} with $\alpha$ set to $\frac{\pi}{12}$. As one can see from Fig.\ref{fig_pot}, this fit is not perfect. While it is easy to fit a function to a single temperature's data set, it is hard to find an adequate fit across temperatures. This fit however will prove sufficient, especially since it is relatively good for the small separations of interest. The classical Boltzmann factor $\exp(-V(r)/T)$ for an attractive Coulomb potential leads to a non-normalizable distribution. Quantum mechanics prevents this, which can be crudely modeled by an effective potential \begin{eqnarray} V_{eff}= \hbar^2/(Mr^2)+V(T,r)\end{eqnarray} which includes the so-called localization energy. Dusling and one of us \cite{Dusling:2007cn} have determined more accurate effective quantum potentials, following ideas of Kelbg and others and performing path integrals. Perhaps those should be used in future more sophisticated simulations. In the present simulations we simply turn off the force below $0.2\;{\rm fm}$, approximating the effective potential by a constant (a short numerical sketch of this parametrization is given below). During the simulation, we allow our pairs to exist as either color singlets or color octets.
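For concreteness, the following minimal sketch evaluates the fitted singlet potential above, including the constant continuation below $0.2\;{\rm fm}$ that switches off the force. It only illustrates the formulas: the use of Python and the function names are ours, not part of the actual simulation code.
\begin{verbatim}
import numpy as np

ALPHA = np.pi / 12    # Coulomb coefficient of the fit
SIGMA = 0.22          # string tension, GeV^2
HBARC = 0.19732       # GeV*fm, unit conversion
R_CUT = 0.2           # fm; the force is switched off below this separation

def mu(t):            # t = T/Tc; screening parameter in GeV^2
    return 0.03 + 0.006 * t

def c_offset(t):      # large-distance offset C(T) in GeV, peaking near Tc
    return 0.15 / (t - 0.98)

def v_singlet(r_fm, t):
    """Fitted color-singlet potential V(r,T) in GeV, with r in fm."""
    r = max(r_fm, R_CUT) / HBARC   # clamp: constant potential below R_CUT
    g = np.exp(-mu(t) * r**2)
    return -ALPHA * g / r + SIGMA * r * g + c_offset(t) * (1.0 - g)
\end{verbatim}
(In the octet channel the potential is taken to be flat, as discussed next, so no force needs to be evaluated there.)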
While in a zero-temperature pp collision confinement completely suppresses a $\bar c c\,\,\,$ pair's probability of existing in a color octet state, in the deconfined phase this possibility must be considered. We initially create a 1:8 ratio of color singlet to color octet pairs, as is expected statistically, and then during each timestep we decide if the pair exists as a color octet by comparing a random number between zero and one to $\frac{1}{Z}\exp(-(U_8-U_1)/T)$, with the color singlet and octet potential energies determined from \cite{bielefeld}. In other words, the pairs exist in thermal equilibrium in color space at all times. When a pair exists as a color octet, the quarks do not interact in our simulation, as the spatial variation of the octet free energies is quite small. Naively one would expect this to create a large difference with results where a color singlet is assumed for all pairs; however, this is not the case. One sees this by noticing that for the temperatures and distances of our interest (distances of separation for pairs likely to go into the $J/\psi$-state), $\frac{U_8-U_1}{T} \sim 10$, meaning that the octet state is suppressed by orders of magnitude. Lattice potentials do not follow the simple perturbative relation $V_1=-8V_8$ between singlet and octet potentials mentioned above. While the color singlet channel displays a significant attraction, the color octet channel has a potential which is remarkably flat ($r$-independent). On the basis of this feature of the lattice data we will ignore the force in the octet channel in the simulation. \subsection{Charm diffusion constant} Loosely speaking, the effect we are after in this work can be expressed as follows: the medium is trying to prevent the outgoing quarks from being separated. It has been conjectured by one of us that charm quark pairs should get stopped in QGP \cite{Shuryak_stuck_c}. RHIC single electron data suggest that charm quarks, and probably b quarks as well, are indeed equilibrating much more effectively than was thought before. The first question one should address is how a charm-anticharm pair moves in sQGP, and what is the probability of finding the $\bar c c\,\,\,$ in close proximity after time $t$. Charm quarks are subject to three forces: (i) the $drag$ force, which works to reduce the difference between the quark momentum distribution and the (local) equilibrium one set by the matter flow \\ (ii) the $stochastic$ (or Langevin) force from the heat bath, leading to thermal equilibration in momenta and spatial diffusion of the charm quarks \\ (iii) the $\bar c c\,\,\,$ mutual interaction. For pedagogical reasons it is useful to include them subsequently, first for stationary non-flowing matter at fixed temperature (using the Fokker-Planck equation) and then for a realistic nuclear geometry with a hydrodynamically expanding fireball. But before we do so, let us briefly recall why the case of heavy (charmed) quarks is so special. (A more detailed discussion can be found in Moore and Teaney \cite{MT}.) A collision of a heavy quark with quasiparticles of QGP leads to a change in its momentum $\Delta p_{HQ} \sim T$, so that the velocity is changed by a small amount \begin{eqnarray} \Delta v_{HQ} \sim T/M_{HQ} \ll 1 \label{vhq} \end{eqnarray} Therefore the velocity of the charm quark can only change significantly as a result of multiple collisions, in small steps. Thus the process can be described via appropriate differential equations such as the Fokker-Planck or the Langevin equations.
Similarly one can argue that spatial diffusion of a heavy particle can also be described in this way, because the changes in position between collisions are small and uncorrelated. An assumption necessary for Langevin dynamics to hold is that the ``kicks'' are random (uncorrelated). As explained above, for a single heavy quark this follows from the inequality $M\gg T$, which guarantees that the quark relaxation time is long compared to the correlation time in matter. For a $\bar c c\,\,\,$ pair we need an additional requirement, that the random forces on $each$ quark can be treated as $mutually$ uncorrelated. In order to see how good this approximation is, one should compare the spatial (equal time) correlation length $\xi(T)$ in the medium to the typical distance between quarks for paths which eventually (at the end of the plasma era) will become charmonia. We will provide two different estimates for the former quantity, which view QGP either as a perturbative ``gas'' or a strongly coupled ``liquid''. In a perturbative gas of gluons, the mass is small and in the lowest order the momentum distribution is thermal. Thus the maximum of the momentum distribution is at $p\approx 2.7T$, about 1 GeV at the initial RHIC condition. The corresponding $half$ wavelength (the region where the field keeps its sign) is \begin{eqnarray} \lambda/2=\pi/p\approx 0.6 \, {\rm fm}\sim \xi(T=0.4\, {\rm GeV}) \end{eqnarray} In the liquid regime quasiparticles do not successfully model the degrees of freedom. However we do have phenomenological information about spatial correlations from hydrodynamics, which proposes the so-called ``sound absorption length'' as a measure above which different matter ``cells'' decorrelate. It is \begin{eqnarray} \Gamma_s={4\eta\over3 s T}\end{eqnarray} where $\eta/s$ is the dimensionless ratio of viscosity to entropy density. Empirically, RHIC data are well described if it is of the order of $1/4\pi$ (the AdS/CFT strong-coupling limit), which suggests a spatial correlation length an order of magnitude smaller, $\xi(T=0.4\, {\rm GeV})\sim 0.05\, {\rm fm}$. Since the distance between $\bar c$ and $c$ which eventually become a $J/\psi$ is about 0.5 fm, and since there are good reasons to believe the latter ``liquid'' estimate of $\xi$ is closer to reality, we conclude that the main Langevin assumption -- the independence of the random forces on $\bar c$ and $c$ -- is well justified. Furthermore, one may think that the same assumption would even hold for $\bar b$ and $b$, although with worse accuracy. Since the small parameter in Eq.\ref{vhq} is central to what follows, let us note for the reader's orientation that at RHIC the ratio is $T/M_c=(1/6-1/5)$ for the charm quark, or $T/M_b=(1/20-1/15)$ for the $b$ quark. Although RHIC experiments with charm quarks include direct reconstruction of the charmed mesons $D,D^*$ by the STAR collaboration, so far the existing vertex detectors are not sufficient for doing it effectively (upgrades are coming). Therefore the most relevant data on charm are based on observation of single electrons from heavy quark weak semileptonic decays. Apart from electromagnetic backgrounds, we do not really know whether the electrons come from $c$ or $b$ decays: it is believed (but not yet proven) that the boundary between the two regimes is at $p_t\approx 4\, {\rm GeV}$.
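Before turning to the data, a quick numerical check of the two estimates of $\xi(T)$ above; the inputs are the numbers quoted in the text, while the script itself is ours and purely illustrative:
\begin{verbatim}
import numpy as np

HBARC = 0.19732       # GeV*fm
T = 0.4               # GeV, initial RHIC-like temperature

# (i) perturbative gas: half wavelength at the thermal peak p = 2.7 T
xi_gas = np.pi * HBARC / (2.7 * T)                          # ~0.57 fm

# (ii) strongly coupled liquid: sound absorption length, eta/s = 1/(4 pi)
xi_liq = (4.0 / 3.0) * (1.0 / (4.0 * np.pi)) * HBARC / T    # ~0.05 fm

print(f"gas: {xi_gas:.2f} fm   liquid: {xi_liq:.3f} fm")
\end{verbatim}
Both numbers reproduce the estimates quoted above and confirm that the liquid value lies an order of magnitude below the $\sim 0.5$ fm pair separations of interest.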
Two experimentally observable quantities are (i) the charm suppression relative to the parton model (no matter), $R^e_{AA}$, and (ii) the azimuthal asymmetry of the electrons relative to the impact parameter, $v^e_2=\langle\cos(2\phi)\rangle$. Several theoretical groups have analyzed these data; in particular, Moore and Teaney \cite{MT} provided information about the diffusion constant of a charm quark $D_c$ by Langevin simulations. A conclusion following from this work is that both $R^e_{AA}$ and $v^e_2$ observed at RHIC can be described by one value for the charm diffusion constant in the range \begin{eqnarray} \label{eqn_Dc} D_c\;(2\pi T)=1.5-3. \end{eqnarray} This can be compared with the perturbative (collisional) result at small $\alpha_s$ \begin{eqnarray} D^{pQCD}_c\;(2\pi T)=1.5/\alpha_s^{2}. \end{eqnarray} Assuming that the perturbative domain starts\footnote{Recall that at ${4\over 3}\alpha_s=1/2$ two scalar quarks should fall towards each other, according to the Klein-Gordon equation: so this is clearly not a perturbative region.} somewhere at $\alpha_s< 1/3$ one concludes that the empirical value (\ref{eqn_Dc}) is an order of magnitude smaller than the perturbative value. There are two studies of the diffusion constant at strong coupling. One comes from the AdS/CFT correspondence~\cite{Maldacena:1997re} (and thus the results are for ${\cal N}=4$ supersymmetric Yang-Mills theory): the final expression found by Casalderrey-Solana and Teaney~\cite{Casalderrey-Solana:2006rq} is \begin{eqnarray} D_{HQ}={2 \over \pi T \sqrt{g^2 N_c}} \end{eqnarray} It is nicely consistent (via the Einstein relation) with the drag force \begin{eqnarray} \label{eqn_drag_adscft} {dP\over dt}= -{\pi T^2\sqrt{g^2 N_c} v \over 2\sqrt{1-v^2}} \end{eqnarray} calculated by Herzog et al \cite{Herzog:2006gh}. One assumption of this calculation is that the 't Hooft coupling is large, $g^2 N_c\gg 1$, which means the diffusion constant is parametrically small, much less than the momentum diffusion in the same theory, $D_p=\eta/(\epsilon+p)\sim 1/(4\pi T)$. This result is also valid only for quarks heavy enough, \begin{eqnarray} M_{HQ} > M_{eff}\sim \sqrt{g^2 N_c} T \end{eqnarray} which only marginally holds for charm quarks. Let us see what these numbers mean for RHIC (assuming that they are valid for QCD). The 't Hooft coupling $g^2 N_c=\alpha_s 4\pi N_c\approx 20-40$ is indeed large, while $M_{\rm{eff}} \approx 1-2 \, {\rm GeV}$ is not really small as compared to the charm quark mass: thus the derivation is only marginally true. Yet we proceed and get \begin{eqnarray} D_c\;(2\pi T)=4/\sqrt{g^2 N_c}\sim 0.5-1, \label{eqn_adscft_Dc} \end{eqnarray} which is in the right ballpark of the phenomenological numbers. Another approach to transport properties of strongly coupled plasmas is classical molecular dynamics. A classical non-Abelian model for sQGP was suggested by Gelman et al \cite{GSZ}, and recently Liao and Shuryak \cite{LS_monop} have added magnetically charged quasiparticles. Those calculations also find that the diffusion constant $D$ rapidly decreases as a function of the dimensionless coupling constant $\Gamma$ as a power \begin{eqnarray} D\sim \left({1 \over \Gamma}\right)^{0.6-0.8} \end{eqnarray} in the liquid domain $\Gamma=1-100$. Qualitatively this is similar to the $1/\sqrt{g^2N_c}$ of the AdS/CFT result (\ref{eqn_adscft_Dc}). Unfortunately, at this moment there is no deep understanding of the underlying mechanism in either strong-coupling calculation.
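The arithmetic behind Eq.~(\ref{eqn_adscft_Dc}) is elementary but worth making explicit; the following lines (ours, purely illustrative) evaluate it for the quoted range of the 't Hooft coupling and compare with the perturbative formula:
\begin{verbatim}
import numpy as np

for lam in (20.0, 40.0):                 # 't Hooft coupling g^2 N_c
    print(f"lambda={lam:.0f}: D_c(2piT) = {4.0/np.sqrt(lam):.2f}")
# -> 0.89 and 0.63, i.e. the 0.5-1 ballpark quoted above

alpha_s = 1.0 / 3.0                      # edge of the perturbative domain
print(f"pQCD: D_c(2piT) = {1.5/alpha_s**2:.1f}")   # -> 13.5, ~10x larger
\end{verbatim}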
Such an understanding is, however, well beyond the aims of the present work: in what follows we will use $D_c\;(2\pi T)=1.5, 3$ as the range for our best current guess. \section{Charmonium in sQGP: Fokker-Planck formalism} \label{charmthermal} Before we describe realistic Langevin simulations of RHIC collisions, it is useful to first see the basic features analytically. In this section we describe how (1) the high drag coefficient $\eta_c$ causes rapid thermalization of the initially hard momentum distribution from PYTHIA, and (2) the small diffusion coefficient $D_c$, inversely proportional to $\eta_c$, combined with (3) the use of the static $Q\bar Q$ internal energy instead of the free energy for the pair interaction leads to a slow dissolution of a large peak in the distribution in position space, causing less suppression than expected by more naive models. To see the first feature, rapid thermalization in momentum space, consider a $\bar c c\,\,\,$ pair with the quarks emerging back-to-back from a hard process with momentum ${\bf p}\gg T$, therefore having relative momentum $\Delta{\bf p}=2{\bf p}$. For ${\bf p}\gg T$, as is the case initially for charm quarks created at RHIC, we may neglect the random kicks that the quarks experience from the medium and consider only the drag: $\frac{d(\Delta{\bf p})}{d\tau}=-\eta_c\,\Delta{\bf p}$, where $\eta_c=\frac{T}{M_cD_c}$. This leads to a very simple formula for the relative momentum of a pair at early times: \begin{eqnarray} \Delta {\bf p}(t)=\exp(-\eta_c \tau ) \Delta {\bf p}(0)\;. \label{eqn_drag_only} \end{eqnarray} Using the AdS/CFT value for the diffusion coefficient, this leads to a drag coefficient $\eta_c \approx 0.6\;{\rm GeV}$. Let us now examine the distribution of $\bar c c\,\,\,$ pairs created with PYTHIA pp-event generation at energies of 200 GeV (see more on this in Section \ref{sec_init}). The initial transverse momentum distribution \footnote{The initial $rapidity$ distribution is also wide, but since longitudinal Bjorken-like hydrodynamics starts immediately, a charm quark finds itself among comoving matter at the same rapidity, and thus has little longitudinal drag. Transverse flow is slow to develop; thus there is transverse drag.} is broad compared to the thermal distribution, and therefore we may apply Eq. \ref{eqn_drag_only} for early times. We parametrize the initial relative momentum distribution as in Section \ref{sec_pyth} and replace $\Delta {\bf p}$ with $\Delta {\bf p} \exp(\eta_c \tau )$, which is the formula for a pair's $initial$ relative momentum in terms of its relative momentum at proper time $\tau$. Next, we compute the overlap of this distribution at early times with the $J/\psi$ -state's Wigner quasi-probability distribution to determine the probability of a random $\bar c c\,\,\,$ taken from this ensemble to be measured as a $J/\psi$ particle, an approach detailed in Appendix \ref{prob}. Fig. \ref{pyth_proj_Jpsi} shows this probability as a function of time for $\eta_D=0.88$. The initial value of the projection is 0.8\%, of the same order of magnitude as the experimental value of 1\% for $\frac{\sigma_{J/\psi}}{\sigma_{c\bar c}}$ obtained from \cite{sjp,scc}. The projection increases as the PYTHIA-with-drag distribution narrows, but then drops when the width shrinks below the $J/\psi$'s width in momentum space and as the quark pair's separation increases.
Once the time reaches about 1 fm/c, the probability for a pair to go into a $J/\psi$-state is back to about its initial value, and we do not follow this approach beyond that time, because by then the mean transverse momentum of a quark is the thermal average. \begin{figure} \label{pyth_proj_Jpsi} \includegraphics[width=8cm]{prob_jpsi} \caption{Probability of a $\bar c c\,\,\,$ pair going into the $J/\psi$-state vs. time, for very early time.} \end{figure} After this first 1 fm/c of the QGP phase, the $\bar c c\,\,\,$ distribution has thermalized in momentum space and the evolution in position space (diffusion) needs to be examined. The mean squared distance for diffusive motion is given by the standard expression \begin{equation} \left \langle x^2 \right \rangle=6 D_c \tau \end{equation} where $\tau$ is the proper time and the interaction between the quarks has been neglected. The ``correlation volume'' in which one finds a quark after time $\tau$ is \begin{eqnarray} V_{\rm{corr}}={4\pi \over 3}(6 D_c \tau)^{3/2}\end{eqnarray} and one may estimate the probability of the $\bar c c\,\,\,$ pair to be measured in the $J/\psi$ -state as \begin{eqnarray} \label{eqn_prob_survival} P(\tau)\sim R_{J/\psi}^3/(6 D_c \tau)^{3/2} \end{eqnarray} So neglecting the pair's interaction leads to a small probability that $J/\psi$ states will survive by the hadronization time at RHIC ($\tau\sim 10\,{\rm fm}/c$), even for small values of the diffusion coefficient. To get an idea of how this simple result is changed by the inclusion of an interaction between the constituent quarks in a given $\bar c c\,\,\,$-pair, let us examine the Fokker-Planck equation for the $\bar c c\,\,\,$ distribution in relative position: \begin{eqnarray} \label{eqn_FP} {\partial P \over \partial t}=D {\partial \over \partial{\bf r}} f_0 {\partial \over \partial{\bf r}} (P/f_0) \end{eqnarray} where $f_0(r)\propto \exp(-V_{\rm eff}(r)/T)$ is the $equilibrium$ distribution in the magnitude of the relative position $r$. By substituting the potential shown above at $T=1.25T_c$ and $D_c\times(2\pi T)=1$ into the Fokker-Planck equation (for demonstration in a single spatial dimension only) we solve it numerically and find how the relaxation process proceeds. A sample of such calculations is shown in Fig. \ref{fp_numeric}. It displays two important features of the relaxation process:\\ (i) during a quite short time $t\sim 1\, {\rm fm}/c$ the initial distribution (peaked at zero distance) relaxes locally to the near-equilibrium distribution with two peaks, corresponding to the optimal distances of the equilibrium distribution $f_0$, where the effective potential is most attractive; \\ (ii) the second stage displays a slow ``leakage'', during which the maximum is decreasing while the tail of the distribution at large distances grows. It is slow because the right-hand side of the Fokker-Planck equation is close to zero, as the distribution is nearly $f_0$. The interaction drastically changes the evolution of the $\bar c c\,\,\,$ distribution in position space, and this will be demonstrated again in the full numerical simulation of the next section. \begin{figure} \label{fp_numeric} \includegraphics[width=8cm]{fp_numeric_7plot5} \caption{ (Color online.) Numerical solution of the one-dimensional Fokker-Planck equation for an interacting $\bar c c\,\,\,$ pair.
The relaxation of the initial narrow Gaussian distribution is shown by curves (black, red, brown, green, blue; top to bottom at $r=0$) corresponding to times $t=0,1,5,10\;{\rm fm}$, respectively.} \end{figure} \section{ Langevin evolution of the $\bar c c$ pairs } \label{sec_init} \subsection{ Langevin evolution in static medium: quasiequilibrium} Before we turn to the expanding fireball, we first study the evolution of $\bar c c$ pairs at fixed $T=1.5T_c$ and in the absence of hydrodynamical flow. The first thing we would like to demonstrate is the strong influence of the heavy quark interaction. The resulting distributions over interquark separation at time $9\;{\rm fm}/c$ are shown in Fig. \ref{t=9}, with the interaction (red squares) and without (green triangles). The value for a given pair's separation $r$ is weighted by $\frac{1}{r^2}$ so that the spatial phase space of the distribution is divided out. With the interaction on, we find the same behavior of a ``slowly dissolving lump'' as that seen in the solution of the Fokker-Planck equation when the interaction is ``turned on''. \begin{figure} \label{t=9} \includegraphics[width=8cm]{rtrsq_therm} \caption{(Color online.) Distribution over quark pair separation at fixed $T=1.5T_c$ after 9 fm/c, with (red squares) and without (green triangles) the $\bar c c\,\,\,$ potential. } \end{figure} Further study of this has shown convergence of its shape to a particular one, which persists for a long time and which we would call ``quasiequilibrium''\footnote{This situation should not be confused with stationary nonequilibrium solutions of the Fokker-Planck equation, in which there is a constant flow through the distribution because of matching source and sink.}. While the true equilibrium of course corresponds to complete dissolution of a single $\bar c c\,\,\,$ pair, it turns out that leakage to large distances affects the distributions of separation and energy in normalization only. Fig. \ref{etot_pet_fit} shows the energy distribution for the ensemble of pairs at $\tau=9\;{\rm fm}/c$ after evolving under Langevin dynamics at a fixed temperature $1.05T_c$; it is the same distribution, up to normalization and statistical uncertainty, as the distribution reached by the pairs in a full heavy-ion simulation of the most central collisions. \begin{figure} \label{etot_pet_fit} \includegraphics[width=8cm]{etot_pet_fit} \caption{ Energy distribution of the $c \bar c$ pairs (in the center of mass frame of the pair) after Langevin evolution at a fixed temperature $T=1.05T_c$ for a time $t=9\;{\rm fm}/c$ long enough for quasiequilibrium to be reached. } \end{figure} We show the energy distribution in this region because it is related to a very important issue of charmonium production, namely production of the $\psi^{'},\chi$ states and their subsequent feeddown into the $J/\psi$. When a quasiequilibrium distribution is reached, the production {\em ratios} of charmonium states are stabilized at thermal (statistical) model values, in spite of the fact that the overall normalization continues to shrink due to leakage into the infinitely large phase space at large distances. (The energy distribution itself contains a Boltzmann factor but also the density of states. A model case of a purely Coulomb interaction allows one to calculate it in a simple way: as shown in Appendix \ref{quantum} we found that in this case the absolute shape of the quasiequilibrium distribution is reproduced as well.)
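For orientation, here is a minimal sketch of a fixed-temperature Langevin update of the kind used to evolve these ensembles. It is a simple Euler discretization in natural units, with the noise variance fixed by the standard fluctuation-dissipation relation $\langle \xi_i(t) \xi_j(t') \rangle = 2MT \eta \delta_{ij} \delta(t-t')$ (restated in the fireball subsection below); the function name and discretization details are ours, not those of the actual simulation code.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def langevin_step(x, p, mass, temp, eta, force, dt):
    """One Euler step of  dp/dt = -eta*p + xi + F(x),  dx/dt = p/mass,
    in natural units (hbar = c = 1, all quantities in powers of GeV).
    The Gaussian noise xi implements <xi_i xi_j> = 2*mass*temp*eta."""
    xi = rng.normal(0.0, np.sqrt(2.0 * mass * temp * eta / dt), size=3)
    p = p + dt * (-eta * p + xi + force(x))
    x = x + dt * p / mass
    return x, p
\end{verbatim}
Applied independently to the $c$ and $\bar c$, with $F$ derived from the pair potential, repeated steps of this type generate the two-stage behavior discussed above: fast relaxation in momentum, slow leakage in position.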
The existence of quasiequilibrium corresponds well to observations. It was noticed already a decade ago \cite{Sorge:1997bg} that the SPS data on the centrality dependence of the $N_{\psi^{'}}/N_{J/\psi}$ ratio approached the equilibrium value (for chemical freeze-out) \begin{eqnarray} {N_{\psi^{'}}\over N_{J/\psi}}=\exp(-\Delta M /T_{ch}) \label{eqn_chem}\end{eqnarray} with the chemical freeze-out at $T_{ch}=170\,{\rm MeV}$, as is observed for the ratios of other hadronic species. One possible explanation of this can be charmonium $recombination$ (from independent charm quarks) at chemical freeze-out, advocated by \cite{Andronic:2003zv} and others. However our findings show that the same ratio naturally appears even for a $single$ $\bar c c\,\,\,$ pair dissolving in a thermal medium, in a ``quasiequilibrium'' occurring at the leakage stage. Especially at SPS, where statistical recombination would require a charm density which is too large, this is an attractive possibility. \subsection{Production of the initial $\bar c c$ pairs} \label{sec_pyth} We start with $\bar c c\,\,\,$ events produced with PYTHIA, a particle physics event generator \cite{sjostrand}. PYTHIA yields $\bar c c\,\,\,$ pairs through a set of perturbative QCD events: through Monte-Carlo it selects initial partons from the parton distribution function of one's choosing and proceeds through leading-order QCD differential cross sections to a random final state. The $p_t$ and rapidity distributions of charm produced in pp collisions are believed to be rather adequately represented. By using PYTHIA we do not, however, imply that it solves the many open issues related to charm production, such as the color octet versus singlet state. It also leaves open the very important issue of $feeddown$ from charmonium excited states (see below). One more open question -- needed to start our simulations -- is how to sample the distribution in position space. Indeed, each pQCD event generated in PYTHIA is a momentum eigenstate without any width, so by Heisenberg's uncertainty relation the pairs are spatially delocalized, which is unrealistic. We proceed assuming the form for the initial phase-space distribution to be \begin{eqnarray} P_{\rm{init}}({\bf r}, {\bf p}) \propto P_{\rm{PYTH}}({\bf p}) \exp(-{\bf r}^2/2\sigma^2) \end{eqnarray} By setting $\sigma=0.3\;{\rm fm}$ one can tune the energy distribution to give a reasonable probability for the formation of the $J/\psi$ state in pp events. However this does $not$ yield correct formation probabilities for $\chi,\psi^{'}$. This is hardly surprising: since, for example, the $\psi^{'}$ wavefunction has a sign change at a certain distance, the exact production amplitude is required for a projection, and an order-of-magnitude size estimate is not good enough. Since feeddown from those states contributes about 40\% of the observed $J/\psi$, we simply refrain from any quantitative statements about pp (and very peripheral AA) collisions, focusing only on distributions $after$ some amount of time spent in QGP. \begin{figure} \label{etot_vs_time} \includegraphics[width=8cm]{etot_vs_time} \caption{(Color online.) Evolving energy distribution for an ensemble of $\bar c c\,\,\,$ pairs at time moments $t=2,3,10\,$ fm/c (circles, squares and triangles, respectively).} \end{figure} \subsection{Langevin motion of a $\bar c c\,\,\,$ pair in an expanding fireball} \label{fireball} Finally, we model the motion of a charm quark pair in an evolving fireball.
We use the same framework and programs as in \cite{MT}, which examined the motion of a single charm quark, now applied to the propagation of an interacting pair. We start with a large number of $\bar c c\,\,\,$ pairs produced with PYTHIA pQCD event generation, and randomly place them in position space throughout the fireball, using a Monte-Carlo Glauber calculation. Then, the pairs are evolved in time according to the Langevin equations: \begin{eqnarray} {d {\bf p} \over dt}=-\eta {\bf p} + {\bf \xi}-{\bf \nabla} U \end{eqnarray} \begin{eqnarray} {d {\bf r} \over dt}={{\bf p} \over m_c} \end{eqnarray} where ${\bf \xi}$ corresponds to a random force and $\eta$ to the drag coefficient. The condition that the Langevin equations evolve a distribution towards thermal equilibrium gives the relation $\langle \xi_i(t) \xi_j(t') \rangle = 2MT \eta \delta_{ij} \delta(t-t')$. We proceed here with the drag coefficient set via the diffusion constant of \cite{MT}, $\eta_D=\frac{2\pi T^2}{1.5M}$, and, as discussed earlier, with our potential taken as $V$ instead of $F$. Now we examine the evolution of the quark pairs as discussed before, examining pairs at mid-rapidity in a boost-invariant, 2-dimensional ultra-relativistic gas simulation, the same hydrodynamical simulation used in \cite{MT}. We stop the Langevin-with-interaction evolution when $T<T_c$. The distribution over energy at different moments of time is shown in Fig.\ref{etot_vs_time}. We will discuss our results subsequently, at two different levels of sophistication. First we will show results with only the total number of bound states monitored, and then we will show results where the different charmonium states and the feeddown to $J/\psi$ are considered, and compare these results with PHENIX data. How we determine whether or not to call a $\bar c c\,\,\,$ pair in our simulation bound, and which particular charmonium state the pair exists as if it is bound, is discussed in Appendix \ref{prob}. Fig. \ref{jpsioccb_y0} shows the number of ``bound'' $\bar c c\,\,\,$ pairs as a fraction of their total number, monitored throughout the course of the simulation as a function of time. Note that the realistic hydrodynamical/Langevin simulation agrees qualitatively with the analytic results of Section \ref{charmthermal}. During the first fm/c one finds some boost in the probability for a $\bar c c\,\,\,$ pair to be bound, due to rapid thermalization in momentum space. Later the probability falls due to the slow diffusion in position space. This figure emphasizes our main qualitative finding, a survival probability of the $J/\psi$ on the order of a half. \begin{figure}[t] \label{jpsioccb_y0} \includegraphics[width=8cm]{prob_bound} \caption{ (Color online.) Probability of $\bar c c\,\,\,$ pairs to be bound at RHIC Au+Au, $\sqrt{s}=200\;{\rm GeV}$, mid-rapidity.} \end{figure} \subsection{Shadowing and ``normal'' absorption} Experimental data include not only the ``QGP suppression'' we study in this work, but also (i) initial-state effects (modified parton distributions in ``cold nuclear matter'') plus (ii) the so-called ``normal nuclear absorption''. The way we have chosen to display the PHENIX data \cite{phenix_nov06} is as follows: before we compare those with our results we ``factor out'' the cold nuclear matter effects, by defining (for any given rapidity $y$) the following ratio of Au+Au and d+Au data \begin{eqnarray} R^{anomalous}_{AA}(y)=\frac{R^{PHENIX}_{AA}(y)}{R_{pA}(y)R_{pA}(-y)} \end{eqnarray} to be called the ``anomalous suppression''.
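Schematically, the correction applied to each data point amounts to the following (the absorption parametrization with $\sigma_{abs}$ is described next; the function names, and the use of Python, are ours and purely illustrative):
\begin{verbatim}
import numpy as np

SIGMA_ABS = 0.1      # fm^2, nuclear absorption cross-section at y = 0

def r_pA_squared(npart_density):
    """(R_pA(y=0))^2 = exp(-sigma_abs <N_part per unit transverse area>)."""
    return np.exp(-SIGMA_ABS * npart_density)

def r_anomalous_y0(r_AA, npart_density):
    """Divide out cold-nuclear-matter effects at mid-rapidity."""
    return r_AA / r_pA_squared(npart_density)
\end{verbatim}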
In principle this should include only data; but unfortunately the large dAu sample taken in 2008 is not yet analyzed (at the time of this writing), while the 2003 set has error bars which are too large. This forces us to use a model at this point, following Kharzeev et al \cite{kharzeev_diss} with $R_{pA}=\exp(-\sigma_{abs}\langle L \rangle n_0)$, where $\langle L \rangle$ is the mean path length of the $J/\psi$ through nuclear matter, $n_0$ is the nuclear density, and $\sigma_{\rm{abs}}$ is the nuclear absorption cross-section (parametrized from \cite{kharzeev_diss} to be $0.1\, {\rm fm}^2$ for rapidity $y=0$). Finally, for rapidity $y=0$, we rewrite this as $(R_{pA}(y=0))^2=\exp(-\sigma_{abs} \left \langle N_{part} \right \rangle)$, where $\left \langle N_{part} \right \rangle$ is the density per unit area of participants in the collision plane. We further used Glauber model calculations \cite{kharzeev_glauber} to determine $\left \langle N_{part} \right \rangle$ for a given $N_{\rm{part}}$ at PHENIX. We divide each Au+Au data point from PHENIX by this quantity and call the result $ R^{anomalous}_{AA}(y=0)$, plotted as points in Figs.\ref{raa_vs_npart} and \ref{plot_feeddown}, to be compared with our simulation. \begin{figure} \label{raa_vs_npart} \includegraphics[width=8cm]{raa_compared_bound} \caption{(color online) Points show the magnitude of the anomalous suppression at mid-rapidity $ R^{anomalous}_{AA}(y=0)$ versus the centrality (the number of participants), using Au+Au PHENIX data. The curve is the probability to be bound (determined by the energy projection) at the end of the QGP era, when $T=T_c$. } \end{figure} \subsection{$\psi^{'}$ production and feeddown} \label{feeddown} Next we calculate the ratio $N_{\psi^{'}}/N_{J/\psi}$ in our simulations, for different centralities. There are well known NA38/50/60 measurements of this ratio at the SPS, but at RHIC it has been measured so far only in pp collisions, by the PHENIX detector \cite{:2008as}, to be 0.14 $\pm$ 0.04, which makes the ratio of direct $\psi^{'}$ to $J/\psi$ particles 0.24, as in \cite{Digal:2001ue}. Hopefully higher luminosity at RHIC will make possible a future measurement of this ratio in Au+Au collisions of various centralities. We calculate $N_{\psi^{'}}/N_{J/\psi}$ as follows: (i) first we run our simulation and determine the distributions $f(E)$ over the $\bar c c\,\,\,$ pair energy $E=E_{CM}-2M_c$ (in the pair center of mass frame); (ii) then we compare those to the quasiequilibrium ones, $f_0(E)$, from simulations at fixed temperature (slightly above $T_c$). Since both are done for the same interaction, in the ratio $f(E)/f_0(E)$ the density of states drops out. This ratio tells us how different the actual distribution is from that in quasiequilibrium. (iii) Then we form the $double$ ratios, at two relative energies corresponding to the $\psi^{'}$ and $J/\psi$ masses (minus $2m_{charm}$) \begin{eqnarray} R_{\psi^{'}/\psi}=\frac{f(0.8\;{\rm GeV})}{f_0(0.8\;{\rm GeV})}/\frac{f(0.3\;{\rm GeV})}{f_0(0.3\;{\rm GeV})} \label{double_ratio} \end{eqnarray} This now includes nonequilibrium effects for both of them. Finally (iv) we switch from the continuum classical distributions to the quantum one, assuming that in quasiequilibrium the relation (\ref{eqn_chem}) holds. If so, the particle ratio is a combination of nonequilibrium and equilibrium factors \begin{eqnarray} {N_{\psi^{'}} \over N_{J/\psi}}= R_{\psi^{'}/\psi}\exp(-\Delta M/T)\end{eqnarray} The double ratio (or $\exp(\Delta M/T)N_{\psi^{'}} / N_{J/\psi}$) is plotted vs centrality in Fig.
\ref{psiratio}. As one can see, it goes to unity for the most central collisions: so quasiequilibrium is actually reached in this case. For mid-central bins the $\psi^{'}$ production is about twice as large because of insufficient time. This is to be compared to the experimental pp value for the ratio, which is about 5. (We remind the reader that PYTHIA plus our classical projection method does not work for pp collisions.) \\ \begin{figure} \includegraphics[width=8cm]{psiratio} \caption{(Color online.) The double ratio $R_{\psi^{'}/\psi}$ defined in (\ref{double_ratio}) versus centrality (number of participants). One point (green box) at $N_{part}=2$ corresponds to experimental data for $\psi^{'}$ and the $direct$ $J/\psi$, for pp collisions. } \label{psiratio} \end{figure} Finally, we use this result to estimate the effect of feeddown from higher states. To do this, we write the final number of $J/\psi$ particles observed as the number of directly produced $J/\psi$ particles plus the number of $J/\psi$ particles produced from feeddown from higher charmonium states: \begin{eqnarray} N^{final}_{J/\psi}=N^{direct}_{J/\psi}\left[1+ R_{\psi^{'}/\psi} \sum_i (\frac{g_i}{3}) \exp(-{\Delta M_i \over T})B_i\right] \end{eqnarray} where $i$ is summed over the $\chi_1$, $\chi_2$, and $\psi^{'}$ particles which contribute significantly to feeddown, $B_i$ represents their branching ratio into $J/\psi+...$, $g_i$ is the degeneracy of the state (divided by 3, the degeneracy of the $J/\psi$), and $\Delta M_i$ is the mass difference between the $i$-th state and the $J/\psi$. The $ R_{\psi^{'}/\psi}$ is the non-equilibrium factor discussed above: it is factored outside the sum because it is very similar for all these states. Now we are ready to discuss the centrality dependence of the $J/\psi$ production $including$ the feeddown. For each centrality we define the direct yield $N_{J/\psi}^{direct}(b)$ as the total number of $c \bar c$ pairs in our ensemble with energy (in the pair rest frame) $E<2M_{charm}+0.33\;{\rm GeV}$. The feeddown gets its dependence on centrality from $ R_{\psi'/\psi}(b)$ determined from the simulation. The absolute normalization of the results deserves special discussion. We predict the absolute probability of $J/\psi$ production, both direct and with feeddown, normalized per $\bar c c\,\,\,$ $pair$ produced in the same collisions (e.g. centrality bin). Unfortunately, the total cross section of charm production is not yet measured with sufficient accuracy to normalize the results this way. The usual way to present these results is in the form of the so-called $R_{AA}$ ratio, relating production in Au+Au at given centrality to that in pp, times the theoretical (Glauber) predictions for the number of hard collisions. In other words, the parton model for $\bar c c$ is used. Unfortunately, the experimental data about feeddown in pp are still uncertain enough to produce a sufficiently large scale ambiguity. We cannot obtain this from our theory either, because (as explained above) our classical projections do not work adequately for pp (and very peripheral) collisions. Thus we see at the moment no sufficiently accurate way of $absolute$ comparison with the data. Because of that, we simply normalize our results assuming that there is no QGP suppression for peripheral bins (meaning $R^{anomalous}_{AA}(N_{part}<100)=1$). The results normalized like this are shown in Fig.
Now we are ready to discuss the centrality dependence of the $J/\psi$ production $including$ the feeddown. We define, for each centrality, the direct yield $N_{J/\psi}^{direct}(b)$ as the total number of $c \bar c$ pairs in our ensemble with energy (in the pair rest frame) $E<2M_{charm}+0.33\;{\rm GeV}$. The feeddown gets its dependence on centrality from $R_{\psi'/\psi}(b)$, determined from the simulation. The absolute normalization of the results deserves special discussion. We predict the absolute probability of $J/\psi$ production, both direct and with feeddown, normalized per $\bar c c\,\,\,$ $pair$ produced in the same collisions (e.g. centrality bin). Unfortunately, the total cross section of charm production is not yet measured with sufficient accuracy to normalize the results this way. The usual way to present these results is in the form of the so-called $R_{AA}$ ratio, relating production in Au+Au at given centrality to that in pp, times the theoretical (Glauber) predictions for the number of hard collisions. In other words, the parton model for $\bar c c$ is used. Unfortunately, the experimental data on feeddown in pp are still uncertain enough to produce a sizable overall-scale ambiguity. Nor can we obtain this from our theory, because (as explained above) our classical projections do not work adequately for pp (and very peripheral) collisions. Thus we see at the moment no sufficiently accurate way of $absolute$ comparison with the data. Because of that, we simply normalize our results assuming that there is no QGP suppression (meaning $R^{anomalous}_{AA}(N_{part}<100)=1$). The results normalized in this way are shown in Fig.~\ref{plot_feeddown}: we conclude that although feeddown is not large, taking it into account helps bring the shape of the anomalous suppression closer to observations. \begin{figure} \includegraphics[width=8cm]{plot_feeddown} \caption{(Color online) The points are PHENIX data for $R^{anomalous}_{AA}(y=0)$, the same as used in Fig.~\ref{raa_vs_npart}. The two curves are our model, with (solid, upper) and without (dashed, lower) feeddown.} \label{plot_feeddown} \end{figure} \subsection{Including the ``mixed phase''} In our work so far, we have only examined the evolution of the $c \bar c$ pairs during the QGP phase, stopping the evolution wherever the fluid's temperature reached $T_c$. However, in order to follow the evolution of charmonia to their hadronization we need to model the dynamics of the charm quarks also through the ``mixed phase'', also known as the near-$T_c$ region. In various hydrodynamic models which describe heavy ion collisions, the region of roughly $T=0.9-1.1\;T_c$ is treated as a separate ``mixed phase'', distinct from the QGP and hadronic phases. Indeed, it has a very different equation of state: while the temperature and pressure remain nearly constant, the energy and entropy densities jump by a large factor\footnote{Although the exact nature of matter in the near-$T_c$ region is not yet understood, let us mention that the original ``mixed phase'' description, originating from the notion of a first-order phase transition, cannot be true, as the ``supercooling'' and bubble formation expected in this case are not observed experimentally. Lattice gauge theory suggests a smooth crossover-type transition, with a high but finite peak in the specific heat. Recently there has been renewed interest in this region, after the so-called ``magnetic scenario'' for it was proposed \cite{Liao:2006ry,Chernodub:2006gu}, describing it as a plasma containing a significant fraction of magnetic monopoles.}. What is very important for the present paper is that the near-$T_c$ region occupies a significant fraction of the fireball evolution, in spite of being a very narrow interval in terms of $T$. Indeed, one should rather focus not on $T$ but on the entropy density $s$, which shows a simple monotonic decrease with time, $s\sim 1/\tau$, through all three phases. For a quantitative description of the mixed phase we used hydrodynamical calculations known to describe radial and elliptic flow well, such as the work by Shuryak and Teaney \cite{Teaney:2001av}. It follows from their solution that the ``mixed phase'' persists for about $5 \, {\rm fm}/c$ after the deconfined phase, which is comparable to the lifetime of the deconfined phase at the very center of the fireball. Thus it is by no means a small effect, and should be included in any realistic treatment of a heavy-ion collision. The flow during this time was found to be well approximated by a Hubble-like expansion with radial velocity $v=Hr$ and time-independent $H \approx 0.075 \, {\rm fm}^{-1}$ for central collisions. For a collision with a nonzero impact parameter (below we will consider $b=6.5 \, {\rm fm}$), the anisotropy of this expansion can be parameterized similarly: \begin{eqnarray} v_i=H_i x_i\end{eqnarray} with $i=1,2$ and $H_x=0.078\, {\rm fm}^{-1}$, $H_y=0.058 \, {\rm fm}^{-1}$: thus by this late stage the expansion is only mildly anisotropic, with $H_y/H_x$ of about 80\%.
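For orientation, this Hubble-like parametrization of the mixed-phase flow is simple enough to state in a few lines of Python; the numbers are those quoted above, and the code is only a sketch.
\begin{verbatim}
import numpy as np

# Anisotropic Hubble-like flow in the near-T_c phase, b = 6.5 fm:
H = np.array([0.078, 0.058])   # (H_x, H_y) in fm^-1

def flow_velocity(x_transverse):
    """Transverse flow velocity v_i = H_i x_i at position x (fm)."""
    return H * np.asarray(x_transverse)

# At equal radii the flow along x exceeds that along y;
# H_y/H_x ~ 0.8 is the mild late-stage anisotropy quoted in the text.
print(flow_velocity([5.0, 0.0]), flow_velocity([0.0, 5.0]))
\end{verbatim}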
It is fair to say that the flow of the medium at these later stages is reasonably well understood: thus in our simulations we have used these parameterizations instead of numerical solutions to hydrodynamics, which were necessary for the QGP phase. Let us start with two extreme scenarios for the dynamics of the charm quarks during this phase of the collision:\\ 1.) the charm quarks are completely ``stopped'' in the medium, so that they experience the same Hubble-like flow as the matter; \\ 2.) $\bar c c$ pairs do not interact at all with the medium near $T_c$, moving ballistically with constant velocity for the corresponding time in the collision. If the first scenario were true, the effect of Hubble flow would be to increase all momenta of particles by the same multiplicative factors, $p_i(t)=p_i(0)\exp(H_it)$. With sufficiently high drag, Langevin dynamics would bring the charm quarks rapidly to a thermal distribution, and since $M\gg T$ it is a good approximation in this case to say that the heavy quarks have been ``stopped''. However, we will show below that at the ``realistic'' value used for the drag $\eta_c$ this does $not$ happen during the time allocated to the mixed phase; there is instead ongoing ``stopping'' of the charm quarks relative to the fluid elements. (This will also be important for the evolution of the azimuthal anisotropy $v_2(p_t)$ for single charm and for charmonium.) The second scenario predicts a $v_2(p_t)$ for single charm quarks which is far smaller than what is measured. We do not consider this scenario further, even though something might be said for modelling charmonium in the mixed phase as interacting far less than single charm. \begin{figure}[t] \includegraphics[width=8cm]{pt_hubble2} \caption{(Color online) The charm $p_t$-distribution after the mixed phase, compared with the distribution without flow, the distribution originating from PYTHIA, and the distribution before the mixed phase.} \label{plot_pt_hubble} \end{figure} Several single-charm $p_t$-distributions are shown in Figure \ref{plot_pt_hubble} (normalized for simplicity to unity). The initial distribution after hard production predicted by PYTHIA is the largest at high $p_t$: this is compared to the Langevin evolution before (squares) and after (triangles) the mixed phase, for a semi-central RHIC collision ($b=7\;{\rm fm}$). In order to see that radial flow is still important, we have also shown what happens if the Langevin evolution happens on top of an unmoving fixed-$T$ plasma (circles). This comparison demonstrates once again the main point of this paper: that for charm quarks and charmonium in a heavy-ion collision equilibration is never complete, even in momentum space, so the specific timescales of the different phases of matter are of fundamental importance.
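The statement that stopping remains incomplete can be illustrated with a one-dimensional Langevin toy model. The drag value, charm mass, and time step below are illustrative assumptions, not the parameters of our actual simulation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def relax_momentum(p0, eta=0.2, T=0.170, M=1.4, t_max=5.0, dt=0.01):
    """1D Langevin update dp = -eta p dt + noise, in the local fluid frame.

    eta (1/fm) is an assumed drag coefficient; the noise strength follows
    the fluctuation-dissipation relation kappa = 2 M T eta."""
    kappa = 2.0 * M * T * eta
    p = p0
    for _ in range(int(t_max / dt)):
        p += -eta * p * dt + rng.normal(0.0, np.sqrt(kappa * dt))
    return p

# After 5 fm/c a 3 GeV charm quark retains, on average, exp(-eta*t) ~ 1/3
# of its initial momentum: relaxation is real but incomplete.
print(np.mean([relax_momentum(3.0) for _ in range(500)]))
\end{verbatim}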
Unfortunately, in the near-$T_c$ region it is much less clear how to describe the $c-\bar c$ interaction. As we have learned from lattice data, the difference between the free-energy and potential-energy potentials is very drastic in this case: in the former the tension (the force from the linear potential) disappears, while in the latter it becomes about 5 times stronger than it is in vacuum. As discussed in Refs.~\cite{Liao:2007mj,Liao:2008vj}, the latter is presumably due to a metastable electric flux tube. Which potential to use depends on the timescales of the $c-\bar c$ dynamics, which are not well understood at this point. Therefore we take for now a conservative approach, assuming that at the near-$T_c$ stage charm pairs interact according to the simple Coulomb interaction $V=-\alpha_s/r$. Additionally, in our model for this phase we assume that the interaction of the charm quarks with the medium can be modelled with the same Langevin dynamics, with the temperature approximated as a fixed $T=T_c$ and the flow given as above. We found that with the simple Coulomb potential used in the mixed phase, the survival probability drops slightly but not significantly; and although we do not discuss other possibilities further in this work, in principle this conclusion can change if the potential to be used has significant tension. One final interesting observable would be a measurement of charmonium elliptic flow, characterized by the azimuthal anisotropy parameter $v_2=\langle\cos(2\phi)\rangle$, induced by the ellipticity of the flow on the charm quarks. A measurement with low statistics has already been made at PHENIX \cite{Silvestre:2008nk}: both PHENIX and STAR are now working on higher-statistics data already on tape. The result of our calculation of $v_2(p_t)$, both for single charm quarks and for the $J/\psi$, is shown in Figure \ref{plot_v2}. Greco, Ko, and Rapp also made predictions for the $v_2$ of $D$ and $J/\psi$ \cite{Greco:2003vf}, based on a completely different scenario: in their case the charm distributions are completely equilibrated and the charmonium states coalesce from them at hadronization. In spite of this difference, our predictions are similar: the $v_2$ of $J/\psi$ should be less than the $v_2$ of single charm at low $p_t$, but then increase past the $v_2$ of single charm at $p_t> 3\,{\rm GeV}$. This shows that the observation of charmonium $v_2$ cannot by itself be considered an argument for coalescence. \begin{figure}[t] \includegraphics[width=8cm]{v2jpsi2} \caption{(Color online) The azimuthal anisotropy versus transverse momentum for both single charm and the $J/\psi$.} \label{plot_v2} \end{figure}
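For reference, the anisotropy parameter plotted in Figure \ref{plot_v2} can be extracted from any ensemble of momenta with a few lines of code; the sketch below assumes a fixed event plane (the $x$--$z$ plane) and is not the analysis code used for the figure.
\begin{verbatim}
import numpy as np

def v2_in_pt_bins(px, py, pt_edges):
    """v2 = <cos(2 phi)> in transverse-momentum bins.

    px, py: momentum components of single charm quarks or of
    reconstructed J/psi, with the event plane along x."""
    px, py = np.asarray(px), np.asarray(py)
    pt = np.hypot(px, py)
    cos2phi = (px**2 - py**2) / pt**2   # cos(2 phi) without trig calls
    out = []
    for lo, hi in zip(pt_edges[:-1], pt_edges[1:]):
        sel = (pt >= lo) & (pt < hi)
        out.append(cos2phi[sel].mean() if sel.any() else np.nan)
    return np.array(out)
\end{verbatim}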
\section{Summary} We have studied the relaxation process of $\bar c c\,\,\,$ pairs produced by hard processes in heavy-ion collisions throughout the sQGP stage, using hydrodynamics plus Langevin dynamics. The main elements of the paper are: (i) inclusion of the interaction force between the charm quarks and (ii) emphasis on deviations from equilibrium during the finite QGP lifetime. Our main finding is that the lifetime of the sQGP is not sufficient to reach the equilibrium distribution of the pairs in space, allowing a significant fraction, $\sim 1/2$, of $J/\psi$ to survive. This survival probability in the sQGP is larger than in earlier perturbative estimates, or than for Langevin diffusion ignoring the mutual interaction. That is why there is no large difference between suppression at RHIC and SPS, in spite of the longer QGP lifetime in the former case. While the momentum relaxation is rather rapid, we found that the later evolution reaches the so-called quasiequilibrium regime, which is maintained throughout the QGP expansion. The spatial distributions after some time develop a ``core'', in which $\bar c c\,\,\,$ pairs remain in close proximity due to the remaining effective attraction in the sQGP, combined with a relatively slow leakage into a spreading tail toward large $r$. We propose quasiequilibrium as the key to an explanation of the apparently thermal ratios of $\psi^{'}$ and $J/\psi$, especially at the SPS. The shape of the centrality dependence of our survival probability is in agreement with data. Therefore, although we have not yet directly evaluated ``nondiagonal'' recombination, we think that most of the $J/\psi$ observed at SPS and RHIC are still from the ``diagonal'' pairs. This and other issues will of course be clarified as more simulations and data in different kinematic domains become available. The calculation is extended to the near-$T_c$ region -- known as a ``mixed phase'' in hydrodynamical calculations. Its duration for RHIC collisions is about 5 fm/c, comparable to that of the QGP. We have used a simple Hubble-like parameterization and a minimal Coulomb potential, and predict both the charmonium $p_t$ spectra and the azimuthal asymmetry parameter $v_2$, which is expected to be measured soon. \vskip 1.0cm {\bf Acknowledgments.\,\,} We thank P. Petreczky for providing the lattice data on the internal energy $V(T,r)$ used in this work. C. Young would like to thank K. Dusling for useful discussions on various topics in this work. We especially thank D. Teaney for permitting us to use his hydrodynamics+Langevin code and providing much needed assistance. This work is partially supported by the US-DOE grants DE-FG02-88ER40388 and DE-FG03-97ER4014.
\section{Motivation and objectives} The use of computational fluid dynamics (CFD) for external aerodynamic applications has been a key tool for aircraft design in the modern aerospace industry. CFD methodologies with increasing functionality and performance have greatly improved our understanding and predictive capabilities of complex flows. These improvements suggest that Certification by Analysis (CbA), the prediction of the aerodynamic quantities of interest by numerical simulation~\citep{Clark2020}, may soon be a reality. CbA is expected to reduce the number of wind tunnel experiments, lowering both the turnover time and cost of the design cycle. However, flow predictions from state-of-the-art CFD solvers are still unable to comply with the stringent accuracy requirements and computational efficiency demanded by the industry. These limitations are imposed, largely, by the defiant ubiquity of turbulence. In the present work, we investigate the performance of wall-modeled large-eddy simulation (WMLES) in predicting the mean flow quantities over the fuselage and wing-body junction of the NASA Juncture Flow Experiment \citep{Rumsey2019}. Computations submitted to previous AIAA Drag Prediction Workshops~\citep{Vassberg2008} have displayed large variations in the prediction of separation, skin friction, and pressure in the corner-flow region near the wing trailing edge. To improve the performance of CFD, NASA has developed a validation experiment for a generic full-span wing-fuselage junction model at subsonic conditions. The reader is referred to \citet{Rumsey2019} for a summary of the history and goals of the NASA Juncture Flow Experiment~\citep[see also][]{Rumsey2016a, Rumsey2016b}. The geometry and flow conditions are designed to yield flow separation in the trailing-edge corner of the wing, with recirculation bubbles varying in size with the angle of attack (AoA). The model is a full-span wing-fuselage body that was configured with truncated DLR-F6 wings, both with and without a leading-edge horn at the wing root. The model has been tested at a chord Reynolds number of 2.4 million, and AoA ranging from $-10$ to $+10$ degrees, in the Langley 14- by 22-foot Subsonic Tunnel. An overview of the experimental measurements can be found in \citet{Kegerise2019}. The main aspects of the planning and execution of the project are discussed by \citet{Rumsey2018}, along with details about the CFD and experimental teams. To date, most CFD efforts on the NASA Juncture Flow Experiment have been conducted using RANS or hybrid-RANS solvers. \citet{Lee2017} performed the first CFD analysis to aid the NASA Juncture Flow committee in selecting the wing configuration for the final experiment. \citet{Lee2018} presented a preliminary CFD study of the near wing-body juncture region to evaluate the best practices in simulating wind tunnel effects. \citet{Rumsey2019} used FUN3D to investigate the ability of RANS-based CFD solvers to predict the flow details leading up to separation. The study comprised different RANS turbulence models, such as a linear eddy-viscosity one-equation model, a nonlinear version of the same model, and a full second-moment seven-equation model. \citet{Rumsey2019} also performed a grid sensitivity analysis and CFD uncertainty quantification. Comparisons between CFD simulations and the wind tunnel experimental results have been recently documented by \citet{Lee2019}.
NASA has recognized WMLES as a critical pacing item for ``developing a visionary CFD capability required by the notional year 2030''. According to NASA's recent CFD Vision 2030 report \citep{Slotnick2014}, hybrid RANS/LES \citep{Spalart1997, Spalart2009} and WMLES \citep{Bose2018} are identified as the most viable approaches for predicting realistic flows at high Reynolds numbers in external aerodynamics. However, WMLES has been less thoroughly investigated. In the present study, we perform WMLES of the NASA Juncture Flow. Other WMLES studies of the same flow configuration include the works by \cite{Iyer2020}, \cite{Ghate2020}, and \cite{Lozano_AIAA_2020}. These authors highlighted the capabilities of WMLES for predicting wall pressure, velocity, and Reynolds stresses, especially compared with RANS-based methodologies. Nonetheless, it was noted that WMLES is still far from providing the robustness and stringent accuracy required for CbA, especially in the separated regions and wing-fuselage corners. The goal of this brief is to systematically quantify some of these errors. Modeling improvements to alleviate these limitations are discussed in the companion brief by \cite{Lozano_brief_2020_2}, which can be found in this volume. This brief is organized as follows. The flow setup, mathematical modeling, and numerical approach are presented in Section \ref{sec:numerical}. The strategies for grid generation are discussed in Section \ref{sec:gridding}. The results are presented in Section \ref{sec:results}, which includes the prediction and error scaling of the mean velocity profiles and Reynolds stresses for three different locations on the aircraft: the upstream region of the fuselage, the wing-body juncture, and the wing-body juncture close to the trailing edge. Finally, conclusions are offered in Section \ref{sec:conclusions}. \section{Numerical Methods}\label{sec:numerical} \subsection{Flow conditions and computational setup}\label{sec:setup} We use the NASA Juncture Flow geometry with a wing based on the DLR-F6 and a leading-edge horn to mitigate the effect of the horseshoe vortex over the wing-fuselage juncture. The model wingspan is nominally 3397.2~mm, the fuselage length is 4839.2~mm, and the crank chord (chord length at the Yehudi break) is $L=557.1$~mm. The frame of reference is such that the fuselage nose is located at $x = 0$, the $x$-axis is aligned with the fuselage centerline, the $y$-axis denotes the spanwise direction, and the $z$-axis is the vertical direction. The associated instantaneous velocities are denoted by $u$, $v$, and $w$, and occasionally by $u_1$, $u_2$, and $u_3$. In the experiment, the model was tripped near the front of the fuselage and on the upper and lower surfaces of both wings. In our case, preliminary calculations showed that tripping was also necessary to trigger the transition to turbulence over the wing. Hence, the geometry of the wing was modified by displacing in the $z$ direction a line of surface mesh points close to the leading edge by 1~mm along the suction side of the wing, and by -1~mm along the pressure side. The tripping lines follow approximately the location of the tripping dots used in the experimental setup for the left wing (lower surface $x = (4144-y)/2.082$; upper surface $x = (3775-y)/1.975$ for $y<-362$ and $x = (2847-y)/1.532$ for $y>-362$). Tripping using dots mimicking the experimental setup was also tested. It was found that the results over the wing-body juncture show little sensitivity to the tripping, due to the presence of the incoming boundary layer from the fuselage.
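For reproducibility, the piecewise-linear tripping lines quoted above can be written as a small helper. The function below simply evaluates those fits for the left wing ($y$ in mm) and is our own restatement of the expressions in the text.
\begin{verbatim}
def trip_line_x(y, surface):
    """Streamwise location x (mm) of the tripping line at span station
    y (mm), left wing, following the piecewise fits given in the text."""
    if surface == "lower":
        return (4144.0 - y) / 2.082
    if surface == "upper":
        return (3775.0 - y) / 1.975 if y < -362.0 else (2847.0 - y) / 1.532
    raise ValueError("surface must be 'upper' or 'lower'")

# Example: trip location on the upper surface at y = -500 mm
print(trip_line_x(-500.0, "upper"))   # -> (3775 + 500)/1.975 ~ 2165 mm
\end{verbatim}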
No tripping was needed on the fuselage, which naturally transitioned from laminar to turbulent flow. In the wind tunnel, the model was mounted on a sting aligned with the fuselage axis. The sting was attached to a mast that emerged from the wind tunnel floor. Here, all calculations are performed in free-air conditions, and the sting and mast are ignored. The computational setup is such that the dimensions of the domain are about five times the length of the fuselage in the three directions. The Reynolds number is $Re = L U_\infty/\nu=2.4$ million, based on the crank chord length $L$ and the freestream velocity $U_\infty$. The freestream Mach number is Ma $= 0.189$, the freestream temperature is $T = 288.84$ K, and the dynamic pressure is 2476 Pa. We impose a uniform plug flow as the inflow boundary condition. The Navier--Stokes characteristic boundary condition for subsonic non-reflecting outflow is imposed at the lateral, outflow, and top boundaries~\citep{Poinsot1992}. At the walls, we impose Neumann boundary conditions with the shear stress provided by the wall model, as described in Section \ref{sec:models}. \subsection{Subgrid-scale and wall modeling}\label{sec:models} The simulations are conducted with the high-fidelity solver charLES developed by Cascade Technologies, Inc. The code integrates the compressible LES equations using a kinetic-energy-conserving, second-order accurate, finite-volume method. The numerical discretization relies on a flux formulation which is approximately entropy preserving in the inviscid limit, thereby limiting the amount of numerical dissipation added into the calculation. The time integration is performed with a third-order Runge-Kutta explicit method. The SGS model is the dynamic Smagorinsky model \citep{Germano1991} with the modification by \cite{Lilly1992}. We utilize a wall model to overcome the restrictive grid-resolution requirements needed to resolve the small-scale flow motions in the vicinity of the walls. The no-slip boundary condition at the walls is replaced by a wall-stress boundary condition. The wall stress is obtained from the wall model, and the walls are assumed isothermal. We use an algebraic equilibrium wall model derived from the integration of the one-dimensional equilibrium stress model along the wall-normal direction \citep{Wang2002, Kawai2012, Larsson2015}. The matching location for the wall model is the first off-wall cell center of the LES grid. No temporal filtering or other special treatments were used. \section{Grid strategies and cost}\label{sec:gridding} \subsection{Grid generation: constant-size grid vs. boundary-layer-conforming grid} \label{subsec:tbl} The mesh generation is based on a Voronoi hexagonal close-packed point-seeding method. We follow two strategies for grid generation: \begin{itemize} \item[(i)] Constant-size grid. In the first approach, we set the grid size in the vicinity of the aircraft surface to be roughly isotropic, $\Delta\approx \Delta_x \approx \Delta_y \approx \Delta_z$. The number of layers in the direction normal to the wall of size $\Delta$ is also specified. We set the far-field grid resolution, $\Delta_\mathrm{far}>\Delta$, and create additional layers with increasing grid size to blend the near-wall grid with the far-field size.
The meshes are constructed using a Voronoi diagram and ten iterations of Lloyd's algorithm to smooth the transition between layers with different grid resolutions. Figure \ref{fig:tbl_grids}(a) illustrates the grid structure for $\Delta=2$~mm and $\Delta_\mathrm{far}=80$~mm. This grid-generation approach is algorithmically simple and efficient. However, it is agnostic to details of the actual flow, such as wake/shear regions and boundary-layer growth. This implies that flow regions close to the fuselage nose and wing leading edge are underresolved (less than one point per boundary-layer thickness), whereas the wing trailing edge and the downstream-fuselage regions are seeded with up to hundreds of points per boundary-layer thickness. Gridding strategy (ii) aims at alleviating this issue. \begin{figure} \begin{center} \includegraphics[width=1.0\textwidth]{Grid_regular.pdf} \end{center} \begin{center} \includegraphics[width=1.0\textwidth]{Grid_custom.pdf} \end{center} \caption{Visualization of Voronoi control volumes for (top) a constant-size grid following strategy (i) with $\Delta=2$~mm and $\Delta_\mathrm{far}=80$~mm and (bottom) a boundary-layer-conforming grid following strategy (ii) with $N_{bl} = 5$ and $Re_\Delta^\mathrm{min}=8 \times 10^3$.\label{fig:tbl_grids}} \end{figure} \item[(ii)] Boundary-layer-conforming grid. In the second gridding strategy, we account for the actual growth of the turbulent boundary layers $\delta$ by seeding the control volumes consistently. We refer to this approach as a boundary-layer-conforming grid (BL-conforming grid). The method requires two parameters. The first one is the number of points per boundary-layer thickness, $N_{bl}$, such that $\Delta_x \approx \Delta_y \approx \Delta_z \approx \Delta \approx \delta/N_{bl}$, which is a function of space. The second parameter is less often discussed in the literature and is the minimum local Reynolds number that we are willing to represent in the flow, $Re_{\Delta}^\mathrm{min} \equiv \Delta_\mathrm{min} U_\infty/\nu$, where $\Delta_\mathrm{min}$ is the smallest grid resolution allowed. This is a necessary constraint, as $\delta \rightarrow 0$ at the body leading edge, which would otherwise impose a large burden on the number of points required. A final condition is a geometric constraint such that $\Delta$ is smaller than the local radius of curvature $R$ of the surface. The grid is then constructed by seeding points within the boundary layer with the space-varying grid size % \begin{equation} \Delta(x,y,z) \approx \mathrm{min}\left[ \mathrm{max}\left( \frac{\gamma\delta}{N_{bl}}, \frac{Re_{\Delta}^\mathrm{min} \nu}{U_\infty}\right), \beta R \right], \end{equation} where $\gamma=1.2$ is a correction factor for $\delta$ to ensure the near-wall grid contains the instantaneous boundary layer, and $\beta=1/2$ (a short sketch of this sizing rule is given right after this list). Note that the grid is still locally isotropic, and the characteristic size of the control volumes is $\delta/N_{bl}$ in the three spatial directions. Figure \ref{fig:tbl_grids}(b) shows the structure of a BL-conforming grid with $N_{bl} = 5$ and $Re_\Delta^\mathrm{min}=8 \times 10^3$. Additional control volumes of increasing size are created to blend the near-wall grid with the far-field grid of size $\Delta_{\mathrm{far}}$. \end{itemize}
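The sizing rule in item (ii) is compact enough to state directly in code. The sketch below is a literal transcription of the min/max formula with the parameter values used here; the function name and argument conventions are ours.
\begin{verbatim}
def target_grid_size(delta, R, nu, U_inf, N_bl=5, Re_min=8.0e3,
                     gamma=1.2, beta=0.5):
    """Local control-volume size for the BL-conforming grid.

    delta: local boundary-layer thickness; R: local surface radius of
    curvature; the Re_min term sets the floor Delta_min = Re_min*nu/U_inf.
    """
    return min(max(gamma * delta / N_bl, Re_min * nu / U_inf), beta * R)
\end{verbatim}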
The gridding approach above requires an estimation of the boundary-layer thickness at each location of the aircraft surface. The method proposed here is based on measuring the deviation of the viscous flow solution from the reference inviscid flow. This is achieved by conducting two simulations: one WMLES, whose velocity is denoted as $\boldsymbol{u}$, and one inviscid simulation (no SGS model and free-slip at the wall), with velocity denoted by $\boldsymbol{u}_I$. The grid generation of both simulations follows strategy (i) with $\Delta =2$~mm. Boundary layers at the leading edge with thickness below $2$~mm are estimated by extrapolating the solution using a power law. Two examples of mean velocity profiles for $\boldsymbol{u}$ and $\boldsymbol{u}_I$ are shown in Figures \ref{fig:tbl_thickness}(a) and (b). The three-dimensional surface representing the boundary-layer edge, $S_\mathrm{tbl}$, is identified as the loci of \begin{equation}\label{eq:Stbl} S_\mathrm{tbl} \equiv \left\{ (x,y,z) : \frac{||\langle \boldsymbol{u}_I(x,y,z) \rangle - \langle \boldsymbol{u}(x,y,z) \rangle||}{|| \langle \boldsymbol{u}_I(x,y,z) \rangle||} = 0.01 \right\}, \end{equation} where $\langle \cdot \rangle$ denotes the time-average. Finally, at each point of the aircraft surface $(x_a,y_a,z_a)$, the boundary-layer thickness $\delta$ is defined as the minimum spherical distance between $\boldsymbol{x}_a = (x_a,y_a,z_a)$ and $\boldsymbol{x} = (x,y,z) \in S_\mathrm{tbl}$, \begin{equation}\label{eq:delta_def} \delta(\boldsymbol{x}_a) \equiv ||\boldsymbol{x}_a - \boldsymbol{x}||_\mathrm{min}, \ \forall \boldsymbol{x} \in S_\mathrm{tbl}. \end{equation} The boundary-layer thickness for the flow conditions of the NASA Juncture Flow is shown in Figure \ref{fig:tbl_thickness}(c) and ranges from $0$~mm at the leading edge of the wing to $\sim 30$~mm at its trailing edge. Thicker boundary layers, of about $50$~mm, are found in the downstream region of the fuselage. Equation (\ref{eq:Stbl}) might be interpreted as the definition of a turbulent/nonturbulent interface, although it also applies to laminar regions. Other approaches for defining $S_{\mathrm{tbl}}$ were also explored and combined with Eq. (\ref{eq:delta_def}), such as isosurfaces of the Q-criterion. Nonetheless, the present flow case is dominated by attached boundary layers, and Eq. (\ref{eq:Stbl}) yields reasonable results for the purpose of generating a BL-conforming grid. \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.47\textwidth]{tbl_WMLES_vs_slip1.pdf}} \hspace{0.1cm} \subfloat[]{\includegraphics[width=0.47\textwidth]{tbl_WMLES_vs_slip2.pdf}} \end{center} \begin{center} \includegraphics[width=0.8\textwidth]{JF_tbl_thickness.pdf} \end{center} \caption{The three mean velocity components for $\langle \boldsymbol{u}\rangle$ (lines with symbols) and $\langle \boldsymbol{u}_I\rangle$ (dotted lines), and the boundary-layer height (dashed). The locations of the mean profiles are indicated in panel (c) by the solid line for panel (a) and the cross for panel (b). (c) Boundary-layer thickness (in millimeters) for the NASA Juncture Flow at AoA = 5 degrees and $Re= 2.4 \times 10^6$. \label{fig:tbl_thickness}} \end{figure}
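In discrete form, the edge-detection step of Eqs. (\ref{eq:Stbl}) and (\ref{eq:delta_def}) amounts to thresholding the viscous/inviscid deviation and taking a minimum distance. The sketch below operates on co-located sample points; the tolerance band around the 1\% level is an implementation choice of ours.
\begin{verbatim}
import numpy as np

def edge_points(x, u_visc, u_inv, level=0.01, band=2e-3):
    """Return the sample points lying on the ~1% deviation surface S_tbl.

    x: (N, 3) sample coordinates; u_visc, u_inv: (N, 3) time-averaged
    velocities from the WMLES and free-slip (inviscid) runs."""
    dev = (np.linalg.norm(u_inv - u_visc, axis=1)
           / np.linalg.norm(u_inv, axis=1))
    return x[np.abs(dev - level) < band]

def bl_thickness(x_surface, x_edge):
    """delta at a surface point: minimum distance to the edge set."""
    return np.linalg.norm(x_edge - x_surface, axis=1).min()
\end{verbatim}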
\subsection{Number of grid points}\label{subsec:number} We estimate the number of grid points (or control volumes) needed to conduct WMLES of the NASA Juncture Flow as a function of the number of points per boundary-layer thickness ($N_{bl}$) and the minimum grid Reynolds number ($Re_{\Delta}^\mathrm{min}$). We assume gridding strategy (ii) and utilize the Juncture Flow geometry. The boundary-layer thickness was obtained following the procedure in Section \ref{subsec:tbl}. The total number of points, $N_\mathrm{points}$, to grid the boundary layer spanning the surface area of the aircraft $S_a$ is \begin{equation}\label{eq:points} N_\mathrm{points} = \int_{0}^{\delta} \int_{S_a} \frac{1}{\Delta(x_{\parallel},y_{\parallel})^3} \mathrm{d}x_{\parallel} \mathrm{d}y_{\parallel} \mathrm{d}n = \int_{S_a} \frac{N_{bl}}{\Delta(x_{\parallel},y_{\parallel})^2} \mathrm{d}x_{\parallel} \mathrm{d}y_{\parallel}, \end{equation} where $x_{\parallel}$, $y_{\parallel}$ are the aircraft wall-parallel directions, and $n$ is the wall-normal direction~\citep[see also][]{Chapman1979, Spalart1997, Choi2012}. Equation (\ref{eq:points}) is integrated numerically, and the results are shown in Figure \ref{fig:cost}. The cost map in Figure \ref{fig:cost}(a) contains $\log_{10}(N_\mathrm{points})$ as a function of $N_{bl}$ and $Re_{\Delta}^\mathrm{min}$. The accuracy of the solution is expected to improve for increasing values of $N_{bl}$, i.e., higher energy content resolved by the LES grid, and decrease with increasing $Re_{\Delta}^\mathrm{min}$. The latter sets the minimum boundary-layer thickness that can be resolved by the LES grid (i.e., the largest thickness of the subgrid boundary layer). Figure \ref{fig:cost}(b) provides a visual illustration of the subgrid-boundary-layer region for $Re_\Delta^\mathrm{min} = 10^4$, which is confined to a small region (less than 10\% of the chord) at the leading edge of the wing. \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.49\textwidth]{Cost_Nbl_ReDeltaMin.pdf}} \hspace{0.3cm} \subfloat[]{\includegraphics[width=0.44\textwidth]{Examples_ReDeltaMin.pdf}} \end{center} \begin{center} \subfloat[]{\includegraphics[width=0.47\textwidth]{Cost_Nbl.pdf}} \hspace{0.1cm} \subfloat[]{\includegraphics[width=0.47\textwidth]{Cost_Re.pdf}} \end{center} \caption{ (a) Logarithm of the number of points ($\log_{10}N_\mathrm{points}$) required for WMLES of the NASA Juncture Flow geometry as a function of the number of grid points per boundary-layer thickness ($N_{bl}$) and minimum grid Reynolds number ($Re_\Delta^\mathrm{min}$) for $Re = 5 \times 10^6$. (b) Subgrid boundary-layer region (in red) for $Re_\Delta^\mathrm{min} = 10^4$ at $Re = 5 \times 10^6$. Panels (c) and (d) show the number of grid points as a function of (c) $N_{bl}$ and (d) $Re$.\label{fig:cost}} \end{figure} The Reynolds number considered in Figure \ref{fig:cost}(a) is $Re=5 \times 10^6$, which is representative of wind tunnel experiments. A Reynolds number typical of aircraft in flight conditions is $Re\approx5 \times 10^7$, which would increase $N_\mathrm{points}$ by a factor of ten due to the thinning of the boundary layers. More precisely, the increase in $N_\mathrm{points}$ is proportional to $Re$ and $N_{bl}$, and roughly inversely proportional to $Re_\Delta^\mathrm{min}$. The scaling properties of Eq. (\ref{eq:points}) can be explained by assuming that the boundary layer over the aircraft is fully turbulent and grows as $\delta \sim (x-x_e)[(x-x_e)U_\infty/\nu]^{-m}$, where $x_e$ is the streamwise distance to the closest leading edge and $m\approx 1/7$ for a zero-pressure-gradient flat-plate turbulent boundary layer (ZPGTBL). If we further assume that $Re \gg Re_\Delta^\mathrm{min}$, the number of control volumes can be shown to scale as \begin{equation}\label{eq:growth} N_\mathrm{points} \sim N_{bl} Re \left( Re_\Delta^\mathrm{min} \right)^{-5/6}, \end{equation} which is confirmed in Figures \ref{fig:cost}(c) and (d) using the data obtained from Eq. (\ref{eq:points}) for the NASA Juncture Flow.
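The scaling (\ref{eq:growth}) is easy to probe with a one-dimensional flat-plate analogue of Eq. (\ref{eq:points}). The sketch below assumes the standard ZPGTBL correlation $\delta \approx 0.16\,x\,Re_x^{-1/7}$, which is an external assumption and not part of the Juncture Flow data.
\begin{verbatim}
import numpy as np

def n_points_flat_plate(L, U, nu, N_bl=5, Re_min=1e3,
                        span=1.0, n_x=100000):
    """Numerical estimate of Eq. (points) for one side of a flat plate."""
    x = np.linspace(L / n_x, L, n_x)
    delta = 0.16 * x / (U * x / nu) ** (1.0 / 7.0)     # ZPGTBL correlation
    Delta = np.maximum(delta / N_bl, Re_min * nu / U)  # grid-size floor
    dx = x[1] - x[0]
    return span * np.sum(N_bl / Delta**2) * dx

# Doubling Re (here via U) roughly doubles the count, in line with
# the N_points ~ Re scaling of Eq. (growth).
print(n_points_flat_plate(1.0, 50.0, 1e-5),
      n_points_flat_plate(1.0, 100.0, 1e-5))
\end{verbatim}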
The two isolines in Figure \ref{fig:cost}(a) bound the region from $N_\mathrm{points}=100$ million to $1000$ million grid points, which is within the reach of current computing resources available to the industry. For example, for $N_{bl}\approx 10$ and $Re_\Delta^\mathrm{min} \approx 10^4$, the required number of points is $\sim 400$ million, which can currently be simulated in a few days using thousands of cores. However, if the desired accuracy for the quantities of interest is such that $N_{bl}\approx 20$ and $Re_\Delta^\mathrm{min} \approx 10^3$, the number of grid points rises to $3000$ million, which renders WMLES unfeasible as a routine tool for the industry. Hence, the key to the success of WMLES as a design tool resides in the accuracy of the solution achieved as a function of $N_{bl}$ and $Re_\Delta^\mathrm{min}$. This calls for a systematic error characterization of the quantities of interest, which is the objective of the present preliminary work. \section{Error scaling of WMLES} \label{sec:results} The solutions provided by WMLES are grid-dependent, and multiple computations are required in order to faithfully assess the quality of the results. This raises the fundamental question of what the expected WMLES error is as a function of the flow parameters and grid resolution. Here, we follow the systematic error-scaling characterization from \cite{Lozano2019a}. Taking the experimental values ($q^\mathrm{exp}$) as ground truth, the error in the quantity $\langle q \rangle$ can be expressed as \begin{equation}\label{eq:error_general} \varepsilon_q \equiv \frac{||\langle q^\mathrm{exp}\rangle-\langle q \rangle||_n}{||\langle q^\mathrm{exp}\rangle||_n} = f\left( \frac{\Delta}{\delta}, Re, \mathrm{Ma},\mathrm{geometry},...\right), \end{equation} where $||\cdot||_n$ is the L$_2$-norm along the spatial coordinates of $\langle q\rangle$, and the error function can depend on additional non-dimensional parameters of the problem. For a given geometry and flow regime, the error function in Eq. (\ref{eq:error_general}), in conjunction with the cost map in Figure \ref{fig:cost}(a), determines whether WMLES is a viable approach in terms of accuracy and the computational resources available. For the NASA Juncture Flow, the geometry, $Re$, and Ma are fixed parameters. If we further assume that the error follows a power law, Eq. (\ref{eq:error_general}) can be simplified as \begin{equation}\label{eq:error_simply} \varepsilon_q = c_q \left( \Delta/\delta \right)^{\alpha_q}, \end{equation} where $c_q$ and $\alpha_q$ are constants that depend on the modeling approach and flow region (i.e., laminar, fully turbulent, separated,...). For turbulent channel flows, \cite{Lozano2019a} showed that $\alpha_q \approx 1$ and that $c_q$ is of the same order for various SGS models.
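Given profiles at matched locations, the error measure (\ref{eq:error_general}) and the power-law fit (\ref{eq:error_simply}) reduce to a few lines; the sketch below is generic post-processing, not the scripts used for Figure \ref{fig:Errors_all}.
\begin{verbatim}
import numpy as np

def relative_error(q_exp, q_les):
    """Relative L2 error of a WMLES profile against experiment."""
    q_exp, q_les = np.asarray(q_exp), np.asarray(q_les)
    return np.linalg.norm(q_exp - q_les) / np.linalg.norm(q_exp)

def fit_power_law(delta_ratios, errors):
    """Fit eps = c*(Delta/delta)^alpha by least squares in log-log space."""
    alpha, log_c = np.polyfit(np.log(delta_ratios), np.log(errors), 1)
    return np.exp(log_c), alpha

# e.g. c, alpha = fit_power_law([0.3, 0.15, 0.08], [0.05, 0.027, 0.013])
\end{verbatim}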
We focus on the error scaling of pointwise time-averaged velocity profiles and the pressure coefficient, which are used as a proxy to measure the quality of the WMLES solution. From an engineering viewpoint, the lift and drag coefficients are the most pressing quantities of interest in aerodynamics applications. However, these are integrated quantities which do not provide flow details and are susceptible to error cancellation. The granularity provided by pointwise time-averaged quantities allows us to detect modeling deficiencies and aids the development of new models. Unfortunately, the pointwise friction coefficient is not available from the experimental campaign, which hinders our ability to assess the performance of the wall models more thoroughly. \subsection{WMLES cases and flow uncertainties}\label{Cases} We perform WMLES of the NASA Juncture Flow with a leading-edge horn at $Re=2.4 \times 10^6$ and AoA $=5^\circ$. Seven cases are considered. In the first six cases, we use grids generated using strategy (i) with constant grid size in millimeters, akin to the example offered in Figure \ref{fig:tbl_grids}(a). In this case, the direct impact of $Re_\Delta^\mathrm{min}$ can be absorbed into $\Delta/\delta$, as is done in Eq. (\ref{eq:error_simply}). The grid sizes considered are $\Delta \approx 6.7, 4.3, 2.2, 1.1$, and $0.5$ millimeters, which are labeled as C-D7, C-D4, C-D2, C-D1, and C-D0.5, respectively. Cases C-D7, C-D4, and C-D2 are obtained by refining the grid across the entire aircraft surface. For cases C-D1 and C-D0.5, the grid size is, respectively, 1.1 and 0.5 millimeters only within a refinement box along the fuselage and wing-body juncture defined by $x\in[1770,2970]$~mm, $y\in[-300,-200]$~mm, and $z\in[-50,150]$~mm. An additional case is considered to assess the impact on the accuracy of BL-conforming Voronoi grids. The grid is generated using strategy (ii) for $N_{bl} = \delta/\Delta = 5$ and $Re_\Delta^\mathrm{min}=8\times 10^3$, as shown in Figure \ref{fig:tbl_grids}(b). The case is denoted as C-N5-Rem8e3. In the following, $\langle \cdot \rangle$ and $(\cdot)'$ denote the time-average and fluctuating component, respectively. For comparison purposes, the profiles are interpolated to the grid locations of the experiments. Statistical uncertainties in WMLES quantities are estimated assuming uncorrelated and randomly distributed errors following a normal distribution. The uncertainty in $\langle q \rangle$ is then estimated as $\Delta \langle q \rangle \equiv \sigma/\sqrt{N_s}$, where $N_s$ is the number of samples used to compute $\langle q \rangle$, and $\sigma$ is the standard deviation of the samples. The uncertainties for the mean velocity profiles and pressure coefficient were found to be below 1\% and are not reported in the plots.
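The statistical uncertainty estimate is equally compact; as in the text, it assumes uncorrelated, normally distributed samples, which in practice requires samples separated by more than an integral time scale.
\begin{verbatim}
import numpy as np

def mean_with_uncertainty(samples, axis=0):
    """Time average and its uncertainty sigma/sqrt(N_s)."""
    samples = np.asarray(samples)
    n_s = samples.shape[axis]
    return (samples.mean(axis=axis),
            samples.std(axis=axis, ddof=1) / np.sqrt(n_s))
\end{verbatim}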
\subsection{Mean velocity profiles and Reynolds stresses} \label{subsec:vel_stress} We consider three locations on the aircraft: (1) the upstream region of the fuselage, (2) the wing-body juncture, and (3) the wing-body juncture close to the trailing edge. The mean velocity profiles are shown in panel (a) of Figures \ref{fig:fuselage}, \ref{fig:juncture}, and \ref{fig:separation} for each location considered, and the errors are quantified in Figure \ref{fig:Errors_all}(a). We assume that $\langle u_i \rangle \approx \langle u_i^{\mathrm{exp}} \rangle$, i.e., that the mean velocity from WMLES is directly comparable with the unfiltered experimental data ($\langle u_i^{\mathrm{exp}} \rangle$). The approximation is reasonable for quantities dominated by large-scale contributions, as is the case for $\langle u_i \rangle$. Figure \ref{fig:fuselage}(a) shows that $\langle u_i \rangle$ from WMLES converges to the experimental results with grid refinement. The turbulent boundary layer over the fuselage is about $10$ to $20$~mm thick, which yields roughly 3--6 points per boundary-layer thickness at the grid resolutions considered. Assuming that the flow at a given station can be approximated by a local canonical ZPGTBL, the expected error in the mean velocities can be estimated as $\varepsilon_m \approx 0.16 \Delta/\delta$~\citep{Lozano2019a}. For the current $\Delta$, this yields errors of 2\%--8\%, consistent with the results in Figure \ref{fig:Errors_all}(a) (red symbols). The situation differs for the wing-body juncture and wing trailing edge. Despite the finer grid sizes, errors in the mean flow prediction are about 15\% in the juncture (black symbols in Figure \ref{fig:Errors_all}a) and up to 100\% at the trailing edge (blue symbols in Figure \ref{fig:Errors_all}a). The larger errors may be attributed to the presence of three-dimensional boundary layers and flow separation in the vicinity of the wing-body juncture and trailing edge, which makes the error scaling predicated upon the assumption of local similarity to a ZPGTBL inappropriate. \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.44\textwidth]{umean_x1168.4_z0_AoA5.pdf}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=0.47\textwidth]{uiuj_x1168.4_z0_AoA5.pdf}} \end{center} \caption{(a) Mean velocity profiles and (b) Reynolds stresses at location 1: the upstream region of the fuselage, $x=1168.4$~mm and $z=0$~mm (red line in Figure \ref{fig:Errors_all}(b)). Solid lines with symbols denote WMLES for cases C-D7 ($\square$), C-D4 ($\triangleright$), and C-D2 ($\circ$). Colors denote different velocity components. Panel (b) only includes case C-D2, and the shaded area represents the statistical uncertainty. Experiments are denoted by dashed lines. The distance $y$ is normalized by the local boundary-layer thickness $\delta$ at that location. \label{fig:fuselage}} \end{figure} \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.44\textwidth]{umean_x2747.6_y239.1_AoA5.pdf}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=0.47\textwidth]{uiuj_x2747.6_y239.1_AoA5.pdf}} \end{center} \caption{Same as Figure \ref{fig:fuselage} for location 2: the wing-body juncture at $x=2747.6$~mm and $y=239.1$~mm (black line in Figure \ref{fig:Errors_all}(b)). In panel (a), lines with symbols are for cases C-D2 ($\circ$), C-D1 ($\triangleleft$), and C-D0.5 ($\diamond$). In panel (b), the case shown is C-D0.5. \label{fig:juncture}} \end{figure} \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.44\textwidth]{umean_x2922.6_y239.1_AoA5.pdf}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=0.47\textwidth]{uiuj_x2922.6_y239.1_AoA5.pdf}} \end{center} \caption{Same as Figure \ref{fig:fuselage} for location 3: the wing-body juncture close to the trailing edge at $x=2922.6$~mm and $y=239.1$~mm (blue line in Figure \ref{fig:Errors_all}(b)). In panel (a), lines with symbols are for cases C-D2 ($\circ$), C-D1 ($\triangleleft$), and C-D0.5 ($\diamond$). In panel (b), the case shown is C-D0.5.\label{fig:separation}} \end{figure} The resolved portion of the tangential Reynolds stresses is shown in panels (b) of Figures \ref{fig:fuselage}, \ref{fig:juncture}, and \ref{fig:separation}. The trends followed by $\langle u_i' u_j'\rangle$ are correctly captured at the different stations investigated, although their magnitudes tend to be systematically underpredicted, especially in the juncture region and at the trailing edge. Estimations from \cite{Lozano2019a} suggest that the error for the turbulence intensities should scale as $\sim (\Delta/\delta)^{2/3}$, which for the present grid resolution implies $\sim10$--$20\%$ error.
The result is consistent with the typical $\Delta$ for WMLES, which supports only a limited fraction of the turbulent fluctuations. Assuming $\langle u_i^\mathrm{exp} \rangle \approx \langle u_i\rangle$, then $\langle {u_i'}^\mathrm{exp} {u_j'}^\mathrm{exp}\rangle \approx \langle u_i' u_j'\rangle + \langle \tau_{ij}^\mathrm{SGS}\rangle$, where $\tau_{ij}^\mathrm{SGS}$ is the subgrid-scale tensor. Thus, for the current grid sizes we can expect $|\langle u_i' u_j'\rangle| < |\langle {u_i'}^\mathrm{exp} {u_j'}^\mathrm{exp}\rangle|$, i.e., severe underprediction of the tangential Reynolds stress by WMLES. Figure \ref{fig:Errors_all}(a) is the cornerstone of the present study. It shows the relative errors in the prediction of the mean velocity profiles for the three regions considered: the upstream fuselage, the wing-body juncture, and the wing-body juncture at the trailing edge. The three regions are marked with lines in Figure \ref{fig:Errors_all}(b) using the same colors as in Figure \ref{fig:Errors_all}(a). The turbulent flow over the fuselage resembles a ZPGTBL. As such, the wall and SGS models, which have been devised for and validated in flat-plate turbulence, perform accordingly. On the contrary, there is a clear decline of the current models in the wing-body juncture and trailing-edge region, which are dominated by secondary motions in the corner and flow separation. Not only is the magnitude of the errors larger in the latter locations, but the rate of convergence of the error is also slower ($\varepsilon_m\sim(\Delta/\delta)^{0.5}$) when compared with the equilibrium conditions encountered in a ZPGTBL ($\varepsilon_m\sim(\Delta/\delta)$). The situation could be even more unfavorable, as \cite{Lozano2019a} has shown that, as the grid is refined, the convergence of WMLES towards the DNS solution may follow a non-monotonic behavior due to the interplay between numerical and SGS model errors. \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.485\textwidth]{E_mean_all_AoA5.pdf}} \subfloat[]{\includegraphics[width=0.46\textwidth]{JF_all_lines.pdf}} \end{center} \caption{(a) Error in the mean velocity profile prediction by WMLES as a function of the grid resolution. The different colors denote the three locations indicated in panel (b) using the same color code.\label{fig:Errors_all}} \end{figure} \subsection{Pressure coefficient}\label{subsec:pressure} The surface pressure coefficient over the wing, $C_p$, is shown in Figure \ref{fig:Cp} along the chord of the wing. The predictions are compared with experimental data at three different $y$-locations. The locations selected are denoted by red lines in Figure \ref{fig:Cp}(d). Overall, WMLES agrees with the experimental data to within 1--5\% error. The predictions are still within 5\% accuracy even at the coarsest grid resolutions considered, which barely resolve the boundary layer. The main discrepancies with the experiments are located at the leading edge of the wing. The accurate prediction of $C_p$ is a common observation in CFD of external aerodynamics. The result can be attributed to the outer-layer nature of the mean pressure, which is less sensitive to the details of near-wall turbulence. Under the thin boundary-layer assumption, the mean momentum equation integrated in the wall-normal direction shows that $\langle p \rangle + \rho \langle v^2 \rangle \approx p_e(x) \Rightarrow p_\mathrm{wall} = p_e(x)$, where $p_e$ is the far-field pressure. Hence, the pressure at the surface is mostly imposed by the inviscid imprint of the outer flow.
This is demonstrated by performing an additional calculation similar to C-D2, but imposing a free-slip boundary condition at the wall such that boundary layers are unable to develop (Figure \ref{fig:Cp}(c)). The conditions for $p_\mathrm{wall} = p_e(x)$ would not hold, for example, when the wall radius of curvature is comparable to the boundary-layer thickness. This is the case in the vicinity of the wing leading edge, which is the region where the accuracy of $C_p$ provided by WMLES is the poorest and most sensitive to $\Delta$. The tripping methodology used in WMLES differs from the experimental setup, which may also contribute to the discrepancies observed. The outer-flow character of $C_p$ is encouraging for the prediction of the pressure-induced components of the lift and drag coefficients. It also suggests that $C_p$ might not be a challenging quantity to predict in the presence of wall-attached boundary layers. Thus, the community should focus its efforts on other important quantities of interest, such as the numerical prediction of the skin-friction coefficient and its challenging experimental measurement. \begin{figure} \begin{center} \includegraphics[width=1.\textwidth]{Cp.pdf} \end{center} \caption{ The surface pressure coefficient $C_p$ along the wing. Panels (a) and (b) show $C_p$ for cases C-D7, C-D4, and C-D2. Panel (c) shows $C_p$ for case C-D2 and for a case identical to C-D2 but imposing a free-slip boundary condition at the walls. (d) Locations over the wing selected to represent $C_p$ in panels (a), (b), and (c). \label{fig:Cp}} \end{figure} \subsection{Separation bubble}\label{subsec:bubble} For completeness, we also report results on the size of the separation bubble. The mean wall-stress streamlines for case C-D0.5 are shown in Figure \ref{fig:bubble_tau}. The figure also contains a depiction of the average length and width of the separation bubble, which are about $100$~mm and $60$~mm, respectively, for case C-D0.5. Direct comparison of these dimensions with oil-film experimental results shows that the current WMLES prediction is about 15\% lower than the experimental measurements ($120 \times 80$~mm), consistent with previous WMLES investigations~\citep{Lozano_AIAA_2020, Iyer2020, Ghate2020}. Nonetheless, note that the size of the separation zone from WMLES is obtained from the tangential wall-stress streamlines after the wall stress is averaged in time, whereas the experimental size is obtained from the pattern resulting from the oil-film time evolution. Although both methodologies provide an average description of the size of the separation zone, they do not allow for a one-to-one comparison, and we should not interpret the present differences as a faithful quantification of the errors. Hence, a methodology allowing for quantitative comparisons of separated-flow patterns between CFD and experimental data remains an open challenge. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{bubble.pdf} \caption{Streamlines of the average tangential wall-stress. The results are for case C-D0.5. \label{fig:bubble_tau}} \end{figure} \subsection{Improvements with boundary-layer-conforming grids} \label{subsec:targeted} We evaluate potential improvements with BL-conforming grids by comparing the results for case C-N5-Rem8e3 with those of case C-D2. The grid for C-N5-Rem8e3 is generated with $N_{bl}=5$ and $Re_\Delta^\mathrm{min}=8\times 10^3$, following the procedure described in Section \ref{subsec:tbl}.
For reference, case C-D2 has 32 million control volumes, whereas case C-N5-Rem8e3 has 11 million control volumes. The mean velocity profiles for C-N5-Rem8e3 are shown in Figure \ref{fig:custom} and compared with C-D2 at three locations. Some moderate improvements are achieved at the fuselage, despite both cases sharing the same $N_{bl}$ at that location. The improvements are accentuated at the juncture and trailing edge, where C-N5-Rem8e3 outperforms C-D2 with less than a fourth of the grid points per $\delta$ in each of the three spatial directions. The results suggest that grids designed to specifically target the boundary layer could improve the overall accuracy of WMLES. The outcome might be explained by considering the upstream history of the WMLES solution at a given station. Let us assume the simpler scenario of WMLES of a flat-plate turbulent boundary layer along $x$ using two grids: a constant-size grid and a BL-conforming grid. If we take an $x$-location at which both grids have the same $N_{bl}$, the upstream flow for the constant-size grid is underresolved compared to the BL-conforming grid, due to the thinner $\delta$ upstream (hence, fewer points per $\delta$ and larger errors). On the other hand, the BL-conforming grid maintains a constant grid resolution scaled in $\delta$ units, and hence effectively more resolution upstream. Furthermore, even if at a given $x$-location $N_{bl}$ is larger for the constant-size grid, the solution could be worse because of error propagation from the upstream flow. By assuming that energy-containing eddies with lifetimes $\delta/U_\infty$ are advected at $U_\infty$, we can estimate the downstream propagation of errors from a given location $x_0$ by $\Delta x_e = \int_{x_0}^x \delta_0/\delta \, \mathrm{d}x$, which is the streamwise distance required for the energy-containing eddies to forget their past history. In a ZPGTBL at high $Re$~\citep{Sillero2013}, $\Delta x_e$ can reach values of $\approx 100\delta_0$. The long convective distance for error propagation ($\sim 200$~mm for $\delta_0 \approx \Delta=2$~mm), combined with the higher upstream errors for constant-size grids, may explain the improved results for C-N5-Rem8e3 reported in Figure~\ref{fig:custom}. \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.32\textwidth]{Custom_umean_x1168.4_z0_AoA5.pdf}} \hspace{0.05cm} \subfloat[]{\includegraphics[width=0.32\textwidth]{Custom_umean_x2747.6_y239.1_AoA5.pdf}} \hspace{0.05cm} \subfloat[]{\includegraphics[width=0.32\textwidth]{Custom_umean_x2922.6_y239.1_AoA5.pdf}} \end{center} \caption{Mean velocity profiles for case C-N5-Rem8e3 (solid lines with $\bullet$) and C-D2 (solid lines with $\circ$) at (a) location 1: the upstream region of the fuselage, $x=1168.4$~mm and $z=0$~mm, (b) location 2: the wing-body juncture at $x=2747.6$~mm and $y=239.1$~mm, and (c) location 3: the wing-body juncture close to the trailing edge at $x=2922.6$~mm and $y=239.1$~mm. Experiments are denoted by dashed lines. Colors denote different velocity components. The distance $y$ is normalized by the local boundary-layer thickness $\delta$ at that location.\label{fig:custom}} \end{figure} \section{Conclusions}\label{sec:conclusions} We have performed WMLES of the NASA Juncture Flow using charLES with Voronoi grids. The simulations were conducted for an angle of attack of $5^\circ$ and $Re=2.4 \times 10^6$.
We have characterized the errors in the prediction of the mean velocity profiles and pressure coefficient at three different locations over the aircraft: the upstream region of the fuselage, the wing-body juncture, and the wing-body juncture close to the trailing edge. The last two locations are characterized by strong mean-flow three-dimensionality and separation. The prediction of the pressure coefficient is below 5\% error for all grid sizes considered, even when the boundary layers were marginally resolved. We have shown that this good accuracy can be attributed to the outer-layer nature of the mean pressure, which is less sensitive to flow details within the turbulent boundary layer. A summary of the errors incurred by WMLES in predicting the mean velocity profiles is shown in Figure \ref{fig:Errors_all} for the three locations considered. The message conveyed by the error analysis is that WMLES performs as expected in regions where the flow resembles a zero-pressure-gradient flat-plate boundary layer. However, there is a clear decline of the current models in the presence of wing-body junctions and, more acutely, in separated zones. Moreover, the slow convergence to the solution in these regions renders brute-force grid refinement unfeasible as an approach to improving the accuracy of the solution. The results reported above pertain to the mean velocity profile predicted using the typical grid resolution for external aerodynamics applications, i.e., 5--20 points per boundary-layer thickness. The impact of the above deficiencies on the skin-friction prediction is uncertain due to the lack of experimental measurements. Yet, it is expected that the errors observed in the mean velocity profile would propagate to the wall stress. Finally, we have shown that boundary-layer-conforming grids (i.e., grids maintaining a constant number of points per boundary-layer thickness) allow for a more efficient distribution of grid points and smaller errors. It was argued that the improved accuracy might be due to the reduced propagation of WMLES errors in the streamwise direction. The increased accuracy provided by boundary-layer-conforming grids will be investigated in more detail in future studies, along with the effect of targeted refinement in the wakes. Our results suggest that novel modeling avenues encompassing physical insights, together with numerical and gridding advancements, must be exploited to attain predictions within the tolerance required for Certification by Analysis. \section*{Acknowledgments} A.L.-D. acknowledges the support of NASA under grant No. NNX15AU93A. We thank Jane Bae and Konrad Goc for helpful comments. \bibliographystyle{ctr}
\section{Introduction} \label{sec:intro} Gradually typed programming languages are designed to resolve the conflict between static and dynamically typed programming styles \citep{tobin-hochstadt06,tobin-hochstadt08,siek-taha06}. A gradual language allows a smooth transition from dynamic to static typing through the gradual addition of types to dynamically typed programs, and allows for safe interactions between more statically typed and more dynamically typed components. With such an enticing goal there has been extensive research on gradual typing---e.g., \cite{siek-taha06,gronski06,wadler-findler09,Ina:2011zr,swamy14,Allende:2013aa}---with recent work aimed at extending gradual typing to more advanced language features, such as parametric polymorphism~\cite{ahmed17,igarashipoly17}, effect tracking~\cite{gradeffects2014}, typestate~\cite{Wolff:2011:GT}, session types~\cite{igarashisession17}, and refinement types \cite{lehmann17}. Formalizing the idea of a ``smooth transition'', a key property that every gradually typed language should satisfy is \citeauthor*{refined}'s \emph{(dynamic)\footnote{The same work also introduces a \emph{static} gradual guarantee, which says that changing the types in a program to be less dynamic makes type checking stricter. We do not consider this in our paper because we only consider the semantics of cast calculi, not the type systems of gradual surface languages. We discuss the relationship further in \secref{sec:rel:gradual-guarantee}.} gradual guarantee}, which we refer to as \emph{graduality} (by analogy with parametricity). Graduality enables programmers to modify their program from a dynamically typed to a statically typed style, and vice versa, with confidence that the program's behavior only changes in predictable ways. Specifically, it says that changing the types in a program to be ``less dynamic''/``more precise''---i.e., changing from the dynamic type to some more precise type such as integers or functions---either produces the same behavior as the original program or causes a dynamic type error. Conversely, if a program does not error and some types are made ``more dynamic''/``less precise'', then the program has the exact same behavior. This is an important reasoning principle for programmers, as the alternative would be quite counterintuitive: for instance, changing certain type annotations might cause a terminating program to diverge, or a program that prints your calendar to tweet your home address! This distinguishes dynamic type checking in gradual typing from exceptions: raising an exception is a valid program behavior that can be caught and handled by a caller, whereas a dynamic type error is always considered to be a bug, and terminates the program. More formally, the notion of when a type $\sA$ is ``less dynamic'' than another type $\sB$ is specified by a \emph{type dynamism} relation (also known as type precision or na\"ive subtyping), written $\sA \sqsubseteq \sB$, which is defined for simple languages as the least congruence relation such that the dynamic type $\sfont{\mathord{?}}$ is the \emph{most} dynamic type: $\sA \sqsubseteq \sfont{\mathord{?}}$. Then, \emph{term dynamism} (also known as term precision) is the natural extension of type dynamism to terms, written $\st \sqsubseteq \ss$.
The graduality theorem is then that if $\st \sqsubseteq \ss$, then the behavior of $\st$ must be ``less dynamic'' than the behavior of $\ss$---that is, either $\st$ produces a runtime type error or both terms have the exact same behavior. We say $\st$ is ``less dynamic'' in the sense that it has \emph{fewer behaviors}. Unfortunately, for the majority of gradually typed languages, the (dynamic) gradual guarantee is considered quite challenging to prove, and there is only limited guidance about how to design new languages so that they satisfy this property. There are two notable exceptions: Abstracting Gradual Typing (AGT) \cite{garcia16} and the Gradualizer \cite{gradualizer16, gradualizer17} provide systematic methods and formal tools, respectively, for deriving a gradually typed language from a statically typed language, and they both provide the gradual guarantee by construction. However, while they provide a proof of the gradual guarantee for languages produced in the respective frameworks, most gradually typed languages are not produced in this way; for instance, Typed Racket's approach to gradual typing \cite{tobin-hochstadt06,tobin-hochstadt08} is not explained by either system. Furthermore, both Gradualizer and AGT base their semantics on static type checking itself, but this is the reverse of the semantic view of type checking. In the semantic viewpoint, type checking should be justified by a sensible semantics, and not the other way around. \paragraph{Type Dynamism and Embedding-Projection Pairs} While the gradual guarantee as presented in \citet{refined} makes type dynamism a central component, the semantic meaning of type dynamism is unclear. This is not just a philosophical question: it is unclear how to extend type dynamism to new language features. For instance, polymorphic gradually typed languages have been developed recently by \citet{ahmed17} and \citet{igarashipoly17}, but the two papers have different definitions of type dynamism, and neither attempts a proof of the (dynamic) gradual guarantee. The AGT \cite{garcia16} approach gives a systematic definition of type dynamism in terms of sets of static types, but that definition is difficult to separate from the rest of their framework, whereas we would like a definition that can be interpreted in any gradually typed language. At present, the best guidance we have comes from the gradual guarantee itself: the dynamic type should be the greatest element, and the gradual guarantee should hold. We propose a semantic definition for type dynamism that naturally leads to a clean formulation and proof of the gradual guarantee: An ordering $\sA \sqsubseteq \sB$ should hold when the casts between the two types form an \emph{embedding-projection pair}. What does this mean? First, in order to support interaction between statically typed and dynamically typed code while still maintaining the guarantees of the static types, gradually typed languages include \emph{casts}\footnote{It is not literally true that every gradual language uses this presentation of casts from cast calculi, but in order for a language to be gradually typed, some means of casting between types must be available, such as embedding dynamic code in statically typed code, or type annotations. We argue that the properties of casts we identify here should apply to those presentations as well.} $\obcast\sA\sB$ that dynamically check if a value of type $\sA$ corresponds to a valid inhabitant of the type $\sB$, and if so, transform its value to have the right type.
Then if $\sA \sqsubseteq \sB$, we say that the casts $\obcast\sA\sB$ and $\obcast\sB\sA$ form an \emph{embedding-projection pair}, which means that they satisfy the following two properties that describe acceptable behaviors when casting between the two types: \emph{retraction} and \emph{projection}. First, $\sA$ should be a \emph{stricter} type than $\sB$, so anything satisfying $\sA$ should also satisfy $\sB$. This is captured in the \emph{retraction} property: if we cast a value $\sv : \sA$ from $\sA$ to $\sB$ and then back down to $\sA$, we should get back an equivalent value, because $\sv$ satisfies both $\sA$ and $\sB$. Formally, $\obcast\sB\sA\obcast\sA\sB\st \approx \st$, where $\approx$ means \emph{observational equivalence} of the programs: when placed in the same spot in a program, they produce the same behavior. Second, casts should only be doing type \emph{checking}, and not otherwise changing the behavior of the term. Since $\sB$ is a weaker property than $\sA$, if we cast a value $\sv : \sB$ down to $\sA$, there may be a runtime type error. However, if $\sv$ really does satisfy $\sA$, the cast succeeds, and if we cast back to $\sB$ we should get back a value with similar behavior to $\sv$. If $\sB$ is a first-order type like booleans or numbers, we should get back exactly the same value. However, if $\sA, \sB$ are higher-order types like functions or objects, then it is impossible to check whether a value of type $\sB$ satisfies $\sA$. For instance, if $\sB = \sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}}$ and $\sA = \sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathbb{N}}$, then it is not decidable whether a value of $\sB$ will always return a number on every input. Instead, following \cite{findler-felleisen02}, gradual type casts \emph{wrap} the function with a cast on its outputs, and if at any point it returns something that is not a number, a type error is raised. So if $\sv : \sB$ is cast to $\sA$ and back, we cannot expect to always get an equivalent value back, but the result should \emph{error more}---that is, either the cast to $\sA$ raises an error, or we get back a new value $\svpr : \sB$ that has the same behavior as $\sv$ except it sometimes raises a type error. We formalize this as \emph{observational error approximation} and write the ordering $\st \sqsubseteq \stpr$ as ``$\st$ errors more than $\stpr$''. We then use this to formalize the \emph{projection} property: $\obcast\sA\sB\obcast\sB\sA\ss \sqsubseteq \ss$. Notice how the justification for the projection property uses the same intuition as graduality: that casts should only be doing \emph{checking} and not completely changing a program's behavior. This is the key to why embedding-projection pairs help to formulate and prove graduality: we view graduality as the natural extension of the projection property from a property of casts to a property of arbitrary gradually typed programs. This gives us nice properties of some casts, but what do we know about casts that are \emph{not} upcasts or downcasts? In traditional formulations, gradual typing includes casts $\obcast\sA\sB$ between types that are \emph{shallowly compatible}---i.e., that are not guaranteed to fail.
For instance, we can cast a pair where the left side is known to be a number $\sfont{\mathbb{N}} \mathbin{\sfontsym{\times}} \sfont{\mathord{?}}$ to a type where the right side is known to be a number $\sfont{\mathord{?}} \mathbin{\sfontsym{\times}} \sfont{\mathbb{N}}$ with casts succeeding on values where both sides are numbers. The resulting cast $\obcast{\sfont{\mathbb{N}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}}{\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathbb{N}}}$ is neither an upcast nor a downcast. We argue that the formulation based on these ``general'' casts is ill behaved from a meta-theoretic perspective: you are quite limited in your ability to break casts for larger types into casts for smaller types. Most notably, the \emph{composition} of two general casts is very rarely the same as the direct cast. For instance, casting from $\sfont{\mathbb{N}}$ to $\sfont{\mathord{?}}\mathrel{\to_{s}}\sfont{\mathord{?}}$ and back to $\sfont{\mathbb{N}}$ always errors, but obviously the direct cast $\obcast\sfont{\mathbb{N}}\snatty$ is the identity. We show that upcast and downcasts on the other hand satisfy a \emph{decomposition} theorem: if $\sAone \sqsubseteq \sAtwo\sqsubseteq \sAin{3}$, then the upcast from $\sAone$ to $\sAin{3}$ factors through $\sAtwo$ and similarly for the downcast. Furthermore, if we disregard \emph{performance} of the casts, and only care about the observational behavior, we show that any ``general'' cast is the composition of an upcast followed by a downcast.\footnote{Note that this is not the same as the factorization of casts known as ``threesomes'', see \secref{section:related:casts} for a comparison.} For instance, our cast from before $\obcast{\sfont{\mathbb{N}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}}{\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathbb{N}}}$ is observationally equivalent to the composition of first \emph{up}casting to a pair where both sides are dynamically typed $\sfont{\mathord{?}}\mathbin{\sfontsym{\times}} \sfont{\mathord{?}}$ and then \emph{down}casting: $\obcast{\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}}{\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathbb{N}}}\obcast{\sfont{\mathbb{N}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}}{\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}}$. We show that \emph{all} the casts in a standard gradually typed language exhibit this factorization, which means that for the purposes of formulating and proving graduality, we need only discuss upcasts and downcasts. For implementation, it is more convenient to have a primitive notion of coercion/direct cast to eliminate/collapse casts \cite{herman-tomb-flanagan-2010,siek-wadler10}, but we argue that the correctness of such an implementation should be justified by a simulation relation with a simpler semantics, meaning the implementation would inherit a proof of graduality from the simpler semantics as well. To prove these equivalence and approximation results, we develop a novel step-indexed logical relation that is sound for observational error approximation. We also develop high-level reasoning principles from the relation so that our main lemmas do not involve any manual step-manipulation. Finally, based on our semantic interpretation of type dynamism as embedding-projection pairs, we provide a refined analysis of the proof theory of type dynamism as a \emph{syntax for building ep pairs}. 
We give a semantics for these proof terms analogous to the coercion interpretation of subtyping derivations. Similar to subtyping, we prove a \emph{coherence} theorem, which gives, as a corollary, our decomposition theorem for upcasts and downcasts. \paragraph{Graduality} \citet{refined} prove the (dynamic) gradual guarantee by an operational simulation argument whose details are closely tied to the specific cast calculus used. Using ep pairs, we provide a more semantic formulation and proof of graduality. First, we use our analysis of type dynamism as denoting ep pairs to define graduality as a kind of observational error approximation \emph{up to upcast/downcast}, building on the axiomatic semantics of graduality in \citet{newlicata2018}. We then prove the graduality theorem using our logical relation for error approximation. Notably, the decomposition theorem for ep pairs leads to a clean, uniform proof of the cast case of graduality. \paragraph{Overview of Technical Development and Contributions} In this paper, we show how to prove graduality for a standard gradually typed cast calculus by translating it into a simple typed language with recursive types and errors. Specifically, our development proceeds as follows: \begin{enumerate} \item We present a standard gradually typed cast calculus ($\lambda_{G}$) and its operational semantics, using ``general'' casts (\secref{sec:gradual}). \item We present a simple typed language with recursive types and a type error ($\lambda_{T,\mho}$), into which we translate the cast calculus. Casts in $\lambda_{G}$ are translated to contracts implemented in the typed language (\secref{sec:typed}). \item We develop a novel step-indexed logical relation that is sound for our notion of observational error approximation (\secref{sec:logrel}). We prove transitivity of the logical relation and other high-level reasoning principles so that our main lemmas for ep pairs and graduality do not involve any manual step manipulation. \item We present a novel analysis of type dynamism as a coherent syntax for ep pairs and show that all of the casts of the gradual language can be factorized as an upcast followed by a downcast (\secref{sec:ep-pairs}). \item We give a semantic formulation of graduality and then prove it using our error-approximation logical relation and ep pairs (\secref{sec:graduality}). \end{enumerate} \ifshort Proofs and definitions elided from this paper are presented in full in the extended version of the paper \cite{newahmed2018-extended}. \fi \section{Gradual Cast Calculus} \label{sec:gradual} Our starting point is a fairly typical gradual cast calculus, called $\lambda_{G}$, in the style of \citet{wadler-findler09} and \citet{refined}. A cast calculus is usually the target of an elaboration pass from a gradually typed surface language. The gradually typed surface language makes mixing static and dynamic code seamless: for instance, a typed function on numbers $f : \mathbb{N} \to \mathbb{N}$ can be applied to a dynamically typed value $x : \mathord{?}$, and the result is well typed: $f(x) : \mathbb{N}$. Since $x$ is not known to be a number, at runtime a dynamic check is performed: if $x$ is a number, $f$ is run with its value; otherwise, a dynamic type error is raised. In the surface language, this checking behavior takes place at every elimination form: pattern matching, referencing a field, etc. The cast calculus makes the dynamic type checking separate from the elimination forms by using explicit cast forms.
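For instance, the surface application $f(x)$ above elaborates to the cast-calculus application $f\,(\obcast{\mathord{?}}{\mathbb{N}}{x})$: the implicit check at the application site becomes an explicit cast, which at runtime either produces the underlying number or reduces to the dynamic type error.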
If $\st : \sA$ in the cast calculus, then we can cast it to another type $\sB$ using the cast form $\obcast\sA\sB\st$. This means we can use the ordinary typed reduction rules for elimination forms, and all the details of checking are isolated to the cast reductions. We choose to use a cast calculus, rather than a gradual surface language, since we are chiefly concerned with the semantics of the language, rather than gradual type checking. \begin{figure} \begin{mathpar} \begin{array}{lrcl} \mbox{Types} & \sty,\styalt & \bnfdef & {\sfont{\mathord{?}}} \bnfalt {{\sunitty}}\bnfalt {{\spairty{\sty}{\styalt}}}\bnfalt {{\ssumty{\sty}{\styalt}}}\bnfalt {{\sfunty{\sty}{\styalt}}}\\ \mbox{Tags} & \stagty & \bnfdef & \sunitty\bnfalt\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}\bnfalt\sfont{\mathord{?}}\mathbin{\sfontsym{+}} \sfont{\mathord{?}}\bnfalt\sfont{\mathord{?}}\mathrel{\to_{s}}\sfont{\mathord{?}}\\ \mbox{Terms} & \sterm,\stermalt & \bnfdef & \sfontsym{\mho} \bnfalt {\svar} \bnfalt \obcast{\sA}{\sB}\st\bnfalt {\stunit} \bnfalt {\stpair{\stermone}{\stermtwo}}\bnfalt {\stmatchpair{\sx}{\sy}{\st}{\ss}}\\ & & \bnfalt & {\stinj{\sterm}}\bnfalt \stinjpr{\sterm}\bnfalt {{\stcase{\sterm}{\svarone}{\stermone}{\svartwo}{\stermtwo}}} \bnfalt {\stfun{\svar}{\sA}{\sterm}} \bnfalt {\stapp{\st}{\ss}}\\ \mbox{Values} & \sval & \bnfdef & \obcast{\stagty}{\sfont{\mathord{?}}}{\sval} \bnfalt {\stunit} \bnfalt {\stpair{\svalone}{\svaltwo}} \bnfalt {\stinj{\sval}} \bnfalt {\stinjpr{\sval}} \bnfalt {\stfun{\svar}{\sty}{\sterm}} \\ \mbox{Evaluation Contexts} & \sectxt & \bnfdef & \shw{\sfontsym{\cdot}} \bnfalt \obcast{\sA}{\sB}{\sectxt}\bnfalt \stpair{\sectxt}{\ss}\bnfalt \stpair{\sv}{\sectxt} \bnfalt \stmatchpair{\sx}{\sy}{\sectxt}{\ss}\\ &&\bnfalt& \stinj{\sectxt}\bnfalt \stinjpr{\sectxt}\bnfalt \stcase{\sectxt}{\sxone}{\stone}{\sxtwo}{\sttwo}\bnfalt \sectxt\,\ss\bnfalt \sv\,\sectxt\\ \mbox{Environments} & \sfont{\Gamma} & \bnfdef & \sfont{\cdot} \bnfalt \senvext{\sfont{\Gamma}}{\svar}{\sty}\\ \mbox{Substitutions} & \sgamma & \bnfdef & \cdot \bnfalt \sgamma, \sv/\sx \end{array} \end{mathpar} \caption{$\lambda_{G}$ Syntax} \label{fig:glang:syntax} \end{figure} \begin{figure} \flushleft{\fbox{\small{$\sjudgtc{\sfont{\Gamma}}{\st}{\sA}$}}} \vspace{-2ex} \begin{mathpar} \inferrule{~}{\sjudgtc{\sfont{\Gamma}}{\sfontsym{\mho}}{\sA}}\and \inferrule{~}{\sjudgtc{\sfont{\Gamma},\sx:\sA,\senvpr}{\sx}{\sA}}\and \inferrule{\sjudgtc{\sfont{\Gamma}}{\st}{\sA}} {\sjudgtc{\sfont{\Gamma}}{\obcast{\sA}{\sB}{\st}}{\sB}}\and \inferrule{~}{\sjudgtc{\sfont{\Gamma}}{\stunit}{\sunitty}}\and \inferrule {\sjudgtc{\sfont{\Gamma}}{\stermone}{\styone}\and \sjudgtc{\sfont{\Gamma}}{\stermtwo}{\stytwo}} {\sjudgtc{\sfont{\Gamma}}{\stpair{\stermone}{\stermtwo}}{\spairty{\styone}{\stytwo}}}\and \inferrule {\sjudgtc{\sfont{\Gamma}}{\st}{\sAone\mathbin{\sfontsym{\times}}\sAtwo}\and \sjudgtc{\sfont{\Gamma},\sx:\sAone,\sy:\sAtwo}{\ss}{\sB}} {\sjudgtc{\sfont{\Gamma}}{\stmatchpair{\sx}{\sy}{\st}{\ss}}{\sB}}\and \iflong \inferrule {\sjudgtc{\sfont{\Gamma}}{\st}{\sA}} {\sjudgtc{\sfont{\Gamma}}{\stinj{\st}}{\ssumty{\sA}{\sApr}}}\and \inferrule {\sjudgtc{\sfont{\Gamma}}{\st}{\sApr}} {\sjudgtc{\sfont{\Gamma}}{\stinjpr{\st}}{\ssumty{\sA}{\sApr}}}\and \inferrule{\sjudgtc{\sfont{\Gamma}}{\st}{\sA\mathbin{\sfontsym{+}}\sApr}\and \sjudgtc{\senvext{\sfont{\Gamma}}{\sx}{\sA}}{\ss}{\sB}\and \sjudgtc{\senvext{\sfont{\Gamma}}{\sxpr}{\sApr}}{\sspr}{\sB}} {\sjudgtc{\sfont{\Gamma}}{\stcase{\st}{\sx}{\ss}{\sxpr}{\sspr}}{\sB}}\quad \fi \inferrule 
{\sjudgtc{\sfont{\Gamma},\sx:\sA}{\st}{\sB}} {\sjudgtc{{\sfont{\Gamma}}}{\stfun{\sx}{\sA}{\st}}{\sA\mathrel{\to_{s}}\sB}}\and \inferrule {\sjudgtc{\sfont{\Gamma}}{\st}{\sA\mathrel{\to_{s}}\sB}\and \sjudgtc{\sfont{\Gamma}}{\ss}{\sA}} {\sjudgtc{\sfont{\Gamma}}{\st\,\ss}{\sB}} \end{mathpar} \caption{$\lambda_{G}$ Typing Rules \ifshort (excerpt)\fi} \label{fig:glang:typing:extended} \end{figure} We present the syntax of $\lambda_{G}$ (pronounced ``lambda gee'' and typeset in $\sfont{\textsf{blue sans-serif font}}$) in \figref{fig:glang:syntax}, and \ifshort most of\fi the typing rules in \figref{fig:glang:typing:extended}. The language is call-by-value and includes standard type formers, namely, the unit type $\sunitty$, product type $\mathbin{\sfontsym{\times}}$, sum type $\mathbin{\sfontsym{+}}$, and function type $\mathrel{\to_{s}}$, with standard typing rules. The language also includes some features specific to gradual typing: a dynamic type $\sfont{\mathord{?}}$, a dynamic type error $\sfontsym{\mho}$ and casts $\obcast\sA\sB\st$. Following previous work, the interface for the dynamic type $\sfont{\mathord{?}}$ is given by the casts themselves, and not distinct introduction and elimination forms. The values of the dynamic type are of the form $\obcast{\stagty}{\sfont{\mathord{?}}}\sv$ where $\stagty$ ranges over \emph{tag types}, defined in \figref{fig:glang:syntax}. The tag types are so called because they represent the ``tags'' used to distinguish between the basic sorts of dynamically typed values. Every type except $\sfont{\mathord{?}}$ has an ``underlying'' tag type we write as $\floor\sA$ and define in \figref{fig:glang:tag}. These tag types are the cases of the dynamic type $\sfont{\mathord{?}}$ seen as a sum type, which is how we model it in \secref{sec:typed:translated}. For any two types $\sA, \sB$, we can form the cast $\obcast\sA\sB$ which at runtime will attempt to coerce a term $\sv : \sA$ into a valid term of type $\sB$. If the value cannot sensibly be interpreted as a value in $\sB$, the cast \emph{fails} and reduces to the dynamic type error $\sfontsym{\mho}$. The type error is like an uncatchable exception, modeling the fact that the program crashes with an error message when a dynamic type error is encountered. In this paper we consider all type errors to be equivalent. The calculus is based on that of \citet{wadler-findler09}, but does not have blame and removes the restriction that types must be compatible in order to define a cast. 
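For example, $\floor{\sunitty \mathbin{\sfontsym{\times}} \sunitty} = \sfont{\mathord{?}} \mathbin{\sfontsym{\times}} \sfont{\mathord{?}}$ and $\floor{\sunitty \mathrel{\to_{s}} \sunitty} = \sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}}$: at runtime, all pairs carry the tag $\sfont{\mathord{?}} \mathbin{\sfontsym{\times}} \sfont{\mathord{?}}$ and all functions the tag $\sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}}$. Concretely, the unit value is injected into the dynamic type as the value $\obcast{\sunitty}{\sfont{\mathord{?}}}{\stunit}$, and a dynamically typed pair of units is the value $\obcast{\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}}{\sfont{\mathord{?}}}{\stpair{\obcast{\sunitty}{\sfont{\mathord{?}}}{\stunit}}{\obcast{\sunitty}{\sfont{\mathord{?}}}{\stunit}}}$.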
\begin{figure} \flushleft{\fbox{\small{$\floor{\sA} \defeq \stagty$}}~\mbox{\small{where~$\sA \neq \sfont{\mathord{?}}$}}} \vspace{-4ex} \begin{mathpar} \begin{array}{rcl} \floor{\sunitty} & \defeq & \sunitty\\ \floor{\sA \mathbin{\sfontsym{\times}} \sB} & \defeq & \sfont{\mathord{?}} \mathbin{\sfontsym{\times}} \sfont{\mathord{?}} \\ \floor{\sA \mathbin{\sfontsym{+}} \sB} & \defeq & \sfont{\mathord{?}} \mathbin{\sfontsym{+}} \sfont{\mathord{?}} \\ \floor{\sA \mathrel{\to_{s}} \sB} & \defeq & \sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}} \\ \end{array} \end{mathpar} \caption{$\lambda_{G}$: Tag of a (non-dynamic) Type} \label{fig:glang:tag} \end{figure} \begin{figure} \begin{mathpar} \sinholestep {\stcase{\parened{\stinj{\sv}}}{\sx}{\st}{\sxpr}{\stpr}} {\st{}[\sv/\sx]} \inferrule {} {\sinholestep {\stcase{\parened{\stinjpr{\sv}}}{\sx}{\st}{\sxpr}{\stpr}} {\stpr{}[\sv/\sxpr]}} \inferrule {} {\sinholestep {\stmatchpair{\sxone}{\sxtwo}{\stpair{\svone}{\svtwo}}{\st}} {\st[\svone/\sxone,\svtwo/\sxtwo]}} \inferrule{} {\sinholestep{\stapp{\parened{\stufun{\svar}{\sterm}}}{\sval}}{\subst{\sterm}{\sval}{\svar}}} \inferrule{} {\sectxt\hw\sfontsym{\mho} \step \sfontsym{\mho}} \end{mathpar}\\ \hrule \begin{mathpar} \inferrule*[right=DynDyn] {} {\sinholestep{\obcast{\sfont{\mathord{?}}}{\sfont{\mathord{?}}}{\sv}}{\sv}} \inferrule*[right=TagUp] {\sA \neq \sfont{\mathord{?}} \and \floor\sA \neq \sA} {\sinholestep {\obcast{\sA}{\sfont{\mathord{?}}}{\sv}} {\obcast{\floor\sA}{\sfont{\mathord{?}}}{\sparened{\obcast{\sA}{\floor\sA}{\sv}}}}} \inferrule*[right=TagDn] {\sA \neq \sfont{\mathord{?}} \and \floor\sA \neq \sA} {\sinholestep {\obcast{\sfont{\mathord{?}}}{\sA}{\sv}} {\obcast{\floor\sA}{\sA}\obcast{\sfont{\mathord{?}}}{\floor\sA}{\sv}}} \inferrule*[right=TagMatch] {} {\sinholestep{{\obcast{\sfont{\mathord{?}}}{\stagty}{\obcast{\stagty}{\sfont{\mathord{?}}}{\sv}}}}{\sv}} \inferrule*[right=TagMismatch] {\stagty \neq \stagtypr} {\sectxt\hw{\obcast{\sfont{\mathord{?}}}{\stagty}{\obcast{\stagtypr}{\sfont{\mathord{?}}}{\sv}}} \step \sfontsym{\mho}} \inferrule*[right=TagMismatch'] {\sA,\sB \neq \sfont{\mathord{?}}\and \floor\sA\neq\floor\sB} {\sectxt\hw{\obcast{\sA}{\sB}\sv} \step \sfontsym{\mho}} \inferrule*[right=Pair] {} {\sinholestep {\obcast{\spairty{\sAone}{\sBone}}{\spairty{\sAtwo}{\sBtwo}}{\stpair{\sv}{\svpr}}} {\stpair{\obcast{\sAone}{\sAtwo}{\sv}}{\obcast{\sBone}{\sBtwo}{\svpr}}}} \inferrule*[right=Sum] {} {\sinholestep {\obcast{\ssumty{\sAone}{\sBone}}{\ssumty{\sAtwo}{\sBtwo}}{\stinj{\sv}}} {\obcast{\sAone}{\sAtwo}{\sv}}} \inferrule*[right=Sum'] {} {\sinholestep {\obcast{\ssumty{\sAone}{\sBone}}{\ssumty{\sAtwo}{\sBtwo}}{\stinjpr{\sv}}} {\obcast{\sBone}{\sBtwo}{\sv}}} \inferrule*[right=Fun] {} {\sinholestep{\obcast{\sfunty{\sAone}{\sBone}}{\sfunty{\sAtwo}{\sBtwo}}{\sv}}{\stfun{\sx}{\sAtwo}{\obcast{\sBone}{\sBtwo}{\sparened{\stapp{\sv}{\sparened{\obcast{\sAtwo}{\sAone}{\sx}}}}}}}} \end{mathpar} \caption{$\lambda_{G}$ Operational Semantics: non-casts (top) and casts (bottom)} \label{fig:glang:opsem} \end{figure} \figref{fig:glang:opsem} presents the operational semantics of the gradual language in the style of \citet{felleisen-hieb}, using \emph{evaluation contexts} $\sectxt$ to specify a left-to-right, call-by-value evaluation order. The top of the figure shows the reductions \emph{not} involving casts. 
This includes the standard reductions for pairs, sums, and functions using the obvious notion of substitution $\st{}[\sgamma]$, in addition to a reduction $\sectxt\hw{\sfontsym{\mho}} \step \sfontsym{\mho}$ to propagate a dynamic type error to the top level. More importantly, the bottom of the figure shows the reductions of \emph{casts}, specifying the dynamic type checking necessary for gradual typing. First (\textsc{DynDyn}), casting from the dynamic type to itself is the identity. For any type $\sA$ that is not a tag type (checked by $\floor\sA \neq \sA$) or the dynamic type, casting to the dynamic type first casts to its underlying tag type $\floor\sA$ and then tags it at that type (\textsc{TagUp}). Similarly, casting down from the dynamic type first casts to the underlying tag type (\textsc{TagDn}). The next two rules are the primitive reductions for tags: projecting at the correct tag type produces the underlying value (\textsc{TagMatch}); otherwise, a dynamic type error is raised (\textsc{TagMismatch}). Similarly, the next rule (\textsc{TagMismatch'}) says that if two types are incompatible, in that they have distinct tag types and neither is dynamic, then the cast errors. The remaining rules (\textsc{Pair}, \textsc{Sum}, \textsc{Sum'}, \textsc{Fun}) are the standard ``wrapping'' implementations of contracts/casts \cite{findler-felleisen02}, also familiar from subtyping. For the function cast $\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}$, note that while the output cast goes in the same direction, $\obcast{\sBone}{\sBtwo}$, the input cast is flipped: $\obcast{\sAtwo}{\sAone}$. We note that this standard operational semantics is quite complex for such a \emph{small} language. In particular, it is more complicated than the operational semantics of typed and dynamically typed languages of similar size. Typed languages have reductions for each elimination form, and dynamically typed languages add only the possibility of a type error to those reductions. Here, on the other hand, the semantics is \emph{not} modular in the same way: there are five rules involving the dynamic type, and four of them involve comparing arbitrary types. For these reasons, we find the cast calculus presentation inconvenient for semantic analysis, and we choose not to develop our theory of graduality or even prove type safety \emph{directly} for this language. Instead, we will \emph{translate} the cast calculus into a typed language where the casts are translated to functions implemented in the language, i.e., contracts \cite{findler-felleisen02}. This has the advantage of reducing the size of the language, making ``language-level'' theorems like type safety and soundness of a logical relation easier to prove. Finally, note that our central theorems are still about the gradual language, but we will prove them by lifting results about their translations using an \emph{adequacy} theorem (\cref{lem:adequacy}). \section{Translating Gradual Typing} \label{sec:typed} We now translate our cast calculus into a simpler, non-gradual typed language with errors. We then prove an adequacy theorem that enables us to prove theorems about gradual programs by reasoning about their translations. \subsection{Typed Language with Errors} The typed language we will translate into is $\lambda_{T,\mho}$ (pronounced ``lambda tee error'' and typeset in $\mfont{\textbf{bold red serif font}}$), a call-by-value typed lambda calculus with iso-recursive types and an uncatchable error. \figref{fig:tlang:syntax} shows the syntax of the language.
\figref{fig:tlang:typing} shows some of the typing rules; the rest are completely standard. \begin{figure} \begin{mathpar} \begin{array}{lrcl} \mbox{Types} & \mty,\mtyalt & \bnfdef & \mmuty{\malpha}{\mty} \bnfalt \malpha \bnfalt \munitty\bnfalt \mA \mathbin{\mfontsym{\times}} \mB\bnfalt \msumty{\mty}{\mtyalt} \bnfalt \mfunty{\mty}{\mtyalt} \\ \mbox{Terms} & \mt,\ms & \bnfdef & {\mfontsym{\mho}} \bnfalt {\mvar} \bnfalt \mtlet\mx\mt\ms\bnfalt {\mtroll{\mA}{\mterm}} \bnfalt {\mtunroll{\mterm}} \bnfalt \mtunit\bnfalt \mtpair\mt\ms\\ & & \bnfalt & \mtmatchpair{\mx}{\my}{\mt}{\ms}\bnfalt {\mtinj{\mterm}} \bnfalt {\mtinjpr{\mterm}} \\ & & \bnfalt & {\mtcase{\mterm}{\mvarone}{\mtermone}{\mvartwo}{\mtermtwo}}\bnfalt {\mtfun{\mvar}{\mA}{\mterm}} \bnfalt {\mtapp{\mterm}{\mtermalt}} \\ \mbox{Values} & \mval & \bnfdef & \mvar \bnfalt \mtroll{\mA}{\mval} \bnfalt \mtunit \bnfalt \mtpair\mv\mv \bnfalt\mtinj{\mval} \bnfalt\mtinjpr{\mval} \bnfalt \mtfun{\mvar}{\mA}{\mterm}\\ \mbox{Evaluation Contexts} & \mectxt & \bnfdef & {{\mfont{\lbrack{}{\cdot}\rbrack{}}}} \bnfalt \mtlet\mx\mE\ms\bnfalt {\mtroll{\mA}{\mectxt}} \bnfalt {\mtunroll{\mectxt}} \bnfalt {\mtpair\mE\mt}\bnfalt \mtpair\mv\mE\bnfalt \mtmatchpair{\mx}{\my}{\mE}{\ms}\\ & & \bnfalt & {\mtinj{\mectxt}} \bnfalt {\mtinjpr{\mectxt}} \bnfalt {\mtcase{\mectxt}{\mvarone}{\mtermone}{\mvartwo}{\mtermtwo}}\bnfalt {\mtapp{\mectxt}{\mtermalt}} \bnfalt {\mtapp{\mval}{\mectxt}} \\ \iflong \mbox{Contexts} & \mctxt & \bnfdef & {{\mfont{\lbrack{}{\cdot}\rbrack{}}}} \bnfalt \mtlet\mx\mC\ms\bnfalt \mtlet\mx\mt\mC\bnfalt {\mtroll{\mA}{\mctxt}} \bnfalt {\mtunroll{\mctxt}}\\ &&\bnfalt& {\mtpair{\mctxt}{\mt}}\bnfalt {\mtpair{\mt}{\mctxt}}\bnfalt {\mtmatchpair{\mx}{\my}{\mctxt}{\mt}}\bnfalt {\mtmatchpair{\mx}{\my}{\mt}{\mctxt}} \\ & & \bnfalt & {\mtinj{\mctxt}} \bnfalt {\mtinjpr{\mctxt}}\bnfalt {\mtcase{\mctxt}{\mvarone}{\mtermone}{\mvartwo}{\mtermtwo}} \\ &&\bnfalt& {\mtcase{\mterm}{\mvarone}{\mctxtone}{\mvartwo}{\mtermtwo}}\bnfalt {\mtcase{\mterm}{\mvarone}{\mtermone}{\mvartwo}{\mctxttwo}} \\ & & \bnfalt & {\mtfun{\mvar}{\mA}{\mctxt}} \bnfalt {\mtapp{\mctxt}{\mtermalt}} \bnfalt {\mtapp{\mterm}{\mctxt}}\\ \fi \mbox{Environments} & \menv & \bnfdef & \mfontsym{\cdot} \bnfalt \menvext{\menv}{\mvar}{\mty}\\ \mbox{Substitutions} & \mgamma & \bnfdef & \cdot \bnfalt \mgamma, \mv/\mx \end{array} \end{mathpar} \caption{$\lambda_{T,\mho}$ Syntax} \label{fig:tlang:syntax} \end{figure} \begin{figure} \flushleft{\fbox{\small{$\mjudgtc{\menv}{\mt}{\mty}$}}} \vspace{-2ex} \begin{mathpar} \inferrule{~}{\mjudgtc{\menv}{\mfontsym{\mho}}{\mty}}\and \inferrule {\mvar : \mty \in \menv} {\mjudgtc{\menv}{\mvar}{\mty}}\and \iflong \inferrule {\mjudgtc{\menv}{\mt}{\mA} \and \mjudgtc{\menv,\mx:\mA}{\ms}{\mB}} {\mjudgtc{\menv}{\mtlet{\mx}{\mt}{\ms}}{\mB}}\and \fi \inferrule{\mjudgtc{\menv}{\mterm}{\mA[\mmuty{\malpha}{\mA}/\malpha]}} {\mjudgtc{\menv}{\mtroll{\mmuty{\malpha}{\mA}}{\mterm}}{\mmuty{\malpha}{\mA}}}\and \inferrule{\mjudgtc{\menv}{\mterm}{\mmuty{\malpha}{\mA}}} {\mjudgtc{\menv}{\mtunroll{\mterm}}{\mA[\mmuty{\malpha}{\mA}/\malpha]}}\and \iflong \inferrule {~} {\mjudgtc{\menv}{\mtunit}{\munitty}}\and \inferrule {\mjudgtc{\menv}{\mt}{\mA}\and \mjudgtc{\menv}{\ms}{\mB}} {\mjudgtc{\menv}{\mtpair{\mt}{\ms}}{\mA \mathbin{\mfontsym{\times}} \mB}}\and \inferrule {\mjudgtc{\menv}{\mt}{\mAone\mathbin{\mfontsym{\times}} \mAtwo}\and \mjudgtc{\menv,\mx:\mAone,\my:\mAtwo}{\ms}{\mB}} {\mjudgtc{\menv}{\mtmatchpair{\mx}{\my}{\mt}{\ms}}{\mB}}\and \inferrule {\mjudgtc{\menv}{\mterm}{\mA}} 
{\mjudgtc{\menv}{\mtinj{\mterm}}{\mA \mathbin{\mfontsym{+}} \mApr}}\and \inferrule {\mjudgtc{\menv}{\mterm}{\mApr}} {\mjudgtc{\menv}{\mtinjpr{\mterm}}{\msumty{\mA}{\mApr}}}\and \inferrule {\mjudgtc{\menv}{\mterm}{\mA \mathbin{\mfontsym{+}} \mApr} \and \mjudgtc{\menv,\mx:\mA}{\ms}{\mB} \and \mjudgtc{\menv,\mxpr:\mApr}{\mspr}{\mB}} {\mjudgtc{\menv}{\mtcase{\mterm}{\mx}{\ms}{\mxpr}{\mspr}}{\mB}}\quad \inferrule {\mjudgtc{\menv,\mx:\mA}{\mt}{\mB}} {\mjudgtc{\menv}{\mtfun{\mx}{\mA}{\mt}}{\mA \mathbin{\mfontsym{\to}} \mB}}\quad \inferrule {\mjudgtc{\menv}{\mt}{\mA \mathbin{\mfontsym{\to}} \mB} \and \mjudgtc{\menv}{\ms}{\mA}} {\mjudgtc{\menv}{\mt\,\ms}{\mB}} \fi \end{mathpar} \caption{$\lambda_{T,\mho}$ Typing Rules \ifshort (excerpt)\fi} \label{fig:tlang:typing} \end{figure} The types of the language are similar to the cast calculus: they include the standard type formers of products, sums, and functions. Rather than the specific dynamic type, we include the more general, but standard, iso-recursive type $\mmuty{\malpha}{\mA}$, which is isomorphic to the unfolding $\mA[\mmuty{\malpha}{\mA}/\malpha]$ by the terms $\mtroll{\mmuty{\malpha}{\mA}}{\cdot}$ and $\mtunroll{\cdot}$. As in the source language we have an uncatchable error $\mfontsym{\mho}$. \begin{figure} \begin{mathpar} \inferrule{} {\mectxt[\mfontsym{\mho}] \stepsin{0} \mfontsym{\mho}}\and \inferrule{} {\inholestep{\mtlet\mx\mv\ms}{\ms{}[\mv/\mx]}}\and \inferrule {} {\inholestepsin{1} {\mtunroll{(\mtroll{\mA}{\mval})}} {\mval}}\and \inferrule {} {\inholestep {\mtmatchpair{\mxone}{\mxtwo}{\mtpair{\mvone}{\mvtwo}}{\mt}} {\mt[\mvone/\mxone,\mvtwo/\mxtwo]}}\and \inferrule {} {\inholestep{(\mtfun{\mx}{\mA}{\mt})\,\mv}{\mt{}[\mv/\mx]}}\and \inferrule {} {\inholestep {\mtcase{\parened{\mtinj{\mv}}}{\mx}{\mt}{\mxpr}{\mtpr}} {\mt{}[\mv/\mx]}}\and \inferrule {} {\inholestep {\mtcase{\parened{\mtinjpr{\mv}}}{\mx}{\mt}{\mxpr}{\mtpr}} {\mtpr{}[\mv/\mxpr]}} \end{mathpar} \begin{mathpar} \inferrule {~} {\mt \bigstep{0} \mt}\and \inferrule {\mt \stepsin{i} \mtpr \and \mtpr \bigstep{j} \mtpr[2]} {\mt \bigstep{i + j} \mtpr[2]} \end{mathpar} \caption{$\lambda_{T,\mho}$ Operational Semantics} \label{fig:tlang:opsem} \end{figure} \figref{fig:tlang:opsem} presents the operational semantics of the language. For the purposes of later defining a step-indexed logical relation, we assign a weight to each small step of the operational semantics that is $1$ for unrolling a value of recursive type and $0$ for other reductions. We then define a ``quantitative'' reflexive, transitive closure of the small-step relation $\mt \bigstep{i} \mtpr$ that adds the weights of its constituent small steps. When the number of steps is irrelevant, we just use $\step$ and $\mathrel{\Mapsto}$. We can then establish some simple facts about this operational semantics. \begin{lemma}[Subject Reduction] If $\cdot\vdash \mt : \mA$ and $\mt \mathrel{\Mapsto} \mtpr$ then $\cdot\vdash\mtpr : \mA$. \end{lemma} \begin{lemma}[Progress] If $\cdot \vdash \mt : \mA$ and $\mt$ is not a value or $\mfontsym{\mho}$, then there exists $\mtpr$ with $\mt \step \mtpr$. \end{lemma} \iflong \begin{proof} By induction on the typing derivation for $\mt$. \end{proof} \fi \begin{lemma}[Determinism] If $\mt \step \ms$ and $\mt \step \mspr$, then $\ms = \mspr$. \end{lemma} \subsection{Translating Gradual Typing} \label{sec:typed:translated} Next we translate the cast calculus into our typed language, and prove that the cast calculus semantics is in a simulation relation with the typed language. 
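Before giving the formal translation, it may help to see its shape in an informal Haskell rendering; this sketch, including all names, is ours and is not part of the formal development. The dynamic type becomes a recursive sum of the four tag types, a tag upcast becomes a constructor application, a tag downcast pattern-matches and errors on a mismatch, and the function cast wraps its argument, flipping direction on the input side.
\begin{verbatim}
-- A sketch (names ours): the dynamic type as a recursive sum of
-- the four tag types, mirroring the type translation of ?.
data Dyn
  = DUnit                  -- tag: unit
  | DPair Dyn Dyn          -- tag: ? x ?
  | DSum (Either Dyn Dyn)  -- tag: ? + ?
  | DFun (Dyn -> Dyn)      -- tag: ? -> ?

-- Tag upcast from the function tag into Dyn: just an injection.
embedFun :: (Dyn -> Dyn) -> Dyn
embedFun = DFun

-- Tag downcast from Dyn to the function tag: succeeds on a
-- matching tag (TagMatch), errors otherwise (TagMismatch).
projectFun :: Dyn -> (Dyn -> Dyn)
projectFun (DFun f) = f
projectFun _        = error "dynamic type error"

-- Function cast assembled from smaller casts (the Fun wrapping
-- rule): covariant on outputs, contravariant on inputs.
castFun :: (a2 -> a1) -> (b1 -> b2) -> (a1 -> b1) -> (a2 -> b2)
castFun castIn castOut f = castOut . f . castIn
\end{verbatim}
Here \texttt{castFun} corresponds to the functorial action $\mectxt \mathbin{\mfontsym{\to}} \mectxtpr$ defined below, specialized to functions rather than evaluation contexts.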
Since the two languages share so much of their syntax, most of the translation is a simple ``color change'', only the parts that are truly components of gradual typing need much translation. \begin{figure} \begin{mathpar} \begin{array}{rcl} \semantics{\sfont{\mathord{?}}} &\defeq& \mmuty{\malpha}{\munitty \mathbin{\mfontsym{+}} (\mpairty{\malpha}{\malpha}) \mathbin{\mfontsym{+}} (\malpha \mathbin{\mfontsym{+}} \malpha) \mathbin{\mfontsym{+}} (\mfunty{\malpha}{\malpha})} \\ \semantics{\sunitty} &\defeq& \munitty\\ \semantics{\spairty{\sA}{\sB}} &\defeq& \mpairty{\semantics{\sA}}{\semantics{\sB}}\\ \semantics{\ssumty{\sA}{\sB}} &\defeq& \msumty{\semantics{\sA}}{\semantics{\sB}}\\ \semantics{\sfunty{\sA}{\sB}} &\defeq& \mfunty{\semantics{\sA}}{\semantics{\sB}}\\ \end{array} \end{mathpar} \caption{Type Translation} \label{fig:type-translation} \end{figure} Our translation is type preserving, so we first define a \emph{type translation} in \figref{fig:type-translation}. The dynamic type is interpreted as a recursive sum of the translations of the tag types of the gradual language. The unit, pair, sum and function types are all interpreted as the corresponding connectives in the typed language. \begin{figure} \fbox{\small{$\semantics{\st}$}} \small{$~~$where if $~\sxone:\sAone,\ldots,\sxn:\sAn \vdash \st : \sA~$ then $~\mxone:\semantics{\sAone},\ldots,\mxn : \semantics{\sAn} \vdash \semantics{\st} : \semantics{\sA}$} \hfill \begin{mathpar} \begin{array}{rcl} \semantics{\svar} & \defeq & \mvar\\ \semantics{\obcast\sA\sB\st} & \defeq & \mectxt_{\obcast\sA\sB}\hw{\semantics{\st}}\\ \semantics\stunit & \defeq & \mtunit\\ \semantics{\stpair{\stone}{\sttwo}} & \defeq& \mtpair{\semantics{\stone}}{\semantics{\sttwo}}\\ \semantics{\stmatchpair\sx\sy\st\ss} & \defeq & \mtmatchpair\mx\my{\semantics\st}{\semantics\ss}\\ \semantics{\stinj\st} & \defeq & \mtinj{\semantics\st}\\ \semantics{\stinjpr\st} & \defeq & \mtinjpr{\semantics\st}\\ \semantics{\stcase{\st}{\sx}{\ss}{\sxpr}{\sspr}} & \defeq & \mtcase{\semantics\st}{\mx}{\semantics\ss}{\mxpr}{\semantics\sspr}\\ \semantics{\stfun{\sx}{\sA}{\st}} & \defeq & \mtfun{\mx}{\semantics{\sA}}{\semantics{\st}}\\ \semantics{\st\,\ss} & \defeq & \semantics{\st}\,\semantics{\ss}\\ \end{array} \end{mathpar} \caption{Term Translation} \label{fig:term-translation} \end{figure} \begin{figure} \fbox{\small{$\mectxt_{\obcast{\sA}{\sB}}$}} \small{$~~$where $~\mx : \semantics \sA \vdash \mE_{\obcast{\sA}{\sB}}\hw{\mx} : \semantics \sB$} \hfill \begin{mathpar} \begin{array}{rcl} \mectxt_{\obcast{\sfont{\mathord{?}}}{\sfont{\mathord{?}}}} & \defeq & {\mfont{\lbrack{}{\cdot}\rbrack{}}}\\ \mectxt_{\obcast{\sAone\mathbin{\sfontsym{\times}}\sBone}{\sAtwo\mathbin{\sfontsym{\times}}\sBtwo}} & \defeq & \mectxt_{\obcast{\sAone}{\sAtwo}} \mathbin{\mfontsym{\times}} \mectxt_{\obcast{\sBone}{\sBtwo}}\\ \mectxt_{\obcast{\sAone\mathbin{\sfontsym{+}}\sBone}{\sAtwo\mathbin{\sfontsym{+}}\sBtwo}} & \defeq & \mectxt_{\obcast{\sAone}{\sAtwo}} \mathbin{\mfontsym{+}} \mectxt_{\obcast{\sBone}{\sBtwo}}\\ \mectxt_{\obcast{\sAone \mathrel{\to_{s}} \sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}} & \defeq & \mectxt_{\obcast{\sAtwo}{\sAone}} \mathbin{\mfontsym{\to}} \mectxt_{\obcast{\sBone}{\sBtwo}}\\ \mectxt_{\obcast{\stagty}{\sfont{\mathord{?}}}} & \defeq & \mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\mfont{\lbrack{}{\cdot}\rbrack{}}}}\\ \mectxt_{\obcast{\sfont{\mathord{?}}}{\stagty}} & \defeq & \mtelsecase{(\mtunroll{\mfont{\lbrack{}{\cdot}\rbrack{}}})}{\stagty}{\mx}{\mx}{\mfontsym{\mho}}\\ 
\mectxt_{\obcast{\sA}{\sfont{\mathord{?}}}} & \defeq & \mectxt_{\obcast{\floor\sA}{\sfont{\mathord{?}}}}\hw{\mectxt_{\obcast{\sA}{\floor\sA}}{\mfont{\lbrack{}{\cdot}\rbrack{}}}} \qquad \mbox{if $\sA \neq \sfont{\mathord{?}}, \floor\sA$}\\ \mectxt_{\obcast{\sfont{\mathord{?}}}{\sA}} & \defeq & {\mectxt_{\obcast{\floor\sA}{\sA}}}\hw{\mectxt_{\obcast{\sfont{\mathord{?}}}{\floor\sA}}{\mfont{\lbrack{}{\cdot}\rbrack{}}}} \qquad \mbox{if $\sA \neq \sfont{\mathord{?}},\floor\sA$}\\ \mectxt_{\obcast{\sA}{\sB}} & \defeq & \mtlet{\mx}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mfontsym{\mho}} \qquad \mbox{if $\sA,\sB\neq\sfont{\mathord{?}}$ and $\floor\sA\neq\floor\sB$}\\ \end{array} \end{mathpar} \caption{Direct Cast Translation} \label{fig:direct-cast-translation} \end{figure} \begin{figure} \begin{mathpar} \begin{array}{rcl} \mectxt \mathbin{\mfontsym{\times}} \mectxtpr & \defeq & \mtmatchpair{\mx}{\mxpr}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mtpair{\mectxt\hw\mx}{\mectxtpr\hw\mxpr}}\\ \mectxt \mathbin{\mfontsym{+}} \mectxtpr & \defeq & \mtcase{{\mfont{\lbrack{}{\cdot}\rbrack{}}}} {\mx}{\mectxt\hw\mx} {\mxpr}{\mectxtpr\hw\mxpr}\\ \mectxt \mathbin{\mfontsym{\to}} \mectxtpr & \defeq & \mtlet{\mvarin{f}}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mtufun{\mvarin{a}}{\mectxtpr\hw{\mvarin{f}\,(\mectxt\hw{\mvarin{a}})}}} \end{array} \end{mathpar} \caption{Functorial Action of Type Connectives} \label{fig:functor} \end{figure} Next, we define the translation of terms in \figref{fig:term-translation}, which is type preserving in that if $\sxone:\sAone,\ldots,\sxn:\sAn \vdash \st : \sA$ then $\mxone:\semantics{\sAone},\ldots,\mxn : \semantics{\sAn} \vdash \semantics{\st} : \semantics{\sA}$. Again, most of the translation is just a change of hue. The most important rule of the term translation is that of casts. A cast $\obcast\sA\sB$ is translated to an \emph{evaluation context} $\mectxt_{\obcast\sA\sB}$ of the appropriate type, which are defined in \figref{fig:direct-cast-translation}. Each case of the definition corresponds to one or more rules of the operational semantics. The product, sum, and function rules use the definitions of functorial actions of their types from \figref{fig:functor}. We separate them because we will use the functoriality property in several definitions, theorems, and proofs later. \subsection{Operational Properties} Next, we consider the relationship between the operational semantics of the two languages and how to lift properties of the typed language to the gradual language. We want to view the translation of the cast calculus into the typed language as \emph{definitional}, and in that regard view the operational semantics of the source language as being based on the typed language. We capture this relationship in the following \emph{forward} simulation theorem, which says that any reduction in the cast calculus corresponds to (and is \emph{justified by}) multiple steps in the target: \begin{lemma}[Translation Preserves Values, Evaluation Contexts] \label{lem:sem-val-evctx}{~} \begin{enumerate} \item For any value $\sv$, $\semantics{\sv}$ is a value. \item For any evaluation context $\sectxt$, $\semantics\sectxt$ is an evaluation context. \end{enumerate} \end{lemma} \begin{lemma}[Simulation of Operational Semantics] \label{lem:forward-simulation}{~}\\ If $\st \step \stpr$ then there exists $\ms$ with $\semantics{\st} \step \ms$ and $\ms \mathrel{\Mapsto} \semantics{\stpr}$. \end{lemma} \iflong\begin{proof} By cases of $\st \step \stpr$. 
The non-cast cases are clear by \cref{lem:sem-val-evctx}. \begin{enumerate} \item (DynDyn) Valid because \begin{align*} \mE_{\obcast\sfont{\mathord{?}}\sdynty}\hw{\semantics\sv} &= \mtlet\mx{\semantics\sv}\mx\\ &\step \semantics\sv \end{align*} \item (TagUp) Trivial because $\semantics{\obcast\sA\sfont{\mathord{?}}\sv} = \semantics{\obcast{\floor\sA}{\sfont{\mathord{?}}}\obcast\sA{\floor\sA}\sv}$. \item (TagDn) Trivial because $\semantics{\obcast\sfont{\mathord{?}}\sA\sv} = \semantics{\obcast{\floor\sA}\sA\obcast{\sfont{\mathord{?}}}{\floor\sA}\sv}$. \item (TagMatch) Valid because \begin{align*} \mtelsecasevert{\mtunroll\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\semantics\sv}}}{\stagty}{\mx}{\mx}{\mfontsym{\mho}} &\step \mtelsecasevert{{\mtsum{\stagty}{\semantics\sv}}}{\stagty}{\mx}{\mx}{\mfontsym{\mho}}\\ &\step \semantics\sv \end{align*} \item (TagMismatch) Valid because \begin{align*} \mtelsecasevert{\mtunroll\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagtypr}{\semantics\sv}}}{\stagty}{\mx}{\mx}{\mfontsym{\mho}} &\step \mtelsecasevert{{\mtsum{\stagtypr}{\semantics\sv}}}{\stagty}{\mx}{\mx}{\mfontsym{\mho}}\\ &\step \mfontsym{\mho}\\ & = \semantics\sfontsym{\mho} \end{align*} \item (TagMismatch') Valid because \[ \mtlet\mx{\semantics\sv}\mfontsym{\mho} \step \mfontsym{\mho} \] \item (Pair) Valid because \begin{align*} \mectxt_{\obcast{\sAone\mathbin{\sfontsym{\times}}\sBone}{\sAtwo\mathbin{\sfontsym{\times}}\sBtwo}}\hw{\mtpair{\semantics\sv}{\semantics\svpr}} &= \mtmatchpair{\mx}{\my}{\mtpair{\semantics\sv}{\semantics\svpr}}{\mtpair{\mectxt_{\obcast{\sAone}{\sAtwo}}\hw\mx}{\mectxt_{\obcast{\sBone}{\sBtwo}}\hw\my}}\\ &\step {\mtpair{\mectxt_{\obcast{\sAone}{\sAtwo}}\hw{\semantics\sv}}{\mectxt_{\obcast{\sBone}{\sBtwo}}\hw{\semantics\svpr}}}\\ &= \semantics{\stpair{{\obcast{\sAone}{\sAtwo}}{\sv}}{{{\obcast{\sBone}{\sBtwo}}{\svpr}}}} \end{align*} \item (Sum) Valid because \begin{align*} \mectxt_{\obcast{\sAone\mathbin{\sfontsym{+}}\sBone}{\sAtwo\mathbin{\sfontsym{+}}\sBtwo}}\hw{\mtinj{\semantics\sv}} &= \mtcasevert{\mtinj{\semantics\sv}} {\mx}{\mE_{\obcast{\sAone}\sAtwo}\hw{\mx}} {\mxpr}{\mE_{\obcast{\sBone}\sBtwo}\hw{\mxpr}}\\ &\step \mE_{\obcast\sAone\sAtwo}\hw{\semantics\sv}\\ &= \semantics{\obcast\sAone\sAtwo\sv} \end{align*} \item (Sum') Valid because \begin{align*} \mectxt_{\obcast{\sAone\mathbin{\sfontsym{+}}\sBone}{\sAtwo\mathbin{\sfontsym{+}}\sBtwo}}\hw{\mtinjpr{\semantics\sv}} &= \mtcasevert{\mtinjpr{\semantics\sv}} {\mx}{\mE_{\obcast{\sAone}\sAtwo}\hw{\mx}} {\mxpr}{\mE_{\obcast{\sBone}\sBtwo}\hw{\mxpr}}\\ &\step \mE_{\obcast\sBone\sBtwo}\hw{\semantics\sv}\\ &= \semantics{\obcast\sBone\sBtwo\sv} \end{align*} \item (Fun) Valid because \begin{align*} \mectxt_{\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}}\hw{\semantics\sv} &= \mtletvert{\mxin f}{\semantics\sv}{\mtfun{\mxin a}{\semantics{\sAtwo}}{\mectxt_{\obcast\sBone\sBtwo}\hw{{\mxin f}\,(\mectxt_{\obcast\sAtwo\sAone}\hw{\mxin a})}}}\\ &\step \mtfun{\mxin a}{\semantics{\sAtwo}}{\mectxt_{\obcast\sBone\sBtwo}\hw{\semantics\sv\,(\mectxt_{\obcast\sAtwo\sAone}\hw{\mxin a})}}\\ &= \semantics{\stfun{\sxin a}{\sAtwo} {\obcast{\sBone}{\sBtwo} {({\sv}\,({\obcast{\sAtwo}{\sAone}{\sxin{a}}}))}}} \end{align*} \end{enumerate} \end{proof}\fi To lift theorems for the gradual language from the typed language, we need to establish an \emph{adequacy} theorem, which says that the operational behavior of a translated term determines the behavior of the original source term. To do this, we use the following backward simulation theorem.
\ifshort \begin{lemma}[Backward Simulation]{~} \begin{enumerate} \item If $\semantics{\st}$ is a value, $\st \stepstar \sv$ for some $\sv$ with $\semantics\st=\semantics\sv$. \item If $\semantics{\st} = \mfontsym{\mho}$, then $\st \stepstar \sfontsym{\mho}$. \item If $\semantics\st \step \ms$ then there exists $\sspr$ with $\st \step \sspr$ and $\ms \mathrel{\Mapsto} \semantics\sspr$. \end{enumerate} \end{lemma} \else \begin{lemma}[Translation Reflects Results] \label{lem:reflect} \begin{enumerate} \item If $\semantics{\st}$ is a value, $\st \stepstar \sv$ for some $\sv$ with $\semantics\st=\semantics\sv$. \item If $\semantics{\st} = \mfontsym{\mho}$, then $\st \stepstar \sfontsym{\mho}$. \end{enumerate} \end{lemma} \begin{proof} By induction on $\st$. For the non-cast cases, the result follows by the inductive hypothesis. For the casts, only two cases can be values: \begin{enumerate} \item $\obcast\sfont{\mathord{?}}\sdynty\st$: if $\semantics{\obcast\sfont{\mathord{?}}\sdynty\st} = \semantics\st$ is a value, then by the inductive hypothesis $\st \stepstar \sv$ with $\semantics\st = \semantics\sv$, so $\obcast\sfont{\mathord{?}}\sdynty\st \stepstar \obcast\sfont{\mathord{?}}\sdynty\sv \step \sv$. \item $\obcast{\stagty}\sfont{\mathord{?}}\st$: if $\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\semantics\st}}$ is a value, then $\semantics\st$ is a value, so by the inductive hypothesis $\st \stepstar \sv$, and hence $\obcast{\stagty}{\sfont{\mathord{?}}}\st \stepstar \obcast{\stagty}{\sfont{\mathord{?}}}\sv$. \end{enumerate} For the error case, there is only one case where it is possible for $\semantics\st = \mfontsym{\mho}$ without $\st = \sfontsym{\mho}$: \begin{enumerate} \item For $\obcast\sfont{\mathord{?}}\sdynty\ss$, if $\semantics{\obcast\sfont{\mathord{?}}\sdynty}\semantics\ss = \semantics\ss$ is an error, then clearly $\semantics\ss = \mfontsym{\mho}$, so by the inductive hypothesis $\ss \stepstar \sfontsym{\mho}$, and because casts are strict, \[ \obcast\sfont{\mathord{?}}\sdynty\ss \stepstar \sfontsym{\mho} \] \end{enumerate} \end{proof} \begin{lemma}[Backward Simulation] \label{lem:backward-simulation} If $\semantics\st \step \ms$ then there exists $\sspr$ with $\st \step \sspr$ and $\ms \mathrel{\Mapsto} \semantics\sspr$. \end{lemma} \begin{proof} By induction on $\st$. We show a few illustrative cases; the rest follow by the same reasoning. \begin{enumerate} \item $\semantics{\stmatchpair{\sx}{\sy}{\st}{\ss}} = \mtmatchpair{\mx}{\my}{\semantics\st}{\semantics\ss}$. If $\semantics\st$ is not a value, then we use the inductive hypothesis. If $\semantics\st$ is a value but $\st$ is not, then by \lemref{lem:reflect} $\st \stepstar \sv$, and then we can reduce the pattern match in both source and target. \item $\semantics{\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}\st} = (\mE_{\obcast{\sAtwo}{\sAone}} \mathbin{\mfontsym{\to}} \mE_{\obcast{\sBone}{\sBtwo}})\hw{\semantics\st}$. If $\semantics\st$ is not a value, we use the inductive hypothesis. Otherwise, if it is a value and $\st$ is not, we use \lemref{lem:reflect} to get ${\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}\st} \stepstar{\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}\sv}$. Then we use the same argument as in the proof of \lemref{lem:forward-simulation}. \item $\semantics{\obcast{\sA}{\sfont{\mathord{?}}}\st}=\mectxt_{\obcast{\floor\sA}{\sfont{\mathord{?}}}}\hw{\mectxt_{\obcast{\sA}{\floor\sA}}\hw{\semantics\st}}$: we use the same argument as in the case for $\obcast{\sA}{\floor\sA}\st$, e.g., the function case above.
\end{enumerate} \end{proof} \fi \begin{theorem}[Adequacy] \label{lem:adequacy}{~} \begin{enumerate} \item $\semantics\st \mathrel{\Mapsto} \mv$ if and only if $\st \stepstar \sv$ with $\semantics\sv = \mv$. \item $\semantics\st \mathrel{\Mapsto} \mfontsym{\mho}$ if and only if $\st \stepstar \sfontsym{\mho}$. \item $\semantics\st$ diverges if and only if $\st$ diverges. \end{enumerate} \end{theorem} \iflong\begin{proof} The forward direction for values and errors is given by forward simulation (\lemref{lem:forward-simulation}). The backward direction for values and errors is given by induction on $\semantics\st\mathrel{\Mapsto} \mtpr$, backward simulation (\lemref{lem:backward-simulation}), and reflection of results (\cref{lem:reflect}). If $\st$ diverges, then by the backward value and error cases, it follows that $\semantics\st$ does not run to a value or error. By type safety of the typed language, $\semantics\st$ diverges. Finally, if $\semantics\st$ diverges, we show that $\st$ diverges. If $\semantics\st \step \ms$, then by backward simulation, there exists $\sspr$ with $\st \step \sspr$ and $\ms \mathrel{\Mapsto} \semantics\sspr$. Since $\semantics\st \mathrel{\Mapsto} \semantics\sspr$, we know $\semantics\sspr$ diverges, so by coinduction $\sspr$ diverges and therefore $\st$ diverges. \end{proof}\fi While this has reduced the number of primitives of the language, reasoning about the behavior of the translated casts isn't any simpler than with the original operational semantics, since they have the same behavior. For simpler reasoning about cast behavior, we will move further away from a direct simulation of the source operational semantics, to a second semantics based on ep pairs that is observationally equivalent but also conceptually simpler and helps prove the gradual guarantee. However, in order to prove that the second semantics is equivalent, we first need to develop a usable theory of observational equivalence and approximation. \section{A Logical Relation for Error Approximation} \label{sec:logrel} Next, we define observational equivalence and error approximation of programs in the gradual and typed languages, the two properties with which we formulate embedding-projection pairs. To facilitate proofs of error approximation, we develop a novel step-indexed logical relation. Since our notion of approximation is non-standard, the step-indexing makes our logical relation inconvenient to use directly. So, on top of the ``implementation'' of the logical relation as a step-indexed relation, we prove many high-level lemmas, so that all proofs in the next sections are performed relative to these lemmas and none manipulate step indices directly. \subsection{Observational Equivalence and Approximation} A suitable notion of equivalence for programs is \emph{observational equivalence}. We say $\st$ is observationally equivalent to $\stpr$ if replacing one with the other in the context of a larger program produces the same result (termination, error, or divergence). We formalize this by saying a program \emph{context} $\sC$ is a term with a single hole $\hole$. A context is typed $\senvpr \vdash \sC\hw{\sfont{\Gamma}\vdash \cdot : \sA} : \sApr$ when for any term $\sfont{\Gamma}\vdash \st : \sA$, replacing the hole with $\st$ results in a well-typed $\senvpr\vdash\sC\hw{\st} : \sApr$. While this notion of observational equivalence is entirely standard, the notion of \emph{approximation} we use---which we call error approximation---is not the standard notion of observational approximation.
Usually, we would say $\st$ observationally approximates $\stpr$ if, when placing them into the same context $\sC$, either $\sC[\st]$ diverges or they both terminate or both error. We call this form of approximation \emph{divergence approximation}. However, for \emph{gradual typing} we are not particularly interested in when one program diverges more than another, but rather in when it produces more \emph{type errors}. We might be tempted to conflate the two, but their behavior is quite distinct: we can never truly know whether a black-box program will run forever, and it would frustrate any programmer to use a language that runs forever when a function is accidentally used as a number. The reader should keep this difference in mind when seeing how our logical relation differs from the standard treatment. In the rest of this paper, when discussing the two together we will clearly distinguish between divergence and error approximation, but when there is no qualifier, approximation is meant as \emph{error approximation}. \begin{definition}[Gradual Observational Equivalence, Error Approximation] For any well typed terms $\sfont{\Gamma} \vdash \st, \stpr : \sA$, \begin{enumerate} \item Define $\sfont{\Gamma} \vDash \st \approx^{\text{obs}} \stpr : \sA$, pronounced ``$\st$ is observationally equivalent to $\stpr$'', to hold when for any $\cdot \vdash \sC\hw{\sfont{\Gamma}\vdash\cdot:\sA} : \sB$, either $\sC[\st]$ and $\sC[\stpr]$ both reduce to a value, both reduce to an error, or both diverge. \item Define $\sfont{\Gamma} \vDash \st \sqsubseteq^{\text{obs}} \stpr : \sA$, pronounced ``$\st$ observationally (error) approximates $\stpr$'', to hold when for any $\cdot \vdash \sC\hw{\sfont{\Gamma}\vdash\cdot:\sA} : \sB$, either $\sC[\st]$ reduces to $\sfontsym{\mho}$, or both $\sC[\st]$ and $\sC[\stpr]$ reduce to a value, or both diverge. \end{enumerate} \end{definition} As with divergence approximation, we can prove that two programs are observationally equivalent by showing that each error approximates the other. \begin{lemma}[Equivalence is Approximation Both Ways] $\sfont{\Gamma} \vDash \stone \approx^{\text{obs}} \sttwo : \sA$ if and only if both $\sfont{\Gamma} \vDash \stone \sqsubseteq^{\text{obs}} \sttwo : \sA$ and $\sfont{\Gamma} \vDash \sttwo \sqsubseteq^{\text{obs}} \stone : \sA$. \end{lemma} We define typed observational equivalence $\menv \vDash \mt \approx^{\text{obs}} \mtpr : \mA$ and observational error approximation $\menv \vDash \mt \sqsubseteq^{\text{obs}} \mtpr : \mA$ with the exact same definition as for the gradual language above, but in $\mfont{\textbf{red}}$ instead of $\sfont{\textsf{blue}}$. We rarely work with the gradual language directly; instead, we prove approximation results for their translations. \iflong \begin{definition}[Typed Observational Equivalence, Error Approximation] For any well typed terms $\menv \vdash \mt, \mtpr : \mA$, \begin{enumerate} \item Define $\menv \vDash \mt \approx^{\text{obs}} \mtpr : \mA$, pronounced ``$\mt$ is observationally equivalent to $\mtpr$'', to hold when for any $\mC[\cdot:\mA] : \mB$, either $\mC[\mt]$ and $\mC[\mtpr]$ both reduce to a value, both reduce to an error, or both diverge. \item Define $\menv \vDash \mt \sqsubseteq^{\text{obs}} \mtpr : \mA$, pronounced ``$\mt$ observationally (error) approximates $\mtpr$'', to hold when for any $\mC[\cdot:\mA] : \mB$, either $\mC[\mt]$ reduces to $\mfontsym{\mho}$, or both $\mC[\mt]$ and $\mC[\mtpr]$ reduce to a value, or both diverge.
\end{enumerate} \end{definition} \fi This is justified by the following lemma, a consequence of our \emph{adequacy} result (\cref{lem:adequacy}). \begin{lemma}[Typed Observational Approximation implies Gradual Observational Approximation] \label{lem:typed-gradual-approx} If $\semantics\sfont{\Gamma} \vDash \semantics\stone \sqsubseteq^{\text{obs}} \semantics\sttwo : \semantics\sA$ then $\sfont{\Gamma} \vDash \stone \sqsubseteq^{\text{obs}} \sttwo : \sA$. \end{lemma} \iflong\begin{proof} For any $\sC$, by compositionality of the translation, $\semantics{\sC[\stone]} = \semantics{\sC}[\semantics\stone]$ and $\semantics{\sC[\sttwo]} = \semantics{\sC}[\semantics\sttwo]$. Then we analyze $\semantics\stone \sqsubseteq^{\text{obs}} \semantics\sttwo$: \begin{enumerate} \item If $\semantics\sC[\semantics\stone] \mathrel{\Mapsto} \mfontsym{\mho}$, then \cref{lem:adequacy} states that $\sC[\stone] \stepstar \sfontsym{\mho}$. \item If $\semantics\sC[\semantics\stone]$ diverges, then $\semantics\sC[\semantics\sttwo]$ also diverges, and therefore by \cref{lem:adequacy}, $\sC[\stone]$ and $\sC[\sttwo]$ diverge. \item If $\semantics\sC[\semantics\stone] \mathrel{\Mapsto} \mvone$, then $\semantics\sC[\semantics\sttwo] \mathrel{\Mapsto} \mvtwo$, and therefore by \cref{lem:adequacy}, $\sC[\stone] \stepstar \svone$ and $\sC[\sttwo] \stepstar \svtwo$ with $\semantics\svone = \mvone$ and $\semantics\svtwo = \mvtwo$. \end{enumerate} \end{proof}\fi \subsection{Logical Relation} Observational equivalence and approximation are extremely difficult to prove directly, so we use the usual method of proving observational results with a \emph{logical relation} that we prove \emph{sound} with respect to observational approximation. Due to the non-well-founded nature of recursive types (and the dynamic type specifically), we develop a \emph{step-indexed} logical relation following \citet{ahmed06:lr}. We define our logical relation for error approximation in \figref{fig:lr}. Because our notion of error approximation is not the standard notion of approximation, the definition is a bit unusual, but this is necessary for technical reasons. It is key to compositional reasoning about embedding-projection pairs that approximation be \emph{transitive}, and care must be taken to show transitivity for a step-indexed relation. However, for standard definitions of logical relations for observational equivalence, it is difficult to prove transitivity directly. Therefore, it is often established through indirect reasoning---e.g., by setting up a biorthogonal ($\top\top$-closed) logical relation so one can easily show it is complete with respect to observational equivalence, which in turn implies that it must be transitive, since observational equivalence is easily proven transitive. The reason establishing transitivity is tricky is that a step-indexed relation is \emph{not} transitive at a \emph{fixed} index, i.e., if $e_1 \isim i e_2$ and $e_2 \isim i e_3$, it is not necessarily the case that $e_1 \isim i e_3$. For instance, $e_1 \isim i e_2$ might hold because $e_1$ terminates in fewer than $i$ steps and has the same behavior as $e_2$, which takes more than $i$ steps to terminate, whereas $e_2 \isim i e_3$ might hold because both take $i$ steps of reduction, so they cannot be distinguished in $i$ steps, yet have different behavior when run for more steps.
One direct method for proving transitivity, originally presented in \citet{ahmed06:lr}, is to observe that two terms are observationally equivalent when each divergence approximates the other, and then use a step-indexed relation for divergence approximation. Because a conjunction of transitive relations is transitive, this proves transitivity of equivalence. A step-indexed relation for divergence approximation can be shown to have a kind of ``half-indexed'' transitivity, i.e., if $e_1 \iprec i e_2$ and for every natural $j$, we know $e_2 \iprec j e_3$ then $e_1 \iprec i e_3$. We have a similar issue with error approximation: the na\"ive logical relation for error approximation is not clearly transitive. Inspired by the case of observational equivalence, we similarly ``split'' our logical relation into two relations that can be proven transitive by an argument similar to the one for divergence approximation. However, unlike for observational equivalence, our two relations are not the same. Instead, one, $\mathrel{\dynr\prec}$, is error approximation up to divergence on the \emph{left} and the other, $\mathrel{\dynr\succ}$, is error approximation up to divergence on the \emph{right}. For a given natural number $i \in \mathbb{N}$ and type $\mA$, and \emph{closed terms} $\mtone,\mttwo$ of type $\mA$, $\mtone \paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo$ intuitively means that, if we only inspect $\mtone$'s behavior up to $i$ uses of $\mtunroll\cdot$, then it appears that $\mtone$ error approximates $\mttwo$. Less constructively, it means that we cannot show that $\mtone$ does \emph{not} error approximate $\mttwo$ when limited to $i$ uses of $\mtunroll\cdot$. However, even if we knew $\mtone \paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo$ for \emph{every} $i \in \mathbb{N}$, it still might be the case that $\mtone$ diverges, since no finite number of unrollings can ever exhaust $\mtone$'s behavior. So we also require that we know $\mtone \paramlrdynr{\gtdynr}{t}{i}{\mA} \mttwo$, which means that up to $i$ uses of unroll on $\mttwo$, it appears that $\mtone$ error approximates $\mttwo$.
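A small worked example illustrates why neither relation can suffice on its own. Consider the error $\mfontsym{\mho}$ and a diverging term $\Omega$ (definable using recursive types), both at some type $\mA$. Observationally, $\mfontsym{\mho}$ error approximates $\Omega$: any context either eventually evaluates the hole, in which case the left side errors, or never does, in which case both sides behave identically. But $\Omega$ does \emph{not} error approximate $\mfontsym{\mho}$: in the empty context ${\mfont{\lbrack{}{\cdot}\rbrack{}}}$ the left side diverges while the right side errors. A relation that only inspects the left side's behavior up to $i$ unrolls can never reject this pair---$\Omega$ appears to still be running at every finite index---so we need $\mathrel{\dynr\succ}$, which inspects the right side, to rule it out. Dually, $\mathrel{\dynr\succ}$ alone would relate every term, even $\mtunit$, to a diverging $\mttwo$, which is observationally wrong; ruling that out is the job of $\mathrel{\dynr\prec}$. (Note also the contrast with divergence approximation, under which $\mfontsym{\mho}$ would \emph{not} be below $\Omega$.)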
\begin{figure} \begin{mathpar} \begin{array}{rcl} \paramlrdynr{\ltdynr}{t}{i}{\mA}, \paramlrdynr{\gtdynr}{t}{i}{\mA} & \subseteq & \{\mt \mathrel{|} \cdot \vdash \mt : \mA\}^2\\ \mtone \paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo & \defeq & (\exists \mtonepr.~ \mtone \bigstep{i+1}\mtonepr)\\ & & \vee (\exists j\leq i.~ \mtone \bigstep{j} \mfontsym{\mho})\\ & & \vee (\exists j\leq i, \mvone \paramlrdynr{\ltdynr}{v}{i-j}{\mA} \mvtwo.~ \mtone \bigstep{j} \mvone \wedge \mttwo \mathrel{\Mapsto} \mvtwo)\\ \mtone \paramlrdynr{\gtdynr}{t}{i}{\mA} \mttwo & \defeq & (\exists \mttwopr.~ \mttwo \bigstep{i+1}\mttwopr)\\ & & \vee (\exists j\leq i.~ \mttwo \bigstep{j} \mfontsym{\mho} \wedge \mtone \mathrel{\Mapsto} \mfontsym{\mho})\\ & & \vee (\exists j\leq i, \mvtwo.~ \mttwo \bigstep{j} \mvtwo \wedge\\ & & \qquad (\mtone \mathrel{\Mapsto} \mfontsym{\mho} \vee \exists \mvone.~ \mtone \mathrel{\Mapsto} \mvone \wedge \mvone \paramlrdynr{\gtdynr}{v}{i-j}{\mA} \mvtwo))\\\\ \paramlrdynr{\ltdynr}{v}{i}{\mA}, \paramlrdynr{\gtdynr}{v}{i}{\mA} & \subseteq & \{\mv \mathrel{|} \cdot \vdash \mv : \mA \}^2 \qquad \text{where } {\mathrel{\dynr\sim}} \in \{{\paramlrdynr{\ltdynr}{\cdot}{\cdot}{\cdot}},{\paramlrdynr{\gtdynr}{\cdot}{\cdot}{\cdot}} \}\\ \mvone \paramlrdynr{\simdynr}{v}{0}{\mmuty{\malpha}{\mA}} \mvtwo & \defeq& \top\\ \mtroll{\mmuty{\malpha}{\mA}}{\mvone} \paramlrdynr{\simdynr}{v}{i+1}{\mmuty{\malpha}{\mA}} \mtroll{\mmuty{\malpha}{\mA}}{\mvtwo} & \defeq & \mvone \paramlrdynr{\simdynr}{v}{i}{\mA[\malpha\mapsto \mmuty{\malpha}{\mA}]} \mvtwo\\ \mtunit \paramlrdynr{\simdynr}{v}{i}{\munitty} \mtunit & \defeq & \top\\ \mtpair{\mvone}{\mvonepr} \paramlrdynr{\simdynr}{v}{i}{\mpairty{\mA}{\mApr}} \mtpair{\mvtwo}{\mvtwopr} & \defeq & \mvone \paramlrdynr{\simdynr}{v}{i}{\mA} \mvtwo \wedge \mvonepr \paramlrdynr{\simdynr}{v}{i}{\mApr} \mvtwopr\\ \mvone \paramlrdynr{\simdynr}{v}{i}{\msumty{\mA}{\mB}} \mvtwo & \defeq & (\exists \mvonepr \paramlrdynr{\simdynr}{v}{i}{\mA} \mvtwopr.~ \mvone = \mtinj{\mvonepr}\wedge \mvtwo = \mtinj{\mvtwopr}) \\ & & \vee (\exists \mvonepr \paramlrdynr{\simdynr}{v}{i}{\mB} \mvtwopr.~ \mvone = \mtinjpr{\mvonepr}\wedge \mvtwo = \mtinjpr{\mvtwopr})\\ \mvone \paramlrdynr{\simdynr}{v}{i}{\mfunty{\mA}{\mB}} \mvtwo & \defeq & \forall j \leq i. \forall (\mvonepr \paramlrdynr{\simdynr}{v}{j}{\mA} \mvtwopr).~ \mtapp{\mvone}{\mvonepr} \paramlrdynr{\simdynr}{t}{j}{\mB} \mtapp{\mvtwo}{\mvtwopr}\\\\ \cdot \paramlrdynr{\simdynr}{v}{i}{\cdot} \cdot & \defeq & \top\\ \mgammain{1},\mvin{1}/\mx \paramlrdynr{\simdynr}{v}{i}{\menv,\mx:\mA} \mgammain{2},\mvin{2}/\mx & \defeq & \mgammain{1} \paramlrdynr{\simdynr}{v}{i}{\menv} \mgammain{2} \wedge \mvin{1} \paramlrdynr{\simdynr}{v}{i}{\mA} \mvin{2}\\\\ \menv \vDash \mtone \mathrel{\dynr\sim} \mttwo : \mA & \defeq & \forall i \in \mathbb{N}, (\mgammain{1} \paramlrdynr{\simdynr}{v}{i}{\menv} \mgammain{2}).~ \mtone[\mgammain{1}] \paramlrdynr{\simdynr}{t}{i}{\mA} \mttwo[\mgammain{2}]\\\\ \menv \vDash \mtone \sqsubseteq \mttwo : \mA & \defeq & \menv \vDash \mtone \mathrel{\dynr\prec} \mttwo : \mA \wedge \menv \vDash \mtone \mathrel{\dynr\succ} \mttwo : \mA \end{array} \end{mathpar} \caption{$\lambda_{T,\mho}$ Error Approximation Logical Relation} \label{fig:lr} \end{figure} The above intuition should help to understand the definition of error approximation for terms (i.e., the relations $\mathrel{\dynr\prec}_t$ and $\mathrel{\dynr\succ}_t$).
The relation $\mtone\paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo$ is defined by inspection of $\mtone$'s behavior: it holds if $\mtone$ is still running after $i+1$ unrolls; or if it steps to an error in at most $i$ unrolls; or if it results in a value in at most $i$ unrolls and also $\mttwo$ runs to a value and those values are related for the remaining steps. The relation $\mtone \paramlrdynr{\gtdynr}{t}{i}{\mA} \mttwo$ is instead defined by inspection of $\mttwo$'s behavior: it holds if $\mttwo$ is still running after $i+1$ unrolls; or if $\mttwo$ steps to an error in at most $i$ unrolls and then $\mtone$ errors as well; or if $\mttwo$ steps to a value, and either $\mtone$ errors or it steps to a value related for the remaining steps. While the relations $\mathrel{\dynr\prec}_t$ and $\mathrel{\dynr\succ}_t$ on terms are different, fortunately, the relations on values are essentially the same, so we abstract over the two cases by letting the symbol $\mathrel{\dynr\sim}$ range over either $\mathrel{\dynr\prec}$ or $\mathrel{\dynr\succ}$. For values of recursive type, if the step-index is $0$, we consider them related, because otherwise we would need to perform an unroll to inspect them further. Otherwise, we decrement the index and check if they are related. Decrementing the index here is exactly what makes the definition of the relation well-founded. For the standard types, the value relation definition is indeed standard: pairs are related when their components are related, sums must be the same case with related contents, and functions must be related when applied to any related values in the future (i.e., when we may have exhausted some of the available steps). Finally, we extend these relations to \emph{open} terms in the standard way: we define substitutions to be related pointwise (similar to products) and then say that $\menv \vDash \mtone \mathrel{\dynr\sim} \mttwo : \mA$ holds if for every pair of substitutions $\mgammaone, \mgammatwo$ related for $i$ steps, the terms after substitution, written $\mtone[\mgammaone]$ and $\mttwo[\mgammatwo]$, are related for $i$ steps. Then our resulting relation $\menv \vDash \mtone \sqsubseteq \mttwo$ is defined to hold when $\mtone$ error approximates $\mttwo$ up to divergence of $\mtone$ ($\mathrel{\dynr\prec}$), \emph{and} up to divergence of $\mttwo$ ($\mathrel{\dynr\succ}$).
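To see the definitions at work, consider two degenerate cases (a worked example; here $\Omega$ is a diverging term that performs an unroll on every iteration, so it takes more than $i$ steps for every $i$). First, $\mfontsym{\mho} \paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo$ holds for every $i$ and $\mttwo$ by the second disjunct with $j = 0$, and $\mfontsym{\mho} \paramlrdynr{\gtdynr}{t}{i}{\mA} \mttwo$ holds as well, since whichever disjunct $\mttwo$'s behavior selects, the condition on the left side is discharged by $\mfontsym{\mho} \mathrel{\Mapsto} \mfontsym{\mho}$. So the logical relation makes the error a least element, as befits error approximation. Second, $\Omega \paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo$ holds vacuously for every $i$ by the first disjunct, but $\Omega \paramlrdynr{\gtdynr}{t}{0}{\munitty} \mtunit$ already fails: $\mtunit$ is a value in $0$ steps, and $\Omega$ neither errors nor produces a value. Thus the conjunction $\sqsubseteq$ correctly refuses to relate a diverging term to a convergent one, even though $\mathrel{\dynr\prec}$ alone cannot see the difference.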
\iflong \begin{figure} \begin{mathpar} \inferrule{~}{\semhomoltdyn{\menv}{\mfontsym{\mho}}{\mfontsym{\mho}}{\mty}} \inferrule {\mvar : \mty \in \menv} {\semhomoltdyn{\menv}{\mvar}{\mvar}{\mty}} \inferrule {\semhomoltdyn{\menv}{\mtone}{\mttwo}{\mA} \and \semhomoltdyn{\menv,\mx:\mA}{\msone}{\mstwo}{\mB}} {\semhomoltdyn{\menv}{\mtlet{\mx}{\mtone}{\msone}}{\mtlet{\mx}{\mttwo}{\mstwo}}{\mB}} \inferrule{\semhomoltdyn{\menv}{\mtermone}{\mtermtwo}{\mA[\mmuty{\malpha}{\mA}/\malpha]}} {\semhomoltdyn{\menv}{\mtroll{\mmuty{\malpha}{\mA}}{\mtermone}}{\mtroll{\mmuty{\malpha}{\mA}}{\mtermtwo}}{\mmuty{\malpha}{\mA}}} \inferrule{\semhomoltdyn{\menv}{\mtermone}{\mtermtwo}{\mmuty{\malpha}{\mA}}} {\semhomoltdyn{\menv}{\mtunroll{\mtermone}}{\mtunroll{\mtermtwo}}{\mA[\mmuty{\malpha}{\mA}/\malpha]}} \inferrule {~} {\semhomoltdyn{\menv}{\mtunit}{\mtunit}{\munitty}} \inferrule {\semhomoltdyn{\menv}{\mtone}{\mttwo}{\mA}\and \semhomoltdyn{\menv}{\msone}{\mstwo}{\mB}} {\semhomoltdyn{\menv}{\mtpair{\mtone}{\msone}}{\mtpair{\mttwo}{\mstwo}}{\mA \mathbin{\mfontsym{\times}} \mB}} \inferrule {\semhomoltdyn{\menv}{\mtone}{\mttwo}{\mAone\mathbin{\mfontsym{\times}} \mAtwo}\and \semhomoltdyn{\menv,\mx:\mAone,\my:\mAtwo}{\msone}{\mstwo}{\mB}} {\semhomoltdyn{\menv}{\mtmatchpair{\mx}{\my}{\mtone}{\msone}}{\mtmatchpair{\mx}{\my}{\mttwo}{\mstwo}}{\mB}} \inferrule {\semhomoltdyn{\menv}{\mtermone}{\mtermtwo}{\mA}} {\semhomoltdyn{\menv}{\mtinj{\mtermone}}{\mtinj{\mtermtwo}}{\mA \mathbin{\mfontsym{+}} \mApr}} \inferrule {\semhomoltdyn{\menv}{\mtermone}{\mtermtwo}{\mApr}} {\semhomoltdyn{\menv}{\mtinjpr{\mtermone}}{\mtinjpr{\mtermtwo}}{\msumty{\mA}{\mApr}}} \inferrule {\semhomoltdyn{\menv}{\mtermone}{\mtermtwo}{\mA \mathbin{\mfontsym{+}} \mApr} \and \semhomoltdyn{\menv,\mx:\mA}{\msone}{\mstwo}{\mB} \and \semhomoltdyn{\menv,\mxpr:\mApr}{\msonepr}{\mstwopr}{\mB}} {\semhomoltdyn{\menv}{\mtcase{\mtermone}{\mx}{\msone}{\mxpr}{\msonepr}}{\mtcase{\mtermtwo}{\mx}{\mstwo}{\mxpr}{\mstwopr}}{\mB}} \inferrule {\semhomoltdyn{\menv,\mx:\mA}{\mtone}{\mttwo}{\mB}} {\semhomoltdyn{\menv}{\mtufun{\mx}{\mtone}}{\mtufun{\mx}{\mttwo}}{\mA \mathbin{\mfontsym{\to}} \mB}} \inferrule {\semhomoltdyn{\menv}{\mtone}{\mttwo}{\mA \mathbin{\mfontsym{\to}} \mB} \and \semhomoltdyn{\menv}{\msone}{\mstwo}{\mA}} {\semhomoltdyn{\menv}{\mtone\,\msone}{\mttwo\,\mstwo}{\mB}} \end{mathpar} \caption{$\lambda_{T,\mho}$ Error Approximation Congruence Rules} \label{fig:congruence} \end{figure} \fi We need the following standard lemmas. \begin{lemma}[Downward Closure] \label{lem:downward-closed} If $j \leq i$ then \begin{enumerate} \item If $\mtone \paramlrdynr{\simdynr}{t}{i}{\mA} \mttwo$ then $\mtone \paramlrdynr{\simdynr}{t}{j}{\mA} \mttwo$. \item If $\mvone \paramlrdynr{\simdynr}{v}{i}{\mA} \mvtwo$ then $\mvone \paramlrdynr{\simdynr}{v}{j}{\mA} \mvtwo$. \end{enumerate} \end{lemma} \iflong \begin{proof} By lexicographic induction on the pair $(i,\mA)$. \end{proof} \fi \begin{lemma}[Anti-Reduction] \label{lem:anti-red} This lemma is different for the two relations, as we allow arbitrary steps on the ``divergence greater-than'' side. \begin{enumerate} \item If $\mtone \mathrel{\ltdyn\prec}^{i}_{t,\mA} \mttwo$ and $\mtonepr \bigstep{j} \mtone$ and $\mttwopr \mathrel{\Mapsto} \mttwo$ then $\mtonepr \mathrel{\ltdyn\prec}^{i + j}_{t,\mA} \mttwopr$. \item If $\mtone \mathrel{\ltdyn\succ}^{i}_{t,\mA} \mttwo$ and $\mttwopr \bigstep{j} \mttwo$ and $\mtonepr \mathrel{\Mapsto} \mtone$, then $\mtonepr \mathrel{\ltdyn\succ}^{i + j}_{t,\mA} \mttwopr$.
\end{enumerate} \iflong A simple corollary that applies in common cases to both relations is that if $\mtone \mathrel{\ltdyn\sim}^{i}_{t,\mA} \mttwo$ and $\mtonepr \bigstep{0} \mtone$ and $\mttwopr \bigstep{0} \mttwo$, then $\mtonepr \mathrel{\ltdyn\sim}^{i}_{t,\mA} \mttwopr$. \fi \end{lemma} \iflong \begin{proof} By direct inspection and downward closure (\lemref{lem:downward-closed}). \end{proof} \fi \begin{lemma}[Monadic Bind] \label{lem:monadic-bind} For any $i \in \mathbb{N}$ and evaluation contexts $\mectxtone\hw{\cdot:\mA} : \mB$ and $\mectxttwo\hw{\cdot:\mA} : \mB$, if for any $j \leq i$ and $\mvone \paramlrdynr{\simdynr}{v}{j}{\mA} \mvtwo$, we can show $\mectxtone\hw{\mvone} \paramlrdynr{\simdynr}{t}{j}{\mB} \mectxttwo\hw{\mvtwo}$ holds, then for any $\mtone \paramlrdynr{\simdynr}{t}{i}{\mA} \mttwo$, it is the case that $\mectxtone\hw{\mtone} \paramlrdynr{\simdynr}{t}{i}{\mB}\mectxttwo\hw{\mttwo}$. \end{lemma} \iflong \begin{proof} We consider the proof for $\paramlrdynr{\gtdynr}{t}{i}{\mA}$; the other is similar/easier. By case analysis of $\mtone \paramlrdynr{\gtdynr}{t}{i}{\mA} \mttwo$. \begin{enumerate} \item If $\mttwo$ takes $i+1$ steps, so does $\mectxttwo\hw{\mttwo}$. \item If $\mttwo \stepsin{j\leq i} \mfontsym{\mho}$ and $\mtone \stepstar \mfontsym{\mho}$, then first of all $\mectxttwo\hw{\mttwo} \stepsin{j}\mectxttwo\hw{\mfontsym{\mho}}\step \mfontsym{\mho}$. If $j = i$, we are done by the first disjunct. Otherwise $\mectxttwo\hw{\mttwo} \stepsin{j+1\leq i} \mfontsym{\mho}$ and $\mectxtone\hw{\mtone} \stepstar \mfontsym{\mho}$. \item Assume there exist $j \leq i$, $\mvone \paramlrdynr{\gtdynr}{v}{i - j}{\mA} \mvtwo$ and $\mttwo \stepsin{j} \mvtwo$ and $\mtone \stepstar \mvone$. Then by assumption, $\mectxtone\hw{\mvone} \paramlrdynr{\gtdynr}{t}{i - j}{\mB} \mectxttwo\hw{\mvtwo}$. Then by antireduction (\lemref{lem:anti-red}), $\mectxtone\hw{\mtone}\paramlrdynr{\gtdynr}{t}{i}{\mB}\mectxttwo\hw{\mttwo}$. \end{enumerate} \end{proof} \fi We then prove that our logical relation is sound for observational error approximation by showing that it is a congruence relation \ifshort(see the extended version \cite{newahmed2018-extended})\fi and showing that if we can prove error approximation up to divergence on the left \emph{and} on the right, then we have true error approximation. \iflong \begin{lemma}[Congruence for Logical Relation] \label{lem:cong} All of the congruence rules in \figref{fig:congruence} are valid. \end{lemma} \begin{proof} Each case is done by proving the implication for $\mathrel{\ltdyn\prec}^i$ and $\mathrel{\ltdyn\succ}^i$. Most cases follow by monadic bind (\lemref{lem:monadic-bind}), downward closure (\lemref{lem:downward-closed}) and direct use of the inductive hypotheses. We show some illustrative cases. \begin{enumerate} \item Given $\mgammaone \paramlrdynr{\simdynr}{v}{i}{\menv} \mgammatwo$, we need to show $\mtfun{\mx}{\mA}{\mtone[\mgammaone]} \paramlrdynr{\simdynr}{t}{i}{\mA\mathbin{\mfontsym{\to}}\mB} \mtfun{\mx}{\mA}{\mttwo[\mgammatwo]}$. Since they are values, we show they are related values. Given any $\mvonepr \paramlrdynr{\simdynr}{v}{j}{\mA} \mvtwopr$ with $j\leq i$, each side $\beta$ reduces in $0$ unroll steps, so it is sufficient to show \[ \mtone[\mgammaone,\mvonepr/\mx] \paramlrdynr{\simdynr}{t}{j}{\mB} \mttwo[\mgammatwo,\mvtwopr/\mx]\] which follows by the inductive hypothesis, downward closure, and the definition of the substitution relation.
\end{enumerate} \end{proof} \fi \begin{theorem}[Logical Relation implies Observational Error Approximation] \label{thm:log-to-obs} If $\menv \vDash \mtone \sqsubseteq \mttwo : \mA$, then $\menv \vDash \mtone \sqsubseteq^{\text{obs}} \mttwo : \mA$. \end{theorem} \iflong\begin{proof} If $\menv\vDash \mtone \sqsubseteq \mttwo : \mA$, then for any closing context, by \cref{lem:cong}, $\cdot \vDash \mC[\mtone] \sqsubseteq \mC[\mttwo] : \mB$ holds. Then we do a case analysis of $\mC[\mtone]$'s behavior. \begin{enumerate} \item If $\mC[\mtone]$ diverges, then for any $i$, since $\mC[\mtone] \paramlrdynr{\gtdynr}{t}{i}{\mB} \mC[\mttwo]$, only the case $\mC[\mttwo] \bigstep{i+1} \mtpr$ is possible, so $\mC[\mttwo]$ also diverges. \item If $\mC[\mtone] \bigstep{i} \mfontsym{\mho}$ for some $i$, we're done. \item If $\mC[\mtone] \bigstep{i} \mtunit$, then because $\mC[\mtone] \paramlrdynr{\ltdynr}{t}{i}{\mB} \mC[\mttwo]$, we know $\mC[\mttwo] \mathrel{\Mapsto} \mtunit$. \end{enumerate} \end{proof}\fi \subsection{Approximation and Equivalence Lemmas} \label{sec:lemmas} The step-indexed logical relation is on the face of it quite complex, especially due to the splitting of error approximation into two step-indexed relations. However, we should view the step-indexed relation as an ``implementation'' of the high-level concept of error approximation, and we work as much as possible with the error approximation relation $\menv \vDash \mtone \sqsubseteq \mttwo : \mA$. In order to do this we now prove some high-level lemmas, which are proven using the step-indexed relations, but allow us to develop conceptual proofs of the key theorems of the paper. First, there is reflexivity, also known as the \emph{fundamental lemma}, which is proved using the same congruence cases as the soundness theorem (\cref{thm:log-to-obs}). Note that by the definition of our logical relation, this is really a kind of \emph{monotonicity} theorem for every term in the language, the first component of our graduality proof. \begin{corollary}[Reflexivity] \label{lem:fund-lemma} If $\menv \vdash \mt : \mA$ then $\menv \vDash \mt \sqsubseteq \mt : \mA$. \end{corollary} \iflong \begin{proof} By induction on the typing derivation of $\mt$, in each case using the corresponding congruence rule \ifshort proved earlier. \fi \iflong from \lemref{lem:cong}.\fi \end{proof} \fi Next, crucial to reasoning about ep pairs is the use of \emph{transitivity}, a notoriously tedious property to prove for step-indexed logical relations. This is where our splitting of error approximation into two pieces proves essential, adapting the approach for divergence-approximation relations introduced in \citet{ahmed06:lr}. The proof works as follows: due to the function and open-term cases, we cannot simply prove transitivity in the limit directly. Instead we get a kind of ``asymmetric'' transitivity: if $\mtone \paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo$ and \emph{for any} $j\in\mathbb{N}$, $\mttwo \paramlrdynr{\ltdynr}{t}{j}{\mA} \mtthree$, then we know $\mtone \paramlrdynr{\ltdynr}{t}{i}{\mA} \mtthree$. We abbreviate the $\forall j$ part as $\mttwo \paramlrdynr{\ltdynr}{t}{\omega}{\mA} \mtthree$ in what follows. The key to the proof is in the function and open terms cases, which rely on reflexivity, \cref{lem:fund-lemma}, as in \citet{ahmed06:lr}. Reflexivity says that when we have $\mvone \paramlrdynr{\ltdynr}{v}{i}{\mA} \mvtwo$ then we also have $\mvtwo \paramlrdynr{\ltdynr}{v}{\omega}{\mA} \mvtwo$, which allows us to use the inductive hypothesis.
\begin{lemma}[Transitivity for Closed Terms/Values] \label{lem:trans-closed-homo} The following are true for any $\mA$. \begin{enumerate} \item If $\mtin{1} \paramlrdynr{\ltdynr}{t}{i}{\mA} \mtin{2}$ and $\mtin{2} \paramlrdynr{\ltdynr}{t}{\omega}{\mA} \mtin{3}$ then $\mtin{1} \paramlrdynr{\ltdynr}{t}{i}{\mA}\mtin{3}$. \item If $\mvin{1} \paramlrdynr{\ltdynr}{v}{i}{\mA} \mvin{2}$ and $\mvin{2} \paramlrdynr{\ltdynr}{v}{\omega}{\mA} \mvin{3}$ then $\mvin{1} \paramlrdynr{\ltdynr}{v}{i}{\mA}\mvin{3}$. \end{enumerate} Similarly, \begin{enumerate} \item If $\mtin{1} \paramlrdynr{\gtdynr}{t}{\omega}{\mA} \mtin{2}$ and $\mtin{2} \paramlrdynr{\gtdynr}{t}{i}{\mA} \mtin{3}$ then $\mtin{1} \paramlrdynr{\gtdynr}{t}{i}{\mA}\mtin{3}$. \item If $\mvin{1} \paramlrdynr{\gtdynr}{v}{\omega}{\mA} \mvin{2}$ and $\mvin{2} \paramlrdynr{\gtdynr}{v}{i}{\mA} \mvin{3}$ then $\mvin{1} \paramlrdynr{\gtdynr}{v}{i}{\mA}\mvin{3}$. \end{enumerate} \end{lemma} \iflong \begin{proof} We prove the $\paramlrdynr{\ltdynr}{t}{i}{\mA}$ and $\paramlrdynr{\ltdynr}{v}{i}{\mA}$ cases mutually by induction on $(i,\mA)$. The other logical relation is similar. Most value cases are simple uses of the inductive hypotheses. \begin{enumerate} \item (Terms) By case analysis of $\mtin{1} \paramlrdynr{\ltdynr}{t}{i}{\mA} \mtin{2}$. \begin{enumerate} \item If $\mtin{1} \bigstep{i+1} \mtonepr$ or $\mtin{1} \bigstep{j\leq i} \mfontsym{\mho}$, we have the result. \item Let $j\leq i$, $k \in \mathbb{N}$ and $(\mvin{1} \paramlrdynr{\ltdynr}{v}{i-j}{\mA} \mvin{2})$ with $\mtin{1} \bigstep{j} \mvin{1}$ and $\mtin{2} \bigstep{k} \mvin{2}$. By inductive hypothesis for values, it is sufficient to show that $\mtin{3} \mathrel{\Mapsto} \mvin{3}$ and $\mvin{2} \paramlrdynr{\ltdynr}{v}{\omega}{\mA} \mvin{3}$. Since $\mtin{2} \paramlrdynr{\ltdynr}{t}{\omega}{\mA} \mtin{3}$, in particular we know $\mtin{2} \paramlrdynr{\ltdynr}{t}{k + l}{\mA} \mtin{3}$ for every $l \in \mathbb{N}$, so since $\mtin{2} \bigstep{k} \mvin{2}$, we know that $\mtin{3} \mathrel{\Mapsto} \mvin{3}$ and $\mvin{2} \paramlrdynr{\ltdynr}{v}{l}{\mA} \mvin{3}$, for every $l$, i.e., $\mvin{2} \paramlrdynr{\ltdynr}{v}{\omega}{\mA} \mvin{3}$. \end{enumerate} \item (Function values) Suppose $\mvin{1} \paramlrdynr{\ltdynr}{v}{i}{\mfunty{\mA}{\mB}} \mvin{2}$ and $\mvin{2} \paramlrdynr{\ltdynr}{v}{\omega}{\mfunty{\mA}{\mB}} \mvin{3}$. Then, let $j \leq i$ and $\mvinpr{1} \paramlrdynr{\ltdynr}{v}{j}{\mA} \mvinpr{2}$. We need to show $\mtapp{\mvin{1}}{\mvinpr{1}} \paramlrdynr{\ltdynr}{t}{j}{\mB} \mtapp{\mvin{3}}{\mvinpr{2}}$. By inductive hypothesis, it is sufficient to show $\mtapp{\mvin{1}}{\mvinpr{1}} \paramlrdynr{\ltdynr}{t}{j}{\mB} \mtapp{\mvin{2}}{\mvinpr{2}}$ and $\mtapp{\mvin{2}}{\mvinpr{2}} \paramlrdynr{\ltdynr}{t}{\omega}{\mB} \mtapp{\mvin{3}}{\mvinpr{2}}$. The former is clear. The latter follows by the congruence rule for application \lemref{lem:cong} and reflexivity \cref{lem:fund-lemma} on $\mvinpr{2}$: since $\cdot \vdash \mvinpr{2}: \mA$, we have $\mvinpr{2} \paramlrdynr{\ltdynr}{v}{\omega}{\mA} \mvinpr{2}$. \end{enumerate} \end{proof} \fi \begin{lemma}[Transitivity] \label{lem:trans} If $\menv \vDash \mtone \sqsubseteq \mttwo : \mA$ and $\menv \vDash \mttwo \sqsubseteq \mtin{3} : \mA$ then $\menv \vDash \mtone \sqsubseteq \mtin{3} : \mA$.
\end{lemma} \iflong \begin{proof} The argument is essentially the same as the function value case, invoking the fundamental lemma \cref{lem:fund-lemma} for each component of the substitutions and transitivity for the closed relation \lemref{lem:trans-closed-homo}. \end{proof} \fi Next, we want to extract approximation and equivalence principles for \emph{open} programs from syntactic operational properties of \emph{closed} programs. First, obviously any operational reduction is a contextual equivalence, and the next lemma extends that to open programs. Note that we use $\mathrel{\gtdyn\ltdyn}$ to mean approximation in both directions, i.e., equivalence: \begin{lemma}[Open $\beta$ Reductions] \label{lem:open-beta} Given $\menv \vdash \mt : \mA$, $\menv \vdash \mtpr : \mA$, if for every $\mgamma : \menv$, $\mt[\mgamma] \mathrel{\Mapsto} \mtpr{}[\mgamma]$, then $\menv \vDash \mt \mathrel{\gtdyn\ltdyn} \mtpr : \mA$. \end{lemma} \iflong\begin{proof} By reflexivity \cref{lem:fund-lemma} on $\mtpr, \mgamma$ and anti-reduction \cref{lem:anti-red}. \end{proof}\fi We call this open $\beta$ reduction because we will use it to justify equivalences that look like an operational reduction, but have open values (i.e., including variables) rather than closed ones as in the operational semantics. For instance, \[ \mtlet{\mx}{\my}{\mt} \mathrel{\gtdyn\ltdyn} \mt[\my/\mx] \] and \[\mtmatchpair{\mx}{\my}{\mtpair{\mxpr}{\mypr}}{\mt} \mathrel{\gtdyn\ltdyn} \mt[\mxpr/\mx,\mypr/\my] \] Additionally, it is convenient to use \emph{$\eta$} expansions for our types as well. Note that since we are using a call-by-value language, the $\eta$ expansion for functions is restricted to \emph{values}. \begin{lemma}[$\eta$ Expansion] {~} \begin{enumerate} \item For any $\menv \vdash \mv : \mA \mathbin{\mfontsym{\to}} \mB$, $\mv \mathrel{\gtdyn\ltdyn} \mtfun{\mx}{\mA}{\mv\,\mx}$ \item For any $\menv,\mx:\mA\mathbin{\mfontsym{+}} \mApr,\menvpr \vdash \mt : \mB$, \[ \mt \mathrel{\gtdyn\ltdyn} \mtcase{\mx}{\my}{\mt[\mtinj\my/\mx]}{\mypr}{\mt[\mtinjpr\mypr/\mx]} \] \item For any $\menv,\mx:\mA\mathbin{\mfontsym{\times}} \mApr,\menvpr \vdash \mt : \mB$, \[ \mt \mathrel{\gtdyn\ltdyn} \mtmatchpair{\my}{\mypr}{\mx}{\mt[\mtpair{\my}{\mypr}/\mx]} \] \end{enumerate} \end{lemma} \iflong \begin{proof} All are consequences of \cref{lem:open-beta}. \end{proof} \fi Next, with term constructors that involve continuations, we often need to rearrange programs, as in the ``case-of-case'' transformation. These are called commuting conversions and are presented in \figref{fig:comm-conv}. \begin{figure} \begin{mathpar} \begin{array}{rcl} \mectxt\hw{\mtlet{\mx}{\mt}{\ms}} & \mathrel{\gtdyn\ltdyn} & \mtlet{\mx}{\mt}{\mectxt\hw\ms}\\ \mectxt\hw{\mtmatchpair{\mx}{\my}{\mt}{\ms}} & \mathrel{\gtdyn\ltdyn} & \mtmatchpair{\mx}{\my}{\mt}{\mectxt\hw\ms}\\ \mectxt\hw{\mtcase{\mt}{\mx}{\ms}{\mxpr}{\mspr}} & \mathrel{\gtdyn\ltdyn} & \mtcase{\mt}{\mx}{\mectxt\hw\ms}{\mxpr}{\mectxt\hw\mspr} \end{array} \end{mathpar} \caption{Commuting Conversions} \label{fig:comm-conv} \end{figure} \begin{lemma}[Commuting Conversions] \label{lem:comm-conv} All of the commuting conversions in \figref{fig:comm-conv} are equivalences. \end{lemma} \iflong\begin{proof} By monadic bind, anti-reduction and reflexivity (\cref{lem:monadic-bind,lem:anti-red,lem:fund-lemma}). \end{proof}\fi Next, the following lemma is the main reason we so heavily use \emph{evaluation contexts}. It is a kind of open version of the monadic bind lemma \cref{lem:monadic-bind}.
\begin{lemma}[Evaluation contexts are linear] \label{lem:evctx-linear} If $\menv \vdash \mt : \mA$ and $\menv,\mx:\mA \vdash \mectxt\hw{\mx} : \mB$, then \[ \mtlet{\mx}{\mt}{\mectxt\hw{\mx}} \mathrel{\gtdyn\ltdyn} \mectxt\hw{\mt} \] \end{lemma} \iflong\begin{proof} By a commuting conversion and an open $\beta$ reduction, connected by transitivity (\cref{lem:comm-conv,lem:open-beta,lem:trans}): \begin{align*} \mtlet{\mx}{\mt}{\mectxt\hw{\mx}} &\mathrel{\gtdyn\ltdyn} \mectxt\hw{\mtlet{\mx}{\mt}{\mx}}\\ &\mathrel{\gtdyn\ltdyn} \mectxt\hw{\mt} \end{align*} \end{proof}\fi \iflong As a simple example, consider the following standard equivalence of let and $\lambda$, which we will need later and prove using the above lemmas: \begin{lemma}[Let-$\lambda$ Equivalence] \label{lem:let-lambda} For any $\menv, \mx:\mA \vdash \mt : \mB$ and $\menv \vdash \ms : \mA$, \[ (\mtfun{\mx}{\mA}{\mt})\,\ms \mathrel{\gtdyn\ltdyn} \mtlet{\mx}{\ms}{\mt} \] \end{lemma} \begin{proof} First, we lift $\ms$ out using linearity of evaluation contexts, then perform an open $\beta$ reduction, linked by transitivity: \begin{align*} (\mtfun{\mx}{\mA}{\mt})\,\ms &\mathrel{\gtdyn\ltdyn} \mtletvert{\mx}{\ms}{(\mtfun{\mx}{\mA}{\mt})\,\mx}\\ &\mathrel{\gtdyn\ltdyn}\mtletvert{\mx}{\ms}{\mt} \end{align*} \end{proof} \fi The concepts of pure and terminating terms are useful because when subterms are pure or terminating, they can be moved around to prove equivalences more easily. \begin{definition}[Pure, Terminating Terms] \begin{enumerate} \item A term $\menv\vdash \mt : \mA$ is \emph{terminating} if for any closing $\mgamma$, either $\mt[\mgamma] \mathrel{\Mapsto} \mfontsym{\mho}$ or $\mt[\mgamma] \mathrel{\Mapsto} \mv$ for some $\mv$. \item A term $\menv\vdash \mt : \mA$ is \emph{pure} if for any closing $\mgamma$, $\mt[\mgamma] \mathrel{\Mapsto} \mv$ for some $\mv$. \end{enumerate} \end{definition} \iflong The following terminology and proof are taken from \cite{fuhrmann1999direct}. \begin{lemma}[Pure Terms are Thunkable] For any pure $\menv \vdash \mt : \mA$, \[ \mtlet\mx\mt{\mtfun{\my}{\mB}{\mx}} \mathrel{\gtdyn\ltdyn} \mtfun\my\mB\mt \] \end{lemma} \begin{proof} There are two cases, $\mathrel{\dynr\prec}$ and $\mathrel{\dynr\succ}$. \begin{enumerate} \item Let $\mgammaone \paramlrdynr{\ltdynr}{v}{i}{\menv} \mgammatwo$ and define $\mtone = \mt{}[\mgammaone]$ and $\mttwo = \mt{}[\mgammatwo]$. Then we know $\mtone \paramlrdynr{\ltdynr}{t}{i}{\mA} \mttwo$. \begin{enumerate} \item If $\mtone \bigstep{i+1}$, we're done. \item It is impossible that $\mtone \mathrel{\Mapsto} \mfontsym{\mho}$ because $\mt$ is pure. \item If $\mtone\bigstep{j\leq i} \mvone$, then we know that $\mttwo \mathrel{\Mapsto} \mvtwo$ with $\mvone \paramlrdynr{\ltdynr}{v}{i-j}{\mA} \mvtwo$. Next, \[ \mtlet\mx\mtone{\mtfun{\my}{\mB}{\mx}} \bigstep{j} \mtfun{\my}{\mB}{\mvone} \] Then it is sufficient to show $\mtfun{\my}{\mB}{\mvone} \paramlrdynr{\ltdynr}{v}{i-j}{\mB\mathbin{\mfontsym{\to}}\mA} \mtfun{\my}{\mB}{\mttwo}$, i.e., that for any $\mvonepr \paramlrdynr{\ltdynr}{v}{k\leq(i-j)}{\mB} \mvtwopr$, \[ (\mtfun{\my}{\mB}{\mvone})\,\mvonepr \paramlrdynr{\ltdynr}{t}{k}{\mA} (\mtfun{\my}{\mB}{\mttwo})\,\mvtwopr \] The left side steps \[ (\mtfun{\my}{\mB}{\mvone})\,\mvonepr \bigstep{0} \mvone \] And the right side steps \[ (\mtfun{\my}{\mB}{\mttwo})\,\mvtwopr \bigstep{0} \mttwo \mathrel{\Mapsto} \mvtwo \] And $\mvone \paramlrdynr{\ltdynr}{v}{k}{\mA} \mvtwo$ by the assumption above and downward closure (\lemref{lem:downward-closed}).
\end{enumerate} \item Let $\mgammaone \paramlrdynr{\gtdynr}{v}{i}{\menv} \mgammatwo$ and define $\mtone = \mt{}[\mgammaone]$ and $\mttwo = \mt{}[\mgammatwo]$. Then we know $\mtone \paramlrdynr{\gtdynr}{t}{i}{\mA} \mttwo$. Since $\mt$ is pure we know $\mtone \mathrel{\Mapsto} \mvone$ and, for some $j$, $\mttwo \bigstep{j} \mvtwo$. By anti-reduction, it is then sufficient to show ${\mtfun\my\mB\mvone} \paramlrdynr{\gtdynr}{v}{i}{\mB\mathbin{\mfontsym{\to}}\mA} \mtfun\my\mB\mttwo$. Given any $\mvonepr \paramlrdynr{\gtdynr}{v}{k}{\mB} \mvtwopr$ with $k \leq i$, we need to show \[ (\mtfun\my\mB\mvone)\,\mvonepr \paramlrdynr{\gtdynr}{t}{k}{\mA} (\mtfun\my\mB\mttwo)\,\mvtwopr \] The $\beta$ reduction takes $0$ steps, and then $\mttwo$ starts running on the right. If $j > k$, the right side is still running after $k+1$ steps and there is nothing left to show. Otherwise $j \leq k$, and from $\mtone \paramlrdynr{\gtdynr}{t}{i}{\mA} \mttwo$ we know $\mvone \paramlrdynr{\gtdynr}{v}{i-j}{\mA} \mvtwo$, so by downward closure $\mvone \paramlrdynr{\gtdynr}{v}{k-j}{\mA} \mvtwo$, which is the needed result. \end{enumerate} \end{proof} \fi \begin{lemma}[Pure Terms are Essentially Values] \label{lem:pure-subst} If $\menv \vdash \mt : \mA$ is a pure term, then for any $\menv,\mx : \mA \vdash \ms : \mB$, $\mtlet{\mx}{\mt}{\ms} \mathrel{\gtdyn\ltdyn} \ms[\mt/\mx]$ holds. \end{lemma} \iflong \begin{proof} First, since by open $\beta$ we have $(\mtfun{\my}{\munitty}{\mt})\,\mtunit \mathrel{\gtdyn\ltdyn} \mt$, by congruence (\cref{lem:cong}) \[ \ms[\mt/\mx] \mathrel{\gtdyn\ltdyn} \ms[(\mtfun{\my}{\munitty}{\mt})\,\mtunit/\mx] \] And by reverse $\beta$ reduction, this is further equivalent to \[ \mtletvert{\mxin{f}}{\mtfun\my\munitty\mt}{ \ms[(\mxin{f}\,\mtunit)/\mx] } \] By thunkability of $\mt$ and a commuting conversion this is equivalent to: \[ \mtletvert\mx\mt{ \mtletvert{\mxin{f}}{\mtfun\my\munitty\mx}{ \ms{}[(\mxin{f}\,\mtunit)/\mx] } } \] which by $\beta$ reduction at each occurrence of $\mxin{f}\,\mtunit$ is: \[ \mtletvert\mx\mt{ \mtletvert{\mxin{f}}{\mtfun\my\munitty\mx}{ \ms } } \] And a final open $\beta$ reduction eliminates the now-unused $\mxin{f}$: \[ \mtletvert\mx\mt{ \ms} \] \end{proof} \fi Also, since we consider all type errors to be equal, terminating terms can be reordered: \begin{lemma}[Terminating Terms Commute] \label{lem:term-comm} If $\menv \vdash \mt : \mA$ and $\menv\vdash \mtpr : \mApr$ are terminating and $\menv,\mx:\mA,\mxpr:\mApr \vdash \ms : \mB$, then \( \mtlet{\mx}{\mt}{ \mtlet{\mxpr}{\mtpr}{ \ms} } \mathrel{\gtdyn\ltdyn} \mtlet{\mxpr}{\mtpr}{ \mtlet{\mx}{\mt}{ \ms} } \) \end{lemma} \iflong \begin{proof} By symmetry it is sufficient to prove one direction. Let $i \in \mathbb{N}$. \begin{enumerate} \item Let $\mgammaone \paramlrdynr{\ltdynr}{v}{i}{\menv} \mgammatwo$. We need to show \[ \mtletvert{\mx}{\mt{}[\mgammaone]}{ \mtletvert{\mxpr}{\mtpr{}[\mgammaone]}{ \ms{}[\mgammaone]} } \paramlrdynr{\ltdynr}{t}{i}{\mB} \mtletvert{\mxpr}{\mtpr{}[\mgammatwo]}{ \mtletvert{\mx}{\mt{}[\mgammatwo]}{ \ms{}[\mgammatwo]} } \] Note that this holds immediately if the left side diverges or errors, so in this direction no conditions on $\mt,\mtpr$ are needed. By \cref{lem:fund-lemma}, we know $\mt[\mgammaone] \paramlrdynr{\ltdynr}{t}{i}{\mA} \mt[\mgammatwo]$ and $\mtpr{}[\mgammaone] \paramlrdynr{\ltdynr}{t}{i}{\mApr} \mtpr{}[\mgammatwo]$. We do a joint case analysis on these two facts. \begin{enumerate} \item If $\mt[\mgammaone] \bigstep{i+1}$, done. \item If $\mt[\mgammaone] \bigstep{j\leq i} \mfontsym{\mho}$, done. \item If $\mt[\mgammaone] \bigstep{j\leq i} \mvone$, then also $\mt[\mgammatwo] \mathrel{\Mapsto} \mvtwo$. \begin{enumerate} \item If $\mtpr{}[\mgammaone] \bigstep{(i-j)+1}$, done. \item If $\mtpr{}[\mgammaone] \bigstep{k \leq (i-j)}\mfontsym{\mho}$, done.
\item If $\mtpr{}[\mgammaone] \bigstep{k\leq(i-j)} \mvonepr$, then $\mtpr{}[\mgammatwo] \mathrel{\Mapsto} \mvtwopr$ with $\mvonepr \paramlrdynr{\ltdynr}{v}{i-(j+k)}{\mApr} \mvtwopr$ and the result follows by \cref{lem:fund-lemma} for $\ms$ because we know \[ \ms{}[\mgammaone,\mvone/\mx,\mvonepr/\mxpr] \paramlrdynr{\ltdynr}{t}{i-(j+k)}{\mB} \ms{}[\mgammatwo,\mvtwo/\mx,\mvtwopr/\mxpr]\] \end{enumerate} \end{enumerate} \item Let $\mgammaone \paramlrdynr{\gtdynr}{v}{i}{\menv} \mgammatwo$. We need to show \[ \mtletvert{\mx}{\mt{}[\mgammaone]}{ \mtletvert{\mxpr}{\mtpr{}[\mgammaone]}{ \ms{}[\mgammaone]} } \paramlrdynr{\gtdynr}{t}{i}{\mB} \mtletvert{\mxpr}{\mtpr{}[\mgammatwo]}{ \mtletvert{\mx}{\mt{}[\mgammatwo]}{ \ms{}[\mgammatwo]} } \] By \cref{lem:fund-lemma}, we know $\mt[\mgammaone] \paramlrdynr{\gtdynr}{t}{i}{\mA} \mt[\mgammatwo]$ and $\mtpr{}[\mgammaone] \paramlrdynr{\gtdynr}{t}{i}{\mApr} \mtpr{}[\mgammatwo]$. We do a joint case analysis on these two facts. \begin{enumerate} \item If $\mtpr{}[\mgammatwo] \bigstep{i+1}$, done. \item If $\mtpr{}[\mgammatwo] \bigstep{j\leq i} \mfontsym{\mho}$ and $\mtpr{}[\mgammaone] \mathrel{\Mapsto} \mfontsym{\mho}$. In this case we know the right hand side errors, so we must show the left side errors. Since $\mt$ is \emph{terminating}, either $\mt[\mgammaone] \mathrel{\Mapsto} \mfontsym{\mho}$ (done) or $\mt[\mgammaone] \mathrel{\Mapsto} \mvone$. In the latter case we are also done because: \[ \mtletvert{\mx}{\mt{}[\mgammaone]}{ \mtletvert{\mxpr}{\mtpr{}[\mgammaone]}{ \ms{}[\mgammaone]} } \mathrel{\Mapsto} { \mtletvert{\mxpr}{\mtpr{}[\mgammaone]}{ \ms{}[\mgammaone,\mvone/\mx]} } \mathrel{\Mapsto} \mfontsym{\mho} \] \item If $\mtpr{}[\mgammatwo] \bigstep{j\leq i} \mvtwopr$ then either $\mtpr{}[\mgammaone] \mathrel{\Mapsto} \mfontsym{\mho}$ or $\mtpr{}[\mgammaone] \mathrel{\Mapsto} \mvonepr \paramlrdynr{\gtdynr}{v}{i-j}{\mApr} \mvtwopr$. Next, consider $\mt[\mgammatwo]$. \begin{enumerate} \item If $\mt[\mgammatwo] \bigstep{(i-j)+1}$, done. \item If $\mt[\mgammatwo] \bigstep{k\leq (i-j)} \mfontsym{\mho}$, then we know also that $\mt[\mgammaone] \mathrel{\Mapsto} \mfontsym{\mho}$, and there is nothing left to show. \item If $\mt[\mgammatwo] \bigstep{k\leq (i-j)} \mvtwo$, then either $\mt[\mgammaone] \mathrel{\Mapsto} \mfontsym{\mho}$ or $\mt[\mgammaone]\mathrel{\Mapsto} \mvone \paramlrdynr{\gtdynr}{v}{i-(j+k)}{\mA} \mvtwo$. If $\mtpr{}[\mgammaone]$ or $\mt{}[\mgammaone]$ errors, done; otherwise both return values and the result follows by \cref{lem:fund-lemma} for $\ms$. \end{enumerate} \end{enumerate} \end{enumerate} \end{proof} \fi \section{Casts as Embedding-Projection Pairs} \label{sec:ep-pairs} In this section, we show how arbitrary casts can be broken down into \emph{embedding-projection} pairs. First, we define type dynamism and show that casts between less and more dynamic types form an ep pair. Then we will show that every cast is a composition of an upcast and a downcast. \subsection{Embedding-Projection Pairs} First, we define ep pairs with respect to \emph{logical} approximation. Note that since logical approximation implies observational error approximation, these are also ep pairs with respect to observational error approximation. However, in theorems where we \emph{construct} new ep pairs from old ones, we will need that the input ep pairs are logical ep pairs, not just observational, because we have not proven that logical approximation is \emph{complete} for observational error approximation. As with casts, we use evaluation contexts for convenience.
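To fix intuitions, the following Haskell sketch shows the shape of an ep pair in a more familiar setting, before the formal definition below. The code is purely illustrative---the \texttt{Dyn} datatype and all names are ours, not part of the formal development: \begin{verbatim} -- A small dynamic type with two tags (illustrative only). data Dyn = DNum Int | DFun (Dyn -> Dyn)  -- The embedding is total and effect-free: it just tags. embedNum :: Int -> Dyn embedNum = DNum  -- The projection checks the tag, playing the role of the -- error term on a tag mismatch. projectNum :: Dyn -> Int projectNum (DNum n) = n projectNum _        = error "type error" \end{verbatim} Retraction says that \texttt{projectNum (embedNum n)} returns \texttt{n} for every \texttt{n}: no information is lost going up and back down. Projection says that \texttt{embedNum (projectNum d)} either errors or returns \texttt{d} itself, i.e., the round trip the other way error approximates the identity on \texttt{Dyn}.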
\begin{definition}[EP Pair] A (logical) ep pair $(\mEin{e},\mEin{p}) : \mA \mathrel{\triangleleft} \mB$ is a pair of an \emph{embedding} $\mEin{e}\hw{\cdot:\mA} : \mB$ and a \emph{projection} $\mEin{p}\hw{\cdot:\mB} :\mA$ satisfying \begin{enumerate} \item Retraction: ${\mx : \mA \vDash \mx \mathrel{\gtdyn\ltdyn} \mEin{p}\hw{\mEin{e}\hw{\mx}} : \mA}$ \item Projection: ${\my : \mB \vDash \mEin{e}\hw{\mEin{p}\hw{\my}} \sqsubseteq \my : \mB}$ \end{enumerate} \end{definition} Next, we prove that in any embedding-projection pair the embedding is pure (it always produces a value with no effects) and the projection is terminating (it either errors or produces a value). Paired with the lemmas we have proven about pure and terminating programs in the previous section, this will let us prove theorems about ep pairs more easily. \begin{lemma}[Embeddings are Pure] \label{lem:emb-pure} If $\mEin{e},\mEin{p} : \mA \mathrel{\triangleleft} \mB$ is an embedding-projection pair then $\mx : \mA \vdash \mEin{e}\hw{\mx} : \mB$ is pure. \end{lemma} \begin{proof} The ep pair property states that \( \mx : \mA \vDash \mEin{p}\hw{\mEin{e}\hw{\mx}} \mathrel{\gtdyn\ltdyn} \mx : \mA \) Given any value $\cdot \vdash \mv : \mA$, by \lemref{lem:fund-lemma}, we know \( \mv \paramlrdynr{\ltdynr}{t}{0}{\mA} \mEin{p}\hw{\mEin{e}\hw{\mv}} \) and since $\mv \bigstep{0} \mv$, this means there exists $\mvpr$ such that $\mEin{p}\hw{\mEin{e}\hw{\mv}} \mathrel{\Mapsto} \mvpr$, and since $\mEin{p}$ is an evaluation context, this means there must exist $\mvpr[2]$ with $\mEin{p}\hw{\mEin{e}\hw{\mv}} \mathrel{\Mapsto} \mEin{p}\hw{\mvpr[2]}\mathrel{\Mapsto} \mvpr$. \end{proof} \begin{lemma}[Projections are Terminating] \label{prj-term} If $\mEin{e},\mEin{p} : \mA \mathrel{\triangleleft} \mB$ is an embedding-projection pair then $\my : \mB \vdash \mEin{p}\hw{\my} : \mA$ is terminating. \end{lemma} \begin{proof} The ep pair property states that \( \my : \mB \vDash \mEin{e}\hw{\mEin{p}\hw{\my}} \sqsubseteq \my : \mB \) Given any $\mv : \mB$, by \lemref{lem:fund-lemma}, we know \( \mEin{e}\hw{\mEin{p}\hw{\mv}} \paramlrdynr{\gtdynr}{t}{0}{\mB} \mv \) so therefore either $\mEin{e}\hw{\mEin{p}\hw{\mv}}\mathrel{\Mapsto} \mfontsym{\mho}$, which because $\mEin{e}$ is pure means ${\mEin{p}\hw{\mv}} \mathrel{\Mapsto} \mfontsym{\mho}$, or $\mEin{e}\hw{\mEin{p}\hw{\mv}} \mathrel{\Mapsto} \mvpr$ which by strictness of evaluation contexts means ${\mEin{p}\hw{\mv}} \mathrel{\Mapsto}\mvpr[2]$ for some $\mvpr[2]$. \end{proof} Crucially, ep pairs can be constructed using simple function composition. First, the identity function is an ep pair by reflexivity. \begin{lemma}[Identity EP Pair] For any type $\mA$, ${\mfont{\lbrack{}{\cdot}\rbrack{}}},{\mfont{\lbrack{}{\cdot}\rbrack{}}} : \mA \mathrel{\triangleleft} \mA$. \end{lemma} Second, if we compose the embeddings one way and projections the opposite way, the result is an ep pair, by congruence. \begin{lemma}[Composition of EP Pairs] For any ep pairs $\mE_{e,1},\mE_{p,1} : \mAone \mathrel{\triangleleft} \mAtwo$ and $\mE_{e,2},\mE_{p,2} : \mAtwo \mathrel{\triangleleft} \mAin{3}$, $\mE_{e,2}\hw{\mE_{e,1}},\mE_{p,1}\hw{\mE_{p,2}} : \mAone \mathrel{\triangleleft} \mAin{3}$.
\end{lemma} \subsection{Type Dynamism} \newcommand{\bifdynrrule}[2]{\inferrule*[right={#1}]{\sAone \sqsubseteq \sAtwo \and \sBone \sqsubseteq \sBtwo}{#2{\sAone}{\sBone} \sqsubseteq #2{\sAtwo}{\sBtwo}}} \begin{figure} \flushleft{\fbox{\small{$\sA \sqsubseteq \sB$}}} \vspace{-2ex} \begin{mathpar} \inferrule*[right=Reflexivity] {~} {{\sty} \sqsubseteq {\sty}}\and \inferrule*[right=Transitivity] {{\sAone}\sqsubseteq{\sAtwo}\and {\sAtwo}\sqsubseteq{\sAthree}} {{\sAone} \sqsubseteq{\sAthree}}\and \inferrule*[right=Dyn-Top] {~} {\sA \sqsubseteq {\sfont{\mathord{?}}}}\and \bifdynrrule{Sum}{\ssumty}\and \bifdynrrule{Prod}{\spairty}\and \bifdynrrule{Fun}{\sfunty}\and \end{mathpar} \caption{Type Dynamism} \label{fig:type-dynamism} \end{figure} Next, we consider type dynamism and its relationship to the casts. The type dynamism relation is presented in \figref{fig:type-dynamism}. The relation $\sA \sqsubseteq \sB$ reads as ``$\sA$ is less dynamic than $\sB$'' or ``$\sA$ is more precise than $\sB$''. For the purposes of its definition, we can say that it is the least reflexive and transitive relation such that every type constructor is monotone and $\sfont{\mathord{?}}$ is the greatest element. Even the function type is monotone in both its input and output, and for this reason type dynamism is sometimes called \emph{na\"ive subtyping} \cite{wadler-findler09}. However, this gives us no \emph{semantic} intuition about what it could possibly mean. We propose that $\sA \sqsubseteq \sB$ should hold when the casts between $\sA$ and $\sB$ form an embedding-projection pair $\mE_{\obcast\sA\sB},\mE_{\obcast\sB\sA} : \sA \mathrel{\triangleleft} \sB$. We can then view each of the cases of the gradual guarantee as being \emph{compositional rules} for constructing ep pairs. Reflexivity and transitivity correspond to the identity and composition of ep pairs, and the monotonicity of the type constructors comes from the fact that \emph{every} functor preserves ep pairs. Taking this idea further, we can view type dynamism not just as an \emph{analysis} of pre-existing gradual type casts; by considering its simple \emph{proof theory}, we can view proofs of type dynamism as \emph{synthesizing} the definitions of casts. To accomplish this, we give a \emph{refined} formulation of the proof theory of type dynamism in \figref{fig:type-dynamism-proofs}, which includes explicit proof terms $c : \sA \sqsubseteq \sB$. The methodology behind the presentation is to make reflexivity, transitivity, and the fact that $\mathord{?}$ is a greatest element into \emph{admissible} properties of the system, rather than primitive rules. First, by making proofs admissible, we see in detail how bigger casts are built up from small pieces. Second, this formulation satisfies a \emph{canonicity} property: there is at most one proof term for any given type dynamism judgment, which simplifies the definition of the semantics. By giving a presentation where derivations are canonical, the typical ``coherence'' theorem, which says any two derivations have equivalent semantics, becomes trivial. An alternative formulation would define an ep-pair semantics where reflexivity and transitivity denote identity and composition of ep pairs, and then prove that any two derivations have equivalent semantics.
Instead, we define admissible constructions for reflexivity and transitivity, and then prove a \emph{decomposition lemma} (\cref{lem:decomposition}) that states that the ep-pair semantics interprets our admissible reflexivity derivation as the identity and our admissible transitivity derivation as a composition. In short, our presentation makes it obvious that the semantics is coherent, but not that it is built out of composition, whereas the alternative makes it obvious that the semantics is built out of composition, but not that it is coherent. \begin{figure} \flushleft{\fbox{\small{$c : \sA \sqsubseteq \sB$}}} \vspace{-4ex} \begin{mathpar} \inferrule {\sty \in \{\sunitty,\sfont{\mathord{?}} \}} {id(\sty) : {\sty} \sqsubseteq {\sty}}\and \inferrule {\sA \neq \sfont{\mathord{?}} \and c : \sA \sqsubseteq \floor\sA\\ \inferrule{~}{tag({\floor\sA}) : \floor\sA \sqsubseteq \sfont{\mathord{?}}}} {tag({\floor\sA}) \circ c : \sA \sqsubseteq {\sfont{\mathord{?}}}}\\ \inferrule {c : \sAone \sqsubseteq \sAtwo\and d : \sBone \sqsubseteq \sBtwo} {c \times d : \sAone \mathbin{\sfontsym{\times}} \sBone \sqsubseteq \sAtwo \mathbin{\sfontsym{\times}} \sBtwo}\and \inferrule {c : \sAone \sqsubseteq \sAtwo\and d : \sBone \sqsubseteq \sBtwo} {c \plus d : \sAone \mathbin{\sfontsym{+}} \sBone \sqsubseteq \sAtwo \mathbin{\sfontsym{+}} \sBtwo}\and \inferrule {c : \sAone \sqsubseteq \sAtwo\and d : \sBone \sqsubseteq \sBtwo} {c \to d : \sAone \mathrel{\to_{s}} \sBone \sqsubseteq \sAtwo \mathrel{\to_{s}} \sBtwo} \end{mathpar} \caption{Canonical Proof Terms for Type Dynamism} \label{fig:type-dynamism-proofs} \end{figure} We present the proof terms for type dynamism in \figref{fig:type-dynamism-proofs}. As in presentations of sequent calculus, we include the identity ep pair (reflexivity) only for the base types $\sunitty,\sfont{\mathord{?}}$. The next rule, $tag({\floor\sA}) \circ c : \sA \sqsubseteq \sfont{\mathord{?}}$, states that the casts between a non-dynamic type $\sA$ and the dynamic type $\sfont{\mathord{?}}$ are the composition $\circ$ of an ep pair $c : \sA \sqsubseteq \floor{\sA}$ from $\sA$ to its underlying tag type with the tagging--untagging ep pair $tag(\floor{\sA}) : \floor{\sA} \sqsubseteq \sfont{\mathord{?}}$ for that tag type. The product, sum, and function rules are written to evoke that their ep pairs use the functorial action. As mentioned, the proof terms are \emph{canonical}, meaning there is at most one derivation of any $\sA \sqsubseteq \sB$. \begin{lemma}[Canonical Type Dynamism Derivations] Any two derivations $c, d : \sA \sqsubseteq \sB$ are equal: $c = d$. \end{lemma} \iflong \begin{proof} By induction on $c$, noting in each case that exactly one rule applies.
\end{proof} \fi \begin{figure} \begin{minipage}{0.45\textwidth} \begin{mathpar} \begin{array}{rcl} id(\sfont{\mathord{?}}) &\defeq& id(\sfont{\mathord{?}})\\ id(\sunitty) &\defeq& id(\sunitty)\\ id({\sAone \mathbin{\sfontsym{\times}} \sAtwo}) &\defeq & id(\sAone) \times id(\sAtwo)\\ id({\sAone \mathbin{\sfontsym{+}} \sAtwo}) &\defeq & id(\sAone) \plus id(\sAtwo)\\ id({\sAone \mathrel{\to_{s}} \sAtwo}) &\defeq & id(\sAone) \to id(\sAtwo) \end{array} \end{mathpar} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{mathpar} \begin{array}{rcl} (tag(\floor\sA) \circ c) \circ d &\defeq& tag(\floor\sA) \circ (c \circ d)\\ (id(\sA)) \circ d &\defeq & d\\ (c \times d) \circ (c' \times d') & \defeq & (c \circ c') \times (d \circ d')\\ (c \plus d) \circ (c' \plus d') & \defeq & (c \circ c') \plus (d \circ d')\\ (c \to d) \circ (c' \to d') & \defeq & (c \circ c') \to (d \circ d') \end{array} \end{mathpar} \end{minipage} \begin{mathpar} \begin{array}{rcl} top(\sfont{\mathord{?}}) & \defeq & id(\sfont{\mathord{?}})\\ top(\sunitty) & \defeq & tag(\sunitty) \circ id(\sunitty)\\ top(\sA \mathbin{\sfontsym{\times}} \sB) & \defeq& tag(\sfont{\mathord{?}}\mathbin{\sfontsym{\times}}\sfont{\mathord{?}}) \circ (top(\sA) \times top(\sB))\\ top(\sA \mathbin{\sfontsym{+}} \sB) & \defeq& tag(\sfont{\mathord{?}}\mathbin{\sfontsym{+}}\sfont{\mathord{?}}) \circ (top(\sA) \plus top(\sB))\\ top(\sA \mathrel{\to_{s}} \sB) & \defeq& tag(\sfont{\mathord{?}}\mathrel{\to_{s}}\sfont{\mathord{?}}) \circ (top(\sA) \to top(\sB))\\ \end{array} \end{mathpar} \caption{Type Dynamism Admissible Proof Terms} \label{fig:type-dyn-admissible} \end{figure} Next, we need to show that the rules in \figref{fig:type-dynamism} are all \emph{admissible} in the refined system \figref{fig:type-dynamism-proofs}. The proof of admissibility is given in \figref{fig:type-dyn-admissible}. First, to show reflexivity is admissible, we construct the proof $id(\sA) : \sA \sqsubseteq \sA$. It is primitive for $\sfont{\mathord{?}}$ and $\sunitty$ and we use the congruence rule to lift the others. Second, to show transitivity is admissible, for every $d : \sAone \sqsubseteq \sAtwo$ and $c : \sAtwo \sqsubseteq \sAin{3}$, we construct their \emph{composite} $c \circ d : \sAone \sqsubseteq \sAin{3}$ by recursion on $c$. If $c$ is a primitive composite with a tag, we use associativity of composition to push the composite in. If $c$ is the identity, the composite is just $d$. Otherwise, both $c$ and $d$ must be between a connective, and we push the compositions in. Finally, we show that $\sfont{\mathord{?}}$ is the most dynamic type by constructing a derivation $top(\sA) : \sA \sqsubseteq \sfont{\mathord{?}}$ for every $\sA$. For $\sfont{\mathord{?}}$, it is just the identity; for the remaining types, we use the tag ep pair and compose with lifted uses of $top$. 
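For example, unfolding these definitions at the type $\sunitty \mathrel{\to_{s}} \sunitty$ gives \[ top(\sunitty \mathrel{\to_{s}} \sunitty) = tag(\sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}}) \circ ((tag(\sunitty) \circ id(\sunitty)) \to (tag(\sunitty) \circ id(\sunitty))) \] which first casts the domain and codomain to $\sfont{\mathord{?}}$ and then tags the resulting $\sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}}$ into the dynamic type.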
\begin{figure} \begin{minipage}{0.45\textwidth} \begin{mathpar} \begin{array}{rcl} m \in \{e,p\}\\ \overline e & \defeq & p\\ \overline p & \defeq & e\\ \mE_{m,id(\sA)} & \defeq & {\mfont{\lbrack{}{\cdot}\rbrack{}}}\\ \mE_{e,tag(\stagty)} & \defeq & \mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}}\\ \mE_{p,tag(\stagty)} & \defeq & \mtelsecase{\mtunroll{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{{\stagty}}{\mx}{\mx}{\mfontsym{\mho}}\\ \end{array} \end{mathpar} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{mathpar} \begin{array}{rcl} \mE_{e,tag(\stagty) \circ d} & \defeq & \mE_{e,tag(\stagty)}\hw{\mE_{e,d}}\\ \mE_{p,tag(\stagty) \circ d} & \defeq & \mE_{p,d}\hw{\mE_{p,tag(\stagty)}}\\ \mE_{m,c \times c'} & \defeq & \mE_{m,c} \mathbin{\mfontsym{\times}} \mE_{m,c'}\\ \mE_{m,c \plus c'} & \defeq & \mE_{m,c} \mathbin{\mfontsym{+}} \mE_{m,c'}\\ \mE_{m,c \to c'} & \defeq & \mE_{\overline m,c} \mathbin{\mfontsym{\to}} \mE_{m,c'} \end{array} \end{mathpar} \end{minipage} \caption{Type Dynamism Cast Translation} \label{fig:type-dyn-cast-translation} \end{figure} Next, we construct a \emph{semantics} for the type dynamism proofs that justifies the intuition we have given so far; it is presented in \figref{fig:type-dyn-cast-translation}. Every type dynamism proof $c : \sA \sqsubseteq \sB$ defines a \emph{pair} of an embedding $\mectxt_{e,c}$ and a projection $\mectxt_{p,c}$. Since many rules are the same for embeddings and projections, we use $m \in \{e,p\}$ to abstract over the \emph{mode} of the cases. We define the \emph{complement} of a mode $\overline m$ to swap between embeddings and projections; it is used in the function case. The primitive identity casts are interpreted as the identity, and the primitive composition of casts $tag(\stagty) \circ d$ is interpreted as the composition of the ep pairs of $tag(\stagty)$ and $d$. The tag type derivation is interpreted by the same definition as the cast in \figref{fig:direct-cast-translation}: tagging injects the value into the correct sum case and rolls it into $\semantics{\sfont{\mathord{?}}}$, and untagging unwraps the value if it is the correct sum case and errors otherwise. We abbreviate this as pattern matching with an ``else'' clause, where the else clause stands for all of the clauses that do not match the tag type $\stagty$. The desugaring to repeated case statements on sums should be clear. The product and sum types are just given by their functorial action with the same mode. The function type similarly uses its functorial action, but swaps from embedding to projection or vice-versa on the domain side. This shows that there is nothing strange about the function rule: it is the same construction as for subtyping, but constructing arrows back and forth at the same time. The fact that contravariant functors are covariant with respect to ep pairs in this way is \emph{precisely} the reason ep pairs are used extensively in domain theory. We next verify that these actually are embedding-projection pairs. To do this, we use the identity and composition lemmas proved before, but we also need to use \emph{functoriality} of the actions of type constructors, meaning that the action of the type interacts well with identity and composition of evaluation contexts.
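Before doing so, the contravariant flip in the function case can be made concrete with a short Haskell sketch (again purely illustrative and not part of the formal development): representing an ep pair as a record of two functions, the action of the function type pre-composes the embedding with the domain's \emph{projection} and the projection with the domain's \emph{embedding}, mirroring the $\mE_{\overline m,c} \mathbin{\mfontsym{\to}} \mE_{m,c'}$ clause of \figref{fig:type-dyn-cast-translation}: \begin{verbatim} -- An ep pair as a pair of functions (illustrative only). data EP a b = EP { embed :: a -> b, project :: b -> a }  -- Functorial action of the function type on ep pairs: -- the domain side swaps embed and project (the "flip"). funEP :: EP a a' -> EP b b' -> EP (a -> b) (a' -> b') funEP dom cod = EP   { embed   = \f -> embed cod . f . project dom   , project = \g -> project cod . g . embed dom   } \end{verbatim} Swapping the modes on the domain is exactly what makes this action covariant in both arguments at the level of ep pairs, even though the function type is contravariant in its domain.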
\newcommand{\quad\text{and}\quad}{\quad\text{and}\quad} \begin{lemma}[Identity Extension] \label{lem:id-ext} \[ {\mfont{\lbrack{}{\cdot}\rbrack{}}} \mathbin{\mfontsym{\times}} {\mfont{\lbrack{}{\cdot}\rbrack{}}} \mathrel{\gtdyn\ltdyn} {\mfont{\lbrack{}{\cdot}\rbrack{}}} \quad\text{and}\quad {\mfont{\lbrack{}{\cdot}\rbrack{}}} \mathbin{\mfontsym{+}} {\mfont{\lbrack{}{\cdot}\rbrack{}}} \mathrel{\gtdyn\ltdyn} {\mfont{\lbrack{}{\cdot}\rbrack{}}}\quad\text{and}\quad {\mfont{\lbrack{}{\cdot}\rbrack{}}} \mathbin{\mfontsym{\to}} {\mfont{\lbrack{}{\cdot}\rbrack{}}} \mathrel{\gtdyn\ltdyn} {\mfont{\lbrack{}{\cdot}\rbrack{}}} \] \end{lemma} \iflong \begin{proof} All are instances of $\eta$ expansion. \end{proof} \fi In a call-by-value language, the functoriality rules do not hold in general for the product functor, but they do for terminating programs because their order of evaluation is irrelevant. Also notice that when composing using the functorial action of the function type $\mathbin{\mfontsym{\to}}$, the composition flips on the domain side, because the function type is \emph{contravariant} in its domain. \begin{lemma}[Functoriality for Terminating Programs] \label{lem:functor} The following equivalences are true for any well-typed, \emph{terminating} evaluation contexts. \begin{mathpar} \begin{array}{rcl} (\mectxttwo \mathbin{\mfontsym{\times}} \mectxttwopr)\hw{\mectxtone \mathbin{\mfontsym{\times}} \mectxtonepr} & \mathrel{\gtdyn\ltdyn} & (\mectxttwo\hw\mectxtone) \mathbin{\mfontsym{\times}} (\mectxttwopr\hw\mectxtonepr)\\ (\mectxttwo \mathbin{\mfontsym{+}} \mectxttwopr)\hw{\mectxtone \mathbin{\mfontsym{+}} \mectxtonepr} & \mathrel{\gtdyn\ltdyn} & (\mectxttwo\hw\mectxtone) \mathbin{\mfontsym{+}} (\mectxttwopr\hw\mectxtonepr)\\ (\mectxttwo \mathbin{\mfontsym{\to}} \mectxttwopr)\hw{\mectxtone \mathbin{\mfontsym{\to}} \mectxtonepr} & \mathrel{\gtdyn\ltdyn} & (\mectxtone\hw\mectxttwo) \mathbin{\mfontsym{\to}} (\mectxttwopr\hw\mectxtonepr) \end{array} \end{mathpar} \end{lemma} \iflong \begin{proof} \begin{enumerate} \item ($\mathbin{\mfontsym{\to}}$) We need to show (after a commuting conversion) \[ \mtletvert{\mx_f}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{ \mtletvert{\my_f}{\mtufun{\mx_a}{\mectxtonepr\hw{\mx_f\,(\mectxtone\hw{\mx_a})}}}{ {\mtufun{\my_a}{\mectxttwopr\hw{\my_f\,(\mectxttwo\hw{\my_a})}}}}} \mathrel{\gtdyn\ltdyn} \mtletvert{\mx_f}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{ \mtufun{\my_a}{\mectxttwopr\hw{\mectxtonepr\hw{\mx_f\,(\mectxtone\hw{\mectxttwo\hw{\my_a}})}}}} \] First, we substitute for $\my_f$ and then lift the argument $\mectxttwo\hw{\my_a}$ out and $\beta$ reduce: \begin{align*} \mtletvert{\mx_f}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{ \mtletvert{\my_f}{\mtufun{\mx_a}{\mectxtonepr\hw{\mx_f\,(\mectxtone\hw{\mx_a})}}}{ {\mtufun{\my_a}{\mectxttwopr\hw{\my_f\,(\mectxttwo\hw{\my_a})}}}}} &\mathrel{\gtdyn\ltdyn} \mtletvert{\mx_f}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mtufun{\my_a}{\mectxttwopr\hw{({\mtufun{\mx_a}{\mectxtonepr\hw{\mx_f\,(\mectxtone\hw{\mx_a})}}})\,(\mectxttwo\hw{\my_a})}}}\\ &\mathrel{\gtdyn\ltdyn} \mtletvert{\mx_f}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{ {\mtufun{\my_a}{\mtletvert{\mx_a}{\mectxttwo\hw{\my_a}}{\mectxttwopr\hw{({\mtufun{\mx_a}{\mectxtonepr\hw{\mx_f\,(\mectxtone\hw{\mx_a})}}})\,\mx_a}}}} }\\ &\mathrel{\gtdyn\ltdyn} \mtletvert{\mx_f}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{ {\mtufun{\my_a}{\mtletvert{\mx_a}{\mectxttwo\hw{\my_a}}{\mectxttwopr\hw{{{\mectxtonepr\hw{\mx_f\,(\mectxtone\hw{\mx_a})}}}}}}} }\\ &\mathrel{\gtdyn\ltdyn} 
\mtletvert{\mx_f}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{ {\mtufun{\my_a}{{\mectxttwopr\hw{{{\mectxtonepr\hw{\mx_f\,(\mectxtone\hw{\mectxttwo\hw{\my_a}})}}}}}}}} \end{align*} \item ($\mathbin{\mfontsym{+}}$) We need to show \[ \mtcasevert{\mtcasevert{\mx}{\my}{\mtinj{\mectxtone\hw{\my}}}{\mypr}{\mtinjpr{\mectxtonepr\hw{\mypr}}}}{\my}{\mtinj{\mectxttwo\hw{{\my}}}}{\mypr}{\mtinjpr{\mectxttwopr\hw{{\mypr}}}} \mathrel{\gtdyn\ltdyn} \mtcasevert{\mx}{\my}{\mtinj{\mectxttwo\hw{\mectxtone\hw{\my}}}}{\mypr}{\mtinjpr{\mectxttwopr\hw{\mectxtonepr\hw{\mypr}}}} \] First, we do a case-of-case commuting conversion, then lift the discriminees out, $\beta$ reduce and restore them. \begin{align*} \mtcasevert{\mtcasevert{\mx}{\my}{\mtinj{\mectxtone\hw{\my}}}{\mypr}{\mtinjpr{\mectxtonepr\hw{\mypr}}}}{\my}{\mtinj{\mectxttwo\hw{{\my}}}}{\mypr}{\mtinjpr{\mectxttwopr\hw{{\mypr}}}} &\mathrel{\gtdyn\ltdyn} {\mtcasevert{\mx}{\my}{\mtcasevert{\mtinj{\mectxtone\hw{\my}}}{\my}{\mtinj{\mectxttwo\hw{{\my}}}}{\mypr}{\mtinjpr{\mectxttwopr\hw{{\mypr}}}}}{\mypr}{\mtcasevert{\mtinjpr{\mectxtonepr\hw{\mypr}}}{\my}{\mtinj{\mectxttwo\hw{{\my}}}}{\mypr}{\mtinjpr{\mectxttwopr\hw{{\mypr}}}}}}\\ &\mathrel{\gtdyn\ltdyn} {\mtcasevert{\mx}{\my}{\mtletvert{\my}{{{\mectxtone\hw{\my}}}}{\mtcasevert{\mtinj\my}{\my}{\mtinj{\mectxttwo\hw{{\my}}}}{\mypr}{\mtinjpr{\mectxttwopr\hw{{\mypr}}}}}} {\mypr}{\mtletvert{\mypr}{{\mectxtonepr\hw{\mypr}}}{\mtcasevert{\mtinjpr\mypr}{\my}{\mtinj{\mectxttwo\hw{{\my}}}}{\mypr}{\mtinjpr{\mectxttwopr\hw{{\mypr}}}}}}}\\ &\mathrel{\gtdyn\ltdyn} {\mtcasevert{\mx}{\my}{\mtletvert{\my}{{{\mectxtone\hw{\my}}}}{{\mtinj{\mectxttwo\hw{{\my}}}}}} {\mypr}{\mtletvert{\mypr}{{\mectxtonepr\hw{\mypr}}}{{\mtinjpr{\mectxttwopr\hw{{\mypr}}}}}}}\\ &\mathrel{\gtdyn\ltdyn} {\mtcasevert{\mx}{\my}{{{\mtinj{\mectxttwo\hw{{\mectxtone\hw{\my}}}}}}} {\mypr}{{{\mtinjpr{\mectxttwopr\hw{{\mectxtonepr\hw{\mypr}}}}}}}}\\ \end{align*} \item ($\mathbin{\mfontsym{\times}}$) We need to show \[ \mtmatchpairvert{\mx}{\mxpr}{\mx_p}{ \mtmatchpairvert{\my}{\mypr}{\mtpair{\mectxtone\hw{\mx}}{\mectxtonepr\hw{\mxpr}}} {\mtpair{\mectxttwo\hw{\my}}{\mectxttwopr\hw{\mypr}}}} \mathrel{\gtdyn\ltdyn} \mtmatchpairvert{\mx}{\mxpr}{\mx_p}{ \mtpair{\mectxttwo\hw{\mectxtone\hw{\mx}}}{\mectxttwopr\hw{\mectxtonepr\hw{\mxpr}}}} \] First, we make the evaluation order explicit, then re-order using the fact that terminating programs commute \cref{lem:term-comm}.
\begin{align*} \mtmatchpairvert{\mx}{\mxpr}{\mx_p}{ \mtmatchpairvert{\my}{\mypr}{\mtpair{\mectxtone\hw{\mx}}{\mectxtonepr\hw{\mxpr}}} {\mtpair{\mectxttwo\hw{\my}}{\mectxttwopr\hw{\mypr}}}} &\mathrel{\gtdyn\ltdyn} \mtmatchpairvert{\mx}{\mxpr}{\mx_p}{ \mtletvert{\mxone}{{\mectxtone\hw{\mx}}}{ \mtletvert{\mxonepr}{\mectxtonepr\hw{\mxpr}}{ \mtmatchpairvert{\my}{\mypr}{\mtpair{\mxone}{\mxonepr}}{ \mtletvert{\mxtwo}{{\mectxttwo\hw{\my}}}{ \mtletvert{\mxtwopr}{{\mectxttwopr\hw{\mypr}}}{ \mtpair{\mxtwo}{\mxtwopr} }}}}}}\\ &\mathrel{\gtdyn\ltdyn} \mtmatchpairvert{\mx}{\mxpr}{\mx_p}{ \mtletvert{\mxone}{{\mectxtone\hw{\mx}}}{ \mtletvert{\mxonepr}{\mectxtonepr\hw{\mxpr}}{ \mtletvert{\mxtwo}{{\mectxttwo\hw{\mxone}}}{ \mtletvert{\mxtwopr}{{\mectxttwopr\hw{\mxonepr}}}{ \mtpair{\mxtwo}{\mxtwopr} }}}}}\\ &\mathrel{\gtdyn\ltdyn} \mtmatchpairvert{\mx}{\mxpr}{\mx_p}{ \mtletvert{\mxone}{{\mectxtone\hw{\mx}}}{ \mtletvert{\mxtwo}{{\mectxttwo\hw{\mxone}}}{ \mtletvert{\mxonepr}{\mectxtonepr\hw{\mxpr}}{ \mtletvert{\mxtwopr}{{\mectxttwopr\hw{\mxonepr}}}{ \mtpair{\mxtwo}{\mxtwopr} }}}}}\\ &\mathrel{\gtdyn\ltdyn} \mtmatchpairvert{\mx}{\mxpr}{\mx_p}{ \mtpair{\mectxttwo\hw{\mectxtone\hw{\mx}}}{{{\mectxttwopr\hw{\mectxtonepr\hw{\mxpr}}}}}} \end{align*} \end{enumerate} \end{proof} \fi With these cases covered, we can show the casts given by type dynamism really are ep pairs. \begin{lemma}[Type Dynamism Derivation denotes EP Pair] \label{lem:dyn-der-ep} For any derivation $c : \sA \sqsubseteq \sB$, then $\mE_{e,c}, \mE_{p,c} : \semantics{\sA} \mathrel{\triangleleft} \semantics{\sB}$ are an ep pair. \end{lemma} \iflong \begin{proof} By induction on the derivation $c$. \begin{enumerate} \item (Identity) ${\mE_{e,id(\sA)},\mE_{p,id(\sA)} : \semantics\sA \mathrel{\triangleleft} \semantics\sA}$. This case is immediate by \cref{lem:fund-lemma}. \item (Composition) $\inferrule{\mE_{e,c},\mE_{p,c} : \semantics\sAone \mathrel{\triangleleft} \semantics\sAtwo \and \mE_{e,c'},\mE_{p,c'} : \semantics\sAtwo \mathrel{\triangleleft} \semantics\sAthree}{\mE_{e,c'}\hw{\mE_{e,c}},\mE_{p,c}\hw{\mE_{p,c'}} : \semantics\sAone\mathrel{\triangleleft} \semantics\sAthree}$. 
We need to show the retract property: \[ \mx:\semantics\sAone \vDash \mE_{p,c}\hw{\mE_{p,c'}\hw{\mE_{e,c'}\hw{\mE_{e,c}\hw{\mx}}}} \mathrel{\gtdyn\ltdyn} \mx : \semantics\sAone \] and the projection property: \[ \my:\semantics\sAthree \vDash \mE_{e,c'}\hw{\mE_{e,c}\hw{\mE_{p,c}\hw{\mE_{p,c'}\hw{\my}}}} \sqsubseteq \my : \semantics\sAthree \] Both follow by congruence and the inductive hypothesis; we show the projection property: \begin{align*} \mE_{e,c'}\hw{\mE_{e,c}\hw{\mE_{p,c}\hw{\mE_{p,c'}\hw{\my}}}} & \sqsubseteq \mE_{e,c'}\hw{\mE_{p,c'}\hw{\my}}\tag{inductive hyp, cong \ref{lem:cong}}\\ & \sqsubseteq \my \tag{inductive hyp} \end{align*} \item (Tag) $\inferrule{~}{\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}},\mtelsecasevert{\mtunroll{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}}{\semantics{\stagty}}{\mx}{\mx}{\mfontsym{\mho}} : \semantics{\stagty} \mathrel{\triangleleft} \semantics{\sfont{\mathord{?}}}}$.\\ The retraction case follows by $\beta$ reduction \begin{align*} \mtelsecasevert{\mtunroll{\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\mx}}}} {\mtsum{\stagty}{\mxin{\stagty}}}{\mxin{\stagty}}{\mfontsym{\mho}} & \mathrel{\gtdyn\ltdyn} \mx \end{align*} For the projection case, we need to show \[ \mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\left(\mtelsecasevert{\mtunroll{\my}} {\mtsum{\stagty}{\mxin{\stagty}}}{\mxin{\stagty}}{\mfontsym{\mho}} \right)}} \sqsubseteq \my \] First, on the left side, we do a commuting conversion (\cref{lem:comm-conv}) and then use linearity of evaluation contexts to reduce the error branch to $\mfontsym{\mho}$: \begin{align*} \mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\left(\mtelsecasevert{\mtunroll{\my}} {\mtsum{\stagty}{\mxin{\stagty}}}{\mxin{\stagty}}{\mfontsym{\mho}} \right)}} & \mathrel{\gtdyn\ltdyn} \mtelsecasevert{\mtunroll{\my}} {\mtsum{\stagty}{\mxin{\stagty}}}{\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\mxin{\stagty}}}}{\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\mfontsym{\mho}}}}\\ & \mathrel{\gtdyn\ltdyn} \mtelsecasevert{\mtunroll{\my}} {\mtsum{\stagty}{\mxin{\stagty}}} {\mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\mxin{\stagty}}}} {\mfontsym{\mho}}\\ \end{align*} Next, we $\eta$-expand the right hand side \begin{align*} \my & \mathrel{\gtdyn\ltdyn} \mtroll{\semantics\sfont{\mathord{?}}}{\mtunroll{\my}}\\ & \mathrel{\gtdyn\ltdyn} \etatagcases{\mtunroll{\my}}{\mtroll{\semantics\sfont{\mathord{?}}}} \end{align*} The result follows by congruence because $\mfontsym{\mho} \sqsubseteq \mt$ for any $\mt$. \item (Functions) $\inferrule{\mE_{e,c},\mE_{p,c} : \semantics\sAone \mathrel{\triangleleft} \semantics\sAtwo\and\mE_{e,c'},\mE_{p,c'} : \semantics\sBone \mathrel{\triangleleft} \semantics\sBtwo}{\mE_{p,c} \mathbin{\mfontsym{\to}} \mE_{e,c'},\mE_{e,c} \mathbin{\mfontsym{\to}} \mE_{p,c'} : \semantics\sAone \mathbin{\mfontsym{\to}} \semantics\sBone \mathrel{\triangleleft} \semantics\sAtwo \mathbin{\mfontsym{\to}} \semantics\sBtwo}$ We prove the projection property; the retraction proof is similar.
We want to show \[ \my : \semantics\sAtwo \mathbin{\mfontsym{\to}} \semantics\sBtwo\vDash (\mE_{e,c} \mathbin{\mfontsym{\to}} \mE_{p,c'})\hw{(\mE_{p,c} \mathbin{\mfontsym{\to}} \mE_{e,c'})\hw{\my}} \sqsubseteq \my : \semantics\sAtwo \mathbin{\mfontsym{\to}} \semantics\sBtwo \] Since embeddings and projections are terminating, we can apply functoriality \lemref{lem:functor} to show the left hand side is equivalent to \[ ((\mE_{p,c}\hw{\mE_{e,c}}) \mathbin{\mfontsym{\to}} (\mE_{p,c'}\hw{\mE_{e,c'}}))\hw{\my}\] which by congruence and inductive hypothesis is $\sqsubseteq$: \[ ({\mfont{\lbrack{}{\cdot}\rbrack{}}} \mathbin{\mfontsym{\to}} {\mfont{\lbrack{}{\cdot}\rbrack{}}})\hw{\my} \] which by identity extension \cref{lem:id-ext} is equivalent to $\my$. \item (Products) By the same argument as the function case. \item (Sums) By the same argument as the function case. \end{enumerate} \end{proof} \fi Next, while we showed that transitivity and reflexivity were admissible with the $id(\sA)$ and $c \circ d$ definitions, their semantics are not given directly by the identity and composition of evaluation contexts. We justify this notation by the following theorems. First, $id(\sA)$ is the identity by identity extension. \begin{lemma}[Reflexivity Proofs denote Identity] \label{lem:id-casts} For every $\sA$, $\mE_{e,id(\sA)} \mathrel{\gtdyn\ltdyn} {\mfont{\lbrack{}{\cdot}\rbrack{}}}$ and $\mE_{p,id(\sA)} \mathrel{\gtdyn\ltdyn} {\mfont{\lbrack{}{\cdot}\rbrack{}}}$. \end{lemma} \iflong \begin{proof} By induction on $\sA$, using the identity extension lemma. \end{proof} \fi Second, we have our key \emph{decomposition} theorem. While the composition theorem says that the composition of any two ep pairs is an ep pair, the \emph{decomposition} theorem is really a theorem about the \emph{coherence} of our type dynamism proofs. It says that for the ep pair given by any $c : \sAin{1} \sqsubseteq \sAin{3}$, if we can find a middle type $\sAin{2}$, then we can \emph{decompose} $c$'s ep pair into a composition. This theorem is used extensively, especially in the proof of the gradual guarantee. \begin{lemma}[Decomposition of Upcasts, Downcasts] \label{lem:decomposition} For any derivations $c : \sAone \sqsubseteq \sAtwo$ and $c' : \sAtwo \sqsubseteq \sAthree$, the upcasts and downcasts given by their composition $c' \circ c$ are equivalent to the composition of their casts given by $c,c'$: \begin{align*} \mx : \semantics\sAone \vDash \mE_{e, c' \circ c}\hw{\mx} &\mathrel{\gtdyn\ltdyn} \mE_{e,c'}\hw{\mE_{e,c}\hw{\mx}} : \semantics\sAthree\\ \my : \semantics\sAthree \vDash \mE_{p, c' \circ c}\hw{\my} &\mathrel{\gtdyn\ltdyn} \mE_{p,c}\hw{\mE_{p,c'}\hw{\my}} : \semantics\sAone \end{align*} \end{lemma} \iflong \begin{proof} By induction on the pair $c,c'$, following the recursive definition of $c \circ c'$. \begin{enumerate} \item $(tag(\stagty) \circ c) \circ d \defeq tag(\stagty) \circ (c \circ d)$. By inductive hypothesis and strict associativity of composition of evaluation contexts. \item $id(\sAone) \circ d \defeq d$. By \cref{lem:id-casts}. \item $(c \times d) \circ (c' \times d') \defeq (c \circ c') \times (d \circ d')$ By inductive hypothesis and functoriality \cref{lem:functor}. \item $(c \plus d) \circ (c' \plus d') \defeq (c \circ c') \plus (d \circ d')$ By inductive hypothesis and functoriality \cref{lem:functor}. \item $(c \to d) \circ (c' \to d') \defeq (c \circ c') \to (d \circ d')$ By inductive hypothesis and functoriality \cref{lem:functor}.
\end{enumerate} \end{proof} \fi Finally, now that we have established the meaning of type dynamism derivations and proven the decomposition theorem, we can dispense with direct manipulation of derivations. So we define the following notation for ep pairs that just uses the types: \begin{definition}[EP Pair Semantics] Given $c : \sA \sqsubseteq \sB$, we define $\mE_{m,\sA,\sB} = \mE_{m,c}$ for $m \in \{e,p\}$. \end{definition} \subsection{Casts Factorize into EP Pairs} Next, we show how the upcasts and downcasts are sufficient to construct all the casts of $\lambda_{G}$. First, when $\sA \sqsubseteq \sB$, the ep pair semantics and the cast semantics coincide: \begin{lemma}[Upcasts and Downcasts are Casts] \label{lem:ud-are-casts} If $\sA \sqsubseteq \sB$ then $\mE_{\obcast{\sA}{\sB}} \mathrel{\gtdyn\ltdyn} \mE_{e,\sA,\sB}$ and $\mE_{\obcast{\sB}{\sA}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sA,\sB}$. \end{lemma} \iflong \begin{proof} By induction following the recursive definition of $\mE_{\obcast\sA\sB}$ \begin{enumerate} \item $\mectxt_{\obcast{\sfont{\mathord{?}}}{\sfont{\mathord{?}}}} \defeq {\mfont{\lbrack{}{\cdot}\rbrack{}}}$ By reflexivity. \item $\mectxt_{\obcast{\sAone\mathbin{\sfontsym{+}}\sBone}{\sAtwo\mathbin{\sfontsym{+}}\sBtwo}} \defeq \mectxt_{\obcast{\sAone}{\sAtwo}} \mathbin{\mfontsym{+}} \mectxt_{\obcast{\sBone}{\sBtwo}}$ By inductive hypothesis and congruence. \item $\mectxt_{\obcast{\sAone\mathbin{\sfontsym{\times}}\sBone}{\sAtwo\mathbin{\sfontsym{\times}}\sBtwo}} \defeq \mectxt_{\obcast{\sAone}{\sAtwo}} \mathbin{\mfontsym{\times}} \mectxt_{\obcast{\sBone}{\sBtwo}}$ By inductive hypothesis and congruence. \item $\mectxt_{\obcast{\sAone \mathrel{\to_{s}} \sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}} \defeq \mectxt_{\obcast{\sAtwo}{\sAone}} \mathbin{\mfontsym{\to}} \mectxt_{\obcast{\sBone}{\sBtwo}}$ By inductive hypothesis and congruence. \item $\mectxt_{\obcast{\stagty}{\sfont{\mathord{?}}}} \defeq \mtroll{\semantics\sfont{\mathord{?}}}{\mtsum{\stagty}{\mfont{\lbrack{}{\cdot}\rbrack{}}}}$ By reflexivity \item $\mectxt_{\obcast{\sfont{\mathord{?}}}{\stagty}} \defeq \mtelsecase{\mtunroll{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\stagty}{\mx}{\mx}{\mfontsym{\mho}}$ By reflexivity. \item $(\sA \neq \sfont{\mathord{?}} \wedge \sA \neq \floor\sA)\quad\mectxt_{\obcast{\sA}{\sfont{\mathord{?}}}} \defeq \mectxt_{\obcast{\floor\sA}{\sfont{\mathord{?}}}}\hw{\mectxt_{\obcast{\sA}{\floor\sA}}{\mfont{\lbrack{}{\cdot}\rbrack{}}}}$ By inductive hypothesis and decomposition of ep pairs. \item $(\sA \neq \sfont{\mathord{?}} \wedge \sA \neq \floor\sA)\quad\mectxt_{\obcast{\sfont{\mathord{?}}}{\sA}} \defeq {\mectxt_{\obcast{\floor\sA}{\sA}}}\hw{\mectxt_{\obcast{\sfont{\mathord{?}}}{\floor\sA}}{\mfont{\lbrack{}{\cdot}\rbrack{}}}}$ By inductive hypothesis and decomposition of ep pairs. \item $(\sA,\sB\neq\sfont{\mathord{?}} \wedge \floor\sA\neq\floor\sB)\quad\mectxt_{\obcast{\sA}{\sB}} \defeq \mtlet{\mx}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mfontsym{\mho}}$ Not possible that $\sA \sqsubseteq \sB$. \end{enumerate} \end{proof} \fi Next, we show that the ``general'' casts of the gradual language can be \emph{factorized} into a composition of an upcast followed by a downcast. First, we show that factorizing through any type is equivalent to factorizing through the dynamic type, as a consequence of the \emph{retraction} property of ep pairs.
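Before the formal statements, a small executable rendering may help fix intuitions. The following Haskell sketch is ours, not part of the formal development: it assumes a two-tag dynamic type, models the error $\mfontsym{\mho}$ by \texttt{Nothing}, and all names (\texttt{EP}, \texttt{epComp}, \texttt{epNat}, \texttt{epFun}, \texttt{castVia}) are hypothetical illustrations rather than the paper's definitions.
\begin{verbatim}
{-# LANGUAGE LambdaCase #-}
module EPSketch where

-- An ep pair A <| B: 'embed' is total, 'project' may error (Nothing).
-- Retraction:  project (embed a) == Just a
-- Projection:  re-embedding a projected value only adds errors.
data EP a b = EP { embed :: a -> b, project :: b -> Maybe a }

-- Composition of ep pairs is again an ep pair (the Composition case above).
epComp :: EP a b -> EP b c -> EP a c
epComp ab bc = EP (embed bc . embed ab) (\c -> project bc c >>= project ab)

-- A tiny dynamic type; dynamic functions may error when applied.
data Dyn = DNat Int | DFun (Dyn -> Maybe Dyn)

-- The tag ep pair for numbers (the Tag case above).
epNat :: EP Int Dyn
epNat = EP DNat (\case { DNat n -> Just n ; _ -> Nothing })

-- Functorial action on functions, contravariant in the domain
-- (the Functions case above).
epFun :: EP a a' -> EP b b' -> EP (a -> Maybe b) (a' -> Maybe b')
epFun ea eb = EP
  (\f a' -> fmap (embed eb) (project ea a' >>= f))
  (\g -> Just (\a -> g (embed ea a) >>= project eb))

-- Every cast factors as an upcast to Dyn followed by a downcast
-- (cf. the factorization lemmas that follow).
castVia :: EP a Dyn -> EP b Dyn -> a -> Maybe b
castVia ea eb = project eb . embed ea
\end{verbatim}
Here \texttt{epComp} mirrors the Composition case, \texttt{epNat} the Tag case, and \texttt{epFun} the contravariant Functions case; \texttt{castVia} anticipates the factorization lemmas below.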
\begin{lemma}[Any Factorization is equivalent to Dynamic] For any $\sAone,\sAtwo,\sApr$ with $\sAone \sqsubseteq \sApr$ and $\sAtwo \sqsubseteq \sApr$, $\mE_{p,\sAtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\sfont{\mathord{?}}}}\mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sApr}\hw{\mE_{e,\sAone,\sApr}}$. \end{lemma} \begin{proof} By decomposition and the retraction property: \[ \mE_{p,\sAtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\sfont{\mathord{?}}}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sApr}\hw{\mE_{p,\sApr,\sfont{\mathord{?}}}\hw{\mE_{e,\sApr,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\sApr}}}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sApr}\hw{\mE_{e,\sAone,\sApr}} \] \end{proof} By transitivity of equivalence, this means that factorization through one middle type is as good as through any other. So to prove that every cast factors as an upcast followed by a downcast, we can choose whatever middle type is most convenient. This lets us choose the simplest type possible in the proof. For instance, when factorizing a function cast $\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}$, we can use the function tag type as the middle type $\sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}}$ and then the equivalence is a simple use of the inductive hypothesis and the functoriality principle. \begin{lemma}[Every Cast Factors as Upcast, Downcast] \label{lem:up-down-factorization} For any $\sAone,\sAtwo,\sApr$ with $\sAone \sqsubseteq \sApr$ and $\sAtwo \sqsubseteq \sApr$, the cast from $\sAone$ to $\sAtwo$ factors through $\sApr$: \( \mx : \semantics\sAone \vDash \mE_{\obcast{\sAone}{\sAtwo}}\hw\mx \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sApr}\hw{\mE_{e,\sAone,\sApr}\hw\mx} : \semantics\sAtwo \) \end{lemma} \begin{proof} \begin{enumerate} \item If $\sAone \sqsubseteq \sAtwo$, then we choose $\sApr = \sAtwo$ and we need to show that \( \mE_{\obcast\sAone\sAtwo} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sAtwo}\hw{\mE_{e,\sAone,\sAtwo}} \); this follows by \cref{lem:ud-are-casts} and \cref{lem:id-casts}. \item If $\sAtwo \sqsubseteq \sAone$, we use a dual argument to the previous case. We choose $\sApr = \sAone$ and we need to show that \[ \mE_{\obcast\sAone\sAtwo} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sAone}\hw{\mE_{e,\sAone,\sAone}} \] this follows by \cref{lem:ud-are-casts} and \cref{lem:id-casts}. \item $\mectxt_{\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}} \defeq \mectxt_{\obcast{\sAtwo}{\sAone}} \mathbin{\mfontsym{\to}} \mectxt_{\obcast{\sBone}{\sBtwo}}$ We choose $\sApr = \sfont{\mathord{?}} \mathrel{\to_{s}} \sfont{\mathord{?}}$.
By inductive hypothesis, \[ \mE_{\obcast{\sAtwo}{\sAone}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAone,\sfont{\mathord{?}}}\hw{\mE_{e,\sAtwo,\sfont{\mathord{?}}}} \quad\text{and}\quad \mE_{\obcast{\sBone}{\sBtwo}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sBtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\sBone,\sfont{\mathord{?}}}}\] Then the result holds by functoriality: \begin{align*} \mE_{\obcast{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}} &= \mE_{\obcast\sAtwo\sAone} \mathbin{\mfontsym{\to}} \mE_{\obcast\sBone\sBtwo}\\ &\mathrel{\gtdyn\ltdyn} (\mE_{p,\sAone,\sfont{\mathord{?}}}\hw{\mE_{e,\sAtwo,\sfont{\mathord{?}}}}) \mathbin{\mfontsym{\to}} (\mE_{p,\sBtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\sBone,\sfont{\mathord{?}}}})\\ &\mathrel{\gtdyn\ltdyn} (\mE_{e,\sAtwo,\sfont{\mathord{?}}} \mathbin{\mfontsym{\to}} \mE_{p,\sBtwo,\sfont{\mathord{?}}})\hw{\mE_{p,\sAone,\sfont{\mathord{?}}} \mathbin{\mfontsym{\to}} \mE_{e,\sBone,\sfont{\mathord{?}}}}\\ & = \mE_{p,\sAtwo\mathrel{\to_{s}}\sBtwo,\sfont{\mathord{?}}\mathrel{\to_{s}}\sfont{\mathord{?}}}\hw{\mE_{e,\sAone\mathrel{\to_{s}}\sBone,\sfont{\mathord{?}}\mathrel{\to_{s}}\sfont{\mathord{?}}}} \end{align*} \item (Products, Sums) Same argument as function case. \item $(\sAone,\sAtwo\neq\sfont{\mathord{?}} \wedge \floor\sAone\neq\floor\sAtwo)\quad\mectxt_{\obcast{\sAone}{\sAtwo}} \defeq \mtlet{\mx}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mfontsym{\mho}}$ We choose $\sApr = \sfont{\mathord{?}}$, so we need to show: \( \mtlet{\mx}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mfontsym{\mho}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\sfont{\mathord{?}}}} \). By embedding, projection decomposition this is equivalent to \[ \mtlet{\mx}{{\mfont{\lbrack{}{\cdot}\rbrack{}}}}{\mfontsym{\mho}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\floor\sAtwo}\hw{\mE_{p,\floor\sAtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\floor\sAone,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\floor\sAone}}}} \] This holds by open $\beta$, because the embedding ${\mE_{e,\sAone,\floor\sAone}}$ is pure and $\floor\sAone\neq\floor\sAtwo$. \end{enumerate} \end{proof} \section{Graduality from EP Pairs} \label{sec:graduality} We now define and prove graduality of our cast calculus. Graduality, briefly stated, means that if a program is changed to make its types less dynamic, but otherwise the syntax is the same, then the \emph{operational behavior} of the term is ``less dynamic''\footnote{Here we invoke the meaning of dynamic as ``active'': less dynamic terms are less active in that they kill the program with a type error where a more dynamic program would have continued to run.} in that either the new term has the same behavior as the old, or it raises a type error, \emph{hiding} some behavior of the original term. Graduality, like parametricity, says that a certain type of syntactic change (making types less dynamic) results in a predictable semantic change (making behavior less dynamic). We define these two notions as \emph{syntactic} and \emph{semantic} term dynamism.
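As a concrete illustration of ``less dynamic'' behavior before the formal definitions, consider the following Haskell sketch (ours, not part of the formal development; \texttt{Nothing} models the error $\mfontsym{\mho}$, and all names are hypothetical):
\begin{verbatim}
-- The error is modeled by Nothing.
data Dyn = DNat Int | DBool Bool deriving Show

-- The downcast inserted by annotating at the less dynamic type Nat.
asNat :: Dyn -> Maybe Int
asNat (DNat n) = Just n
asNat _        = Nothing

-- t2: the more dynamic program; it runs and produces a value.
t2 :: Maybe Dyn
t2 = Just (DBool True)

-- t1: the same program at a less dynamic type; the downcast fails,
-- so t1 errors (Nothing) where t2 ran.
t1 :: Maybe Int
t1 = t2 >>= asNat
\end{verbatim}
The less dynamic \texttt{t1} errors where the more dynamic \texttt{t2} produced a value: it hides behavior but introduces none, which is exactly the relationship graduality demands.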
\begin{figure} \fbox{\small{$\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo}$}}\hfill \begin{mathpar} \inferrule {\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo} \and \sBone \sqsubseteq \sBtwo} {\tmdynr{\senvone}{\senvtwo}{\obcast\sAone\sBone\stone}{\obcast\sAtwo\sBtwo\sttwo}{\sBone}{\sBtwo}} \inferrule {{\senvone}\sqsubseteq{\senvtwo}\and \sAone\sqsubseteq\sAtwo\and \senvonepr \sqsubseteq \senvtwopr} {\tmdynr{\senvone,\sxone:\sAone,\senvonepr}{\senvtwo,\sxtwo:\sAtwo,\senvtwopr}{\sxone}{\sxtwo}{\sAone}{\sAtwo}} \inferrule {{\senvone}\sqsubseteq{\senvtwo}} {\tmdynr{\senvone}{\senvtwo}{\stunit}{\stunit}{\sunitty}{\sunitty}}\and \inferrule {\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo}\\ \tmdynr{\senvone}{\senvtwo}{\stonepr}{\sttwopr}{\sAonepr}{\sAtwopr}} {\tmdynr{\senvone}{\senvtwo}{\stpair{\stone}{\stonepr}}{\stpair{\sttwo}{\sttwopr}}{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}} \inferrule {\tmdynr{\senvone}{\senvtwo}{{\stone}}{{\sttwo}}{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}\\ \tmdynr{\senvone,{\sxone:\sAone},{\sxonepr:\sAonepr}}{\senvtwo,{\sxtwo:\sAtwo},{\sxtwopr:\sAtwopr}}{{\stonepr}}{{\sttwopr}}{\sBone}{\sBtwo} } {\tmdynr{\senvone}{\senvtwo}{\stmatchpair{\sxone:\sAone}{\sxonepr:\sAonepr}{\stone}{\stonepr}}{\stmatchpair{\sxtwo:\sAtwo}{\sxtwopr:\sAtwopr}{\sttwo}{\sttwopr}}{\sBone}{\sBtwo}} \inferrule {\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo}\and \sjudgtyprec{\sAonepr}{\sAtwopr}} {\tmdynr{\senvone}{\senvtwo}{\stinj{\stermone}}{\stinj{\stermtwo}}{\ssumty{\sAone}{\sAonepr}}{\ssumty{\sAtwo}{\sAtwopr}}}\and \inferrule {\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAonepr}{\sAtwopr}\and \sjudgtyprec{\sAone}{\sAtwo}} {\tmdynr{\senvone}{\senvtwo}{\stinjpr{\stermonepr}}{\stinjpr{\stermtwopr}}{\ssumty{\sAone}{\sAonepr}}{\ssumty{\sAtwo}{\sAtwopr}}}\and \inferrule {\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\ssumty{\sAone}{\sAonepr}}{\ssumty{\sAtwo}{\sAtwopr}}\\ \tmdynr{\senvone,\sxone:\sAone}{\senvtwo,\sxtwo:\sAtwo}{\ssone}{\sstwo}{\sBone}{\sBtwo}\\ \tmdynr{\senvone,\sxonepr:\sAonepr}{\senvtwo,\sxtwopr:\sAtwopr}{\ssonepr}{\sstwopr}{\sBone}{\sBtwo}} {\tmdynr{\senvone}{\senvtwo}{\stcase{\stone}{\sxone:\sAone}{\ssone}{\sxonepr:\sAonepr}{\ssonepr}}{\stcase{\sttwo}{\sxtwo:\sAtwo}{\sstwo}{\sxtwopr:\sAtwopr}{\sstwopr}}{\sBone}{\sBtwo}} \inferrule {\tmdynr{{\senvone},{\svarone}:\sAone}{{\senvtwo},{\svartwo}:\sAtwo}{\stone}{\sttwo}{\sBone}{\sBtwo}} {\tmdynr{\senvone}{\senvtwo}{\stfun{\svarone}{\sAone}{\stone}}{\stfun{\svartwo}{\sAtwo}{\sttwo}}{\sfunty{\sAone}{\sBone}}{\sfunty{\sAtwo}{\sBtwo}}}\and \inferrule{\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sfunty{\sAone}{\sBone}}{\sfunty{\sAtwo}{\sBtwo}}\and \tmdynr{\senvone}{\senvtwo}{\ssone}{\sstwo}{{\sAone}}{{\sAtwo}}} {\tmdynr{\senvone}{\senvtwo}{\stapp{\stone}{\ssone}}{\stapp{\sttwo}{\sstwo}}{\sBone}{\sBtwo}} \end{mathpar} \caption{Syntactic Term Dynamism} \label{fig:term-dynamism} \end{figure} \begin{figure} \flushleft{\fbox{\small{$\senvone \sqsubseteq \senvtwo$}}} \vspace{-4ex} \begin{mathpar} \inferrule{~}{\cdot \sqsubseteq \cdot}\and \inferrule {\senvone\sqsubseteq\senvtwo \and \sAone \sqsubseteq \sAtwo} {\senvone,\sxone:\sAone \sqsubseteq\senvtwo,\sxtwo : \sAtwo} \end{mathpar} \caption{Environment Dynamism} \label{fig:env-dynamism} \end{figure} We present syntactic term dynamism in \figref{fig:term-dynamism}, based on the rules of \citet{refined}. Syntactic term dynamism captures the above idea of changing a program to use less dynamic types.
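For instance, the rules derive $\stfun{\sx}{\sunitty}{\sx} \sqsubseteq \stfun{\sx}{\sfont{\mathord{?}}}{\sx}$ at type $\sfunty{\sunitty}{\sunitty} \sqsubseteq \sfunty{\sfont{\mathord{?}}}{\sfont{\mathord{?}}}$: the function rule reduces the goal to $\sx \sqsubseteq \sx$ under contexts $\sx:\sunitty \sqsubseteq \sx:\sfont{\mathord{?}}$, which the variable rule discharges because $\sunitty \sqsubseteq \sfont{\mathord{?}}$.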
If $\stone \sqsubseteq \sttwo$, we think of $\sttwo$ as being rewritten to $\stone$ by changing the types to be less dynamic. While we will sometimes abbreviate syntactic term dynamism as $\vdash \stone \sqsubseteq \sttwo$, the full form is $\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo}$ and is read as ``$\stone$ is syntactically less dynamic than $\sttwo$''. The syntax evokes the invariant that if you rewrite $\sttwo$ to use less dynamic types $\stone$, then its inputs must be given less dynamic types $\senvone\sqsubseteq\senvtwo$ and its outputs must be given less dynamic types $\sAone\sqsubseteq\sAtwo$. We extend type dynamism to environment dynamism in \figref{fig:env-dynamism} to say $\senvone \sqsubseteq \senvtwo$ when $\senvone,\senvtwo$ have the same length and the corresponding types are related. The rules of syntactic term dynamism capture exactly the idea of ``types on the left are less dynamic''. Viewed order-theoretically, these rules say that all term constructors are \emph{monotone} in types and terms. The second piece of graduality is a \emph{semantic formulation} of term dynamism. The intuition described above is that $\stone$ should be \emph{semantically} less dynamic than $\sttwo$ when it has the same behavior as $\sttwo$ except possibly when it errors. Note that if $\senvone = \senvtwo$ and $\sAone = \sAtwo$, this is exactly what observational error approximation formalizes. Of course, since we can cast between any two types, we can cast any term to be of a different type. Our definition for semantic term dynamism will then be contextual approximation \emph{up to cast}: \begin{definition}[Observational Term Dynamism] We say $\senvone \vdash \stone : \sBone$ is \emph{observationally less dynamic} than $\senvtwo \vdash \sttwo : \sBtwo$, written $\senvone\sqsubseteq\senvtwo \vDash \stone \sqsubseteq^{\text{obs}} \sttwo : \sBone \sqsubseteq \sBtwo$ when \[ \senvone \vDash \obcast\sBone\sBtwo\stone \sqsubseteq^{\text{obs}} \stletvert{\sxin{2,1}}{\obcast{\sAin{1,1}}{\sAin{2,1}}{\sxin{1,1}}}{ \begin{stackTL} \vdots\\ \stletvert{\sxin{2,n}}{\obcast{\sAin{1,n}}{\sAin{2,n}}{\sxin{1,n}}}{ \sttwo} \end{stackTL} } : \sBtwo \] where $\senvone = \sxin{1,1}:\sAin{1,1},\ldots,\sxin{1,n}:\sAin{1,n}$ and $\senvtwo = \sxin{2,1}:\sAin{2,1},\ldots,\sxin{2,n}:\sAin{2,n}$. Or, abbreviated as: \[ \senvone \vDash \obcast\sBone\sBtwo\stone \sqsubseteq^{\text{obs}} \stlet{\senvtwo}{\obcast\senvone\senvtwo\senvone}{\sttwo}:\sBtwo \] \end{definition} Note that we have chosen to use the two \emph{upcasts}, but there are three other ways we could have inserted casts to give $\stone,\sttwo$ the same type: we can use upcasts or downcasts on the inputs and we can use upcasts or downcasts on the outputs. We will show based on the ep-pair property of upcasts and downcasts that all of these are equivalent (\cref{lem:alternative}). We then define graduality to mean that syntactic term dynamism implies semantic term dynamism: \begin{theorem}[Graduality] \label{thm:graduality} If $\tmdynr\senvone\senvtwo\stone\sttwo\sAone\sAtwo$, then $\obstmdynr\senvone\senvtwo\stone\sttwo\sAone\sAtwo$ \end{theorem} \begin{proof} By \cref{lem:log-to-obs:graduality,lem:adequacy,lem:fund-lemma,thm:logical-graduality}. \end{proof} Next, we present our logical relations method for proving graduality. First, to prove an approximation result for terms in $\lambda_{G}$, we will prove approximation for their translations in $\lambda_{T,\mho}$, justified by our adequacy theorem. 
Second, to prove observational approximation, we will use our logical relation, justified by our soundness theorem. For that we use the following ``logical'' formulation of term dynamism. \begin{definition}[Logical Term Dynamism] For any $\semantics\senvone\vdash\mtone:\semantics\sAone$ and $\semantics\senvtwo\vdash \mttwo : \semantics\sAtwo$ with $\senvone\sqsubseteq \senvtwo$ and $\sAone \sqsubseteq \sAtwo$, we define $\logtmdynr\senvone\senvtwo\mtone\mttwo\sAone\sAtwo$ as \[ \tmlogapprox{\semantics\senvone}{\theemb\sAone\sAtwo{\mtone}}{\letembnovert{\mttwo}}{\semantics\sAtwo} \] where the right hand side is defined analogously to the environment cast $\obcast\senvone\senvtwo$. \end{definition} \begin{lemma}[Logical Term Dynamism implies Observational Term Dynamism] \label{lem:log-to-obs:graduality} For any $\senvone\vdash\stone:\sAone$ and $\senvtwo\vdash \sttwo : \sAtwo$ with $\senvone\sqsubseteq \senvtwo$ and $\sAone \sqsubseteq \sAtwo$, if $\logtmdynr{\senvone}{\senvtwo}{\semantics\stone}{\semantics\sttwo}{\sAone}{\sAtwo}$ then $\obstmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo}$. \end{lemma} \begin{proof} By \cref{thm:log-to-obs,lem:typed-gradual-approx}. \end{proof} Now that we are in the realm of logical approximation, we have all the lemmas of \secref{sec:lemmas} at our disposal, and we now start putting them to work. First, as mentioned before, we show that at least with logical term dynamism, the use of upcasts was arbitrary; we could have used downcasts instead. The property we need is that the upcast and downcast are \emph{adjoint} (in the language of category theory), also known as a \emph{Galois connection}, which is a basic consequence of the definition of ep pair: \begin{lemma}[EP Pairs are Adjoint] \label{lem:ep-adjoint} For any ep pair $(\mEin{e},\mEin{p}) : \mAone \mathrel{\triangleleft} \mAtwo$, and terms $\menv \vdash \mtone : \mAone$ and $\menv \vdash \mttwo : \mAtwo$, \begin{mathpar} \menv \vDash \mEin{e}\hw{\mtone} \mathrel{\ltdyn} \mttwo : \mAtwo \quad\iff\quad \menv \vDash \mtone \mathrel{\ltdyn} \mEin{p}\hw{\mttwo} : \mAone \end{mathpar} \end{lemma} \begin{proof} The two proofs are dual\ifshort, so we show just the $\Rightarrow$ implication. By the retraction property $\mtone \mathrel{\ltdyn} \mEin{p}\hw{\mEin{e}\hw{\mtone}}$, so by transitivity it is sufficient to show $\mEin{p}\hw{\mEin{e}\hw{\mtone}} \mathrel{\ltdyn} \mEin{p}\hw{\mttwo}$, which follows by congruence and the assumption\fi.
\iflong \begin{mathpar} \inferrule*[right=Transitivity] {\inferrule*[right=EP Pair] {~} {\menv \vDash \mtone \mathrel{\ltdyn} \mEin{p}\hw{\mEin{e}\hw{\mtone}} : \mAone}\\ \inferrule*[right=Congruence] {\inferrule*[right=Assumption]{~}{\menv \vDash \mEin{e}\hw{\mtone} \mathrel{\ltdyn} \mttwo : \mAtwo}} {\menv \vDash \mEin{p}\hw{\mEin{e}\hw{\mtone}}\mathrel{\ltdyn} \mEin{p}\hw{\mttwo} : \mAone} } {\menv \vDash \mtone \mathrel{\ltdyn} \mEin{p}\hw{\mttwo} : \mAone} \iflong \inferrule*[right=Transitivity] {\inferrule*[right=Congruence] {\inferrule*[right=Assumption]{~}{\menv \vDash \mEin{e}\hw\mtone \mathrel{\ltdyn} \mEin{p}\hw{\mttwo} : \mAone}} {\menv \vDash \mtone \mathrel{\ltdyn} \mEin{e}\hw{\mEin{p}\hw{\mttwo}} : \mAone}\\ \inferrule*[right=EP Pair] {~} {\menv \vDash \mEin{e}\hw{\mEin{p}\hw{\mttwo}} \mathrel{\ltdyn} \mttwo} } {\menv \vDash \mEin{e}\hw\mtone \mathrel{\ltdyn} \mttwo : \mAtwo} \fi \end{mathpar} \fi \end{proof} \begin{lemma}[Adjointness on Inputs] \label{lem:adj-inp} If $\menv,\mxone:\mAone \vdash \mtone : \mB$ and $\menv,\mxtwo:\mAtwo \vdash \mttwo : \mB$, and $\mEin{e},\mEin{p} : \mAone \mathrel{\triangleleft} \mAtwo$, then \[ \tmlogapprox{\menv,\mxone:\mAone}{\mtone}{\mtlet{\mxtwo}{\mEin{e}\hw{\mxone}}{\mttwo}}{\mB} \quad \iff\quad \tmlogapprox{\menv,\mxtwo:\mAtwo}{\mtlet{\mxone}{\mEin{p}\hw{\mxtwo}}{\mtone}}{\mttwo}{\mB} \] \end{lemma} \begin{proof} By a similar argument to \cref{lem:ep-adjoint}. \end{proof} \begin{lemma}[Alternative Formulations of Logical Term Dynamism] \label{lem:alternative} The following are equivalent \begin{enumerate} \item $\tmlogapprox{\semantics\senvone}{\theemb\sAone\sAtwo{\mtone}}{\letembnovert{\mttwo}}{\semantics\sAtwo}$ \item $\tmlogapprox{\semantics\senvone}{\mtone}{\letembnovert{\theprj\sAone\sAtwo{\mttwo}}}{\semantics\sAtwo}$ \item $\tmlogapprox{\semantics\senvone}{\letprjnovert{\mtone}}{{\theprj\sAone\sAtwo{\mttwo}}}{\semantics\sAtwo}$ \item $\tmlogapprox{\semantics\senvone}{\theemb\sAone\sAtwo{\letprjnovert{\mtone}}}{{\mttwo}}{\semantics\sAtwo}$ \end{enumerate} \end{lemma} \begin{proof} By induction on $\senvone$, using \cref{lem:ep-adjoint} and \lemref{lem:adj-inp}. \end{proof} Finally, to prove the graduality theorem, we do an induction over all the cases of syntactic term dynamism. Most important is the cast case $\obcast\sAone\sBone\stone \sqsubseteq \obcast\sAtwo\sBtwo\sttwo$ which is valid when $\sAone \sqsubseteq \sAtwo$ and $\sBone\sqsubseteq\sBtwo$. We break up the proof into 4 atomic steps using the factorization of general casts into an upcast followed by a downcast (\cref{lem:up-down-factorization}): $\mE_{\obcast\sAone\sAtwo} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\sfont{\mathord{?}}}}$. The four steps are upcast on the left, downcast on the left, upcast on the right, and downcast on the right. These are presented as rules for logical dynamism in \figref{fig:term-dynamism-casts}. Each of the inference rules accounts for two cases. The \textsc{Cast-Right} rule says that if $\mtone \mathrel{\ltdyn} \mttwo : \sAone \sqsubseteq \sAtwo$, then it is OK to cast $\mttwo$ to $\sBtwo$, as long as $\sBtwo$ is more dynamic than $\sAone$, and the cast is either an upcast or downcast.
Here, our explicit inclusion of $\sAone \sqsubseteq \sBtwo$ in the syntax of the term dynamism judgment should help: the rule says that adding an upcast or downcast to $\mttwo$ results in a more dynamic term than $\mtone$, \emph{whenever it is even sensible to ask}: i.e., if it were not the case that $\sAone \sqsubseteq \sBtwo$, the judgment would not be well-formed, so the judgment holds whenever it makes sense! The \textsc{Cast-Left} rule is dual. \begin{figure} \begin{mathpar} \inferrule*[right=Cast-Right] {\logtmdynr{\senvone}{\senvtwo}{\mtone}{\mttwo}{\sAone}{\sAtwo} \and \sAone \sqsubseteq \sBtwo\and (\sAtwo \sqsubseteq \sBtwo \vee \sBtwo \sqsubseteq \sAtwo) } {\logtmdynr{\senvone}{\senvtwo}{\mtone}{\mE_{\obcast\sAtwo\sBtwo}\hw{\mttwo}}{\sAone}{\sBtwo}} \inferrule*[right=Cast-Left] {\logtmdynr{\senvone}{\senvtwo}{\mtone}{\mttwo}{\sAone}{\sAtwo} \and \sBone \sqsubseteq \sAtwo \and (\sAone \sqsubseteq \sBone \vee \sBone \sqsubseteq \sAone)} {\logtmdynr{\senvone}{\senvtwo}{\mE_{\obcast{\sAone}{\sBone}}\hw{\mtone}}{\mttwo}{\sBone}{\sAtwo}} \end{mathpar} \caption{Term Dynamism Upcast, Downcast Rules} \label{fig:term-dynamism-casts} \end{figure} These rules, which cover four cases, combined with our factorization of casts into an upcast followed by a downcast, suffice to prove the congruence rule for casts (we suppress the context $\senvone \sqsubseteq \senvtwo \vDash$, which is the same in each line): \begin{mathpar} \inferrule*[right=\cref{lem:up-down-factorization}] { \inferrule*[right=Cast-Right]{ \inferrule*[right=Cast-Left]{ \inferrule*[right=Cast-Left]{ \inferrule*[right=Cast-Right]{ {{{\semantics\stone}} \sqsubseteq {{\semantics\sttwo}} : \sAone \sqsubseteq \sAtwo} } {{{\semantics\stone}} \sqsubseteq {\mE_{e,\sAtwo,\sfont{\mathord{?}}}\hw{\semantics\sttwo}} : \sAone \sqsubseteq \sfont{\mathord{?}}} } {{{\mE_{e,\sAone,\sfont{\mathord{?}}}\hw{\semantics\stone}} \sqsubseteq {\mE_{e,\sAtwo,\sfont{\mathord{?}}}\hw{\semantics\sttwo}}} : \sfont{\mathord{?}} \sqsubseteq \sfont{\mathord{?}}} } {{\mE_{p,\sBone,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\sfont{\mathord{?}}}\hw{\semantics\stone}} \sqsubseteq {\mE_{e,\sAtwo,\sfont{\mathord{?}}}\hw{\semantics\sttwo}}} : \sBone \sqsubseteq \sfont{\mathord{?}}} } {\mE_{p,\sBone,\sfont{\mathord{?}}}\hw{\mE_{e,\sAone,\sfont{\mathord{?}}}\hw{\semantics\stone}} \sqsubseteq \mE_{p,\sBtwo,\sfont{\mathord{?}}}\hw{\mE_{e,\sAtwo,\sfont{\mathord{?}}}\hw{\semantics\sttwo}} : \sBone \sqsubseteq \sBtwo} } {{\semantics{\obcast\sAone\sBone\stone}} \sqsubseteq {\semantics{\obcast{\sAtwo}{\sBtwo}\sttwo}} : {\sBone} \sqsubseteq {\sBtwo}} \end{mathpar} Next, we show the rules are valid in all four cases, as simple consequences of the ep pair property and the decomposition theorem. Also note that while there are technically four cases, they come in pairs whose proofs are exactly dual, so conceptually speaking there are only two arguments. \begin{lemma}[Upcast, Downcast Dynamism] \label{lem:up-down-cases} The rules in \figref{fig:term-dynamism-casts} are valid. \end{lemma} \begin{proof} In each case we choose whichever formulation of \cref{lem:alternative} is simplest. \begin{enumerate} \item \textsc{Cast-Left} with $\sAone \sqsubseteq \sBone \sqsubseteq \sAtwo$. We need to show $\mE_{e,\sBone,\sAtwo}\hw{\mE_{e,\sAone,\sBone}\hw{\mtone}} \sqsubseteq \letemb{\mttwo}$. By decomposition and congruence, $\mE_{e,\sBone,\sAtwo}\hw{\mE_{e,\sAone,\sBone}\hw{\mtone}} \mathrel{\gtdyn\ltdyn} \mE_{e,\sAone,\sAtwo}\hw{\mtone}$ so the conclusion holds by transitivity and the premise.
\item \textsc{Cast-Right} with $\sAone \sqsubseteq \sBtwo \sqsubseteq \sAtwo$. We need to show $\letprj{\mtone} \sqsubseteq \mE_{p,\sAone,\sBtwo}\hw{\mE_{p,\sBtwo,\sAtwo}\hw{\mttwo}}$. By decomposition and congruence, $\mE_{p,\sAone,\sBtwo}\hw{\mE_{p,\sBtwo,\sAtwo}\hw{\mttwo}} \mathrel{\gtdyn\ltdyn} \mE_{p,\sAone,\sAtwo}\hw{\mttwo}$, so the conclusion holds by transitivity and the premise. \item \textsc{Cast-Left} with $\sBone \sqsubseteq \sAone \sqsubseteq \sAtwo$. We need to show $\mE_{p,\sBone,\sAone}\hw{\letprj\mtone} \sqsubseteq \mE_{p,\sBone,\sAtwo}\hw{\mttwo}$. By decomposition, $\mE_{p,\sBone,\sAtwo}\hw{\mttwo} \mathrel{\gtdyn\ltdyn} \mE_{p,\sBone,\sAone}\hw{\mE_{p,\sAone,\sAtwo}\hw{\mttwo}}$, so by transitivity it is sufficient to show \[ \mE_{p,\sBone,\sAone}\hw{\letprjnovert\mtone} \sqsubseteq \mE_{p,\sBone,\sAone}\hw{\mE_{p,\sAone,\sAtwo}\hw{\mttwo}} \] which follows by congruence and the premise. \item \textsc{Cast-Right} with $\sAone \sqsubseteq \sAtwo \sqsubseteq \sBtwo$. We need to show $\mE_{e,\sAone,\sBtwo}\hw{\mtone} \sqsubseteq \mE_{e,\sAtwo,\sBtwo}\hw{\letemb{\mttwo}}$. By decomposition, $\mE_{e,\sAone,\sBtwo}\hw{\mtone} \mathrel{\gtdyn\ltdyn} \mE_{e,\sAtwo,\sBtwo}\hw{\mE_{e,\sAone,\sAtwo}\hw{\mtone}}$, so by transitivity it is sufficient to show \[ \mE_{e,\sAtwo,\sBtwo}\hw{\mE_{e,\sAone,\sAtwo}\hw{\mtone}} \sqsubseteq \mE_{e,\sAtwo,\sBtwo}\hw{\letembnovert{\mttwo}} \] which follows by congruence and the premise. \end{enumerate} \end{proof} \ifshort The non-cast cases are too long to include here, but are included in the extended version \cite{newahmed2018-extended}. They are proven using the definitions of the ep pairs for each type connective and the lemmas of \secref{sec:lemmas}. We note that the proofs are \emph{modular} in that for instance, the proofs about function types only involve the functorial action of the function type and do not depend on any other types being present in the language. \fi Finally, we prove the graduality theorem by induction on syntactic term dynamism derivations, finishing the proof of \cref{thm:graduality}. \begin{theorem}[Logical Graduality] \label{thm:logical-graduality} If $\senvone \sqsubseteq \senvtwo \vdash \stone \sqsubseteq \sttwo : \sAone \sqsubseteq \sAtwo$, then $\logtmdynr{\senvone}{\senvtwo}{\semantics\stone}{\semantics\sttwo}{\sAone}{\sAtwo}$. \end{theorem} \iflong \begin{proof} By induction on syntactic term dynamism rules. \begin{enumerate} \item To show $\inferrule{\logtmdynr{\senvone}{\senvtwo}{\semantics\stone}{\semantics\sttwo}{\sAone}{\sAtwo}}{\logtmdynr{\senvone}{\senvtwo}{\semantics{\obcast\sAone\sBone\stone}}{\semantics{\obcast{\sAtwo}{\sBtwo}\sttwo}}{\sBone}{\sBtwo}}$ we use \cref{lem:up-down-cases} and the argument above. \item $\inferrule {{\senvone}\sqsubseteq{\senvtwo}\and \sAone\sqsubseteq\sAtwo\and \senvonepr \sqsubseteq \senvtwopr} {\semtmdynr{\senvone,\sxone:\sAone,\senvonepr}{\senvtwo,\sxtwo:\sAtwo,\senvtwopr}{\sxone}{\sxtwo}{\sAone}{\sAtwo}}$ We need to show: \[ \semantics{\senvone} \vDash \theemb{\sAone}{\sAtwo}{\mxone} \sqsubseteq \letemb{\mxtwo}\] Since embeddings are pure (\cref{lem:emb-pure,lem:pure-subst}), we can substitute them in, and then the two sides are literally the same.
\[ \letemb{\mxtwo} \mathrel{\gtdyn\ltdyn} \mxtwo[\theemb{\sAone^i}{\sAtwo^i}{\mxone^i}/\mxtwo^i] = \theemb{\sAone}{\sAtwo}{\mxone} \] \item $\inferrule {\semtmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo}\and \sjudgtyprec{\sAonepr}{\sAtwopr}} {\semtmdynr{\senvone}{\senvtwo}{\stinj{\stermone}}{\stinj{\stermtwo}}{\ssumty{\sAone}{\sAonepr}}{\ssumty{\sAtwo}{\sAtwopr}}}$ Expanding definitions, we need to show: \[ \mtcasevert{\mtinj{\semantics{\stone}}} {\mx}{\mtinj({\theemb{\sAone}{\sAtwo}{\mx}})} {\mxpr}{\mtinjpr({\theemb{\sAonepr}{\sAtwopr}{\mxpr}})} \sqsubseteq \letemb{\mtinj{\semantics{\sttwo}}} \] By open $\beta$ (\cref{lem:open-beta}), the left side can be reduced, which we can then substitute into due to linearity of evaluation contexts (\cref{lem:evctx-linear}): \begin{align*} \mtcase{\mtinj{\semantics{\stone}}} {\mx}{\mtinj({\theemb{\sAone}{\sAtwo}{\mx}})} {\mxpr}{\mtinjpr({\theemb{\sAonepr}{\sAtwopr}{\mxpr}})} & \mathrel{\gtdyn\ltdyn} \mtletvert{\mx}{\semantics{\stone}}{\mtinj({\theemb{\sAone}{\sAtwo}{\mx}})}\\ & \mathrel{\gtdyn\ltdyn} (\mtinj({\theemb{\sAone}{\sAtwo}{\mx}}))[\semantics{\stone}/\mx]\\ & = \mtinj({\theemb{\sAone}{\sAtwo}{\semantics{\stone}}}) \end{align*} So by transitivity, and since the pure embeddings commute with $\mtinj{}$ on the right (\cref{lem:emb-pure,lem:pure-subst}), it is sufficient to show \[ \mtinj({\theemb{\sAone}{\sAtwo}{\semantics{\stone}}}) \sqsubseteq \mtinj{\left(\letemb{\semantics{\sttwo}}\right)}\] which follows by congruence (\cref{lem:cong}). \item $\inferrule {\semtmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAonepr}{\sAtwopr}\and \sjudgtyprec{\sAone}{\sAtwo}} {\semtmdynr{\senvone}{\senvtwo}{\stinjpr{\stermonepr}}{\stinjpr{\stermtwopr}}{\ssumty{\sAone}{\sAonepr}}{\ssumty{\sAtwo}{\sAtwopr}}}$ Essentially the same as the previous case. \item $\inferrule {\semtmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\ssumty{\sAone}{\sAonepr}}{\ssumty{\sAtwo}{\sAtwopr}}\\ \semtmdynr{\senvone,\sxone:\sAone}{\senvtwo,\sxtwo:\sAtwo}{\ssone}{\sstwo}{\sBone}{\sBtwo}\\ \semtmdynr{\senvone,\sxonepr:\sAonepr}{\senvtwo,\sxtwopr:\sAtwopr}{\ssonepr}{\sstwopr}{\sBone}{\sBtwo}} {\semtmdynr{\senvone}{\senvtwo}{\stcase{\stone}{\sxone:\sAone}{\ssone}{\sxonepr:\sAonepr}{\ssonepr}}{\stcase{\sttwo}{\sxtwo:\sAtwo}{\sstwo}{\sxtwopr:\sAtwopr}{\sstwopr}}{\sBone}{\sBtwo}}$ Expanding definitions, we need to show \[ \theemb{\sBone}{\sBtwo}{\mtcasevert {\semantics{\stone}} {\mtinj \mxone}{\semantics{\msone}} {\mtinjpr \mxonepr}{\semantics{\msonepr}} } \sqsubseteq \letemb{\mtcasevert{\semantics{\sttwo}}{\mtinj\mxtwo}{\semantics{\mstwo}}{\mtinjpr\mxtwopr}{\semantics{\mstwopr}}} \] First, we do some simple rewrites: on the left side, we use a commuting conversion to push the embedding into the continuations: \[ \theemb{\sBone}{\sBtwo}{\mtcasevert{\semantics{\stone}} {\mtinj \mxone}{\semantics{\msone}} {\mtinjpr \mxonepr}{\semantics{\msonepr}}} \mathrel{\gtdyn\ltdyn} \mtcasevert{\semantics{\stone}} {\mxone}{\theemb{\sBone}{\sBtwo}{\semantics{\msone}}} {\mxonepr}{\theemb{\sBone}{\sBtwo}{\semantics{\msonepr}}} \] And on the right side we use the fact that embeddings are pure and so can be moved freely: \[ \letemb{\mtcasevert{\semantics{\sttwo}}{\mtinj\mxtwo}{\semantics{\mstwo}}{\mtinjpr\mxtwopr}{\semantics{\mstwopr}}} \mathrel{\gtdyn\ltdyn} \mtcasevert{\letemb{\semantics{\sttwo}}} {\mxtwo}{\letemb{\semantics{\mstwo}}} {\mxtwopr}{\letemb{\semantics{\mstwopr}}} \] Next, as with many of the elim forms, we ``ep-expand'' the discriminee on the left side, and then simplify based on the definition of $\theprj{\sAone\mathbin{\sfontsym{+}}\sAonepr}{\sAtwo\mathbin{\sfontsym{+}}\sAtwopr}\cdot$, using the
case-of-case commuting conversion and open $\beta$ \cref{lem:comm-conv,lem:open-beta}: \begin{align*} \mtcasevert{\semantics{\stone}} {\mxone}{\theemb{\sBone}{\sBtwo}{\semantics{\msone}}} {\mxonepr}{\theemb{\sBone}{\sBtwo}{\semantics{\msonepr}}} &\mathrel{\gtdyn\ltdyn} \mtcasevert{\theprj{\sAone\mathbin{\sfontsym{+}}\sAonepr}{\sAtwo\mathbin{\sfontsym{+}}\sAtwopr}{\theemb{\sAone\mathbin{\sfontsym{+}}\sAonepr}{\sAtwo\mathbin{\sfontsym{+}}\sAtwopr}{\semantics\stone}}} {\mxone}{\theemb{\sBone}{\sBtwo}{\semantics{\msone}}} {\mxonepr}{\theemb{\sBone}{\sBtwo}{\semantics{\msonepr}}}\\ \text{(definition)}&= \mtcasevert{\left({\mtcasevert{\theemb{\sAone\mathbin{\sfontsym{+}}\sAonepr}{\sAtwo\mathbin{\sfontsym{+}}\sAtwopr}{\semantics\stone}} {\mxtwo}{\mtinj{\theprj{\sAone}{\sAtwo}{\mxtwo}}} {\mxtwopr}{\mtinjpr{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}}} }\right)} {\mxone}{\theemb{\sBone}{\sBtwo}{\semantics{\msone}}} {\mxonepr}{\theemb{\sBone}{\sBtwo}{\semantics{\msonepr}}}\\ \text{(comm conv \ref{lem:comm-conv}, open $\beta$ \ref{lem:open-beta})}& \mathrel{\gtdyn\ltdyn} \mtcasevert{\theemb{\sAone\mathbin{\sfontsym{+}}\sAonepr}{\sAtwo\mathbin{\sfontsym{+}}\sAtwopr}{\semantics\stone}} {\mxtwo}{\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{{\theemb{\sBone}{\sBtwo}{\semantics{\msone}}}}} {\mxtwopr}{\mtletvert{\mxonepr}{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}}{\theemb{\sBone}{\sBtwo}{\semantics{\msonepr}}}} \end{align*} Then the final step follows by congruence and adjointness on inputs \cref{lem:cong,lem:adj-inp}: \[ \mtcasevert{\theemb{\sAone\mathbin{\sfontsym{+}}\sAonepr}{\sAtwo\mathbin{\sfontsym{+}}\sAtwopr}{\semantics\stone}} {\mxtwo}{\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{\theemb{\sBone}{\sBtwo}{\semantics{\msone}}}} {\mxtwopr}{\mtletvert{\mxonepr}{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}}{\theemb{\sBone}{\sBtwo}{\semantics{\msonepr}}}} \sqsubseteq \mtcasevert{\letemb{\semantics{\sttwo}}} {\mxtwo}{\letemb{\semantics{\mstwo}}} {\mxtwopr}{\letemb{\semantics{\mstwopr}}} \] \item $\inferrule{{\senvone}\sqsubseteq{\senvtwo}} {\semtmdynr{\senvone}{\senvtwo}{\stunit}{\stunit}{\sunitty}{\sunitty}}$. Expanding we need to show \[ \theemb{\sunitty}{\sunitty} \mtunit \sqsubseteq \letemb{\mtunit} \] By definition, the left side is just $\mtunit$, and so is the right side after a substitution, which is valid because embeddings are pure (\cref{lem:emb-pure,lem:pure-subst}). \item $\inferrule {\semtmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo}\\ \semtmdynr{\senvone}{\senvtwo}{\ssone}{\sstwo}{\sBone}{\sBtwo}} {\semtmdynr{\senvone}{\senvtwo}{\stpair{\stone}{\ssone}}{\stpair{\sttwo}{\sstwo}}{{\sAone} \mathbin{\sfontsym{\times}} {\sBone}}{{\sAtwo} \mathbin{\sfontsym{\times}} {\sBtwo}}}$.
Expanding definitions, we need to show \[ \theemb{\sAone \mathbin{\sfontsym{\times}} \sBone}{\sAtwo \mathbin{\sfontsym{\times}} \sBtwo} {\mtpair{\semantics\stone}{\semantics\ssone}} \sqsubseteq \letemb{\mtpair{\semantics\sttwo}{\semantics\sstwo}} \] On the right, we duplicate the embeddings, justified by \cref{lem:emb-pure,lem:pure-subst}, to set up congruence: \[ \letemb{\mtpair{\semantics\sttwo}{\semantics\sstwo}} \mathrel{\gtdyn\ltdyn} \mtpair{\letemb{\semantics\sttwo}}{\letemb{\semantics\sstwo}} \] On the left, we use linearity of evaluation contexts to lift the terms out, then perform some open $\beta$ reductions and put the terms back in: \begin{align*} \theemb{\sAone \mathbin{\sfontsym{\times}} \sBone}{\sAtwo \mathbin{\sfontsym{\times}} \sBtwo} {\mtpair{\semantics\stone}{\semantics\ssone}} & \mathrel{\gtdyn\ltdyn} \mtletvert{\sx}{\semantics\stone}{ \mtletvert{\sy}{\semantics\ssone}{ \mtmatchpair{\sx}{\sy}{\mtpair{\sx}{\sy}}{\mtpair{\theemb{\sAone}{\sAtwo}{\sx}}{\theemb{\sBone}{\sBtwo}{\sy}}} } }\\ \by{open $\beta$}{lem:open-beta}&\mathrel{\gtdyn\ltdyn} \mtletvert{\sx}{\semantics\stone}{ \mtletvert{\sy}{\semantics\ssone}{ {\mtpair{\theemb{\sAone}{\sAtwo}{\sx}}{\theemb{\sBone}{\sBtwo}{\sy}}} } }\\ \by{linearity}{lem:evctx-linear}&\mathrel{\gtdyn\ltdyn} {\mtpair{\theemb{\sAone}{\sAtwo}{\semantics\stone}}{\theemb{\sBone}{\sBtwo}{\semantics\ssone}}} \end{align*} With the final step following by congruence (\cref{lem:cong}) and the premise: \[ {\mtpair{\theemb{\sAone}{\sAtwo}{\semantics\stone}}{\theemb{\sBone}{\sBtwo}{\semantics\ssone}}} \sqsubseteq \mtpair{\letemb{\semantics\sttwo}}{\letemb{\semantics\sstwo}} \] \item $\inferrule {\semtmdynr{\senvone}{\senvtwo}{{\stone}}{{\sttwo}}{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}\\ \semtmdynr{\senvone,{\sxone:\sAone},{\sxonepr:\sAonepr}}{\senvtwo,{\sxtwo:\sAtwo},{\sxtwopr:\sAtwopr}}{{\stonepr}}{{\sttwopr}}{\sBone}{\sBtwo} } {\semtmdynr{\senvone}{\senvtwo}{\stmatchpair{\sxone:\sAone}{\sxonepr:\sAonepr}{\stone}{\stonepr}}{\stmatchpair{\sxtwo:\sAtwo}{\sxtwopr:\sAtwopr}{\sttwo}{\sttwopr}}{\sBone}{\sBtwo}}$ Expanding definitions, we need to show \[ \semantics\senvone\vDash \theemb{\sBone}{\sBtwo}{{\mtmatchpair{\mxone}{\mxonepr}{\semantics\stone}{\semantics\stonepr}}} \sqsubseteq \letemb{\mtmatchpair{\mxtwo}{\mxtwopr}{\semantics\sttwo}{\semantics\sttwopr}} : \semantics\sBtwo \] On the right side, in anticipation of a use of congruence, we push the embeddings in (\cref{lem:emb-pure,lem:pure-subst}): \[ \letemb{\mtmatchpair{\mxtwo}{\mxtwopr}{\semantics\sttwo}{\semantics\sttwopr}} \mathrel{\gtdyn\ltdyn} \mtmatchpairvert{\mxtwo}{\mxtwopr}{\left(\letemb{\semantics\sttwo}\right)}{\letemb{\semantics\sttwopr}} \] On the left side, we perform a commuting conversion, ep-expand the discriminee and do some open $\beta$ reductions to simplify the expression.
\begin{align*} \theemb{\sBone}{\sBtwo}{{\mtmatchpair{\mxone}{\mxonepr}{\semantics\stone}{\semantics\stonepr}}} & \mathrel{\gtdyn\ltdyn} {\mtmatchpair{\mxone}{\mxonepr}{\semantics\stone}{\theemb{\sBone}{\sBtwo}{\semantics\stonepr}}}\\ \by{ep pair}{lem:dyn-der-ep}& \mathrel{\gtdyn\ltdyn} {\mtmatchpairvert{\mxone}{\mxonepr}{\theprj{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}{\theemb{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}{\semantics\stone}}}{\theemb{\sBone}{\sBtwo}{\semantics\stonepr}}}\\ \text{(definition)}&= {\mtmatchpairvert{\mxone}{\mxonepr} {\mtmatchpairvert{\mxtwo}{\mxtwopr} {\theemb{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}{\semantics\stone}} {\mtpair{\theprj{\sAone}{\sAtwo}{\mxtwo}}{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}}}} {\theemb{\sBone}{\sBtwo}{\semantics\stonepr}}}\\ \by{linearity}{lem:evctx-linear}&\mathrel{\gtdyn\ltdyn} {\mtmatchpairvert{\mxone}{\mxonepr} {\mtmatchpairvert{\mxtwo}{\mxtwopr} {\theemb{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}{\semantics\stone}} {\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{ \mtletvert{\mxonepr}{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}} {\mtpair{\mxone}{\mxonepr}}}}} {\theemb{\sBone}{\sBtwo}{\semantics\stonepr}}}\\ \by{comm. conv.}{lem:comm-conv}&\mathrel{\gtdyn\ltdyn} {\mtmatchpairvert{\mxtwo}{\mxtwopr} {\theemb{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}{\semantics\stone}} {\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{ \mtletvert{\mxonepr}{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}} {\mtmatchpairvert{\mxone}{\mxonepr}{\mtpair{\mxone}{\mxonepr}} {\theemb{\sBone}{\sBtwo}{\semantics\stonepr}}}}}}\\ \by{open $\beta$}{lem:open-beta}&\mathrel{\gtdyn\ltdyn} {\mtmatchpairvert{\mxtwo}{\mxtwopr} {\theemb{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}{\semantics\stone}} {\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{ \mtletvert{\mxonepr}{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}} {\theemb{\sBone}{\sBtwo}{\semantics\stonepr}}}}} \end{align*} The final step is by congruence and adjointness on inputs (\cref{lem:cong,lem:adj-inp}): \[ {\mtmatchpairvert{\mxtwo}{\mxtwopr} {\theemb{\spairty{\sAone}{\sAonepr}}{\spairty{\sAtwo}{\sAtwopr}}{\semantics\stone}} {\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{ \mtletvert{\mxonepr}{\theprj{\sAonepr}{\sAtwopr}{\mxtwopr}} {\theemb{\sBone}{\sBtwo}{\semantics\stonepr}}}}} \sqsubseteq \mtmatchpairvert{\mxtwo}{\mxtwopr}{\left(\letemb{\semantics\sttwo}\right)}{\letemb{\semantics\sttwopr}} \] \item $\inferrule {\semtmdynr{{\senvone},{\svarone}:\sAone}{{\senvtwo},{\svartwo}:\sAtwo}{\stone}{\sttwo}{\sBone}{\sBtwo}} {\semtmdynr{\senvone}{\senvtwo}{\stfun{\svarone}{\sAone}{\stone}}{\stfun{\svartwo}{\sAtwo}{\sttwo}}{\sfunty{\sAone}{\sBone}}{\sfunty{\sAtwo}{\sBtwo}}}$.
Expanding definitions, we need to show \[ \semantics\senvone \vDash \mtletvert{\mxin{f}}{\mtfun{\mvarone}{\semantics\sAone}{\semantics\stone}} {\mtfun{\mxtwo}{\mAtwo} {\theemb{\sBone}{\sBtwo}{\mxin{f}\,(\theprj{\sAone}{\sAtwo}{\mxtwo})}}} \sqsubseteq \letemb{\mtfun{\mxtwo}{\mAtwo}{\semantics{\sttwo}}} \] First, we simplify by performing some open $\beta$ reductions on the left and let-$\lambda$ equivalence and a commuting conversion (\cref{lem:open-beta,lem:let-lambda,lem:comm-conv}): \begin{align*} \mtletvert{\mxin{f}}{\mtfun{\mvarone}{\semantics\sAone}{\semantics\stone}} {\mtfun{\mxtwo}{\mAtwo} {\theemb{\sBone}{\sBtwo}{\mxin{f}\,(\theprj{\sAone}{\sAtwo}{\mxtwo})}}} &\mathrel{\gtdyn\ltdyn} {\mtfun{\mxtwo}{\mAtwo} {\theemb{\sBone}{\sBtwo}{\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{\semantics \stone}}}}\\ &\mathrel{\gtdyn\ltdyn} {\mtfun{\mxtwo}{\mAtwo}{\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{\theemb{\sBone}{\sBtwo}{\semantics \stone}}}} \end{align*} and on the right, we move the embedding into the body, which is justified because embeddings are essentially values (\cref{lem:emb-pure,lem:pure-subst}): \[ \letemb{\mtfun{\mxtwo}{\mAtwo}{\semantics{\sttwo}}} \mathrel{\gtdyn\ltdyn} \mtfun{\mxtwo}{\mAtwo}{\letemb{\semantics{\sttwo}}} \] The final step is justified by congruence \cref{lem:cong} and adjointness on inputs \cref{lem:adj-inp} and the premise: \[ {\mtfun{\mxtwo}{\mAtwo}{\mtletvert{\mxone}{\theprj{\sAone}{\sAtwo}{\mxtwo}}{\theemb{\sBone}{\sBtwo}{\semantics \stone}}}} \sqsubseteq \mtfun{\mxtwo}{\mAtwo}{\letemb{\semantics{\sttwo}}} \] \item $\inferrule{\semtmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sfunty{\sAone}{\sBone}}{\sfunty{\sAtwo}{\sBtwo}}\and \semtmdynr{\senvone}{\senvtwo}{\ssone}{\sstwo}{{\sAone}}{{\sAtwo}}} {\semtmdynr{\senvone}{\senvtwo}{\stapp{\stone}{\ssone}}{\stapp{\sttwo}{\sstwo}}{\sBone}{\sBtwo}}$. Expanding definitions, we need to show \[ \semantics\senvone \vDash \theemb{\sBone}{\sBtwo}{\semantics{\stone}\,\semantics{\ssone}} \sqsubseteq \letemb{\semantics{\sttwo}\,\semantics{\sstwo}} : \semantics{\sBtwo} \] First, we duplicate the embedding on the right hand side, justified by purity of embeddings, to set up a use of congruence later: \[ \letemb{\semantics{\sttwo}\,\semantics{\sstwo}} \mathrel{\gtdyn\ltdyn} \left(\letemb{\semantics{\sttwo}}\right)\left(\letemb{\semantics{\sstwo}}\right) \] Next, we use linearity of evaluation contexts \cref{lem:evctx-linear} so that we can do reductions at the application site without worrying about evaluation order: \[ \theemb{\sBone}{\sBtwo}{\semantics{\stone}\,\semantics{\ssone}} \mathrel{\gtdyn\ltdyn} \mtletvert{\mxin{f}}{\semantics{\stone}}{ \mtletvert{\mxin{a}}{\semantics{\ssone}}{ \theemb{\sBone}{\sBtwo}{\mxin{f}\,\mxin{a}} }} \] Next, we ep-expand $\mxin{f}$ (\cref{lem:dyn-der-ep}) and perform some $\beta$ reductions, use the ep property and then reverse the use of linearity.
\begin{align*} \mtletvert{\mxin{f}}{\semantics{\stone}}{ \mtletvert{\mxin{a}}{\semantics{\ssone}}{ \theemb{\sBone}{\sBtwo}{\mxin{f}\,\mxin{a}} }} & \mathrel{\gtdyn\ltdyn} \mtletvert{\mxin{f}}{\semantics{\stone}}{ \mtletvert{\mxin{a}}{\semantics{\ssone}}{ \theemb{\sBone}{\sBtwo} {\theprj{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}{\theemb{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}{\mxin{f}}}\,\mxin{a}}}}\\ \by{open $\beta$}{lem:open-beta}& \mathrel{\gtdyn\ltdyn} \mtletvert{\mxin{f}}{\semantics{\stone}}{ \mtletvert{\mxin{a}}{\semantics{\ssone}}{ \theemb{\sBone}{\sBtwo}{\theprj{\sBone}{\sBtwo}{\left({\theemb{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}{\mxin{f}}}\right)\,\left(\theemb{\sAone}{\sAtwo}{\mxin{a}}\right)}} }}\\ \by{ep pair}{lem:dyn-der-ep}& \sqsubseteq \mtletvert{\mxin{f}}{\semantics{\stone}}{ \mtletvert{\mxin{a}}{\semantics{\ssone}}{ \left({\theemb{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}{\mxin{f}}}\right)\,\left(\theemb{\sAone}{\sAtwo}{\mxin{a}}\right) }}\\ \by{linearity}{lem:evctx-linear}& \mathrel{\gtdyn\ltdyn} \left({\theemb{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}{\semantics \stone}}\right)\,\left(\theemb{\sAone}{\sAtwo}{\semantics \ssone}\right) \end{align*} With the final step being congruence \cref{lem:cong}: \[ \left({\theemb{\sAone\mathrel{\to_{s}}\sBone}{\sAtwo\mathrel{\to_{s}}\sBtwo}{\semantics \stone}}\right)\,\left(\theemb{\sAone}{\sAtwo}{\semantics \ssone}\right) \sqsubseteq \left(\letemb{\semantics{\sttwo}}\right)\left(\letemb{\semantics{\sstwo}}\right) \] \end{enumerate} \end{proof} \fi \section{Related Work and Discussion} \label{section:related} Our analysis of graduality as observational approximation and dynamism as ep pairs builds on the axiomatic and denotational semantics of graduality for a call-by-name language presented in \cite{newlicata2018}. The semantics there gives axioms of type and term dynamism that imply that upcasts and downcasts are embedding-projection pairs. Our analysis here is complementary: we present the graduality theorem as a concrete property of a gradual language defined with an operational semantics. Our graduality logical relation should serve as a concrete \emph{model} of a call-by-value version of gradual type theory, similar to the call-by-name denotational models presented there. Furthermore, we show here how this interpretation of graduality maps back to a standard cast calculus presentation of gradual typing. \paragraph{Graduality vs Gradual Guarantee} \label{sec:rel:gradual-guarantee} The notion of graduality we present here is based on the \emph{dynamic gradual guarantee} by Siek, Vitousek, Cimini, and Boyland \cite{refined, boyland14}. The dynamic gradual guarantee says that syntactic term dynamism is an \emph{invariant} of the operational semantics up to error on the less dynamic side. More precisely, if $\cdot \vdash t_1 \sqsubseteq t_2 : A_1 \sqsubseteq A_2$ then either $t_1 \stepstar \mho$ or both $t_1,t_2$ diverge or $t_1 \stepstar v_1$ and $t_2 \stepstar v_2$ with $v_1 \sqsubseteq v_2$. Observe that when restricting $A_1 = A_2 = 1$, this is precisely the relation on closed programs out of which we build our definition of semantic term dynamism. We view their formulation of the dynamic gradual guarantee as a syntactic \emph{proof technique} for proving graduality of the system.
Graduality should be easier to formulate for different presentations of gradual typing because it does not require a second syntactic notion of term dynamism for the implementation language. In the proofs of the gradual guarantee in \citet{refined}, they have to develop new rules for term dynamism for their cast calculus, which they do not attempt to justify at an intuitive level. Additionally, they have to change their translation from the gradual surface language to the cast calculus, because the traditional translation did not preserve the rigid syntactic formulation of term dynamism. In more detail, when a dynamically typed term $t : \mathord{?}$ was applied to a term $s : A$, in their original formulation this was translated as \[ \semantics{t~s} = (\langle(A \to \mathord{?}) \Leftarrow \mathord{?}\rangle\semantics{t})\:\semantics{s} \] but if the term in function position had a function type $t' : \mathord{?} \to \mathord{?}$, it was translated as \[ \semantics{t'\,s} = \semantics{t'}\:(\langle \mathord{?} \Leftarrow A\rangle\semantics{s})\] But if $t' \sqsubseteq t$, we would not have $\semantics{t' s} \sqsubseteq \semantics{t s}$ because the function position on the left has type $\mathord{?} \to \mathord{?}$, which is \emph{more dynamic} than the one on the right, which has type $A \to \mathord{?}$. While this change was perfectly reasonable for their syntactic proof method, we can see that from the \emph{semantic} point of view of graduality there was nothing wrong with their original translation and it could have been validated using a logical relation. Another significant difference between our work and theirs is that we identify the central role of embedding-projection pairs in graduality, and take advantage of it in our proof. As mentioned above, they add rules to term dynamism for the cast calculus without justification. These rules are the generalization of our \textsc{Cast-Right} and \textsc{Cast-Left} \emph{without} the restriction that the casts be upcasts or downcasts: \begin{mathpar} \inferrule*[right=Cast-Right'] {\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo} \and \sAone \sqsubseteq \sBtwo} {\tmdynr{\senvone}{\senvtwo}{\stone}{{\obcast\sAtwo\sBtwo}{\sttwo}}{\sAone}{\sBtwo}}\quad \inferrule*[right=Cast-Left'] {\tmdynr{\senvone}{\senvtwo}{\stone}{\sttwo}{\sAone}{\sAtwo} \and \sBone \sqsubseteq \sAtwo} {\tmdynr{\senvone}{\senvtwo}{{\obcast{\sAone}{\sBone}}{\stone}}{\sttwo}{\sBone}{\sAtwo}} \end{mathpar} These are valid rules in our system, but by identifying the subset of upcasts and downcasts, we derive the validity of these rules from earlier, more intuitive rules: decomposition, congruence, and the ep-pair properties. Furthermore, while we do not take these rules as primitive, it is notable that these two rules imply that upcasts and downcasts are adjoint---i.e., if $\sAone \sqsubseteq \sAtwo$, the following are provable for $\st : \sAone$ and $\ss : \sAtwo$: \[ \st \sqsubseteq \obcast{\sAtwo}{\sAone}{\obcast\sAone\sAtwo{\st}} \qquad \obcast{\sAone}{\sAtwo}{\obcast\sAtwo\sAone{\ss}} \sqsubseteq \ss \] \citet{refined} also present a theorem called the \emph{static} gradual guarantee that pertains to the type checking of gradually typed programs. The static gradual guarantee says that if a term $\Gamma \vdash t_1 : A_1$ type checks, and $t_2$ is syntactically more dynamic, then $\Gamma \vdash t_2 : A_2$ with a more dynamic type, i.e., $A_1 \sqsubseteq A_2$. We view this as a \emph{corollary} to graduality.
If type checking is a compositional procedure that seeks to rule out dynamic type errors, then if $t_1$ is syntactically less dynamic than $t_2$, then it is also semantically less dynamic, meaning every type error in $t_2$'s behavior was already present in $t_1$, so it should also type check. \paragraph{Types as EP Pairs} \label{section:related:retraction} The interpretation of types as retracts of a single domain originated in \citet{scott71} and is a common tool in denotational semantics, especially in the presence of a convenient \emph{universal} domain. A retraction is a pair of morphisms $s : A \to B$, $r : B \to A$ that satisfy the retraction property $r \circ s = \text{id}_A$, but not necessarily the projection property $s \circ r \mathrel{\sqsubseteq_{\text{err}}} \text{id}_B$. Thus ep pair semantics can be seen as a more refined retraction semantics. Retractions have been used to study interaction between typed and untyped languages, e.g., see \citet{benton05:embedded, favonia17}. Embedding-projection pairs are used extensively in domain theory as a technical device for solving non-well-founded domain equations, such as the semantics of a dynamic type. In this paper, our error-approximation ep pairs do not play this role, and instead the retraction and projection properties are desirable in their own right for their intuitive meaning for type checking. Many of the properties of our embedding-projection pairs are anticipated in \citet{henglein94:dynamic-typing} and \citet{thatte90}. \citet{henglein94:dynamic-typing} defines a language with a notion of \emph{coercion} $A \rightsquigarrow B$ that corresponds to general casts, with primitives of tagging $tc! : tc(\mathord{?},\ldots) \rightsquigarrow \mathord{?}$ and untagging $tc? : \mathord{?} \rightsquigarrow tc(\mathord{?},\ldots)$ for every type constructor ``$tc$''. Crucially, Henglein notes that $tc!;tc?$ is the identity modulo efficiency and that $tc?;tc!$ errors more than the identity. Furthermore, they define classes of ``positive'' and ``negative'' coercions that correspond to embeddings and projections, respectively, and a ``subtyping'' relation that is the same as type precision. They then prove several theorems analogous to our results: \begin{enumerate} \item (Retraction) For any pair of positive coercion $p : A \rightsquigarrow B$, and negative coercion $n : B \rightsquigarrow A$, they show that $p;n$ is equal to the identity in their equational theory. \item (Almost projection) Dually, they show that $n;p$ is equal to the identity \emph{assuming} that $tc?;tc!$ is equal to the identity for every type constructor. \item They show every coercion factors as a positive cast to $\mathord{?}$ followed by a negative cast to $\mathord{?}$. \item They show that $A \leq B$ if and only if there exists a positive coercion $A \rightsquigarrow B$ and a negative coercion $B \rightsquigarrow A$. \end{enumerate} They also prove factorization results that are similar to our factorization definition of semantic type precision, but it is unclear if their theorem is stronger or weaker than ours. One major difference is that their work is based on an equational theory of casts, whereas ours is based on notions of observational equivalence and approximation of a standard call-by-value language. Furthermore, in defining our notion of observational error approximation, we provide a more refined projection property, justifying their use of the term ``safer'' to compare $p;e$ and the identity. 
The system presented in \citet{thatte90}, called ``quasi-static typing'', is a precursor to gradual typing that inserts type annotations into dynamically typed programs to make type checking explicit. There they prove a desirable soundness theorem that says their type insertion algorithm produces an explicitly coercing term that is minimal in the sense that it errors no more than the original dynamic term. They prove this minimality theorem with respect to a partial order $\sqsupseteq$ defined as a logical relation over a domain-theoretic semantics that (for the types they defined) is analogous to our error ordering for the operational semantics. However, they give neither our operational formulation of the ordering as contextual approximation, linked to the denotational definition by an adequacy result, nor a proof that the casts form embedding-projection pairs with respect to this ordering. Finally, we note that neither of these papers \cite{henglein94:dynamic-typing,thatte90} extends the analysis to anything like graduality. \paragraph{Semantics of Casts} \label{section:related:casts} Superficially similar to the embedding-projection pair semantics are the \emph{threesome casts} of \citet{siek-wadler10}. A threesome cast factorizes an arbitrary cast $A \Rightarrow B$ through a third type $C$ as a \emph{downcast} $A \Rightarrow C$ followed by an \emph{upcast} $C \Rightarrow B$. Threesome casts can be used to implement gradual typing in a space-efficient manner: the third type $C$ is used to collapse a sequence of arbitrarily many casts into just the two. In the general case, the threesome cast $A \Rightarrow C \Rightarrow B$ is \emph{stronger} (fails more) than the direct cast $A \Rightarrow B$; this is the point of threesome casts, as the middle type faithfully represents a sequence of casts in minimal space. EP-pair semantics instead factorizes a cast $A \Rightarrow B$ into an \emph{upcast} $A \Rightarrow \mathord{?}$ followed by a \emph{downcast} $\mathord{?} \Rightarrow B$, a factorization already utilized in \cite{henglein94:dynamic-typing}, and which we showed is \emph{always} equivalent to the direct cast $A \Rightarrow B$. We view the benefits of the techniques as orthogonal: the up-down factorization helps to prove graduality, whereas the down-up factorization helps implementation. The fact that both techniques reduce reasoning about arbitrary casts to just upcasts and downcasts supports the idea that upcasts and downcasts are a fundamental aspect of gradual typing. Recently, work on \emph{dependent interoperability} \cite{dagand16, dagand18} has identified Galois connections as a semantic formulation for casting between more and less precise types in a non-gradual dependently typed language, and conjectures that this should relate to type dynamism. We confirm their conjecture by showing that the casts in gradual typing satisfy the slightly stronger property of being embedding-projection pairs, and we have used it to explain the cast semantics of gradual typing and graduality. Furthermore, our analysis of the precision rules as compositional constructions on ep pairs is directly analogous to their library, which implements ``connections'' between, for instance, function types given connections between the domains and codomains, using Coq's typeclass mechanism.
\paragraph{Pairs of Projections and Blame} \label{section:related:projections} One of the main inspirations for this work is the analysis of contracts in \citet{findler-blume06}. They decompose contracts in untyped languages as a pair of ``projections'', i.e., functions $c : \mathord{?} \to \mathord{?}$ satisfying $c \errordof{\mathord{?}\to\mathord{?}} \text{id}$. However, they do not provide a rigorous definition of this ordering, or the means to prove it for complex programs, as we have. There is a close relationship between such projections and ep pairs (an instance of the relationship between adjunctions and (co)monads): for any ep pair $e,p : A \mathrel{\triangleleft} B$, the composite $e\circ p : B \to B$ is a projection. However, we think this relationship is a red herring: instead, we think such a pair of projections is better understood as an ep pair itself. The intuition they present is that one of the projections restricts the behavior of the ``positive'' party (the term) and the other restricts the behavior of the ``negative'' party (the continuation). EP pairs are similar: the projection restricts the positive party by directly checking, and the embedding restricts the negative party in the function case by calling a projection on any value received from its continuation. However, in our current formulation, it does not even make sense to ask whether each component of our embedding-projection pairs is a projection, because the definition of a projection assumes that the domain and codomain are the same (to define the composite $c \circ c$). We conjecture that this can be made sensible by using a PER semantics where types are relations on untyped values, so that the embedding and projection have ``underlying'' untyped terms representing them, and those are projections. Their analysis of blame was adapted to gradual typing in \citet{wadler-findler09} and plays a complementary role to our analysis: they use the dynamism relation to help prove the blame soundness theorem, whereas we use it to prove graduality. The fact that essentially the same relation serves both purposes suggests there is \mbox{a deeper connection between blame and graduality than is currently understood.} \paragraph{Gradualization} \label{section:related:gradualization} The Gradualizer \cite{gradualizer16,gradualizer17} and Abstracting Gradual Typing (AGT) \cite{AGT} both seek to make language design for gradually typed languages more systematic. In doing so they make proving graduality far easier than our proof technique possibly could: it holds by construction. Furthermore, these systems also provide a surface-level syntax for gradual typing and an explanation for gradual type checking, while we do not address these at all. However, the downside of their approaches is that they require a rigid adherence to a predefined language framework. While our gradual cast calculus as presented fits into this framework, many gradually typed languages do not. For instance, Typed Racket, the first gradually typed language ever implemented \cite{tobin-hochstadt08}, is not given an operational semantics in the style of a cast calculus, but rather is given a semantics \emph{by translation} to an untyped language using contracts. We could prove the graduality of such a system by adapting our logical relation to an untyped setting. We hope in the future to explore the connections between the above frameworks and our analysis of dynamism as embedding-projection pairs.
We conjecture that both Gradualizer and AGT by construction produce upcasts and downcasts that satisfy the ep-pair properties. The AGT approach in particular has some similarities that stand out: their formulation of type dynamism is based on an embedding-projection pair between static types and sets of gradual types. However, we are not sure if this is a coincidence or reflects a deeper connection to our approach. \section{Conclusion} \label{sec:concl} Graduality is a key property for gradually typed languages, as it validates the programmer's intuition that adding precise types only results in stricter type checking. Graduality is challenging to prove. Moreover, it rests upon the language's definition of type dynamism, but there has been little guidance on defining type dynamism itself, other than that graduality must hold. We have given a semantics for type dynamism: $\sA \sqsubseteq \sB$ should hold when the casts between $\sA$ and $\sB$ form an embedding-projection pair. This allows for natural proofs of graduality using a logical relation for observational error approximation. Looking to the future, we would like to make use of our semantic formulation of type dynamism based on ep pairs to design and analyze gradual languages with advanced features such as parametric polymorphism, effect tracking, and mutable state. For parametric polymorphism in particular, we would like to investigate whether our approach justifies any of the type-dynamism definitions previously proposed \cite{ahmed17,igarashipoly17}, and the possibility of proving both graduality and parametricity theorems with a single logical relation. \begin{acks} We gratefully acknowledge the valuable feedback provided by Ben Greenman and the anonymous reviewers. Part of this work was done at Inria Paris while Amal Ahmed was a Visiting Professor. This material is based upon work supported by the National Science Foundation under grant CCF-1453796, and the European Research Council under ERC Starting Grant SECOMP (715753). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our funding agencies. \end{acks}
\section{Introduction} Conformal mapping \cite{ZN} is a coordinate transformation technique that has found numerous practical applications in science and engineering. Most of these applications are for two-dimensional problems but the scope of the technique is not limited to two dimensions. Liouville's theorem \cite{DEB} in fact shows that higher-dimensional conformal maps are possible but must be composed of translations, similarities, orthogonal transformations and inversions. In two recent papers, an isometric conformal transformation has been used to simplify the Schr\"{o}dinger equation for the three-dimensional harmonic oscillator \cite{RJD1} and the hydrogen atom \cite{RJD2}. The purpose of this paper is to show that the same transformation technique can be applied in a similar way to central potential problems formulated using the Klein-Gordon equation. It is convenient to consider a single particle of energy $E$ and a spatial displacement $x_i$ $(i=1,2,3)$ from a potential source. In the rest frame of this system, both the particle and the source may be assumed to exist at the same time $t$. The goal of section 2 of this paper is to introduce an isometric conformal mapping of the form $z_i= x_i$, $s=t-\imath \hbar \mathcal{F}(|x_i|,E)$, where $\mathcal{F}$ is a real function, $\hbar$ is Planck's constant divided by 2$\pi$ and $\imath=\sqrt{-1}$. It is clear from inspection that this passive transformation does no more than introduce an imaginary shift along the time-axis of the two related coordinate systems. It is convenient to write the complex conjugate form of the $(z_i,s)$-coordinates as $(z_i^*,s^*)$ even though $z_i^*=z_i$, since $z_i^*$ and $z_i$ still belong to different coordinate systems. This distinction is most evident in the computation of the partial derivatives $\partial / \partial z_i$ and $\partial / \partial z_i^*$ from the chain rule of partial differentiation, as these evaluate differently in their conjugate coordinate systems. One further topic to be introduced in section 2 is the set of Cauchy-Riemann equations needed to determine whether a function transformed into the $(z_i,s)$-coordinate system has well-defined partial derivatives. The Klein-Gordon equation for the harmonic oscillator is presented in section 3 alongside a complete set of eigensolutions. It is shown that both these results simplify in terms of $(z_i,s)$-coordinates and that the eigensolutions are holomorphic. It is of interest that the harmonic oscillator potential is eliminated from the Klein-Gordon equation in $(z_i,s)$-coordinates. It is also convenient that the lowering and raising operators for the harmonic oscillator are proportional to the partial derivatives $\partial / \partial z_i$ and $\partial / \partial z_i^*$, respectively. In section 4, the Klein-Gordon equation for a charged particle in a Coulomb field is mapped into $(z_i,s)$-coordinates. As in the harmonic oscillator case, any reference to the potential field is eliminated from the mathematical description of the problem through the mapping. \section{Conformal Mapping} The task ahead is to present an isometric conformal transformation relating a real $(x_i,t)$-coordinate system and a complex $(z_i,s)$-coordinate system. This mapping is intended for application to a particle of mass $m$ and total energy $E$ at a radial separation $r$ from a point source.
It is convenient to express it in the form \begin{equation} \label{eq: conftrans1} z_{i} = x_{i}, \quad s = t - \imath \frac{\hbar}{E} \left[ a\ln(r) + \left(\frac{r}{b} \right)^\lambda \right] \end{equation} where $\lambda=1$ for the hydrogen atom and $\lambda=2$ for the harmonic oscillator. The form of the dimensionless quantity $a$ and the scale length $b$ must be determined for each specific problem. It is notable that the logarithmic term is absent in previous papers that treat non-relativistic problems; it is introduced here for the relativistic treatment of a charged particle in a Coulomb field. In the $(x_i,t)$-coordinate system, the particle has a spatial displacement $x_i$ from the potential source but shares the same world time $t$ as the source. Similarly, in the $(z_i,s)$-coordinate system, the particle has a displacement $z_i = x_i$ from the source but shares the same complex time $s$. The isometric nature of the transformation therefore follows from the result $|z_i|=|x_i|$. It is also clear that the complex time $s$ is translated through an imaginary displacement from the real time $t$. In the application of complex coordinates to express physical problems, there are generally both a complex and a complex-conjugate coordinate representation of each individual problem. In the present case, the complex conjugate of eq. (\ref{eq: conftrans1}) is \begin{equation} \label{eq: conftrans2} z_{i}^* = x_{i}, \quad s^* = t + \imath \frac{\hbar}{E} \left[ a\ln(r) + \left(\frac{r}{b} \right)^\lambda \right] \end{equation} Naturally, there must also be inverse transformations mapping the complex and complex conjugate representations of the problem back into a single physical coordinate system. The inverses of the transformations (\ref{eq: conftrans1}) and (\ref{eq: conftrans2}) are readily shown to be \begin{equation} \label{eq: inv_ict1} x_{i} = z_{i}, \quad t = s + \imath \frac{\hbar}{E} \left[ a\ln(r_z) + \left(\frac{r_z}{b} \right)^\lambda \right] \end{equation} \begin{equation} \label{eq: inv_ict2} x_{i} = z_{i}^*, \quad t = s^* - \imath \frac{\hbar}{E} \left[ a\ln(r_z) + \left(\frac{r_z}{b} \right)^\lambda \right] \end{equation} ($r_z =|z_i|$) respectively. It is now interesting to investigate the properties of derivatives with respect to the complex 4-position coordinates.
In particular, the chain rule of partial differentiation gives \begin{equation} \label{eq: complexDiff1} \frac{\partial}{\partial s} = \frac{\partial t}{\partial s} \frac{\partial}{\partial t} + \frac{\partial x_{i}}{\partial s} \frac{\partial}{\partial x_{i}} = \frac{\partial}{\partial t} \end{equation} \begin{equation} \label{eq: complexDiff2} \frac{\partial}{\partial s^*} = \frac{\partial t}{\partial s^*} \frac{\partial}{\partial t} + \frac{\partial x_{i}}{\partial s^*} \frac{\partial}{\partial x_{i}} = \frac{\partial}{\partial t} \end{equation} \begin{equation} \label{eq: complexDiff3} \frac{\partial}{\partial z_i} = \frac{\partial x_i}{\partial z_i} \frac{\partial}{\partial x_i} + \frac{\partial t}{\partial z_i} \frac{\partial}{\partial t} = \frac{\partial}{\partial x_i} + \imath \frac{x_i}{r^2} \left[a + \lambda \left( \frac{r}{b} \right)^{\lambda} \right] \frac{\hbar}{E} \frac{\partial}{\partial t} \end{equation} \begin{equation} \label{eq: complexDiff4} \frac{\partial}{\partial z_i^*} = \frac{\partial x_i}{\partial z_i^*} \frac{\partial}{\partial x_i} + \frac{\partial t}{\partial z_i^*} \frac{\partial}{\partial t} = \frac{\partial}{\partial x_i} - \imath \frac{x_i}{r^2} \left[a + \lambda \left( \frac{r}{b} \right)^{\lambda} \right] \frac{\hbar}{E} \frac{\partial}{\partial t} \end{equation} Note, eqs. (\ref{eq: complexDiff1}) and (\ref{eq: complexDiff3}) have been obtained using eq. (\ref{eq: inv_ict1}); eqs. (\ref{eq: complexDiff2}) and (\ref{eq: complexDiff4}) are based on eq. (\ref{eq: inv_ict2}). It has also been assumed in deriving eqs. (\ref{eq: complexDiff1}) through (\ref{eq: complexDiff4}) that \begin{equation} \label{eq: complexDiff5} \frac{\partial z_i}{\partial s}=\frac{\partial s}{\partial z_i} = \frac{\partial z_i^*}{\partial s^*}=\frac{\partial s^*}{\partial z_i^*}=0 \end{equation} indicating that the coordinates $z_i$ and $s$ are independent of each other, as are the complex conjugate coordinates $z_i^*$ and $s^*$. This assumption is readily validated using eqs. (\ref{eq: complexDiff1}) through (\ref{eq: complexDiff4}) to directly evaluate each of the derivatives in eq. (\ref{eq: complexDiff5}) in $(x_i,t)$-coordinates. In further consideration of eqs. (\ref{eq: conftrans1}), it is convenient to write $s=t+\imath\tau$ where \begin{equation} \tau = -\frac{\hbar}{E} \left[ a\ln(r) + \left(\frac{r}{b} \right)^\lambda \right] \end{equation} The requirement for a continuously differentiable function $f(s)=g(t,\tau)+\imath h(t,\tau)$ to be holomorphic is then that the real functions $g$ and $h$ satisfy the set of Cauchy-Riemann equations \begin{equation} \frac{\partial g}{\partial t}=\frac{\partial h}{\partial \tau} \quad \frac{\partial g}{\partial \tau}=-\frac{\partial h}{\partial t} \end{equation} which in turn imply \begin{equation} \label{eq: creq} \frac{\partial^2 f}{\partial t^2} + \frac{\partial^2 f}{\partial \tau^2} = 0 \end{equation} It is thus concluded that a function $\psi(x_i,t)$ will also have an equivalent holomorphic form $\theta(z_i)f(s)$ in the complex $(z_i, s)$-coordinate system provided it is separable and $f$ satisfies eq. (\ref{eq: creq}). Here, it is understood that the domain of the Cauchy-Riemann equations in this problem is the complex plane containing $s$. The Cauchy-Riemann equations put no restriction at all on the form of the function $\theta(z_i)$, since $z_i$ and $s$ are independent coordinates and $z_i$ belongs to a real three-dimensional space.
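The coefficient of $\partial/\partial t$ in eqs. (\ref{eq: complexDiff3}) and (\ref{eq: complexDiff4}) can be verified symbolically. The following is a minimal check (our sketch; the symbol names are ours) that differentiating the inverse transformation (\ref{eq: inv_ict1}) with respect to $z_1$ reproduces the stated factor:

\begin{verbatim}
import sympy as sp

z1, z2, z3, s = sp.symbols('z1 z2 z3 s', real=True)
hbar, E, a, b, lam = sp.symbols('hbar E a b lambda', positive=True)

r = sp.sqrt(z1**2 + z2**2 + z3**2)
t = s + sp.I*(hbar/E)*(a*sp.log(r) + (r/b)**lam)   # eq. (inv_ict1)

dt_dz1 = sp.diff(t, z1)
expected = sp.I*(hbar/E)*(z1/r**2)*(a + lam*(r/b)**lam)
assert sp.simplify(dt_dz1 - expected) == 0         # should reduce to zero
\end{verbatim}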
\section{The Harmonic Oscillator} The Klein-Gordon equation determining the wavefunction $\psi(x_i, t)$ for a single particle confined in a 3-dimensional harmonic oscillator potential can be expressed in the form \begin{equation} \label{eq: KGHO1} -\hbar^2c^2 \frac{\partial^2 \psi}{\partial x_i^2} + m_0^2c^4 \psi + \Omega^2 x^2\psi = E^2 \psi \end{equation} where $\Omega$ is the spring constant of the oscillator and \begin{equation} \label{eq: KGHO2} E\psi = \imath \hbar \frac{\partial \psi}{\partial t} \end{equation} gives the total energy of the particle. The solution \cite{DFL} to eqs. (\ref{eq: KGHO1}) and (\ref{eq: KGHO2}) takes the separable form \begin{eqnarray} \label{eq: hopsi1} \psi_{l_1 l_2 l_3}(x_i,t) = \phi_{l_1}(x_1)\phi_{l_2}(x_2)\phi_{l_3}(x_3)\exp(-\imath Et / \hbar) \end{eqnarray} where \begin{eqnarray} \label{eq: phi1} \phi_l(x_i) = k_l H_{l}(\xi_i) \exp \left(-\frac{\xi_i^2}{2} \right) \end{eqnarray} $\xi_i=\sqrt{\frac{\Omega}{\hbar c}}x_i$, $H_{l}$ are Hermite polynomials and $l_1,l_2,l_3$ are non-negative integers. Inserting eq. (\ref{eq: hopsi1}) into eq. (\ref{eq: KGHO1}) gives the energy spectrum \begin{equation} E_n = \sqrt{2\hbar c \Omega \left(\frac{3}{2} + n \right)+ m_0^2c^4 } \end{equation} where $n=l_1 + l_2 + l_3$. In developing the connection between complex $(z_i,s)$-coordinates and the quantum harmonic oscillator, the first step is to set $\lambda=2$ and \begin{equation} b=\sqrt{\frac{2\hbar c}{\Omega}} \end{equation} in eq. (\ref{eq: conftrans1}). In this case, eqs. (\ref{eq: complexDiff3}), (\ref{eq: complexDiff4}) and (\ref{eq: KGHO2}) can be combined to give \begin{equation} \label{eq: complexDiff6} \frac{\partial}{\partial z_i} = \frac{\partial}{\partial x_i} + \frac{\Omega}{\hbar c} x_i \end{equation} \begin{equation} \label{eq: complexDiff7} \frac{\partial}{\partial z_i^*} = \frac{\partial}{\partial x_i} - \frac{\Omega}{\hbar c} x_i \end{equation} These results lead to the operator relationship \begin{equation}\label{eq: qprop1} -\frac{\partial^2}{\partial z_i^* \partial z_i} + \frac{3 \Omega}{\hbar c} = -\frac{\partial^2}{\partial x_i^2} + \frac{\Omega^2 x^2}{\hbar^2 c^2} \end{equation} enabling the Klein-Gordon equation (\ref{eq: KGHO1}) for the harmonic oscillator to be expressed in the concise form \begin{equation}\label{eq: complexKGHO1} -\hbar^2 c^2 \frac{\partial^2 \psi}{\partial z_i^* \partial z_i} + m_0^2c^4 \psi = \left( E^2 - 3\hbar c \Omega \right) \psi \end{equation} It is also readily shown using eqs. (\ref{eq: complexDiff1}) and (\ref{eq: KGHO2}) that \begin{equation} \label{eq: complexKGHO2} E\psi = \imath \hbar \frac{\partial \psi}{\partial s} \end{equation} Eqs. (\ref{eq: complexKGHO1}) and (\ref{eq: complexKGHO2}) together, therefore, constitute a complete description of the quantum harmonic oscillator in terms of $(z_i,s)$-coordinates. On comparing eqs. (\ref{eq: KGHO1}) and (\ref{eq: complexKGHO1}), it is clear that the harmonic oscillator potential term in the original Klein-Gordon equation is absent in the complex representation. The oscillator function (\ref{eq: hopsi1}) is readily transformed into $(z_i,s)$-coordinates using eqs. (\ref{eq: conftrans1}) to give \begin{eqnarray} \label{eq: hopsi3} \psi(z_i,s) = \theta_{l_1}(z_1)\theta_{l_2}(z_2)\theta_{l_3}(z_3)f(s) \end{eqnarray} where \begin{eqnarray} \label{eq: hopsi4} \theta_l(z_i) = k_l H_{l}(\zeta_i), \quad f(s) = \exp(-\imath Es / \hbar) \end{eqnarray} and $\zeta_i = \sqrt{\frac{\Omega}{\hbar c}}z_i$.
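The operator rearrangement behind eq. (\ref{eq: qprop1}) can be checked per coordinate. The following is a small symbolic check (our sketch), with $w$ standing for $\Omega/\hbar c$; summing the one-dimensional identity over the three coordinates yields the $3\Omega/\hbar c$ term:

\begin{verbatim}
import sympy as sp

x, w = sp.symbols('x w', positive=True)   # w stands for Omega/(hbar*c)
f = sp.Function('f')(x)

Dz  = lambda g: sp.diff(g, x) + w*x*g     # 1D analogue of eq. (complexDiff6)
Dzs = lambda g: sp.diff(g, x) - w*x*g     # 1D analogue of eq. (complexDiff7)

# (d/dx - w x)(d/dx + w x) f = f'' + w f - w^2 x^2 f
res = Dzs(Dz(f)) - (sp.diff(f, x, 2) + w*f - w**2*x**2*f)
assert sp.simplify(res) == 0
\end{verbatim}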
It is notable that eqs. (\ref{eq: hopsi1}) and (\ref{eq: hopsi3}) are similar except that eq. (\ref{eq: hopsi3}) does not contain a Gaussian term. It is also notable that $f(s)$ is a continuously differentiable solution of eq. (\ref{eq: creq}), thus demonstrating that the oscillator function $\psi(z_i, s)$ is holomorphic. It is instructive that the partial derivatives in $z_i$-space can be scaled to give the lowering $\hat{a}_i$ and raising $\hat{a}_i^\dag$ operators \begin{equation} \label{ladder_nr} \hat{a}_i = \sqrt{\frac{\hbar c}{2\Omega}}\frac{\partial}{\partial z_i}, \quad \hat{a}_i^\dag = -\sqrt{\frac{\hbar c}{2\Omega}}\frac{\partial}{\partial z_i^*} \end{equation} Inserting these results into eq. (\ref{eq: complexKGHO1}) and using the energy spectrum gives the expression \begin{equation} \hat{a}_i^\dag \hat{a}_i \psi = n \psi \end{equation} It is clear therefore that the conformal transformation of the Klein-Gordon eq. (\ref{eq: KGHO1}) has led to the ladder operator representation of the harmonic oscillator. For an oscillator in the ground state $\psi_{000}$, eq. (\ref{eq: complexKGHO1}) simplifies to \begin{equation}\label{eq: ground_state_ho} \frac{\partial^2 \psi_{000}}{\partial z_i^* \partial z_i} = 0 \end{equation} giving the solution $\psi_{000} = \exp(-\imath E_0s / \hbar)$. This result may also be cast into the familiar form $\hat{a}_i \psi_{000} = 0$. In consideration of the foregoing arguments, it is of interest that eq. (\ref{eq: conftrans1}) reduces to the form $z_i=x_i$, $s=t$ on setting $\Omega=0$. It is also apparent that eq. (\ref{eq: complexKGHO1}) reduces to the free-field form of the Klein-Gordon equation under these same conditions. The converse of this argument is that harmonic interactions may be introduced into the free-field Klein-Gordon equation through the replacement $t \rightarrow t - \imath \frac{\Omega}{2Ec}x^2$, which is exactly equivalent to the more usual approach of adding the oscillator potential to the Hamiltonian for the oscillator. \section{The Coulomb Potential} The Klein-Gordon equation determining the wavefunction $\psi(x_i, t)$ for an electron of charge $-e$ bound in a Coulomb field originating from a fixed point charge $e$ can be expressed in the form \begin{equation} \label{eq: KGCP1} -\hbar^2c^2 \nabla^2 \psi + m_0^2c^4\psi = \left( E + c \hbar \frac{\alpha}{r} \right)^2 \psi \end{equation} where \begin{equation} \alpha = \frac{e^2}{4 \pi \epsilon_0 c \hbar} \end{equation} is the fine structure constant and $\epsilon_0$ is the permittivity of free space. This system is an approximation to the hydrogen atom neglecting the spin of the electron and the finite mass of the proton. Eq. (\ref{eq: KGHO2}) gives the total energy of the electron. The solution \cite{DFL, JN} to eqs. (\ref{eq: KGCP1}) and (\ref{eq: KGHO2}) in spherical polar coordinates $(r,\theta,\phi)$ takes the separable form \begin{eqnarray} \label{eq: psi_ha} \psi_{nlk}(r,\theta,\phi,t) = R_{nl}(r)Y_{lk}(\theta, \phi)\exp(-\imath E_nt / \hbar) \end{eqnarray} where $n,l,k$ are the hydrogenic quantum numbers, \begin{equation} R_{nl}(r) = \frac{\mathcal{N}_{nl}}{r^{\eta_l}} \exp \left( -\frac{r}{r_{nl}} \right)p_n\left(\frac{r}{r_{nl}}\right) \end{equation} $\mathcal{N}_{nl}$, $r_{nl}$ and $\eta_l$ are constants and $p_n$ is a polynomial of degree $n$. The normalized angular component $Y_{lk}(\theta, \phi)$ is unaffected by the conformal transformation. Inserting the wavefunction (\ref{eq: psi_ha}) into eq.
(\ref{eq: KGCP1}) and collecting together terms in the same power of $r$ gives \begin{equation} \label{energy_ha} E_{nl} = m_0c^2 \left[ 1 + \frac{\alpha^2}{(n+1-\eta_l)^2}\right]^{-1/2} \end{equation} \begin{equation} r_{nl} = \frac{\hbar c (n+1-\eta_l)}{\alpha E_{nl}} \end{equation} where \begin{equation} \eta_l = \frac{1}{2} \pm \sqrt{\left(l+\frac{1}{2} \right)^2-\alpha^2} \end{equation} The expression for $\eta_l$ contains a $\pm$ sign. The negative sign is usually chosen since in this case eq. (\ref{energy_ha}) corresponds to the Sommerfeld energy spectrum for the hydrogen atom. By comparison, the positive sign predicts a much higher binding energy, sometimes called the hydrino state. Setting $\lambda=1$ in eqs. (\ref{eq: complexDiff3}) and (\ref{eq: complexDiff4}) and making use of eq. (\ref{eq: KGHO2}) gives \begin{equation} \label{eq: dz1_ha} \frac{\partial}{\partial z_i} = \frac{\partial}{\partial x_i} + \frac{x_i}{r} \left(\frac{a}{r} + \frac{1}{b} \right) \end{equation} \begin{equation} \label{eq: dz2_ha} \frac{\partial}{\partial z_i^*} = \frac{\partial}{\partial x_i} - \frac{x_i}{r} \left(\frac{a}{r} + \frac{1}{b} \right) \end{equation} such that \begin{equation}\label{eq: d2z_ha} \frac{\partial^2}{\partial z_i^* \partial z_i} = \frac{\partial^2}{\partial x_i^2} + \frac{a}{r^2}(1-a) + \frac{2}{br}(1-a)-\frac{1}{b^2} \end{equation} Comparing the $r^{-1}$ and $r^{-2}$ terms in eqs. (\ref{eq: KGCP1}) and (\ref{eq: d2z_ha}) leads to the following explicit forms for the $a$ and $b$ coefficients \begin{equation}\label{eq: ct_coeffs_ha} a = \eta_0, \quad b =\frac{\hbar c (1-\eta_0)}{\alpha E_{nl}} \end{equation} showing that $b$ depends on both the $n$ and $l$ quantum numbers. Eqs. (\ref{eq: ct_coeffs_ha}) enable eq. (\ref{eq: d2z_ha}) to be rewritten in the form of the operator relationship \begin{equation} \frac{\partial^2}{\partial z_i^* \partial z_i} + \frac{\alpha^2 E_{nl}^2}{\hbar^2 c^2 (1-\eta_0)^2} = \frac{\partial^2}{\partial x_i^2} + \frac{\alpha^2}{r^2} + \frac{2E_{nl}}{\hbar c} \frac{\alpha}{r} \end{equation} Hence, using this last result the Klein-Gordon equation (\ref{eq: KGCP1}) for the hydrogen atom becomes \begin{equation}\label{eq: complexKGCP1} -\hbar^2 c^2 \frac{\partial^2 \psi}{\partial z_i^* \partial z_i} + m_0^2 c^4 \psi = E_{nl}^2 \left[ 1 + \frac{\alpha^2}{(1-\eta_0)^2} \right] \psi \end{equation} in the $(z_i,s)$-coordinate system. It is clear from inspection of this result that the goal of eliminating the Coulomb potential from eq. (\ref{eq: KGCP1}) has been achieved. Akin to eq. (\ref{eq: complexKGHO1}) for the harmonic oscillator, eq. (\ref{eq: complexKGCP1}) is similar in form to the free-particle Klein-Gordon equation except that the eigenvalues are a different function of the total energy. The hydrogenic wave function (\ref{eq: psi_ha}) is readily transformed into the spherical polar form of $(z_i,s)$-coordinates using eqs. (\ref{eq: conftrans1}) to give \begin{eqnarray} \label{eq: complex_psi_ha} \psi_{nlk}(r_z,\theta_z,\phi_z,s) = R_{nl}(r_z)Y_{lk}(\theta_z, \phi_z)\exp(-\imath Es / \hbar) \end{eqnarray} where \begin{equation}\label{eq: Rz} R_{nl}(r_z) = \mathcal{N}_{nl}\exp \left[ \frac{(n-\eta_l+\eta_0)}{(1-\eta_0)} \frac{r_z}{r_{nl}}\right] \frac{r_z^{\eta_0}}{r_z^{\eta_l}} p_n\left( \frac{r_z}{r_{nl}} \right) \end{equation} noting that the transformation multiplies the radial function by $r_z^{\eta_0}e^{r_z/b}$, so that the decaying exponential of $R_{nl}(r)$ is absorbed into $f(s)$. Here, $\exp(-\imath Es / \hbar)$ is a continuously differentiable solution of eq. (\ref{eq: creq}), thus demonstrating that eq. (\ref{eq: complex_psi_ha}) is holomorphic.
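The operator identity (\ref{eq: d2z_ha}) can also be verified symbolically. The following is a small check (our sketch; symbol names are ours) that summing $(\partial/\partial x_i - g_i)(\partial/\partial x_i + g_i)$ with $g_i = (x_i/r)(a/r + 1/b)$ over the three coordinates reproduces the stated potential-like terms:

\begin{verbatim}
import sympy as sp

x1, x2, x3, a, b = sp.symbols('x1 x2 x3 a b', positive=True)
X = (x1, x2, x3)
r = sp.sqrt(x1**2 + x2**2 + x3**2)
f = sp.Function('f')(*X)

g = lambda xi: (xi/r)*(a/r + 1/b)   # radial factor in eqs. (dz1_ha), (dz2_ha)

lhs = sum(sp.diff(sp.diff(f, xi) + g(xi)*f, xi)
          - g(xi)*(sp.diff(f, xi) + g(xi)*f) for xi in X)
rhs = sum(sp.diff(f, xi, 2) for xi in X) \
      + (a*(1 - a)/r**2 + 2*(1 - a)/(b*r) - 1/b**2)*f
assert sp.simplify(sp.expand(lhs - rhs)) == 0   # should reduce to zero
\end{verbatim}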
For the ground state of the hydrogen atom $\psi_{000}$, eq. (\ref{eq: complexKGCP1}) reduces to eq. (\ref{eq: ground_state_ho}), giving the solution $\psi_{000} = \exp(-\imath E_0s / \hbar)$, identical to the harmonic oscillator case. \section{Concluding Remarks} It has been shown that the concept of a potential field can be eliminated from the mathematical description of both the relativistic quantum harmonic oscillator and the hydrogen atom through the use of an isometric conformal transformation. In the transformed coordinate system, time is a complex quantity. The real part of this complex time is the world time; the imaginary part is responsible for binding the particles into their respective systems. \newpage
\section{Formalism of resonant-state expansion for outside perturbation} Suppose that we have calculated a sufficient number of the resonant states (RSs) of a basis optical system described by generally dispersive permittivity and permeability tensors, $\hat{{\pmb{\varepsilon}}}(k,\r)$ and $\hat{{\pmb{\mu}}}(k,\r)$, where $k=\omega/c$ is the light wave number. The RSs are solutions to Maxwell's equations~\cite{MuljarovOL18}, \begin{equation} \left[k_n\wP_0(k_n,\r)-\wD(\r)\right]\wF_n(\r)=0\,, \label{ME} \end{equation} where \begin{equation} \wP_0(k,\r)= \begin{pmatrix} \hat{{\pmb{\varepsilon}}}(k,\r)&\zero\\ \zero&\hat{{\pmb{\mu}}}(k,\r) \end{pmatrix},\quad\wD(\r)= \begin{pmatrix} \zero&\nabla\times\\ \nabla\times&\zero \end{pmatrix}, \label{PD} \end{equation} are, respectively, the $6\times6$ generalized permittivity and curl operators, $\wF_n(\r)$ is a $6\times1$ column vector with components $\textbf{E}_n(\r)$ and $i\H_n(\r)$ of the electric and magnetic fields, respectively, $\zero$ is the $3\times3$ zero matrix, and $n$ is an index labelling the RSs. Let us use ${\cal V}_{\rm in}$ and ${\cal V}_{\rm out}$ to denote, respectively, the system volume and the rest of the space. Let us also assume for clarity that the basis system is surrounded by vacuum, i.e. $\hat{{\pmb{\varepsilon}}}(k,\r)=\hat{{\pmb{\mu}}}(k,\r)=\one$ for $\r\in{\cal V}_{\rm out}$, where $\one$ is the $3\times3$ identity matrix. Note that the general case of a bi-isotropic surrounding medium and a bi-anisotropic material constituting the system is treated in~\cite{SI}. Assuming the perturbed system is the same optical resonator placed in a different environment, the modified RSs, labelled with index $\nu$, satisfy Maxwell's equations \begin{equation} \left[k_\nu\wP(k,\r)-\wD(\r)\right]\wF_\nu(\r)=0\,, \label{MEp} \end{equation} with $\wP(k,\r)=\wP_0(k,\r)=\left[\hat{{\pmb{\varepsilon}}}(k,\r);\hat{{\pmb{\mu}}}(k,\r)\right]$ for $\r\in{\cal V}_{\rm in}$, but $\wP(k,\r)=\left[\varepsilon_b(k)\one;\mu_b(k)\one \right]$ for $\r\in{\cal V}_{\rm out}$, where for brevity we use $[...;...]$ to denote block-diagonal operators. The surrounding medium is described by a homogeneous isotropic permittivity $\varepsilon_b$ and permeability $\mu_b$, which in the following are assumed, for clarity of derivation, to be frequency independent -- the case of dispersive $\varepsilon_b(k)$ and $\mu_b(k)$ is discussed in~\cite{SI}. Now, we perform a linear transformation of \Eq{MEp}, introducing fields $\textbf{E}$ and $\H$ and a wave number $k$: \begin{equation} \textbf{E}(\r)=\sqrt{\varepsilon_b}\textbf{E}_\nu(\r)\,,\quad\H(\r)=\sqrt{\mu_b}\H_\nu(\r)\,,\quad k=n_bk_\nu\,, \label{transformation} \end{equation} where $n_b=\sqrt{\varepsilon_b\mu_b}$ is the refractive index of the surrounding medium. Equation~(\ref{MEp}) then becomes \begin{equation} \left[k\tilde{\mathbb{P}}(k,\r)-\wD(\r)\right]\wF(\r)=0\,, \label{MEt} \end{equation} where $\tilde{\mathbb{P}}(k,\r)=\left[\tilde{{\pmb{\varepsilon}}}(k,\r);\tilde{{\pmb{\mu}}}(k,\r)\right]$ with $\tilde{{\pmb{\varepsilon}}}(k,\r)={\hat{{\pmb{\varepsilon}}}(k/n_b,\r)}/{\varepsilon_b}$ and $\tilde{{\pmb{\mu}}}(k,\r)={\hat{{\pmb{\mu}}}(k/n_b,\r)}/{\mu_b}$, for $\r\in{\cal V}_{\rm in}$ and $\tilde{\mathbb{P}}(k,\r)=\left[\one;\one\right]$ for $\r\in{\cal V}_{\rm out}$. In other words, the transformed equation \Eq{MEt} describes a modified, effective optical system which is again surrounded by vacuum.
We can therefore solve \Eq{MEt} with the help of the dispersive RSE, treating \Eq{ME} as the unperturbed system and $k_n$ and $\wF_n$ as the basis RSs. To do so, we introduce a perturbation $\Delta\wP(k,\r)=\tilde{\mathbb{P}}(k,\r)-\wP_0(k,\r)$ for $\r\in{\cal V}_{\rm in}$ and $\Delta\wP(k,\r)=0$ for $\r\in{\cal V}_{\rm out}$, so that \Eq{MEt} becomes \begin{equation} \left[k\wP_0(k,\r)+k\Delta\wP(k,\r)-\wD(\r)\right]\wF(\r)=0 \end{equation} and is solved by expanding the perturbed RS into the unperturbed ones, $\wF(\r)=\sum_n c_n\wF_n$. Then, according to the dispersive RSE~\cite{MuljarovPRB16,MuljarovOL18}, the perturbed RS wave number $k$ and the expansion coefficients $c_n$ satisfy a linear matrix eigenvalue equation \begin{eqnarray} (k-k_n) c_n&=&-k\sum\limits_mV_{nm}(\infty) c_m \label{RSE-matrix} \\ &&+k_n\sum\limits_m\left[V_{nm}(\infty)-V_{nm}(k_n)\right]c_m\,, \nonumber \end{eqnarray} where \begin{equation} V_{nm}(k)= \int\wF_n(\r)\cdot\Delta\wP(k,\r)\wF_m(\r) d\r\,. \label{Vnm} \end{equation} This is valid for an arbitrary generalized Drude-Lorentz dispersion of the generalized permittivity, \begin{equation} \wP_0(k,\r)=\wP_{\infty}(\r)+\sum\limits_j\frac{\wQ_j(\r)}{k-\Omega_j}\,, \end{equation} where the generalized conductivity $\wQ_j(\r)$ is the residue of $\wP_0(k,\r)$ at the pole $k=\Omega_j$ in the complex frequency plane~\cite{SehmiPRB17}. Note that the poles of $\tilde{\mathbb{P}}(k,\r)$ and $\wP_0(k,\r)$ are generally different as $n_b\neq 1$, so that $\Delta\wP(k,\r)$ replaces one group of poles with the other. Both groups have to be taken into account in the basis RSs, e.g. by using the infinitesimal-dispersive RSE (id-RSE)~\cite{SehmiPRB20}. Equation~(\ref{RSE-matrix}) is an exact result provided that a sufficient number of the RSs is included in the basis to guarantee the required accuracy. Let us now develop some approximations and simplifications. First of all, consider a single-mode, or diagonal, version of \Eq{RSE-matrix}. In this case, the perturbed RS wave number is given by \begin{equation} k_\nu=\frac{k}{n_b}\approx\frac{k_n}{n_b}\frac{1+V_{nn}(\infty)-V_{nn}(k_n)}{1+V_{nn}(\infty)}\,. \label{diagonal} \end{equation} This can be simplified further by extracting the first-order contribution of the surrounding medium, assuming $|\varepsilon_b-1|\ll1$ and $|\mu_b-1|\ll1$. In this case, the refractive index is approximated as $n_b=\sqrt{\varepsilon_b\mu_b}\approx(\varepsilon_b+\mu_b)/2$, and \begin{equation} k_{\nu}\approx k_n\left(1-\frac{\varepsilon_b-1}{2}-\frac{\mu_b-1}{2}-V_{nn}(k_n)\right)\,. \label{1st-orderRSE} \end{equation} Equation \eqref{1st-orderRSE} is identical to the first-order result presented in~\cite{BothOL19}. Another approximation, very similar to \Eq{diagonal} and more accurate than \Eq{1st-orderRSE}, can be obtained by using the idea of regularization of the RSs. To regularize them, Zel'dovich proposed~\cite{Baz69} to multiply all RS wave functions by a Gaussian factor $e^{-\alpha r^2}$ and to take the limit $\alpha\rightarrow+0$ after integration. This allows one to extend the volume of integration in the normalization to the entire space, which gives exactly the same result as the rigorous analytic normalization, as has been recently demonstrated in \cite{McphedranIEEE20} for the RSs of a homogeneous dielectric sphere. Alternatives to this regularization are the complex coordinate transformation~\cite{LeungPRA94} and the use of perfectly matched layers~\cite{HugoninOL05,SauvanPRL13}, ideally leading to the same result for the RS norm.
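Numerically, \Eq{RSE-matrix} amounts to a generalized linear eigenvalue problem: rearranging it gives $\sum_m k_n\left[\delta_{nm}+V_{nm}(\infty)-V_{nm}(k_n)\right]c_m = k\sum_m\left[\delta_{nm}+V_{nm}(\infty)\right]c_m$. The following is a minimal sketch of how it can be solved, using synthetic matrices rather than a real resonator:

\begin{verbatim}
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N = 6
k_n = rng.uniform(1.0, 2.0, N) - 1j*rng.uniform(0.01, 0.1, N)  # basis RS wave numbers
V_inf = 0.05*(rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)))
V_inf = (V_inf + V_inf.T)/2    # complex-symmetric, as for RS overlap integrals
V_kn = 0.05*(rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)))
                               # row n holds V_nm(k_n)

A = np.diag(k_n) @ (np.eye(N) + V_inf - V_kn)
B = np.eye(N) + V_inf
k, C = eig(A, B)               # transformed wave numbers and coefficients c_n

n_b = np.sqrt(2.0)             # e.g. eps_b = 2, mu_b = 1
k_nu = k/n_b                   # perturbed RS wave numbers, undoing k = n_b*k_nu
\end{verbatim}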
Now, using a single-mode approximation, $\wF_\nu(\r)\approx\wF_n(\r)$, as in the diagonal version of the RSE above, we solve \Eq{MEp} as \begin{equation} \left\{k_\nu\left[\wP_0(\r)+\delta\wP(\r)\right]-\wD(\r)\right\}\wF_n(\r)\approx0\,, \label{MEr} \end{equation} where the perturbation $\delta\wP(\r)=0$ for $\r\in{\cal V}_{\rm in}$ and $\delta\wP(\r)=\left[\left(\varepsilon_b-1\right)\one;\left(\mu_b-1\right)\one\right]$ for $\r\in{\cal V}_{\rm out}$. Note that for clarity of presentation, dispersion is neglected in \Eq{MEr}. However, we provide illustrations for dispersive systems below and a full derivation with dispersion in~\cite{SI}. Multiplying \Eq{MEr} by $\wF_n(\r)$ and integrating over the entire space, assuming regularization, we obtain from \Eqs{ME}{MEr} \begin{equation} k_{\nu}-k_n+k_{\nu}\int_{V_{\rm out}}\wF_n(\r)\cdot\delta\wP(\r)\wF_n(\r)d\r\approx0\,, \label{MEr2} \end{equation} where we have used the fact that $\wF_n(\r)$ is normalized as \begin{equation} \int_{{\cal V}_{\rm in}}\wF_n(\r)\cdot\wP_0(\r)\wF_n(\r) d\r+\int_{{\cal V}_{\rm out}}\wF_n^2(\r)d\r= 1\,, \end{equation} which is equivalent to the exact analytical normalization without regularization~\cite{MuljarovOL18}. Using the Poynting theorem for the regularized fields, \begin{equation} I_n^E+W_n^E+I_n^H+W_n^H=0\,, \end{equation} where \begin{eqnarray} &&I_n^E=\int_{{\cal V}_{\rm in}}\textbf{E}_n\cdot\hat{{\pmb{\varepsilon}}}(\r)\textbf{E}_nd\r,\quad W_n^E=\int_{{\cal V}_{\rm out}}\textbf{E}_n^2d\r\,, \nonumber\\ &&I_n^H=\int_{{\cal V}_{\rm in}}\H_n\cdot\hat{{\pmb{\mu}}}(\r)\H_nd\r,\quad W_n^H=\int_{{\cal V}_{\rm out}}\H_n^2d\r\,, \nonumber \end{eqnarray} \Eq{MEr2} then takes the form \begin{equation} \frac{k_n}{k_{\nu}}\approx1+\left(\varepsilon_b-1\right)\left(\frac{1}{2}-I_n^E\right) +\left(\mu_b-1\right)\left(\frac{1}{2}+I_n^H\right), \label{regularized} \end{equation} where the integral of the perturbation over the surrounding medium, $\int_{V_{\rm out}}\wF_n\cdot\delta\wP\wF_nd\r$, is converted into integrals over the system volume, $I_n^E$ and $I_n^H$. Finally, keeping in the expansion of $k_\nu$ only the terms linear in $\varepsilon_b-1$ and $\mu_b-1$, \Eq{regularized} becomes identical to the first-order approximation \Eq{1st-orderRSE}. \begin{figure}[h!]% \centering \includegraphics[width=9cm]{dispersive_spectrum.png}% \caption{Complex energies $E=\hbar c k$ of the RSs of a gold nano-sphere of radius $R=200$\,nm in vacuum (open circles) and in a dielectric with $\varepsilon_b=2$, calculated with the RSE (+) and analytically ($\times$), for TM polarization and $l=1$. } \label{dispersive_spec}% \end{figure} \begin{figure}[h!]% \centering \includegraphics[width=9cm]{Dispersiv_panel}% \caption{(a) Resonance energy, (b) linewidth ($-$Im\,$E$), and (c) relative error of the complex energy $E$ of the fundamental SP mode of the gold nano-sphere ($R=200$\,nm) as functions of the background permittivity $\varepsilon_b$, calculated analytically (black open squares), using the full RSE (blue lines), diagonal dispersive RSE (red dotted lines), regularized dispersive version (black lines), and first-order approximation (green lines). (c) also shows the relative error of the full RSE with $N=50$ and $100$, and the RSE without pole RSs in the basis ($N=150$).
} \label{dispersive_panel}% \end{figure} \begin{figure}[h!]% \centering \includegraphics[width=9cm]{R_panel.png}% \caption{(a) Complex energy of the fundamental SP mode of the gold nano-sphere surrounded by vacuum (unperturbed) and by a dielectric with $\varepsilon_b= 2$ (exact and first-order) as functions of the sphere radius $R$ given by the color code. (b) Relative error of the full RSE (blue line), diagonal RSE (dotted red line), regularized version (black line), and first-order approximation (green line) as functions of $R$, for $\varepsilon_b= 2$. } \label{evolution}% \end{figure} We illustrate in \Figss{dispersive_spec}{l_350_local} the full rigorous RSE \Eq{RSE-matrix}, its diagonal version \Eq{diagonal}, the first-order approximation \Eq{1st-orderRSE}, and the regularized result \Eq{regularized} or its dispersive analog~\cite{SI}, focusing on two experimentally relevant examples: (i) the dipolar ($l=1$) surface plasmon (SP) mode of a gold nano-sphere of radius $R$ varying between 10\,nm and 200\,nm~\cite{PayneNS20} and (ii) high-angular-momentum ($l=350$) whispering-gallery (WG) modes of a silica micro-sphere of radius $R= 39.5\,\mu$m~\cite{BaaskeNN14}. Both systems are assumed to be nonmagnetic ($\mu=1$), described by an isotropic permittivity, and surrounded by an isotropic dielectric with varying refractive index. More illustrations are provided in~\cite{SI}. For gold, the permittivity is taken in the Drude model, $\varepsilon(k)=\varepsilon_{\infty}-\sigma\gamma/[k(k+i\gamma)]$, with $\varepsilon_\infty=4$, $\hbar c\sigma= 957$\,eV, and $\hbar c \gamma= 0.084$\,eV fitted to the Johnson and Christy data~\cite{JohnsonPRB72} with the help of the fit program provided in~\cite{SehmiPRB17}. For silica, the permittivity is calculated using the Sellmeier formula~\cite{Vollmer20} at wavelength $\lambda=780$\,nm, giving $\varepsilon=2.114$. In the full RSE calculation via \Eq{RSE-matrix}, the only numerical parameter is the number $N$ of basis RSs, which is determined (unless otherwise stated) by the cut-off frequency $k_c$, such that all RSs with $k_n$ within the circle $|k_n\sqrt{\varepsilon(k_n)}|<k_c$ in the complex wave number plane are kept in the basis. Figure~\ref{dispersive_spec} shows in the complex energy plane ($E=\hbar c k$) the spectrum of the RSs of a gold nano-sphere of radius $R=200$\,nm surrounded by a dielectric with $\varepsilon_b=2$, calculated analytically and via the RSE, using as the basis system the same sphere in vacuum. The dipolar SP mode at the beginning of the spectrum is further displayed in \Figs{dispersive_panel}{evolution}, for varying background permittivity $\varepsilon_b$ and sphere radius $R$. Since the perturbation shifts the unperturbed Drude pole of the permittivity at $k=-i\gamma$ to a new position at $k=-i\gamma n_b$, we apply the id-RSE, which requires including in the basis both the old and the new pole RSs (pRSs)~\cite{SehmiPRB20}. Inclusion of the pRSs is crucial for the RSE to converge to the exact solution, as is clear from the error in \Fig{dispersive_panel}(c) scaling with the basis size as $1/N^3$, as is usually guaranteed by the RSE~\cite{MuljarovEPL10,DoostPRA14}. In fact, without pRSs the error for $N=150$ basis RSs is almost the same as for the diagonal version ($N=1$) when keeping only the SP mode in the basis.
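As an aside, the Drude permittivity and the pole shift under the mapping $k\to k/n_b$ are easy to evaluate; the following snippet (our illustrative helper, in energy units $E=\hbar c k$, all in eV) uses the fitted gold parameters quoted above:

\begin{verbatim}
import numpy as np

eps_inf, sigma, gamma = 4.0, 957.0, 0.084   # hbar*c*sigma, hbar*c*gamma in eV

def eps_gold(E):
    """Drude model: eps = eps_inf - sigma*gamma/[E(E + i*gamma)]."""
    return eps_inf - sigma*gamma/(E*(E + 1j*gamma))

print(eps_gold(2.0))       # ~ -16.1 + 0.84j: metallic response at ~2 eV

n_b = np.sqrt(2.0)         # background with eps_b = 2, mu_b = 1
print(-1j*gamma, -1j*gamma*n_b)   # old and new Drude pole positions
\end{verbatim}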
At the same time, the pRSs of the new pole in the id-RSE are perturbation dependent, which makes the whole calculation rather inefficient. To avoid this problem, we have replaced the new pRSs with the old ones adapted for the perturbation, so that all the basis RSs are calculated only once for all perturbations of the environment; see~\cite{SI} for more details. The diagonal approximation is remarkably accurate in this system, as is clear from Figs.\,\ref{dispersive_panel}(a) and (b), which show at the same time that the first-order approximation fails very quickly as $\varepsilon_b$ deviates from 1. Interestingly, the single-mode regularized version gives a reasonable agreement with the exact solution for the whole range of radii considered in \Fig{evolution}. \begin{figure}[h!]% \centering \includegraphics[width=9cm]{silica_spectrum.png}% \caption{Complex wave numbers $k$ of the RSs of a silica micro-sphere of radius $R= 39.5\,\mu$m in water with $\varepsilon_{b_0}=1.77$ (open circles) and in vacuum, calculated with the RSE (+) and analytically ($\times$), for TM polarization and $l=350$. Here the perturbation of the environment is $\Delta\varepsilon_b=\varepsilon_b-\varepsilon_{b_0}=-0.77$. } \label{l_350_spec}% \end{figure} \begin{figure}[h!]% \centering \includegraphics[width=9cm]{Dielectric_panel_l_350.png}% \caption{(a) Real part and (b) relative error of the wave number $k$ of the TM fundamental WG mode for $l=350$ in a silica micro-sphere as functions of the background permittivity $\varepsilon_b$, calculated exactly (black open squares), using the diagonal RSE (dotted red lines), and a local basis of $N$= 2 (black lines), $N$= 4 (blue lines), and $N$= 6 (green lines) WG modes. } \label{l_350_local}% \end{figure} Figure~\ref{l_350_spec} shows the spectra of the RSs of the silica sphere in water ($\varepsilon_{b_0}=1.77$) and in vacuum ($\varepsilon_{b}=1$), playing the role of, respectively, the basis and perturbed systems. Since $\varepsilon_{b_0}\neq1$, one has to replace $\varepsilon_{b}$ with the ratio $\varepsilon_{b}/\varepsilon_{b_0}$ in all the above equations; see~\cite{SI} for the derivation. To reach a relative error of $10^{-5}$ or smaller in the full RSE, one needs to take $N= 2400$ basis states in this system, because of the very low permittivity contrast; see~\cite{SI} for details. The perturbed system has a large number of WG modes, which are all well reproduced by the RSE even though the basis system has only 3--4 pairs of them. The diagonal, regularized and first-order approximations fail very quickly with the perturbation, as is clear from \Fig{l_350_local}, showing the fundamental WG mode for $l=350$ versus $\varepsilon_b$ (see also \cite{SI}). We therefore apply here a more suitable approximation: the local RSE, introduced in~\cite{DoostPRA14}, which keeps in the basis only the RSs that are close in frequency to the state of interest or have the largest overlap matrix elements with this state. Results are demonstrated in \Fig{l_350_local} for one, two, and three pairs of WG modes positioned symmetrically with respect to the imaginary $k$-axis ($N=2$, 4, and 6, respectively). Interestingly, adding to the basis only the conjugate mode on the other side of the spectrum ($N=2$) already improves the result significantly. Taking all three WG modes and their counterparts ($N=6$) provides full visual agreement with the exact solution, as seen in \Fig{l_350_local}(a) and in the inset to \Fig{l_350_spec}. Adding more modes to this basis, all having relatively low quality factors, makes the situation worse, unless a really large number of them is included.
In conclusion, we have developed a rigorous and efficient RSE-based approach to treating arbitrarily strong homogeneous perturbations of the medium surrounding an optical system, which is crucial for sensing applications. The idea of the approach is to map the changes in the surrounding medium onto the interior of the system, where the resonant states are complete, in this way effectively modifying the resonator while keeping the medium unchanged. Such a modified system is then treated by the RSE, which requires a fixed basis of resonant states, with the basis size $N$ determined by the required accuracy and the error scaling as $1/N^3$. The single-mode approximation and a local RSE using only a few modes are shown to be very accurate for the examples provided, while going significantly beyond first-order perturbation theory.
\section{Introduction} Learning theory has traditionally been studied in a statistical framework, discussed at length, for example, by \citet{SSS14:book}. The issue with this approach is that the analysis of the performance of learning methods seems to critically depend on whether the data generating mechanism satisfies some probabilistic assumptions. Realizing that these assumptions are not necessarily critical, much work has been devoted recently to studying learning algorithms in the so-called online learning framework \citep{CBLu06:book}. The online learning framework makes minimal assumptions about the data generating mechanism, while allowing one to replicate results of the statistical framework through online-to-batch conversions \citep{CBCoG04:OnlineToBatch}. By following a minimax approach, however, work in the online learning setting, at least initially, led to rather conservative bounds and algorithm designs, failing to capture how more regular, ``easier'' data may give rise to faster learning. This is problematic as it may suggest overly conservative learning strategies, missing opportunities to extract more information when the data is nicer. Also, it is hard to argue that data resulting from passive data collection, such as weather data, would ever be adversarially generated (though it is equally hard to defend that such data satisfies precise stochastic assumptions). Realizing this issue, during recent years much work has been devoted to understanding which regularities can lead to faster learning, and how. For example, much work has been devoted to showing that faster learning speed (smaller ``regret'') can be achieved in the online convex optimization setting when the loss functions are ``curved'', such as when the loss functions are strongly convex or exp-concave, when the losses show small variations, or when the best prediction in hindsight has a small total loss, and that these properties can be exploited in an adaptive manner (e.g., \citealt{MF92}, \citealt{FrSc97}, \citealt{gaivoronski2000stochastic}, \citealt{CBLu06:book}, \citealt{hazan2007logarithmic}, \citealt{bartlett2007adaptive}, \citealt{kakade2009mind}, \citealt{orabona2012beyond}, \citealt{RakhlinS13}, \citealt{vanerven2015fast}, \citealt{foster2015adaptive}). In this paper we contribute to this growing literature by studying online linear prediction and the follow the leader (FTL) algorithm. Online linear prediction is arguably the simplest of all the learning settings, yet a fundamental one: it lies at the heart of online convex optimization, and it serves as an abstraction of core learning problems such as prediction with expert advice. FTL, the online analogue of empirical risk minimization in statistical learning, is the simplest learning strategy one can think of. Although the linear setting of course removes the possibility of exploiting the curvature of losses, as we will see, there are multiple ways online learning problems can present data that allows for small regret, even for FTL. As is well known, in the worst case FTL suffers linear regret (e.g., Example 2.2 of \citet{SS12:Book}). However, for ``curved'' losses (e.g., exp-concave losses), FTL was shown to achieve small (logarithmic) regret (see, e.g., \citet{MF92,CBLu06:book,gaivoronski2000stochastic,hazan2007logarithmic}). In this paper we take a thorough look at FTL in the case when the losses are linear, but the problem perhaps exhibits other regularities.
The motivation comes from the simple observation that, for prediction over the simplex, when the loss vectors are selected independently of each other from a distribution with bounded support and a nonzero mean, FTL quickly locks onto selecting the loss-minimizing vertex of the simplex, achieving finite expected regret. In this case, FTL is arguably an excellent algorithm. In fact, FTL is shown by \citet{kotlowskiminimax} to be the minimax optimal algorithm for binary losses in the stochastic expert setting. Thus, we ask the question of whether there are other regularities that allow FTL to achieve nontrivial performance guarantees. Our main result shows that when the decision set (or constraint set) has a sufficiently ``curved'' boundary (equivalently, when it is strongly convex) and the linear loss is bounded away from $0$, FTL is able to achieve logarithmic regret even in the adversarial setting, thus opening up a new way to prove fast rates, based not on the curvature of the losses, but on that of the boundary of the constraint set together with the non-singularity of the linear loss. In a matching lower bound we show that this regret bound is essentially unimprovable. We also show an alternative bound for polytope constraint sets, which allows us to prove that (under certain technical conditions) for stochastic problems the expected regret of FTL will be finite. To finish, we use the ($\mathcal{A}$,$\mathcal{B}$)-prod algorithm of \citet{sani2014exploiting} to design an algorithm that adaptively interpolates between the worst-case $O(\sqrt{n\log n})$ regret and the smaller regret bounds which we prove here for ``easy data.'' We also show that if the constraint set is the unit ball, both the follow the regularized leader (FTRL) algorithm and a combination of FTL and shrinkage, which we call follow the shrunken leader (FTSL), achieve logarithmic regret for easy data. Simulation results on artificial data complement the theoretical findings. While we believe that we are the first to point out that the curvature of the constraint set $\mathcal{W}$ can help in speeding up learning, this effect has been known in convex optimization since at least the work of \citet{LePo66}, who showed that exponential rates are attainable for strongly convex constraint sets if the norm of the gradients of the objective function admits a uniform lower bound. More recently, \citet{garber2014faster} proved an $O(1/n^2)$ optimization error bound (with problem-dependent constants) for the Frank-Wolfe algorithm for strongly convex and smooth objectives over strongly convex constraint sets. The effect of the shape of the constraint set was also discussed by \citet{abbasi2010forced}, who demonstrated $O(\sqrt{n})$ regret in the linear bandit setting. While these results are at a high level similar to ours, our proof technique is rather different from that used there.
\section{Preliminaries, online learning and the follow the leader algorithm} \label{sec:notation} We consider the standard framework of online convex optimization, where a learner and an environment interact in a sequential manner over $n$ rounds: in every round $t=1,\ldots,n$, first the learner predicts $w_t\in \mathcal{W}$; then the environment picks a loss function $\ell_t\in \mathcal{L}$, and the learner suffers the loss $\ell_t(w_t)$ and observes $\ell_t$. Here, $\mathcal{W}$ is a non-empty, compact, convex subset of $\mathbb{R}^d$ and $\mathcal{L}$ is a set of convex functions mapping $\mathcal{W}$ to the reals. The elements of $\mathcal{L}$ are called loss functions. The performance of the learner is measured in terms of its regret, \[ R_n = \sum_{t=1}^n \ell_t(w_t) - \min_{w\in \mathcal{W}}\sum_{t=1}^n \ell_t(w)\,. \] The simplest possible case, which will be the focus of this paper, is when the losses are linear, i.e., when $\ell_t(w) = \ip{f_t,w}$ for some $f_t\in \mathcal{F}\subset \mathbb{R}^d$. In fact, the linear case is not only simple, but also fundamental, since the case of nonlinear loss functions can be reduced to it: indeed, even if the losses are nonlinear, defining $f_t \in \partial \ell_t(w_t)$ to be a subgradient\footnote{ We let $\partial g(x)$ denote the subdifferential of a convex function $g:\dom(g) \to \mathbb{R}$ at $x$, i.e., $\partial g(x) = \set{\theta\in \mathbb{R}^d}{g(x') \ge g(x) + \ip{\theta, x'-x} \,\, \forall x'\in \dom(g) }$, where $\dom(g)\subset \mathbb{R}^d$ is the domain of $g$. } of $\ell_t$ at $w_t$ and letting $\tilde{\ell}_t(u) = \ip{f_t,u}$, by the definition of subgradients we have $\ell_t(w_t)-\ell_t(u) \le \ell_t(w_t)-(\ell_t(w_t)+\ip{f_t,u-w_t}) = \tilde{\ell}_t(w_t)-\tilde{\ell}_t(u)$, hence for any $u\in \mathcal{W}$, \[ \sum_t \ell_t(w_t) - \sum_t \ell_t(u) \le \sum_t \tilde{\ell}_t(w_t) - \sum_t \tilde{\ell}_t(u)\,. \] In particular, if an algorithm keeps the regret small no matter how the linear losses are selected (even when the environment is allowed to pick losses based on the choices of the learner), the algorithm can also be used to keep the regret small in the nonlinear case.
Hence, in what follows we will study the linear case $\ell_t(w)=\ip{f_t,w}$ and, in particular, we will study the regret of the so-called ``Follow The Leader'' (FTL) learner, which, in round $t\ge 2$ picks \begin{align*} w_t = \argmin_{w\in \mathcal{W}} \sum_{i=1}^{t-1} \ell_i(w)\,. \end{align*} For the first round, $w_1\in \mathcal{W}$ is picked in an arbitrary manner. When $\mathcal{W}$ is compact, the optimum of $\min_{w\in\mathcal{W}} \sum_{i=1}^{t-1}\inpro{w}{f_i}$ is attained, which we will assume henceforth. If multiple minimizers exist, we simply fix one of them as $w_t$. We will also assume that $\mathcal{F}$ is non-empty, compact and convex. \subsection{Support functions} Let $\Theta_t = -\frac1t \sum_{i=1}^t f_i$ be the negative average of the first $t$ loss vectors of the sequence $(f_t)_{t=1}^n$, $f_t\in \mathcal{F}$. For convenience, we define $\Theta_0 := 0$. Thus, for $t\ge 2$, \begin{align*} w_t = \argmin_{w\in\mathcal{W}} \sum_{i=1}^{t-1} \ip{ w, f_i } = \argmin_{w\in\mathcal{W}} \ip{ w, -\Theta_{t-1} } = \argmax_{w\in \mathcal{W}} \ip{w,\Theta_{t-1}}\,. \end{align*} Denote by $\Phi(\Theta) = \max_{w\in\mathcal{W}} \langle w, \Theta\rangle$ the so-called \emph{support function} of $\mathcal{W}$. The support function, being the maximum of linear and hence convex functions, is itself convex. Further, $\Phi$ is positively homogeneous: for $a\ge 0$ and $\theta\in \mathbb{R}^d$, $\Phi(a \theta) = a\Phi(\theta)$. It follows that the epigraph $\epi(\Phi) = \set{ (\theta,z)}{ z\ge \Phi(\theta), z\in \mathbb{R}, \theta\in \mathbb{R}^d }$ of $\Phi$ is a cone, since for any $(\theta,z)\in \epi(\Phi)$ and $a\ge 0$, $az \ge a \Phi(\theta) = \Phi(a\theta)$, hence $(a\theta,az)\in \epi(\Phi)$ also holds. The differentiability of the support function is closely tied to whether the choice of $w_t$ in the FTL algorithm is uniquely determined: \begin{prop} \label{prop:derivativePhi} Let $\mathcal{W}\ne \emptyset$ be convex and closed. Fix $\Theta$ and let $\mathcal{Z}:= \set{w\in \mathcal{W}}{\inpro{w}{\Theta} = \Phi(\Theta) }$. Then, $\partial \Phi(\Theta) = \mathcal{Z}$ and, in particular, $\Phi$ is differentiable at $\Theta$ if and only if $\max_{w\in\mathcal{W}} \inpro{w}{\Theta}$ has a unique optimizer. In this case, $\nabla \Phi(\Theta) = \argmax_{w\in \mathcal{W}} \ip{w,\Theta}$. \end{prop} The proposition follows from Danskin's theorem when $\mathcal{W}$ is compact (e.g., Proposition B.25 of \citealt{bertsekas99nonlinear}), but a simple direct argument can be used to show that it remains true even when $\mathcal{W}$ is unbounded.\footnote{ The proofs not given in the main text can be found in the appendix. } By \cref{prop:derivativePhi}, when $\Phi$ is differentiable at $\Theta_{t-1}$, $w_t = \nabla \Phi(\Theta_{t-1})$. \section{Non-stochastic analysis of FTL} \label{sec:FTL} We start by rewriting the regret of FTL in an equivalent form, which shows that we can expect FTL to enjoy a small regret when successive weight vectors move little.
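Before turning to the analysis, a small numerical sketch of the FTL update may be useful (our illustration; the two constraint sets below are examples we chose, and NumPy is assumed): by \cref{prop:derivativePhi}, computing $w_t$ amounts to evaluating $\nabla\Phi(\Theta_{t-1})$, which is available in closed form for simple sets.
\begin{verbatim}
import numpy as np

def ftl_step_ball(theta, r=1.0):
    # W = {w : ||w||_2 <= r}: Phi(theta) = r*||theta||_2, and for theta != 0
    # the unique maximizer is grad Phi(theta) = r*theta/||theta||_2.
    return r * theta / np.linalg.norm(theta)

def ftl_step_ellipsoid(theta, Q):
    # W = {w : w^T Q w <= 1} with Q positive definite: maximizing <w, theta>
    # subject to w^T Q w = 1 gives w = Q^{-1} theta / sqrt(theta^T Q^{-1} theta).
    z = np.linalg.solve(Q, theta)
    return z / np.sqrt(theta @ z)

Theta = np.array([1.0, -0.5])
Q = np.diag([2.0, 0.5])
w = ftl_step_ellipsoid(Theta, Q)
assert np.isclose(w @ Q @ w, 1.0)   # the maximizer lies on bd(W)
\end{verbatim}
In both cases the maximizer is unique whenever $\Theta\ne 0$, matching the differentiability criterion of \cref{prop:derivativePhi}.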
A noteworthy feature of the next proposition is that rather than bounding the regret from above, it gives an equivalent expression for it. \begin{prop} \label{prop:regretabel} The regret $R_n$ of FTL satisfies \begin{align*} R_n & = \sum_{t=1}^n t\,\ip{ w_{t+1}-w_t,\Theta_t} \,. \end{align*} \end{prop} The result is a direct corollary of Lemma 9 of \citet{McMahan10:Equiv}, which holds for any sequence of losses, even in the absence of convexity. It is also a tightening of the well-known inequality $R_n \le \sum_{t=1}^n \ell_t(w_t)-\ell_t(w_{t+1})$, which again holds for arbitrary loss sequences (e.g., Lemma 2.1 of \citet{SS12:Book}). To keep the paper self-contained, we give an elegant, short direct proof, based on the summation by parts formula: \begin{proof} The summation by parts formula states that for any reals $u_1,v_1,\dots,u_{n+1},v_{n+1}$, $ \sum_{t=1}^n u_t\,(v_{t+1}-v_t) = (u_{n+1}v_{n+1}-u_1 v_1) - \sum_{t=1}^n (u_{t+1}-u_t)\,v_{t+1} $ (the same identity holds for vectors, with the products replaced by inner products). Applying it to the definition of the regret with $u_t:=w_t$ and $v_{t+1} := t\Theta_{t}$, we get \begin{align*} R_n & = -\sum_{t=1}^n \ip{w_t,t\Theta_t - (t-1)\Theta_{t-1}} + \ip{w_{n+1},n\Theta_n} \\ & = - \left\{ \bcancel{\ip{w_{n+1},n\Theta_n}} - 0 - \sum_{t=1}^n \ip{w_{t+1}-w_t,t\Theta_t} \right\} + \bcancel{\ip{w_{n+1},n\Theta_n}}. \end{align*} \end{proof} Our next proposition gives another formula that is equal to the regret. As opposed to the previous result, this formula is appealing as it is independent of $w_t$; instead, it directly connects the sequence $(\Theta_t)_t$ to the geometric properties of $\mathcal{W}$ through the support function $\Phi$. For this proposition we will momentarily assume that $\Phi$ is differentiable at $(\Theta_t)_{t\ge 1}$; a more general statement will follow later. \begin{prop} \label{prop:R_nBregmanDivergence} If $\Phi$ is differentiable at $\Theta_1, \ldots, \Theta_n$, \begin{align} \label{eq:regreteq} R_n = \sum_{t=1}^{n} t\,D_{\Phi}(\Theta_t,\Theta_{t-1})\,, \end{align} where $D_{\Phi}(\theta', \theta) = \Phi(\theta') - \Phi(\theta) - \ip{ \nabla\Phi(\theta), \theta' - \theta}$ is the Bregman divergence of $\Phi$ and we use the convention that $\nabla\Phi(0) = w_1$. \end{prop} \begin{proof} Let $v = \argmax_{w\in\mathcal{W}}\inpro{w}{\theta}$, $v' = \argmax_{w\in \mathcal{W}}\ip{w,\theta'}$. When $\Phi$ is differentiable at $\theta$, \begin{align} D_{\Phi}(\theta', \theta) & = \Phi(\theta') - \Phi(\theta) - \inpro{\nabla\Phi(\theta)}{\theta' \!- \theta} = \inpro{v'}{\theta'} \!- \inpro{v}{\theta} -\inpro{v}{\theta' \!- \theta} = \inpro{v'\!-v}{\theta'}\,. \label{eq:bregman} \end{align} Therefore, by \cref{prop:regretabel}, $R_n = \sum_{t=1}^{n} t\ip{ w_{t+1}-w_t,\Theta_t} = \sum_{t=1}^{n} t\,D_{\Phi}(\Theta_t,\Theta_{t-1})$. \end{proof} When $\Phi$ is non-differentiable at some of the points $\Theta_1,\dots,\Theta_n$, the equality in the above proposition can be replaced with inequalities. Defining the upper Bregman divergence $\overline{D}_{\Phi}(\theta', \theta) = \sup_{w\in \partial \Phi(\theta)} \Phi(\theta') - \Phi(\theta) - \ip{ w, \theta' - \theta}$ and the lower Bregman divergence $\underline{D}_{\Phi}(\theta', \theta)$ similarly, with $\inf$ instead of $\sup$, we can easily obtain an analogue of \Cref{prop:R_nBregmanDivergence}: \begin{align} \label{eq:regreteq_alt} \sum_{t=1}^{n} t\,\underline{D}_{\Phi}(\Theta_t,\Theta_{t-1}) \le R_n \le \sum_{t=1}^{n} t\,\overline{D}_{\Phi}(\Theta_t,\Theta_{t-1})\,.
\end{align} \subsection{Constraint sets with positive curvature} The previous results show in an implicit fashion that the curvature of $\mathcal{W}$ controls the regret. Before presenting our first main result, which makes this connection explicit, we define some basic notions from differential geometry related to curvature (all differential geometry concepts and results that we need can be found in Section 2.5 of \citealp{Sch14:ConvexBodies}). Given a $C^2$ (twice continuously differentiable) planar curve $\gamma$ in $\mathbb{R}^2$, there exists a parametrization with respect to the curve length $s$ such that $\|\gamma'(s)\|^2 = \|\left(x'(s), y'(s)\right)\|^2 = x'(s)^2 + y'(s)^2=1$. Under the curve length parametrization, the curvature of $\gamma$ at $\gamma(s)$ is $\|\gamma''(s)\|$. Define the unit normal vector $\mathbf{n}(s)$ as the unit vector that is perpendicular to $\gamma'(s)$.\footnote{There exist two unit vectors that are perpendicular to $\gamma'(s)$ for each point on $\gamma$. Pick the ones that are consistently oriented.} Note that $\mathbf{n}(s)\cdot \gamma'(s) = 0$. Thus $0=\left(\mathbf{n}(s)\cdot \gamma'(s)\right)' = \mathbf{n}'(s)\cdot\gamma'(s) + \mathbf{n}(s)\cdot \gamma''(s)$, and $\|\gamma''(s)\| = |\mathbf{n}(s)\cdot \gamma''(s)| = |\mathbf{n}'(s)\cdot\gamma'(s)| = \|\mathbf{n}'(s)\|$, where we used that $\gamma''(s)$ is parallel to $\mathbf{n}(s)$ (differentiating $\|\gamma'(s)\|^2=1$ shows $\gamma''(s)\perp\gamma'(s)$) and, similarly, that $\mathbf{n}'(s)$ is parallel to $\gamma'(s)$. Therefore, the curvature of $\gamma$ at the point $\gamma(s)$ is the length of the differential of its unit normal vector. Denote the boundary of $\mathcal{W}$ by $\mathrm{bd}(\mathcal{W})$. We shall assume that $\mathcal{W}$ is $C^2$, that is, $\mathrm{bd}(\mathcal{W})$ is a twice continuously differentiable submanifold of $\mathbb{R}^d$. We denote the tangent plane of $\mathrm{bd}(\mathcal{W})$ at a point $w$ by $T_w\mathcal{W}$. There exists a unique unit vector at $w$ that is perpendicular to $T_w\mathcal{W}$ and points outward of $\mathcal{W}$. In fact, one can define a continuously differentiable unit normal vector field on $\mathrm{bd}(\mathcal{W})$, $u_{\mathcal{W}}: \mathrm{bd}(\mathcal{W}) \to \mathbb{S}^{d-1}$, the so-called Gauss map, which maps a boundary point $w\in \mathrm{bd}(\mathcal{W})$ to the unique outer normal vector to $\mathcal{W}$ at $w$, where $\mathbb{S}^{d-1}=\set{x\in\mathbb{R}^d}{\|x\|_2=1}$ denotes the unit sphere in $d$ dimensions. The differential of the Gauss map, $\nabla u_{\mathcal{W}}(w)$, defines a linear endomorphism of $T_w\mathcal{W}$; moreover, it is a self-adjoint operator with nonnegative eigenvalues, and it describes the curvature of $\mathrm{bd}(\mathcal{W})$ via the second fundamental form. In particular, the \emph{principal curvatures} of $\mathrm{bd}(\mathcal{W})$ at $w\in\mathrm{bd}(\mathcal{W})$ are defined as the eigenvalues of $\nabla u_{\mathcal{W}}(w)$. Perhaps a more intuitive, yet equivalent, definition is that the principal curvatures are the eigenvalues of the Hessian of $f=f_w$ in the parameterization $t\mapsto w+t-f_w(t) u_{\mathcal{W}}(w)$ of $\mathrm{bd}(\mathcal{W})$, which is valid in a small open neighborhood of $w$, where $f_w: T_w \mathcal{W} \to [0,\infty)$ is a suitable convex, nonnegative valued function that also satisfies $f_w(0)= 0$ and where $T_w \mathcal{W}$, a hyperplane of $\mathbb{R}^d$, denotes the tangent space of $\mathcal{W}$ at $w$, obtained by taking the support plane $H$ of $\mathcal{W}$ at $w$ and shifting it by $-w$.
Thus, the principal curvatures at some point $w\in \mathrm{bd}(\mathcal{W})$ describe the local shape of $\mathrm{bd}(\mathcal{W})$ up to the second order. In this paper, we are interested in the minimum principal curvature at $w\in\mathrm{bd}(\mathcal{W})$, which can be interpreted as the minimum curvature at $w$ over all the planar curves on $\mathrm{bd}(\mathcal{W})$ that go through $w$. A related concept that has been used in convex optimization to show fast rates is that of a strongly convex constraint set \citep{LePo66,garber2014faster}: $\mathcal{W}$ is $\lambda$-strongly convex with respect to the norm $\norm{\cdot}$ if, for any $x,y\in \mathcal{W}$ and $\gamma\in [0,1]$, the $\norm{\cdot}$-ball with origin $\gamma x + (1-\gamma) y$ and radius $\gamma(1-\gamma) \lambda \norm{x-y}^2/2 $ is included in $ \mathcal{W}$. We show in \cref{strongconvex} in the appendix that a $C^2$ convex body $\mathcal{W}$ is $\lambda$-strongly convex with respect to $\norm{\cdot}_2$ if and only if the principal curvatures of the surface $\mathrm{bd}(\mathcal{W})$ are all at least $\lambda$. As promised, our next result connects the principal curvatures of $\mathrm{bd}(\mathcal{W})$ to the regret of FTL and shows that FTL enjoys logarithmic regret for highly curved surfaces, as long as $\norm{\Theta_t}_2$ is bounded away from zero. \begin{thm} \label{thm:R_curvesurface} Let $\mathcal{W}\subset \mathbb{R}^d$ be a $C^2$ convex body\footnote{Following \citet{Sch14:ConvexBodies}, a convex body of $\mathbb{R}^d$ is any non-empty, compact, convex subset of $\mathbb{R}^d$.} with $d\ge 2$. Let $M = \max_{f\in \mathcal{F}} \norm{f}_2$ and assume that $\Phi$ is differentiable at $(\Theta_t)_{t}$. Assume that the principal curvatures of the surface $\mathrm{bd}(\mathcal{W})$ are all at least $\lambda_0$ for some constant $\lambda_0>0$ and that $L_n:=\min_{1\le t \le n} \|\Theta_t\|_2 >0$. Choose $w_1\in \mathrm{bd}(\mathcal{W})$. Then \[ R_n \le \frac{2M^2}{\lambda_0 L_n}(1+ \log(n))\,. \] \end{thm} As we will show later in an essentially matching lower bound, this bound is tight, showing that the forte of FTL is when $L_n$ is bounded away from zero and $\lambda_0$ is large. Note that the bound becomes vacuous as soon as $L_n =O( \log(n)/n )$ and is worse than the minimax bound of $O(\sqrt{n})$ when $L_n = o( \log(n)/\sqrt{n} )$. One possibility to reduce the bound's sensitivity to $L_n$ is to use the trivial bound $\ip{w_{t+1}-w_t,\Theta_t} \le L W$, where $W = \sup_{w,w'\in \mathcal{W}} \norm{w-w'}_2$, for the indices $t$ with $\norm{\Theta_t}\le L$. Then, by optimizing the resulting bound over $L$, one gets a data-dependent bound of the form $\inf_{L>0} \left(\frac{2M^2}{\lambda_0 L} (1+\log(n)) + LW \, \sum_{t=1}^n t \,\one{ \norm{\Theta_t}\le L }\right)$, which is more complex, but is free of $L_n$ and thus reflects the nature of FTL better. Note that in the case of stochastic problems, where $f_1,\ldots,f_n$ are independent and identically distributed (i.i.d.) with $\mu := -\Exp{\Theta_t}\ne 0$, the probability that $\norm{\Theta_t}_2 < \norm{\mu}_2/2$ is exponentially small in $t$. Thus, selecting $L=\norm{\mu}_2/2$ in the previous bound, the contribution of the expectation of the second term is $O(\norm{\mu}_2W)$, giving an overall bound of the form $O(\frac{M^2}{\lambda_0 \norm{\mu}_2}\log(n)+\norm{\mu}_2 W)$.
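Before the proof, the logarithmic rate is easy to check numerically. The following minimal sketch (ours, not the authors' code; the unit ball, the specific loss distribution, and NumPy are assumptions of the example) uses that on the unit ball the FTL step is $w_{t+1} = \Theta_t/\norm{\Theta_t}_2$:
\begin{verbatim}
import numpy as np

# FTL on the unit ball (lambda_0 = 1) with i.i.d. losses whose mean is
# L*e_1, so that ||Theta_t||_2 is bounded away from zero (around L).
rng = np.random.default_rng(1)
d, n, L = 4, 100000, 0.1
F = rng.normal(size=(n, d))
F /= np.linalg.norm(F, axis=1, keepdims=True)   # unit-norm directions
F[:, 0] += L                 # E[f_t] = L*e_1 and ||f_t||_2 <= 1 + L =: M

w = np.array([1.0, 0.0, 0.0, 0.0])              # w_1 on bd(W)
loss, S = 0.0, np.zeros(d)
for t in range(n):
    loss += F[t] @ w
    S += F[t]                   # S = f_1 + ... + f_t = -t*Theta_t
    w = -S / np.linalg.norm(S)  # FTL: argmax_{||w||_2<=1} <w, Theta_t>
regret = loss + np.linalg.norm(S)   # min_{||w||<=1} <S, w> = -||S||_2
print(regret / np.log(n))   # stays bounded, in line with the theorem
\end{verbatim}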
After the proof we will provide some simple examples that should make it more intuitive how the curvature of $\mathcal{W}$ helps to keep the regret of FTL small. \begin{proof} Fix $\theta_1, \theta_2 \in \mathbb{R}^d$ and let $w^{(1)} = \argmax_{w\in\mathcal{W}}\inpro{w}{\theta_1}$, $w^{(2)} = \argmax_{w\in\mathcal{W}}\inpro{w}{\theta_2}$. Note that if $\theta_1,\theta_2\ne 0$ then $w^{(1)} , w^{(2)} \in \mathrm{bd}(\mathcal{W})$. Below we will show that \begin{align*} \inpro{w^{(1)} - w^{(2)} }{\theta_1} & \le \frac{1}{2\lambda_0} \frac{\|\theta_2 - \theta_1\|_2^2}{\|\theta_2\|_2}\,. \numberthis\label{eq:middletheta} \end{align*} \cref{prop:regretabel} suggests that it suffices to bound $\ip{w_{t+1}-w_t,\Theta_t}$. By \eqref{eq:middletheta}, we see that it suffices to bound how much $\Theta_t$ moves. A straightforward calculation shows that $\Theta_t$ cannot move much: for any norm $\norm{\cdot}$ on $\mathcal{F}$, we have \begin{align} \|\Theta_t - \Theta_{t-1} \| & = \left\|\frac{1}{t-1}\sum_{i=1}^{t-1} f_i - \frac{1}{t}\sum_{i=1}^{t} f_i \right\| = \left\| \sum_{i=1}^{t-1} \left( \frac{1}{t-1} - \frac{1}{t}\right) f_i- \frac{1}{t}f_t\right\| \nonumber \\ & \le \left\| \sum_{i=1}^{t-1} \left( \frac{1}{t-1} - \frac{1}{t}\right) f_i \right\| + \left\| \frac{1}{t}f_t\right\| = \left\| \sum_{i=1}^{t-1} \frac{1}{t(t-1)} f_i \right\| + \left\| \frac{1}{t}f_t\right\| \nonumber \\ & = \frac{1}{t} \left\| \frac{1}{t-1} \sum_{i=1}^{t-1} f_i\right\| + \frac{1}{t}\left\|f_t\right\| \le \frac{2}{t}M\,, \label{prop:avgdiff} \end{align} where $M = \max_{f\in\mathcal{F}} \|f\|$ is a constant that depends on $\mathcal{F}$ and the norm $\norm{\cdot}$. Combining inequality \eqref{eq:middletheta} with \cref{prop:regretabel} and \eqref{prop:avgdiff}, we get \begin{align*} R_n &= \sum_{t=1}^{n} t\ip{ w_{t+1}-w_t,\Theta_t} \le \sum_{t=1}^{n} \frac{t}{2\lambda_0} \frac{\|\Theta_t - \Theta_{t-1}\|_2^2}{\|\Theta_{t-1}\|_2} \\ &\le \frac{2M^2}{\lambda_0}\sum_{t=1}^{n} \frac{1}{t\|\Theta_{t-1}\|_2} \le \frac{2M^2}{\lambda_0L_n} \sum_{t=1}^{n} \frac{1}{t} \le \frac{2M^2}{\lambda_0L_n} (1+\log(n))\,. \end{align*} To finish the proof, it thus remains to show~\eqref{eq:middletheta}. The following elementary lemma relates the cosine of the angle between two vectors $\theta_1$ and $\theta_2$ to the squared normalized distance between the two vectors, thereby reducing our problem to bounding the cosine of this angle. For brevity, we denote by $\cos\inangle{\theta_1}{\theta_2}$ the cosine of the angle between $\theta_1$ and $\theta_2$. \begin{lemma} \label{lem:upperboundcos} For any non-zero vectors $\theta_1, \theta_2 \in \mathbb{R}^d$, \begin{align} 1- \cos \inangle{\theta_1}{\theta_2} \le \frac{1}{2} \frac{\|\theta_1 - \theta_2\|_2^2}{\|\theta_1\|_2\|\theta_2\|_2}. \label{eq:angleineq} \end{align} \end{lemma} \begin{proof} Note that $\|\theta_1\|_2\|\theta_2\|_2\cos\inangle{\theta_1}{\theta_2} = \inpro{\theta_1}{\theta_2}$. Therefore, \eqref{eq:angleineq} is equivalent to $ 2\|\theta_1\|_2\|\theta_2\|_2 - 2\inpro{\theta_1}{\theta_2} \le \|\theta_1 - \theta_2\|_2^2 $, which, by algebraic manipulations, is itself equivalent to $0 \le (\|\theta_1\|_2-\|\theta_2\|_2)^2$.
\end{proof} \begin{figure}[h] \centering \includegraphics[height=4cm]{figures/GaussmapPro} \caption{Illustration of the construction used in the proof of~\eqref{eq:middletheta}.} \label{fig:cuttingplane} \end{figure} With this result, we see that it suffices to upper bound $\cos \inangle{\theta_1}{\theta_2}$ by $1-\lambda_0 \inpro{w^{(1)}-w^{(2)}}{\frac{\theta_1}{\|\theta_1\|_2}}$. To develop this bound, let $\tilde{\theta}_i = \frac{\theta_i}{\|\theta_i\|_2}$ for $i=1,2$. The angle between $\theta_1$ and $\theta_2$ is the same as the angle between the normalized vectors $\tilde{\theta}_1$ and $\tilde{\theta}_2$. To calculate the cosine of the angle between $\tilde{\theta}_1$ and $\tilde{\theta}_2$, let $P$ be a plane spanned by $\tilde{\theta}_1$ and $w^{(1)}-w^{(2)}$ and passing through $w^{(1)}$ ($P$ is uniquely determined if $\tilde{\theta}_1$ is not parallel to $w^{(1)}-w^{(2)}$; if there are multiple planes, just pick any of them). Further, let $\hat{\theta}_2\in \mathbb{S}^{d-1}$ be the unit vector along the projection of $\tilde{\theta}_2$ onto the plane $P$, as indicated in \cref{fig:cuttingplane}. Clearly, $\cos \inangle{\tilde{\theta}_1}{\tilde{\theta}_2} \le \cos \inangle{\tilde{\theta}_1}{\hat{\theta}_2}$. Consider a curve $\gamma(s)$ on $\mathrm{bd}(\mathcal{W})$ connecting $w^{(1)}$ and $w^{(2)}$ that is defined by the intersection of $\mathrm{bd}(\mathcal{W})$ and $P$ and is parametrized by its curve length $s$ so that $\gamma(0) = w^{(1)}$ and $\gamma(l) = w^{(2)}$, where $l$ is the length of the curve $\gamma$ between $w^{(1)}$ and $w^{(2)}$. Let $u_{\mathcal{W}}(w)$ denote the outer normal vector to $\mathcal{W}$ at $w$ as before, and let $u_\gamma\, : \, [0,l]\rightarrow \mathbb{S}^{d-1}$ be such that $u_\gamma(s) = \hat{\theta}$ where $\hat{\theta}$ is the unit vector parallel to the projection of $u_{\mathcal{W}}(\gamma(s))$ on the plane $P$. By definition, $u_\gamma(0) = \tilde{\theta}_1$ and $u_\gamma(l) = \hat{\theta}_2$. Note that in fact $\gamma$ exists in two versions since $\mathcal{W}$ is a compact convex body, hence the intersection of $P$ and $\mathrm{bd}(\mathcal{W})$ is a closed curve. Of these two versions we choose the one that satisfies $\ip{\gamma'(s),\tilde{\theta}_1}\le 0$ for $s\in [0,l]$.\footnote{$\gamma'$ and $u'_\gamma$ denote the derivatives of $\gamma$ and $u_\gamma$, respectively, which exist since $\mathcal{W}$ is $C^2$.} Given the above, we have \begin{align*} \cos \inangle{\tilde{\theta}_1}{\hat{\theta}_2} & = \inpro{\hat{\theta}_2}{\tilde{\theta}_1} = 1 \! + \inpro{\hat{\theta}_2 - \tilde{\theta}_1}{\tilde{\theta}_1} = 1\!+ \Big\langle\int_{0}^{l} u_\gamma'(s)\,\text{d}s, \tilde{\theta}_1 \Big\rangle = 1\!+ \!\int_{0}^{l} \inpro{u_\gamma'(s)}{\tilde{\theta}_1} \,\text{d}s. \numberthis \label{eq:cosint} \end{align*} Note that $\gamma$ is a planar curve on $\mathrm{bd}(\mathcal{W})$, thus its curvature $\lambda(s)$ satisfies $\lambda(s) \ge \lambda_0$ for $s\in [0,l]$. Also, for any $s\in[0,l]$, $\gamma'(s)$ is a unit vector parallel to $P$. Moreover, $u_\gamma'(s)$ is parallel to $\gamma'(s)$ and $\lambda(s) = \|u_\gamma'(s)\|_2$. Therefore, \[ \inpro{u_\gamma'(s)}{\tilde{\theta}_1} = \|u_\gamma'(s)\|_2\inpro{\gamma'(s)}{\tilde{\theta}_1} \le \lambda_0\inpro{\gamma'(s)}{\tilde{\theta}_1}, \] where the last inequality holds because $\inpro{\gamma'(s)}{\tilde{\theta}_1} \le 0$.
Plugging this into~\eqref{eq:cosint}, we get the desired \begin{align*} \cos \inangle{\tilde{\theta}_1}{\hat{\theta}_2} & \le 1+ \lambda_0\, \int_{0}^{l} \, \inpro{\gamma'(s)}{\tilde{\theta}_1} \,\text{d}s = 1+ \lambda_0 \Big\langle\int_{0}^{l} \gamma'(s) \,\text{d}s, \tilde{\theta}_1 \Big\rangle = 1 - \lambda_0 \inpro{w^{(1)} - w^{(2)}}{\tilde{\theta}_1}\,. \end{align*} Reordering and combining with~\eqref{eq:angleineq} we obtain \begin{align*} \inpro{w^{(1)} - w^{(2)}}{\tilde{\theta}_1} & \le \frac{1}{\lambda_0} \left( 1- \cos \inangle{\tilde{\theta}_1}{\hat{\theta}_2} \right) \le \frac{1}{\lambda_0} \left( 1- \cos \inangle{\theta_1}{\theta_2} \right) \le \frac{1}{2\lambda_0} \frac{\|\theta_1 - \theta_2\|_2^2}{\|\theta_1\|_2\|\theta_2\|_2}\,. \end{align*} Multiplying both sides by $\norm{\theta_1}_2$ gives~\eqref{eq:middletheta}, thus finishing the proof. \end{proof} \begin{example} \label{ex:curvature} The smallest principal curvatures of some common convex bodies are as follows: \begin{itemize}\setlength{\itemsep}{0pt} \item The smallest principal curvature $\lambda_0$ of the Euclidean ball $\mathcal{W} = \set{w}{\|w\|_2\le r}$ of radius $r$ satisfies $\lambda_0=\frac{1}{r}$. \item Let $Q$ be a positive definite matrix. If $\mathcal{W} = \set{w}{w^\top Q w\le 1 }$ then $\lambda_0=\lambda_{\min}/\sqrt{\lambda_{\max}}$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimal and maximal eigenvalues of $Q$, respectively. (\citealt{Pol96} also derived this result for the strong convexity definition \eqref{sc:l2} in \cref{strongconvex}.) \item In general, let $\phi:\mathbb{R}^d \to \mathbb{R}$ be a $C^2$ convex function. Then, for $\mathcal{W} = \set{w}{\phi(w)\le 1}$, $\lambda_0=\min_{w\in\mathrm{bd}(\mathcal{W})}\min_{v\,:\,\|v\|_2=1, v\perp \phi'(w) }\frac{v^{\top}\nabla^2\phi(w) v}{\|\phi'(w)\|_2}$~. \end{itemize} \end{example} We only prove the last statement, since it implies the other two. \begin{proof} Fix $w\in\mathrm{bd}(\mathcal{W})$. Note that $\phi'(w)$ is a normal vector at $w$ for $\mathrm{bd}(\mathcal{W})$, thus $T_w\mathcal{W} = \seto{v: v\perp \phi'(w)}$. Then the Gauss map $u_{\mathcal{W}}$ of $\mathcal{W}$ satisfies $u_{\mathcal{W}}(w) = \frac{\phi'(w)}{\|\phi'(w)\|_2}$ for $w\in\mathrm{bd}(\mathcal{W})$. Next we compute the Weingarten map $W_w:\, T_w\mathcal{W} \rightarrow T_w\mathcal{W}$, which, by definition, is the differential of $u_{\mathcal{W}}(w)$ restricted to $T_w\mathcal{W}$; note that the Weingarten map is a linear map: \[ W_w(v) = \left.\frac{\mbox{d }u_{\mathcal{W}} }{\mbox{d }w} \right\vert_{T_w\mathcal{W}} (v) = \frac{\nabla^2\phi(w)v}{\|\phi'(w)\|_2} -\frac{\phi'(w)\,\phi'(w)^{\top}\nabla^2\phi(w)\,v}{\|\phi'(w)\|_2^3}\,. \] The second term is parallel to the normal vector $\phi'(w)$, so for any unit vector $v\in T_w\mathcal{W}$ (i.e., $v\perp \phi'(w)$) we have $v^{\top} W_w(v) = \frac{v^{\top}\nabla^2\phi(w)v}{\|\phi'(w)\|_2}$. By \citet[page~105]{Sch14:ConvexBodies}, the principal curvatures of $\mathcal{W}$ at $w$ are the eigenvalues of the Weingarten map $W_w$. Since $W_w$ is self-adjoint, its smallest eigenvalue, i.e., the smallest principal curvature at $w$, is $\min_{v\,:\,\|v\|_2=1, v\perp \phi'(w) }\frac{v^{\top}\nabla^2\phi(w) v}{\|\phi'(w)\|_2}$. Taking the minimum over all $w\in\mathrm{bd}(\mathcal{W})$ finishes the proof. \end{proof} \begin{wrapfigure}{R}{0.35\textwidth} \vspace{-.05cm} \begin{framed} \centering \includegraphics[width = \textwidth, trim={6.2cm 1cm 1.8cm 0},clip] {figures/ExcessError} \vspace{-0.4cm} \caption{Illustration of how curvature helps to keep the regret small.
} \label{fig:excesserror} \vspace{-0.1cm} \end{framed} \vspace{-1.5cm} \end{wrapfigure} In the stochastic i.i.d.\ case, when $\Exp{\Theta_t} = -\mu$, we have $\norm{\Theta_t +\mu}_2 = O(1/\sqrt{t})$ with high probability. Thus, say, for $\mathcal{W}$ being the unit ball of $\mathbb{R}^d$, one has $w_t = \Theta_{t-1}/\norm{\Theta_{t-1}}_2$; therefore, a crude bound suggests that $\norm{w_t- w^* }_2 = O(1/\sqrt{t})$, overall predicting that $\Exp{R_n} = O(\sqrt{n})$, while the previous result predicts that $R_n$ is much smaller. In the next example we look at the unit ball to explain geometrically what ``causes'' the smaller regret. \begin{example} \label{exam:ERM} Let $\mathcal{W} = \set{w}{\|w\|_2\le 1}$ and consider a stochastic setting where the $f_i$ are i.i.d. samples from some underlying distribution with expectation $\Exp{f_i} = \mu = (-1,0,\ldots,0)$ and $\|f_i\|_\infty\le M$. It is straightforward to see that $w^* = (1,0,\ldots,0)$, and thus $\inpro{w^*}{\mu} = -1$. Let $E = \set{-\theta}{\|\theta - \mu\|_2 \le \epsilon}$ and let $\mu_t = \frac{1}{t}\sum_{i=1}^{t} f_i = -\Theta_t$ denote the empirical mean of the first $t$ loss vectors. As suggested beforehand, we expect $-\mu_t\in E$ with high probability. As shown in \cref{fig:excesserror}, the excess loss of an estimate $\vv{OA}$ is $\inpro{\vv{O\tilde{A}}}{\vv{OD}} - 1 = |\tilde{B}D|$. Similarly, the excess loss of an estimate $\vv{OA'}$ in the figure is $|{CD}|$. Therefore, for an estimate $-\mu_t \in E$, the point $A$ is where the largest excess loss is incurred. The triangle $OAD$ is similar to the triangle $ADB$. Thus $\frac{|BD|}{|AD|} = \frac{|AD|}{|OD|}$. Therefore, $|BD| = \epsilon^2$ and since $|{\tilde{B}D}| \le |{BD}|$, if $\|\mu_t - \mu\|_2 \le \epsilon$, the excess error is at most $\epsilon^2 = O(1/t)$, making the regret $R_n = O(\log n)$. \end{example} Our last result in this section is an asymptotic lower bound for the linear game, showing that FTL achieves the optimal rate under the condition that $\min_t \|\Theta_t\|_2\ge L >0$. \begin{thm} \label{thm:lowerbound} Let $\lambda,L \in (0,1)$. Assume that $\seto{(1,-L), (-1, -L)} \subset \mathcal{F}$ and let \[ \mathcal{W} = \seto{(x,y) \in \mathbb{R}^2: x^2 + \frac{y^2}{\lambda^2} \le 1} \] be an ellipsoid whose minimal principal curvature is $h=\lambda$. Then, for any learning strategy, there exists a sequence of losses in $\mathcal F$ such that $R_n = \Omega\left(\log(n)/(Lh)\right)$ and $\|\Theta_t\|_2 \ge L$ for all $t$. \end{thm} Note that by Example~\ref{ex:curvature}, the minimal principal curvature of $\mathcal{W}$ in the above theorem is indeed $\lambda$. In fact, it is not too hard to extend the above argument to any set $\mathcal{W}$ such that there is a $w \in \mathrm{bd}(\mathcal{W})$ where the curvature is $h$ and the curvature is a continuous function in a neighborhood of $w$ over the boundary $\mathrm{bd}(\mathcal{W})$. The constants in the bound then depend on how fast the curvature changes within this neighborhood. \begin{proof} We define a random loss sequence, and we will show that no algorithm can achieve $o(\log n/(hL))$ regret on this sequence. Let $P$ be a random variable with $\mbox{Beta}(K,K)$ distribution for some $K>0$, and, given $P$, assume that $X_t$, $t \ge 1$, are i.i.d. Bernoulli random variables with parameter $P$. Let $f_t = X_t (1, -L) + (1-X_t) (-1, -L) = (2X_t - 1, -L)$. Thus, the second coordinate of $f_t$ is always $-L$, and so $\|\Theta_t\|_2 = \left\| \tfrac{1}{t} \sum_{i=1}^t f_i \right\|_2 \ge L$. Furthermore, the conditional expectation of the loss vector is $f^p \overset{\triangle}{=} \Expc{f_t}{P=p} = (2p - 1, -L)$.
Note that $X_t$ is a function of $f_t$ for all $t$; thus the conditional expectation of $P$, given $f_1,\ldots,f_{t-1}$, can be determined by the well-known formula $\hat{P}_{t-1}= \Expc{P}{f_1 \ldots f_{t-1}} = \frac{K+\sum_{i=1}^{t-1} X_i}{2K+t-1}$. Given $p$, denote the optimizer of $f^p$ by $w^p$, that is, $w^p = \argmin_{w \in \mathcal{W}} \inner{w,f^p}$. Then the Bayesian optimal choice in round $t$ is \begin{align} \argmin_{w \in \mathcal W} \Expc{\inner{w, f^P}}{ f_1\ldots f_{t-1}} &= \argmin_{w \in \mathcal W} \inner{w, \Expc{f^P}{f_1 \ldots f_{t-1}}} \nonumber \\ &= \argmin_{w \in \mathcal W} \inner{w, f^{\hat P_{t-1}}} \nonumber \\ &= w^{\hat P_{t-1}}\,, \label{eq:bayes-opt} \end{align} where the first equality follows by the linearity of the inner product, the second since $f^p$ is a linear function of $p$, and the third by the definition of $w^p$. Thus, denoting by $W_t$ the prediction of an arbitrary algorithm in round $t$, the expected regret can be bounded from below as \begin{align} \Exp{R_n} &= \Exp{\max_{w \in \mathcal{W}} \sum_{t=1}^n \inner{W_t - w, f_t}} = \Exp{ \Expc{\max_{w \in \mathcal{W}} \sum_{t=1}^n \inner{W_t - w, f_t}}{P} } \nonumber \\ & \ge \Exp{ \Expc{ \sum_{t=1}^n \inner{W_t - w^P, f_t}}{P} } = \Exp{\sum_{t=1}^n \Expc{ \inner{W_t - w^P, f_t} }{P, f_1,\ldots,f_{t-1}}} \nonumber \\ & = \Exp{\sum_{t=1}^n \Expc{ \inner{W_t - w^P, f^P} }{f_1,\ldots,f_{t-1}}} \label{eq:Wf-ind} \\ & \ge \Exp{\sum_{t=1}^n \min_{w \in \mathcal{W}} \Expc{ \inner{w- w^P, f^P} }{f_1,\ldots,f_{t-1}}} \nonumber \\ & = \Exp{\sum_{t=1}^n \Expc{ \inner{w^{\hat{P}_{t-1}}- w^P, f^P} }{f_1,\ldots,f_{t-1}}} \label{eq:bayes1} \\ & = \sum_{t=1}^n \Exp{\inner{w^{\hat{P}_{t-1}} - w^P, f^P}} \,, \nonumber \end{align} where \eqref{eq:Wf-ind} holds because of the independence of the $f_s$ given $P$ and since $W_t$ is chosen based on $f_1,\ldots,f_{t-1}$ (but not on $P$), and \eqref{eq:bayes1} holds by \eqref{eq:bayes-opt}. By \cref{lem:P2P1loss} we have \begin{align} \sum_{t=1}^n \Exp{\inner{w^{\hat{P}_{t-1}} - w^P, f^P}} & \ge \frac{hL}{2}\sum_{t=1}^n \Exp{ \frac{\left( \frac{2\hat{P}_{t-1} - 2P}{hL} \right)^2}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 } \left(1+\left( \frac{1-2\hat{P}_{t-1}}{hL}\right)^2 \right)} } \label{eq:hLloss} \\ & = \frac{2}{hL}\sum_{t=1}^n \Exp{\frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\Expc{ \frac{ ( \hat{P}_{t-1} - P)^2}{ 1+\left( \frac{1-2\hat P_{t-1}}{hL}\right)^2 } }{P} } \nonumber \\ & \ge \frac{2}{hL}\sum_{t=1}^n \Exp{ \frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\Expc{ \frac{( \hat{P}_{t-1} - P)^2}{ 1+ 2\left( \frac{1-2P}{hL}\right)^2 +2 \left(\frac{2P - 2\hat{P}_{t-1}}{hL}\right)^2 }}{P } } \label{eq:hLlossCond}\,, \end{align} where in the last step we used $(a+b)^2 \le 2a^2 + 2b^2$. Let $\mathcal{G}_t$ be the event that $|\hat P_{t} - P| \le \frac{K |1-2P|}{2K+t} + \frac{t hL}{2K+t}$; note that $\mathcal{G}_t$ holds with high probability by \cref{lem:concenPhat}.
Then, lower bounding the first term by $0$, \eqref{eq:hLlossCond} can be lower bounded by \begin{align*} &\frac{2}{hL}\sum_{t=1}^{n-1} \Exp{ \frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\Expc{ \frac{( \hat{P}_{t} - P)^2}{ 1+ 2\left( \frac{1-2P}{hL}\right)^2 +2 \left(\frac{2P - 2\hat{P}_{t}}{hL}\right)^2 }\mathbb{I}(\mathcal{G}_t)}{P } } \\ &\ge \frac{2}{hL}\sum_{t=1}^{n-1} \Exp{ \frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\frac{\Expc{ ( \hat{P}_{t} - P )^2\mathbb{I}(\mathcal{G}_t) }{P}}{ \left(1+ 2\left( \frac{1-2P}{hL}\right)^2 +2 \left(\frac{2K}{2K+t}\frac{|1-2P|}{hL} + \frac{2t}{2K+t}\right)^2 \right)} } \\ & \ge \frac{2}{hL}\sum_{t=1}^{n-1} \Exp{\frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\frac{\Expc{ ( \hat{P}_{t} - P )^2\mathbb{I}(\mathcal{G}_t) }{P}}{ \left(9+ 4\left( \frac{1-2P}{hL}\right)^2 +8 \frac{|1-2P|}{hL} \right)} }. \end{align*} Combining the above, and using $(\hat{P}_{t} - P )^2 \le 1$ together with the upper bound on the probability of the event $\mathcal{G}^c_t$, the complement of $\mathcal{G}_t$, given in \cref{lem:concenPhat}, we get \begin{align} \Exp{R_n} & \ge \frac{2}{hL}\sum_{t=1}^{n-1} \Exp{\frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\frac{\Expc{ ( \hat{P}_{t} - P )^2 }{P}-\Prob{\mathcal{G}^c_t}}{ \left(9+ 4\left( \frac{1-2P}{hL}\right)^2 +8 \frac{|1-2P|}{hL} \right)} } \nonumber \\ & \ge \frac{2}{hL}\sum_{t=1}^{n-1} \left( \Exp{\frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\frac{\Expc{ ( \hat{P}_{t} - P )^2 }{P}}{ \left(9+ 4\left( \frac{1-2P}{hL}\right)^2 +8 \frac{|1-2P|}{hL} \right)} } - e^{-(t-1)h^2L^2} \right) \nonumber \\ & \ge \frac{2}{hL}\left(\sum_{t=1}^{n-1} \Exp{\frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\frac{\Expc{ ( \hat{P}_{t} - P )^2 }{P}}{ \left(9+ 4\left( \frac{1-2P}{hL}\right)^2 +8 \frac{|1-2P|}{hL} \right)} } \; - \frac{1}{1-e^{-h^2L^2}} \right)\,. \label{eq:Rngc} \end{align} Now, by \cref{lem:bayeserror}, we have \begin{align*} \Expc{ ( \hat{P}_{t} - P )^2 }{P} & = \frac{K^2(1-2P)^2}{(2K+t)^2} + \frac{tP(1-P)}{(2K+t)^2} \ge P(1-P) \left( \frac{1}{t} - \frac{2}{t(2K+t)} \right)~. \end{align*} Combining this with \eqref{eq:Rngc} and introducing the constant \[ C = \Exp{\frac{1}{\sqrt{1+\left( \frac{1-2P}{hL}\right)^2 }}\frac{P(1-P)}{ \left(9+ 4\left( \frac{1-2P}{hL}\right)^2 +8 \frac{|1-2P|}{hL} \right)} } \] we obtain, for any $K>0$, \begin{align} \liminf_{n \to \infty} \frac{\Exp{R_n}}{\log n} & \ge \liminf_{n \to \infty} \frac{2}{hL \log n}\left[ - \frac{1}{1-e^{-h^2L^2}} + \sum_{t=1}^{n-1} C\left(\frac{1}{t} - \frac{2}{t(2K+t)} \right) \right] = \frac{2 C}{hL}~. \end{align} It remains to calculate a constant lower bound for $C$ that is independent of $h$ and $L$. Denote $\frac{|1-2P|}{hL}$ by $Y$; then $0\le P(1-P) = \frac{1-Y^2h^2L^2}{4}\le 1/4$. Define $\widehat{\mathcal{G}}$ to be the event that $Y \le 1$. Since $P$ has $\mbox{Beta}(K,K)$ distribution, $\Exp{P} = \frac{1}{2}$ and $\mbox{Var}(P) = \frac{1}{4(2K+1)} \le \frac{1}{8K}$. Therefore, by Chebyshev's inequality, \begin{align*} \Prob{\widehat{\mathcal{G}}^c} = \Prob{ \left| P-\frac{1}{2}\right| > \frac{hL}{2} } \le \frac{1}{2 K h^2 L^2}~.
\end{align*} Therefore, \begin{align*} C &= \Exp{\frac{1}{\sqrt{1+Y^2 }}\frac{1-Y^2h^2L^2}{ 4 (9+ 4Y^2 +8 Y )}} \ge \Exp{\frac{1}{\sqrt{1+Y^2 }}\frac{1-Y^2h^2L^2}{ 4 (9+ 4Y^2 +8 Y )} \mathbb{I}(\widehat{\mathcal{G}}) } \\ & \ge \frac{1}{84\sqrt{2}}\Exp{(1-Y^2h^2L^2)\mathbb{I}(\widehat{\mathcal{G}})} \ge \frac{1}{84\sqrt{2}} \left( \Exp{1-Y^2h^2L^2} - \Prob{\widehat{\mathcal{G}}^c}\right) \\ & \quad \ge \frac{1}{84\sqrt{2}} \left( 1- \Exp{(1-2P)^2} - \frac{1}{2Kh^2L^2}\right) \ge \frac{1}{84\sqrt{2}} \left(\frac{1}{2} - \frac{h^2L^2}{2}\right), \end{align*} where the last inequality uses $\Exp{(1-2P)^2} = 4\,\mbox{Var}(P) \le \frac{1}{2K}$ and the choice $K=\frac{1}{h^2L^2}$. Therefore, \[ \liminf\limits_{n\rightarrow \infty} \frac{\Exp{R_n}}{\log n} \ge \frac{1}{84\sqrt{2}}\left(\frac{1}{hL} - hL\right) \ge \frac{1}{84\sqrt{2}}\left(\frac{1}{hL} - 1\right). \] The proof is completed by noting that the worst-case regret is at least as large as the expected regret; thus, for every $n$, there exist a value of $P$ and a sequence of loss vectors $f_1,\ldots,f_n$ such that the regret $R_n$ is $\Omega\left(\frac{\log n }{hL}\right)$. \end{proof} \subsection{Other regularities} So far we have looked at the case when FTL achieves a low regret due to the curvature of $\mathrm{bd}(\mathcal{W})$. The next result characterizes the regret of FTL when $\mathcal{W}$ is a polytope, which has a flat, non-smooth boundary, so \cref{thm:R_curvesurface} is not applicable. For this statement recall that given some norm $\|\cdot\|$, its dual norm is defined by $\|w\|_* = \sup_{\|v\|\le 1} \inpro{v}{w}$. \begin{thm} \label{thm:regretpolytope} Assume that $\mathcal{W}$ is a polytope and that $\Phi$ is differentiable at $\Theta_i$, $i= 1, \ldots, n$. Let $w_t = \argmax_{w\in\mathcal{W}} \inpro{w}{\Theta_{t-1}}$, $W = \sup_{w_1,w_2\in\mathcal{W}}\|w_1 - w_2\|_*$ and $F = \sup_{f_1,f_2\in \mathcal{F}} \norm{f_1-f_2}$. Then the regret of FTL satisfies \[ R_n \le W\, \sum_{t=1}^{n} t \,\mathbb{I}(w_{t+1}\neq w_{t}) \|\Theta_t - \Theta_{t-1}\| \le FW\,\sum_{t=1}^{n} \mathbb{I}(w_{t+1}\neq w_{t})\,. \] \end{thm} Note that when $\mathcal{W}$ is a polytope, $w_t$ is expected to ``snap'' to some vertex of $\mathcal{W}$. Hence, we expect the regret bound to be non-vacuous if, e.g., $\Theta_t$ ``stabilizes'' around some value. Some examples after the proof will illustrate this. \begin{proof} Let $v \!=\! \argmax_{w\in\mathcal{W}} \inpro{w}{\theta}$, $v'\!=\!\argmax_{w\in \mathcal{W}}\ip{w,\theta'}$. Similarly to the proof of \cref{thm:R_curvesurface}, \begin{align*} \inpro{v'-v}{\theta'} & = \inpro{v'}{\theta'} - \inpro{v'}{\theta} + \inpro{v'}{\theta} - \inpro{v}{\theta} + \inpro{v}{\theta} -\inpro{v}{\theta'} \\ & \le \inpro{v'}{\theta'} - \inpro{v'}{\theta} + \inpro{v}{\theta} -\inpro{v}{\theta'} = \inpro{v' - v}{\theta' - \theta} \le W\,\mathbb{I}(v'\neq v)\|\theta' - \theta \|, \end{align*} where the first inequality holds because $\inpro{v'}{\theta} \le \inpro{v}{\theta}$. Therefore, by \cref{prop:regretabel} and a calculation analogous to \eqref{prop:avgdiff} (which gives $t\|\Theta_t-\Theta_{t-1}\|\le F$), \begin{align*} R_n & = \sum_{t=1}^n t\,\ip{ w_{t+1}-w_t,\Theta_t} \le W\,\sum_{t=1}^{n} t\, \mathbb{I}(w_{t+1}\!\neq\! w_{t}) \|\Theta_t - \Theta_{t-1}\| \le FW\,\sum_{t=1}^{n} \mathbb{I}(w_{t+1}\!\neq\! w_{t})\,. \end{align*} \end{proof} As noted before, since $\mathcal{W}$ is a polytope, $w_t$ is (generally) attained at the vertices. In this case, the epigraph of $\Phi$ is a polyhedral cone.
Then the event $w_{t+1}\neq w_{t}$, i.e., that the ``leader'' switches, corresponds to $\Theta_{t}$ and $\Theta_{t-1}$ belonging to different linear regions, that is, regions corresponding to different linear pieces of the graph of $\Phi$. We now spell out a corollary for the stochastic setting. In particular, in this case FTL will often enjoy a constant regret: \begin{cor}[Stochastic setting] \label{cor:stocpolytope} Assume that $\mathcal{W}$ is a polytope and that $(f_t)_{1\le t \le n}$ is an i.i.d. sequence of random variables such that $\Exp{f_i} = \mu$ and $\|f_i\|_\infty \le M$. Let $W = \sup_{w_1,w_2\in \mathcal{W}} \norm{w_1-w_2}_1$. Further assume that there exists a constant $r > 0$ such that $\Phi$ is differentiable at $-\nu$ for every $\nu$ with $\|\nu-\mu\|_\infty \le r$. Then, \[ \Exp{R_n} \le 4MW \, (1+4d M^2/r^2 )\,. \] \end{cor} The condition on $\Phi$ means that $r$ can be selected to be the radius of the largest ball such that the optimal decisions for the expected losses $\mu$ and $\nu$ (i.e., the maximizers defining $\Phi(-\mu)$ and $\Phi(-\nu)$) belong to the same face of $\mathcal{W}$. \begin{proof} Let $V = \set{\nu}{\|\nu - \mu\|_\infty\le r}$. Note that the epigraph of the function $\Phi$ is a polyhedral cone. Since $\Phi$ is differentiable in the interior of $-V := \set{-\nu}{\nu\in V}$, $\set{(\theta, \Phi(\theta))}{\theta\in -V}$ is a subset of a linear subspace. Therefore, for $-\Theta_t, -\Theta_{t-1} \in V$, $w_{t+1}=w_t$. Hence, by \cref{thm:regretpolytope}, \[ \Exp{R_n} \le 2MW\,\sum_{t=1}^{n} \Prob{-\Theta_t \notin V \text{ or } -\Theta_{t-1} \notin V} \le 4MW\,\left(1+\sum_{t=1}^{n} \Prob{-\Theta_t \notin V}\right)\,. \] On the other hand, note that $\|f_i\|_\infty\le M$. Then \begin{align*} \Prob{-\Theta_t \notin V} & = \Prob{ \norm{\frac{1}{t} \sum_{i=1}^{t} f_i - \mu}_\infty \ge r} \le \sum_{j=1}^{d} \Prob{ \left|\frac{1}{t} \sum_{i=1}^{t} f_{i,j} - \mu_j\right| \ge r } \le 2d e^{-\frac{tr^2}{2M^2}}\,, \end{align*} where the last inequality is due to Hoeffding's inequality. Now, using that for $\alpha>0$, $\sum_{t=1}^n \exp(-\alpha t ) \le \int_0^n \exp(-\alpha t ) dt \le \frac{1}{\alpha}$, we get $ \Exp{R_n} \le 4MW \, (1+4d M^2/r^2 ) $. \end{proof} The condition that $\Phi$ is differentiable at $-\nu$ for every $\nu$ with $\|\nu-\mu\|_\infty \le r$ is equivalent to requiring that $\Phi$ is differentiable at $-\mu$. By \cref{prop:derivativePhi}, this condition requires that $\max_{w\in\mathcal{W}} \ip{w,-\mu}$ has a unique optimizer. Note that the volume of the set of vectors $\theta$ with multiple optimizers is zero. \section{Adaptive algorithm for the linear game} While, as shown in \cref{thm:R_curvesurface}, FTL can exploit the curvature of the surface of the constraint set to achieve $O(\log n)$ regret, it requires the curvature condition and that $\min_t \|\Theta_t\|_2$ be bounded away from zero; otherwise it may suffer even linear regret. On the other hand, many algorithms, such as the ``Follow the regularized leader'' (FTRL) algorithm \citep[see, e.g.,][]{SS12:Book}, are known to achieve a regret guarantee of $O(\sqrt{n})$ even for worst-case data in the linear setting; for the unit ball constraint set, the FTRL update takes the simple closed form sketched below.
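(A minimal sketch of ours; NumPy is assumed, and the closed form is the one derived in \eqref{eq:ftrl-eq} below.)
\begin{verbatim}
import numpy as np

def ftrl_step_ball(F_prev, t):
    # FTRL on the unit ball with R(w) = 0.5*||w||_2^2 and eta_t = 1/sqrt(t-1):
    # the unconstrained minimizer of eta_t*<F_{t-1}, w> + R(w) is
    # -eta_t*F_{t-1}; project it back onto the ball if it falls outside.
    if t == 1:
        return np.zeros_like(F_prev)      # w_1 = 0
    w = -F_prev / np.sqrt(t - 1)
    nrm = np.linalg.norm(w)
    return w if nrm <= 1.0 else w / nrm   # = -F_prev/||F_prev|| if outside
\end{verbatim}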
This raises the question of whether one can have an algorithm that achieves constant or $O(\log n)$ regret in the respective settings of \cref{cor:stocpolytope} and \cref{thm:R_curvesurface}, while still maintaining $O(\sqrt{n})$ regret for worst-case data. One way to design such an adaptive algorithm is to use the ($\mathcal{A}$, $\mathcal{B}$)-prod algorithm of \citet{sani2014exploiting}, trivially leading to the following result: \begin{prop} Consider ($\mathcal{A}$, $\mathcal{B}$)-prod of \citet{sani2014exploiting}, where algorithm $\mathcal{A}$ is chosen to be FTRL with an appropriate regularization term, while $\mathcal{B}$ is chosen to be FTL. Then the regret of the resulting hybrid algorithm $\mathcal{H}$ enjoys the following guarantees: \begin{itemize}\setlength{\itemsep}{0pt} \item If FTL achieves constant regret as in the setting of \cref{cor:stocpolytope}, then the regret of $\mathcal{H}$ is also constant. \item If FTL achieves a regret of $O(\log n)$ as in the setting of \cref{thm:R_curvesurface}, then the regret of $\mathcal{H}$ is also $O(\log n)$. \item Otherwise, the regret of $\mathcal{H}$ is at most $O(\sqrt{n\log n})$. \end{itemize} \end{prop} In the next subsection we show that if the constraint set is the unit ball, it is possible to design adaptive algorithms directly. \subsection{Adaptive Algorithms for the Unit Ball Constraint Set} In this section we provide some interesting results about adaptive algorithms for the case when $\mathcal{W}$ is the unit ball in $\mathbb{R}^d$ (naturally, the results easily generalize to any ball centered at the origin). First, we show that a variant of FTL using shrinkage as regularization has $O(\log(n))$ regret when $\|\Theta_t\|_2 \ge L>0$ for all $t$, while it also has an $O(\sqrt{n})$ worst-case guarantee. Furthermore, we show that the standard FTRL algorithm is adaptive if the constraint set is the unit ball and the loss vectors are stochastic. Throughout the section we will use the notation $F_t=-t\,\Theta_t=\sum_{i=1}^{t} f_i$. \subsubsection{Follow the Shrunken Leader} In this section we are going to analyze a combination of the FTL algorithm and the idea of shrinkage often used for regularization purposes in statistics. We assume that $\mathcal{W}=\set{x \in \mathbb{R}^d}{ \|x\|_2 \le 1}$ is the unit ball and, without loss of generality, we further assume that $\|f\|_2 \le 1$ for all $f\in\mathcal{F}$. \begin{algorithm}[t] \caption{Follow The Shrunken Leader (FTSL)} \label{alg:adaptiveAlgorithm} \begin{algorithmic}[1] \STATE Predict $w_1 = 0$; \FOR {$t = 2, ..., n-1$} \STATE {FTL: Compute $\tilde{w}_{t} = \argmin_{w\in\mathcal{W}} \inner{ w, F_{t-1}}$} \STATE {Shrinkage: Predict $w_t = \frac{\|F_{t-1}\|_2}{\sqrt{\|F_{t-1}\|_2^2+t+2}}\tilde{w}_{t}$} \ENDFOR \STATE {FTL: Compute $\tilde{w}_{n} = \argmin_{w\in\mathcal{W}} \inner{ w, F_{n-1}}$} \STATE {Shrinkage: Predict $w_n = \frac{\|F_{n-1}\|_2}{\sqrt{\|F_{n-1}\|_2^2+n}}\tilde{w}_{n}$} \end{algorithmic} \end{algorithm} The Follow The Shrunken Leader (FTSL) algorithm is given in \cref{alg:adaptiveAlgorithm}. The main idea of the algorithm is to predict a shrunken version of the FTL prediction, in this way keeping it away from the boundary of $\mathcal{W}$. The next theorem shows that the right amount of shrinkage leads to a robust, adaptive algorithm: \begin{thm} \begin{itemize} \item If there exists $L$ such that $\|\Theta_t\|_2 \ge L>0$ for $1\le t\le n$, then the regret of FTSL is $O(\log(n)/L)$.
\item Otherwise, the regret of FTSL is at most $O(\sqrt{n})$. \end{itemize} \end{thm} \begin{proof} By the definition of $F_t$ and $\mathcal{W}$, $\tilde{w}_{t} =- F_{t-1}/\|F_{t-1}\|_2$. Let $\sigma_n = \frac{\|F_{n-1}\|_2}{\sqrt{\|F_{n-1}\|_2^2 + n}}$. Our proof follows the idea of \citet{abernethy2008optimal}. We compute an upper bound on the value of the game for each round backwards for $t=n,n-1,\dots,1$, by solving for the adversary's optimal $f_t$. The value of the game using FTSL is defined as \begin{align*} V_n & = \max_{f_1, \ldots, f_n} \sum_{t=1}^{n}\inpro{w_t}{f_t}- \min_{w\in\mathcal{W}} \inpro{w}{F_n} \\ & = \max_{f_1,\ldots, f_{n-1}} \sum_{t=1}^{n-1}\inpro{w_t}{f_t} + \underbrace{\max_{f_n} \|F_{n-1}+f_n\|_2 + \inpro{f_n}{w_n}}_{=:U_n} \end{align*} We first prove that $U_n$, the second term above, is bounded from above by $\sqrt{\|F_{n-1}\|_2^2 + n}$. To see this, let $f_n = a_n \tilde{F}_{n-1} + b_n \Omega_{n-1}$ where $\tilde{F}_{n-1}$ is the unit vector parallel to $F_{n-1}$ and $\Omega_{n-1}$ is a unit vector orthogonal to $F_{n-1}$. Furthermore, since $\|f_n\|_2 \le 1$, we have $a_n^2+b_n^2 \le 1$. Thus, \begin{align*} U_n & = \max_{f_n} \sqrt{\|F_{n-1}\|_2^2 + 2a_n\|F_{n-1}\|_2 + a_n^2 + b_n^2} - a_n\sigma_{n}\\ & \le \max_{a} \sqrt{\|F_{n-1}\|_2^2 + 2a\|F_{n-1}\|_2 + n} - a\sigma_{n}\\ & = \sqrt{\|F_{n-1}\|_2^2 + n}, \end{align*} where the last equality follows since the maximum is attained at $a=0$. A similar statement holds for the other time indices: for any $t \ge 1$, \begin{equation} \label{eq:stepDiff1} \max_{f_t} \sqrt{\|F_{t-1} + f_t\|_2^2 + t + 1} + \inpro{f_t}{w_t} \le \sqrt{\|F_{t-1}\|_2^2 + t} + \frac{1}{\sqrt{t}}~. \end{equation} Before proving this inequality, let us see how it implies the second statement of the theorem: \begin{align*} V_n & \le \max_{f_1,\ldots, f_{n-1}} \sum_{t=1}^{n-1}\inpro{w_t}{f_t} + \sqrt{\|F_{n-1}\|_2^2 + n} \\ & \le \max_{f_1,\ldots, f_{n-2}} \sum_{t=1}^{n-2}\inpro{w_t}{f_t} + \sqrt{\|F_{n-2}\|_2^2 + n-1} + \frac{1}{\sqrt{n-1}} \\ & \le \ldots \\ & \le 1+ \sum_{t=1}^{n-1}\frac{1}{\sqrt{t}} = O(\sqrt{n}). \end{align*} Moreover, if $\|\Theta_t\|_2 \ge L$ for $1\le t\le n$, a stronger version of \eqref{eq:stepDiff1} also holds for $t\ge 2$: \begin{equation} \label{eq:stepDiff2} \max_{f_t} \sqrt{\|F_{t-1} + f_t\|_2^2 + t + 1} + \inpro{f_t}{w_t} \le \sqrt{\|F_{t-1}\|_2^2 + t} + \frac{1}{(t-1)L}. \end{equation} This implies the first statement of the theorem, since \begin{align*} V_n & \le \max_{f_1,\ldots, f_{n-1}} \sum_{t=1}^{n-1}\inpro{w_t}{f_t} + \sqrt{\|F_{n-1}\|_2^2 + n} \\ & \le \max_{f_1,\ldots, f_{n-2}} \sum_{t=1}^{n-2}\inpro{w_t}{f_t} + \sqrt{\|F_{n-2}\|_2^2 + n-1} + \frac{1}{(n-2)L} \\ & \le \ldots \\ & \le 2+ \sum_{t=1}^{n-2}\frac{1}{tL} = O(\log(n)/L), \end{align*} where the last step of the recursion ($t=1$) is bounded using \eqref{eq:stepDiff1}. To finish the proof, it remains to show \eqref{eq:stepDiff1} and \eqref{eq:stepDiff2}. Let $f_t = a_t \tilde{F}_{t-1} + b_t \Omega_{t-1}$ where $\tilde{F}_{t-1}$ is the unit vector parallel to $F_{t-1}$ and $\Omega_{t-1}$ is a unit vector orthogonal to $F_{t-1}$. Since $\|f_t\|_2 \le 1$, observe that $a_t^2+b_t^2 =\|f_t\|_2^2 \le 1$. Furthermore, let $\sigma_t = \frac{\|F_{t-1}\|_2}{\sqrt{\|F_{t-1}\|_2^2 + t+2}}$.
Then, for any $t \ge 1$, \begin{align} \Delta_t & =\max_{f_t} \sqrt{\|F_{t-1}\|_2^2 + 2a_t\|F_{t-1}\|_2 + a_t^2 + b_t^2 + t+1} - a_t\sigma_t - \sqrt{\|F_{t-1}\|_2^2 + t} \nonumber\\ & \le \max_{a_t} \sqrt{\|F_{t-1}\|_2^2 + 2a_t\|F_{t-1}\|_2 + t+2} - a_t\sigma_t - \sqrt{\|F_{t-1}\|_2^2 + t} \nonumber\\ & = \sqrt{\|F_{t-1}\|_2^2 + t+2} - \sqrt{\|F_{t-1}\|_2^2 + t} \nonumber\\ & = \frac{2}{\sqrt{\|F_{t-1}\|_2^2 + t+2} + \sqrt{\|F_{t-1}\|_2^2 + t}} \label{eq:stepDiff3} \\ & \le \frac{1}{\sqrt{t}}. \nonumber \end{align} This proves \eqref{eq:stepDiff1}. Moreover, for $t\ge 2$, if $\|F_{t-1}\|_2 = (t-1)\|\Theta_{t-1}\|_2 \ge (t-1)L$, then by \eqref{eq:stepDiff3} we obtain \[ \Delta_t \le \frac{2}{\sqrt{\|F_{t-1}\|_2^2 + t+2} + \sqrt{\|F_{t-1}\|_2^2 + t}} \le \frac{1}{\|F_{t-1}\|_2}\le \frac{1}{(t-1)L}, \] proving \eqref{eq:stepDiff2}. \end{proof} \subsubsection{FTRL for the case of the unit ball constraint set} In this section we show that when $\mathcal{W}$ is the unit ball in the $\ell_2$ norm, FTRL with $R(w) = \frac{1}{2}\|w\|^2$ as its regularization is an adaptive algorithm. To fix the notation, in round $t$, FTRL predicts \[ w_{t} = \argmin_{w\in \mathcal{W}} \eta_t \inpro{F_{t-1}}{w} + R(w) \] if $t >1$, and $w_1=0$. It is well known that FTRL with $\eta_t = 1/\sqrt{t-1}$ is guaranteed to achieve $O(\sqrt{n})$ regret in the adversarial setting; see, e.g., \citet{SS12:Book}. It remains to prove that FTRL indeed achieves a fast rate in the stochastic setting. \begin{thm} Assume that the sequence of loss vectors $f_1,\ldots,f_n \in \mathbb{R}^d$ satisfies $\|f_t\|_2 \le 1$ almost surely and $\Exp{f_t} = \mu$ for all $t$ with some $\|\mu\|_2 >0$. Then FTRL with $\eta_t=1/\sqrt{t-1}$ suffers $O(\log n)$ regret. \end{thm} \begin{proof} Using $R(w) = \frac{1}{2}\|w\|^2$ as its regularization, in round $t>1$ FTRL predicts \begin{equation} \label{eq:ftrl-eq} w_{t} = \argmin_{w\in \mathcal{W}} \eta_t \inpro{F_{t-1}}{w} + R(w) = \begin{cases} -\frac{1}{\sqrt{t-1}} F_{t-1} & \quad \text{if } \|F_{t-1}\| \le \sqrt{t-1}\,, \\ -\frac{F_{t-1}}{\|F_{t-1}\|} & \quad \text{otherwise.} \end{cases} \end{equation} For any $1\le t \le n$, denote by $\mathcal{E}_t$ the event that $\|F_t\| \ge \sqrt{t}$. Note that if $\|F_{t-1}\| \ge \sqrt{t-1}$, FTRL predicts exactly the same $w_t$ as FTL. Denote the accumulated loss of FTL in $n$ rounds by $\mathcal{L}^{FTL}_n$. Thus, the regret of FTRL satisfies \begin{align*} \Exp{R_n} & = \Exp{\sum_{t=1}^{n} \inpro{f_t}{w_t} - \min_{w\in\mathcal{W}} \sum_{t=1}^{n} \inpro{f_t}{w}} \\ & = \Exp{ \sum_{t=1}^{n} \inpro{f_t}{w_t} - \mathcal{L}^{FTL}_n }+ \Exp{\mathcal{L}^{FTL}_n - \min_{w\in\mathcal{W}} \sum_{t=1}^{n} \inpro{f_t}{w} } \\ & \le 2 \sum_{t=1}^{n} \Prob{\mathcal{E}_t^c} + O(\log n), \end{align*} where, to obtain the last inequality, we applied \eqref{eq:ftrl-eq} to the first term (the predictions of FTRL and FTL coincide in round $t$ whenever $\mathcal{E}_{t-1}$ holds, and each remaining round contributes at most $2$), while the second term is $O(\log n)$ by the discussion following \cref{thm:R_curvesurface}. It remains to bound the first term, $2\sum_{t=1}^{n} \Prob{\mathcal{E}_t^c}$, in the above.
For any $t > \frac{4}{\|\mu\|_2^2}$, \begin{align*} \Prob{ \|F_{t}\|_2 \le \sqrt{t} } &\le \Prob{ \|F_{t}\|_2 < \frac{t}{2}\|\mu\|_2 } \le \sum_{i\,:\,\mu_i\neq 0} \Prob{|F_{t,i}| < \frac{t}{2} |\mu_i|} \\ &\le \sum_{i\,:\,\mu_i\neq 0} \Prob{|F_{t,i}-t\mu_i| > \frac{t}{2} |\mu_i|} \le 2 \sum_{i\,:\,\mu_i\neq 0} e^{-\frac{\mu_i^2}{8} t}\,, \end{align*} where the last step is Hoeffding's inequality (each coordinate of $f_t$ lies in $[-1,1]$). Thus, \begin{align*} \sum_{t=1}^{n} \Prob{\mathcal{E}_t^c} & = \sum_{t=1}^{4/\|\mu\|_2^2} \Prob{\mathcal{E}_t^c} + \sum_{t=4/\|\mu\|_2^2}^{n} \Prob{\mathcal{E}_t^c} \\ & \le \frac{4}{\|\mu\|_2^2} + 2\sum_{i\,:\,\mu_i\neq 0} \sum_{t=0}^{n} e^{-\frac{\mu_i^2}{8} t} \le \frac{4}{\|\mu\|_2^2} + 2\sum_{i\,:\,\mu_i\neq 0} \frac{1}{1-e^{ -\frac{\mu_i^2}{8}}} \le \frac{4}{\|\mu\|_2^2} + 2\sum_{i\,:\,\mu_i\neq 0} \left(1+\frac{8}{\mu_i^2}\right), \end{align*} where in the last inequality we used $\frac{1}{1-e^{-a}} \le 1+\frac{1}{a}$ for $a>0$ (which follows from $e^{-a}\le\frac{1}{1+a}$). Therefore, if $\|\mu\|_2 > 0$, the regret of FTRL satisfies \[ \Exp{R_n} \le \frac{8}{\|\mu\|_2^2} + 4\sum_{i\,:\,\mu_i\neq 0}\left(1+\frac{8}{\mu_i^2}\right) + O(\log n) = O(\log n). \] \end{proof} \section{Simulations} \label{sec:Simulations} We performed three simulations to illustrate the differences between FTL, FTRL with the regularizer $R(w) = \frac12 \norm{w}_2^2$ (where $w_t = \argmin_{w\in \mathcal{W}} \sum_{i=1}^{t-1} \ip{f_i,w} + R(w)$), and the adaptive algorithm ($\mathcal{A}$, $\mathcal{B}$)-prod (AB) using FTL and FTRL as its candidates, which we shall call AB(FTL,FTRL). For the experiments the constraint set $\mathcal{W}$ was chosen to be a slightly elongated ellipsoid in the $4$-dimensional Euclidean space, with volume matching that of the $4$-dimensional unit ball. The actual ellipsoid is given by $\mathcal{W} = \set{w\in \mathbb{R}^4}{w^{\top}Qw \le 1}$ where $Q$ is randomly generated as \[ Q = \left(\begin{array}{cccc} 4.3367 & 3.6346 & -2.2250 & 3.5628 \\ 3.6346 & 3.9966 & -2.3613 & 3.2817\\ -2.2250 & -2.3613 & 2.0589 & -2.1295\\ 3.5628 & 3.2817 & -2.1295 & 3.4206\\ \end{array}\right). \] We experimented with 3 types of data to illustrate the behavior of the different algorithms: stochastic, ``half-adversarial'', and ``worst-case'' data (worst-case for FTL), as explained below. The first two datasets are random, so those experiments were repeated 100 times and we report the average regret with its standard deviation; the worst-case data is deterministic, so no repetition was needed. For each experiment, we set $n = 2500$. The regularization coefficient for FTRL and the learning rate for AB were chosen based on their theoretical bounds, minimizing the worst-case regret. \paragraph{Stochastic data.} In this setting we used the following model to generate $f_t$: Let $(\hat{f}_t)_t$ be an i.i.d. sequence drawn from the 4-dimensional standard normal distribution, and let $\tilde{f}_t = \hat{f}_t/\norm{\hat{f}_t}_2$. Then, $f_t$ is defined as $f_t = \tilde{f}_t + L e_1$ where $e_1 = (1,0,\dots,0)^\top$. Therefore, $\Exp{\norm{\tfrac{1}{t}\sum_{s=1}^t f_s}_2} \to L$ as $t \to \infty$. In the experiments we picked $L \in \{0, 0.1\}$. The results are shown in \cref{res:stoch}. On the left-hand side we plotted the regret against the logarithm of the number of rounds, while on the right-hand side we plotted the regret against the square root of the number of rounds, together with the standard deviation of the results over the $100$ independent runs. As can be seen from the figures, when $L=0.1$, the growth rate of the regret of FTL is indeed logarithmic, while when $L=0$, the growth rate is $\Theta(\sqrt{n})$.
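(For concreteness, the stochastic data generator just described can be sketched as follows; this is our paraphrase for reproducibility, not the authors' code, and the function name and the use of NumPy are assumptions of the sketch.)
\begin{verbatim}
import numpy as np

def stochastic_losses(n, d=4, L=0.1, seed=0):
    # Unit-norm Gaussian directions, shifted by L along the first axis,
    # so that the average loss vector concentrates around L*e_1.
    rng = np.random.default_rng(seed)
    f_hat = rng.standard_normal((n, d))
    f_tilde = f_hat / np.linalg.norm(f_hat, axis=1, keepdims=True)
    f_tilde[:, 0] += L          # f_t = tilde{f}_t + L*e_1
    return f_tilde

F = stochastic_losses(2500)              # n = 2500 as in the experiments
print(np.linalg.norm(F.mean(axis=0)))    # approximately L for large n
\end{verbatim}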
In particular, when $L=0.1$, FTL enjoys a major advantage compared to FTRL, while for $L=0$, FTL and FTRL perform essentially the same (in this special case, the regret of FTL will indeed be $O(\sqrt{n})$, as $w_t$ stays bounded while $\norm{\Theta_t} = O(1/\sqrt{t})$). As expected, AB(FTL,FTRL) gets the better of the two regrets with little to no extra penalty. \begin{figure}[th] \centering \includegraphics[width=0.8\textwidth]{figures/ExpResults/Stoc_normalized.eps} \caption{Regret of FTL, FTRL and AB(FTL,FTRL) against time for stochastic data. \label{res:stoch}} \end{figure} \paragraph{``Half-adversarial'' data} The half-adversarial data used in this experiment is the optimal solution for the adversary in the \emph{linear game} when $\mathcal{W}$ is the unit ball \citep{abernethy2008optimal}. This data is generated as follows: the sequence $\hat{f}_t$ for $t = 1, \ldots, n$ is generated randomly in the $(d-1)$-dimensional subspace $S = \text{span}\{e_2, \ldots, e_d\}$ (here $e_i$ is the $i$th unit vector of $\mathbb{R}^d$). First, $\hat{f}_1$ is drawn from the uniform distribution on the unit sphere of $S$ (a copy of $\mathbb{S}_{d-2}$). Then, for $t = 2, \ldots, n$, $\hat{f}_t$ is drawn from the uniform distribution on the unit sphere of the intersection of $S$ and the hyperplane perpendicular to $\sum_{i=1}^{t-1} \hat{f}_i$ and going through the origin. Finally, $f_t = Le_1 + \sqrt{1-L^2} \hat{f}_t$ for some $L \ge 0$. The results are reported in \cref{res:adver}. When $L=0$, the regret of both FTL and FTRL grows as $O(\sqrt{n})$. When $L=0.1$, FTL achieves $O(\log n)$ regret, while the regret of FTRL appears to be $O(\sqrt{n})$. AB(FTL,FTRL) closely matches the regret of FTL. \begin{figure}[th] \centering \includegraphics[width=0.8\textwidth]{figures/ExpResults/Adve.eps} \caption{Experimental results for ``half-adversarial'' data. \label{res:adver}} \end{figure} \paragraph{Worst-case data} We also tested the algorithms on data where FTL is known to suffer linear regret, mainly to see how well AB(FTL,FTRL) is able to deal with this setting. In this case, we set $f_{t,i}=0$ for all $t$ and $i\ge 2$, while for the first coordinate, $f_{1,1} = 0.9$, and $f_{t,1} = 2(t \mod 2) - 1$ for $t \ge 2$. The results are reported in \cref{res:worst_case}. It can be seen that the regret of FTL is linear (as one can easily verify theoretically), while AB(FTL,FTRL) succeeds in adapting to FTRL, and they both achieve a much smaller, $O(\sqrt{n})$ regret. \begin{figure}[th] \centering \includegraphics[width=0.8\textwidth]{figures/ExpResults/WorstCase.eps} \caption{Experimental results for worst-case data. \label{res:worst_case}} \end{figure} \paragraph{The unit ball} We close this section by comparing the performance of our algorithms on the unit ball, namely FTL, FTSL, FTRL, and AB(FTL,FTRL). All these algorithms are parametrized as above. The problem setup is similar to the stochastic data setting and the worst-case data setting. Again, we consider a 4-dimensional setting, that is, $\mathcal{W}$ is the unit ball in $\mathbb{R}^4$ centered at the origin. The worst-case data is generated exactly as above, while the generation process of the stochastic data is slightly modified to increase the difference between FTRL and FTL: we sample the i.i.d.
vectors $\hat{f}_t$ from a zero-mean normal distribution with independent components of variance $1/16$, and let $\tilde{f}_t=\hat{f}_t$ if $\|\hat{f}_t\|_2 \le 1$ and $\tilde{f}_t = \hat{f}_t/\norm{\hat{f}_t}_2$ when $\norm{\hat{f}_t}_2>1$ (i.e., we only normalize if $\hat{f}_t$ falls outside of the unit ball). The reason for this modification is to encourage the occurrence of the event $\|F_{t-1}\|_2 < \sqrt{t-1}$. Recall that when $\|F_{t-1}\|_2 \ge \sqrt{t-1}$, the prediction of FTRL matches that of FTL, so we are trying to create data on which their behavior actually differs. As a result, we are able to observe that the predictions of FTL and FTRL differ in the early rounds. Finally, as before, we let $f_t=\tilde{f}_t + L e_1$, and set the time horizon to $n=20{,}000$.

The results of the simulation for the stochastic data setting are shown in Figure~\ref{res:Stoc_unitBall}. In the case of $L=0.1$, FTRL suffers more regret in the early rounds, but then matches the performance of FTL. The results of the simulation for the worst-case data setting are shown in Figure~\ref{res:WorstCase_unitBall}, where FTSL performs similarly to FTRL.

\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{figures/ExpResults/Stoc_unitBall.eps}
\caption{Experimental results for stochastic data when $\mathcal{W}$ is the unit ball. \label{res:Stoc_unitBall}}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{figures/ExpResults/WorstCase_unitBall.eps}
\caption{Experimental results for worst-case data when $\mathcal{W}$ is the unit ball. \label{res:WorstCase_unitBall}}
\end{figure}

\section{Conclusion}

FTL is a simple method that is known to perform well in many settings, while existing worst-case results fail to explain its good performance. Taking a thorough look at why and when FTL can be expected to achieve small regret, we found that the curvature of the boundary of the constraint set, together with average loss vectors bounded away from zero, helps keep the regret of FTL small. These conditions are significantly different from the conditions on the curvature of the loss functions that have been considered extensively in the literature. It would be interesting to further investigate this phenomenon for other algorithms or in other learning settings.
\section{Introduction}
By using a \emph{Bell-nonlocal}~\cite{Bell64,NLreview} resource, such as an entangled pure quantum state, one can generate correlations between measurement outcomes which do not obey the principle of local causality~\cite{Bell04}, defying our intuitive understanding of nature. To date, convincing experimental demonstrations of Bell nonlocality (hereafter abbreviated as nonlocality) have been achieved in a number of different physical systems (see, e.g., Refs.~\cite{Hensen15,Shalm15,Giustina15,rosenfeld_event-ready_2017}). Operationally, nonlocality enables one to perform tasks that are not achievable in classical physics, including quantum cryptography~\cite{Ekert91}, randomness generation~\cite{Colbeck2006,Pironio10}, the reduction of communication complexity~\cite{Cleve97}, etc. For example, using nonlocal correlations, the task of quantum key distribution~\cite{Ekert91} can be achieved~\cite{Acin07} even when one assumes nothing about the shared quantum resource or the measurement apparatuses. Since then, several quantum information tasks have been proposed within this black-box paradigm (see \cite{Brunner08,Scarani12,Pironio16} and references therein)---forming a discipline that has come to be known as \emph{device-independent (DI) quantum information}.

Another peculiar feature offered by quantum theory is \emph{steering}~\cite{Schr35_0}---the fact that one can remotely steer the set of conditional quantum states (called an \emph{assemblage}~\cite{Pusey13}) accessible by a distant party by locally measuring a shared entangled state. This intriguing phenomenon was revisited in 2007 by Wiseman, Jones, and Doherty~\cite{wiseman2007}. In turn, their mathematical formulation forms the basis of a very active field of research (see, e.g., Refs.~\cite{Cavalcanti09,SNC14,Piani15,Gallego15,HLL2016} and references therein) and has given rise to the so-called {\em one-sided DI quantum information}~\cite{Branciard12}.

To exhibit nonlocality or to demonstrate the steerability of a quantum state, it is necessary to employ \emph{incompatible measurements}~\cite{Wolf09}. In particular, among existing formulations of such measurements~\cite{Heinosaari10,Reeb13,Haapasalo15}, {\em any} measurements that are incompatible---in the sense of being \emph{non-jointly-measurable}~\cite{Liang:PRep}---can always be used~\cite{Quint14,Uola14} to demonstrate the steerability of some quantum states. In fact, the incompatibility robustness~\cite{Uola15}, a quantifier of measurement incompatibility, has even been shown to be lower bounded~\cite{SLChen16,Cavalcanti16} by the steering robustness~\cite{Piani15}, a quantifier of quantum steerability.

In the context of DI quantum information, a \emph{moment matrix}, i.e., a matrix composed of a set of expectation values of observables, is known to play a very important role. In particular, the hierarchy of moment matrices due to Navascu\'{e}s, Pironio, and Ac\'{i}n (NPA)~\cite{NPA} not only has provided the only known effective characterization (more precisely, approximation) of the quantum set, but has also found applications in DI entanglement detection~\cite{Bancal11,Baccari17}, quantification~\cite{Moroder13,Liang15,SLChen16}, dimension witnessing~\cite{Brunner08,Navascues14,Navascues15prl}, self-testing~\cite{Yang14,Bancal15}, etc. Similarly, some other variants~\cite{Pusey13,Kogias15} of the NPA hierarchy have also found applications in the context of one-sided DI quantum information.
In Appendix~\ref{Sec_App_MMs}, we summarize in Table~\ref{TB_MM_Refs} some of the hierarchies of moment matrices that have been considered in (one-sided) DI quantum information.

Inspired by the moment matrices considered in Refs.~\cite{Moroder13,Pusey13}, a framework known by the name of \emph{assemblage moment matrices} (AMMs) was proposed in Ref.~\cite{SLChen16}. In contrast to previous considerations, a distinctive feature of the AMM framework is that the moment matrices considered consist of expectation values of subnormalized quantum states only (specifically, of the assemblage induced in a steering experiment). This unique feature makes AMM a very natural framework for the DI quantification of steerability and, consequently, for the DI quantification of measurement incompatibility, of entanglement robustness, and of the usefulness of the shared resource in certain quantum information tasks.

In this paper, we further explore the relevance of AMM for DI characterizations. We begin in Sec.~\ref{Sec_MM} by reviewing the concept of moment matrices considered in DI quantum information. Then, we recall from Ref.~\cite{SLChen16} the framework of AMM in Sec.~\ref{Sec:AMM}. After that, in Sec.~\ref{Sec:DI-App}, we discuss the applications of AMM in DI quantum information, specifically DI characterizations. In Sec.~\ref{Sec:Conclusion}, we conclude with a summary of results and outline some possibilities for future research.

\section{Moment matrices within the device-independent paradigm}
\label{Sec_MM}

Moment matrices, i.e., matrices of expectation values of certain observables, were first discussed in a DI setting by NPA in Ref.~\cite{NPA}. For our purposes, however, it is more convenient to think of these matrices as the result of some local, completely positive (CP) maps acting on the underlying density matrix, as discussed in Ref.~\cite{Moroder13}. To this end, consider two local CP maps $\Lambda_\text{A}$ and $\Lambda_\text{B}$ acting, respectively, on Alice's and Bob's system ($\rho_\text{A}$ and $\rho_\text{B}$):
\begin{subequations}\label{Eq:LocalMapping}
\begin{equation}
\begin{aligned}
\Lambda_{\text{A}}(\rho_{\text{A}})= \sum_n K_n \rho_{\text{A}} K_n^\dagger, \quad \Lambda_{\text{B}}(\rho_{\text{B}}) = \sum_m L_m \rho_{\text{B}} L_m^\dagger,
\end{aligned}
\end{equation}
where the Kraus operators are
\begin{equation}
K_n = \sum_i \ket{i}_{\bar{\text{A}}}\,{}_{\text{A}}\!\bra{n}\, A_i, \quad L_m = \sum_j \ket{j}_{\bar{\text{B}}}\,{}_{\text{B}}\!\bra{m}\, B_j,
\label{Eq_kraus}
\end{equation}
\end{subequations}
while $\{\ket{i}_{\bar{\text{A}}}\}$ and $\{\ket{n}_{\text{A}}\}$ ($\{\ket{j}_{\bar{\text{B}}}\}$ and $\{\ket{m}_{\text{B}}\}$) are, respectively, orthonormal bases for the output Hilbert space $\bar{\text{A}}$ ($\bar{\text{B}}$) and the input Hilbert space A (B) of Alice's (Bob's) system. In Eq.~\eqref{Eq_kraus}, $A_i$ and $B_j$ are, respectively, operators acting on Alice's and Bob's input Hilbert space. Together, when applied to a quantum state ${\rho_{\mbox{\tiny AB}}}$, these local CP maps give rise to a matrix $\chi$ of expectation values $\braket{A_k^\dagger A_i \otimes B_l^\dagger B_j}_{\rho_{\mbox{\tiny AB}}}$:
\begin{equation}
\begin{split}
&\,\chi[{\rho_{\mbox{\tiny AB}}},\{A_i\} ,\{B_j\}] \\
&= \Lambda_{\text{A}}\otimes\Lambda_{\text{B}}({\rho_{\mbox{\tiny AB}}})\\
&= \sum_{ijkl}\ket{ij}\!\bra{kl}\tr[{\rho_{\mbox{\tiny AB}}} A_k^\dagger A_i \otimes B_l^\dagger B_j],
\end{split}
\label{Eq_MM1}
\end{equation}
which is a function of ${\rho_{\mbox{\tiny AB}}}$ as well as of the choice of $\{A_i\}$ and $\{B_j\}$.
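To make the construction of Eq.~\eqref{Eq_MM1} concrete, the following self-contained Python/NumPy sketch (our own illustration, not taken from Ref.~\cite{Moroder13}) evaluates $\chi$ numerically for a two-qubit state and a simple example choice of the operator lists, and verifies its positive semidefiniteness:
\begin{verbatim}
import numpy as np

def moment_matrix(rho, A_ops, B_ops):
    """Matrix of expectation values <A_k^dag A_i (x) B_l^dag B_j>_rho."""
    nA, nB = len(A_ops), len(B_ops)
    chi = np.empty((nA * nB, nA * nB), dtype=complex)
    for i, Ai in enumerate(A_ops):
        for j, Bj in enumerate(B_ops):
            for k, Ak in enumerate(A_ops):
                for l, Bl in enumerate(B_ops):
                    M = np.kron(Ak.conj().T @ Ai, Bl.conj().T @ Bj)
                    chi[i * nB + j, k * nB + l] = np.trace(rho @ M)
    return chi

# example: two-qubit maximally entangled state
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)

# local level 1: identity plus one projector per party (onto |0> and |+>)
ops = [np.eye(2), np.diag([1.0, 0.0]), np.full((2, 2), 0.5)]

chi = moment_matrix(rho_AB, ops, ops)
print(np.linalg.eigvalsh(chi).min())   # nonnegative up to numerical precision
\end{verbatim}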
Consider now a bipartite Bell experiment where Alice (Bob) can freely choose to perform any of $n_x$ ($n_y$) measurements, each giving $n_a$ ($n_b$) possible outcomes. In quantum theory, these measurements are described by positive-operator-valued measures (POVMs). Let $\{E^\text{A}_{a|x}\}_{x,a}$ and $\{E^\text{B}_{b|y}\}_{y,b}$ respectively denote the collection of POVM elements (also known as a {\em measurement assemblage}~\cite{Piani15}) associated with Alice's and Bob's measurements, and let $\openone$ be the identity operator. Then, if we let $\{A_i\}$ ($\{B_j\}$) be the set of operators obtained by taking all $\ell$-fold products of operators from $\{\openone\}\cup\{E^\text{A}_{a|x}\}_{x,a}$ ($\{\openone\}\cup\{E^\text{B}_{b|y}\}_{y,b}$), the corresponding moment matrix, cf. Eq.~\eqref{Eq_MM1}, is said~\cite{Moroder13} to be a moment matrix of {\em local} level $\ell$ (see also Ref.~\cite{Vallins17}). Note that for all $\ell\ge 1$, one can find in the corresponding moment matrix $\chi^{(\ell)}$ expectation values that are (at most) first order in $E^\text{A}_{a|x}$ and $E^\text{B}_{b|y}$. From Born's rule, one finds that they correspond to the joint probability of Alice (Bob) observing outcome $a$ ($b$) conditioned on her (him) performing the $x$-th ($y$-th) measurement, i.e.,
\begin{equation}\label{Eq:Quantum}
P(a,b|x,y)\stackrel{\mathcal{Q}}{=}\tr\left({\rho_{\mbox{\tiny AB}}}\,E^\text{A}_{a|x}\otimes E^\text{B}_{b|y}\right).
\end{equation}
Importantly, these quantities can be estimated directly from the experimental data without assuming any knowledge of the POVM elements or of the shared state ${\rho_{\mbox{\tiny AB}}}$. In addition, all legitimate moment matrices of the form of Eq.~\eqref{Eq_MM1} are easily seen to be positive semidefinite, denoted by $\chi\succeq 0$. Thus, in a DI paradigm where only the correlations $\mathbf{P}_\text{obs}=\{P(a,b|x,y)\}_{a,b,x,y}$ are assumed (or estimated), one can still determine, through the positive semidefinite nature of moment matrices, whether $\mathbf{P}_\text{obs}$ is not quantum realizable. Let us denote by $\chi_{\mbox{\tiny DI}}^{(\ell)}$ the corresponding moment matrix in this black-box setting. If there is no way to fill in the remaining unknown entries of $\chi_{\mbox{\tiny DI}}^{(\ell)}$ [collectively denoted by $\{u_i\}$] such that $\chi_{\mbox{\tiny DI}}^{(\ell)}\succeq 0$, one has found a certificate showing that the given $\mathbf{P}_\text{obs}$ is {\em not} quantum realizable [in the sense of Eq.~\eqref{Eq:Quantum}]. From these observations, a hierarchy~\cite{NPA2008,Doherty08,Moroder13} of superset approximations $\tilde{\mathcal{Q}}^{(\ell)}$ to the set of legitimate quantum correlations (denoted by $\mathcal{Q}$) can be obtained by solving a hierarchy of semidefinite programs, each associated with a moment matrix of local level $\ell$. Moreover, the hierarchy $\tilde{\mathcal{Q}}^{(1)}\supsetneq \tilde{\mathcal{Q}}^{(2)}\supsetneq \cdots \supseteq \mathcal{Q}$ provably converges to $\mathcal{Q}$, i.e., $\tilde{\mathcal{Q}}^{(\ell\rightarrow\infty)}\rightarrow\mathcal{Q}$ (see also~\cite{NPA2008,Doherty08}).
In performing this algorithmic characterization, since any POVM can be realized as a projective measurement (embedded in a higher-dimensional Hilbert space~\cite{Neumark}), one can, without loss of generality, set the uncharacterized $\{E_{a|x}\}_a$ and $\{E_{b|y}\}_b$ to be projectors for all $x$ and $y$, such that $E_{a|x}E_{a'|x}=\delta_{a,a'}E_{a|x}$ and $E_{b|y}E_{b'|y}=\delta_{b,b'}E_{b|y}$. In addition, one can further assume that each $u_i$ is a real number; see~\cite{Moroder13} for the detailed reasoning behind these simplifications. In Table~\ref{TB_MM}, we provide a summary of the various elements of $\chi_{\mbox{\tiny DI}}^{(\ell)}$ in relation to the operators whose expectation values are to be evaluated.

\begin{center}
\begin{table}[h!]
\centering
\caption{Elements of the moment matrix $\chi_{\mbox{\tiny DI}}^{(\ell)}$ constructed from Eq.~\eqref{Eq_MM1} with the simplification that all measurements are described by orthogonal projectors. }
\begin{tabular}{|c|c|}
\hline
elements & for $A_k^\dagger A_i$ ($B_l^\dagger B_j$) \\
\hline \hline
0 & containing $E_{a|x}^\text{A} E_{a'|x}^\text{A}$ with $a\neq a'$ \\
& (or $E_{b|y}^\text{B} E_{b'|y}^\text{B}$ with $b\neq b'$) \\
\hline
$P_\text{obs}(a,b|x,y)$ & being $E_{a|x}^\text{A}$ (and $E_{b|y}^\text{B}$) \\
\hline
unknown $u_i\in\mathbb{R}$ & otherwise \\
\hline
\end{tabular}\label{TB_MM}
\end{table}
\end{center}

\section{Assemblage moment matrices \& quantum steering}
\label{Sec:AMM}

\subsection{Steerability}

In the DI paradigm explained above, all preparation devices and measurement devices are treated as uncharacterized (black) boxes. In contrast, consider now a situation where the measurement devices of one party, say, Bob, are fully characterized. Then, for every outcome $a$ that Alice obtains when she performs the $x$-th measurement, Bob can in principle perform quantum state tomography to determine the corresponding quantum state $\hat{\rho}_{a|x}$ prepared at his end. In quantum theory, if the shared quantum state is ${\rho_{\mbox{\tiny AB}}}$ and Alice's measurement assemblage is given by $\{E_{a|x}^\text{A}\}_{a,x}$ (henceforth abbreviated as $\{E_{a|x}^\text{A}\}$), then $\hat{\rho}_{a|x}$ is simply the normalized version of the conditional state
\begin{equation}\label{Eq_quantum_assemblage}
\rho_{a|x} = \tr_\text{A}(E_{a|x}^\text{A}\otimes\mathbb{1}~{\rho_{\mbox{\tiny AB}}})\quad \forall\,\, a,x,
\end{equation}
where $\tr_\text{A}(\cdot)$ refers to a partial trace over Alice's Hilbert space. Explicitly, if we denote $P(a|x)=\tr(\rho_{a|x})$, then $\hat{\rho}_{a|x}={\rho}_{a|x}/P(a|x)$. Following Ref.~\cite{Pusey13}, we refer to the set of conditional quantum states $\{\rho_{a|x}\}_{a,x}$ ($\{\rho_{a|x}\}$ in short) as an \emph{assemblage}. In certain cases, instead of the usual quantum-mechanical description, the preparation of an assemblage $\{\rho_{a|x}\}$ can be understood via a semiclassical model.
Specifically, following Ref.~\cite{wiseman2007}, we say that an assemblage $\{\rho_{a|x}\}$ admits a local-hidden-state (LHS) model if there exist legitimate probability distributions $P(\lambda)$ and $P(a|x,\lambda)$, and normalized quantum states $\hat{\sigma}_\lambda$, such that
\begin{equation}
\rho_{a|x} = \sum_\lambda P(a|x,\lambda)P(\lambda)\hat{\sigma}_\lambda \quad \forall\,\, a,x,
\label{Eq_LHS}
\end{equation}
i.e., the observed assemblage is an average of the quantum states $\hat{\sigma}_\lambda$ distributed to Bob over the common-cause distribution $P(\lambda)$ and the local response function $P(a|x,\lambda)$ on Alice's end. In this case, it is conventional to refer to the assemblage as being {\em unsteerable}. Otherwise, an assemblage $\{\rho_{a|x}\}$ that cannot be decomposed in the form of Eq.~\eqref{Eq_LHS} is said to be {\em steerable}, as Alice can apparently {\em steer} the ensemble of quantum states at Bob's end with her choice of local measurements.

There are several ways to quantify the degree of steerability of any given assemblage $\{\rho_{a|x}\}$, e.g., the steerable weight~\cite{SNC14}, the steering robustness~\cite{Piani15}, the relative entropy of steering~\cite{Gallego15,Eneet17b}, the optimal steering fraction~\cite{HLL2016}, the consistent trace-distance measure~\cite{KuPRA18}, etc. In this paper, we focus predominantly on the steering robustness (SR), defined~\cite{Piani15} as the minimum (unnormalized) weight associated with another assemblage $\{\tau_{a|x}\}$ such that its mixture with $\{\rho_{a|x}\}$ is unsteerable, i.e.,\footnote{Throughout, we use $A\succeq B$ to mean that $A-B$ is positive semidefinite.}
\begin{equation}\label{Eq_DefineSR}
\begin{aligned}
{\rm SR}(\{\rho_{a|x}\}) := &\,\,\min_{t,\{\sigma_\lambda\}, \{\tau_{a|x}\}} \quad t\\
\text{s.t.}~ &\frac{\rho_{a|x} + t \tau_{a|x}}{1+t} = \sum_\lambda D(a|x,\lambda)\sigma_\lambda \quad \forall\,\, a,x,\\
& \sigma_\lambda\succeq0,\quad \sum_\lambda \tr(\sigma_\lambda)=1,\\
&\{\tau_{a|x}\}\ \ \text{is a valid assemblage},
\end{aligned}
\end{equation}
where $D(a|x,\lambda)=\delta_{a,\lambda_x}$, $\lambda=(\lambda_1,\ldots,\lambda_{n_x})$, and $\sigma_\lambda$ is a subnormalized quantum state [$\sigma_\lambda=P(\lambda)\hat{\sigma}_\lambda$, cf. Eq.~\eqref{Eq_LHS}]. In the above formulation, we have made use of the fact that, in determining the existence of a decomposition of the form of Eq.~\eqref{Eq_LHS}, it suffices to consider deterministic $P(a|x,\lambda)$ of the form just described. A prominent advantage of SR is that, as with the steerable weight~\cite{SNC14}, it can be efficiently computed via a semidefinite program (SDP) [by setting $\rho_\lambda=(1+t)\sigma_\lambda$ in Eq.~\eqref{Eq_DefineSR}]:
\begin{subequations}
\begin{align}
{\rm SR}(\{\rho_{a|x}\}) = & \ \ \min_{\{\rho_\lambda\}} \ \ \sum_{\lambda}\tr\left(\rho_\lambda\right) - 1\label{Eq_SR1}\\
\text{s.t.}~ &\sum_\lambda D(a|x,\lambda)\rho_\lambda \succeq \rho_{a|x} \quad \forall\ a,x,\label{Eq_SR2}\\
&\rho_\lambda \succeq 0 \quad \forall\ \lambda.\label{Eq_SR3}
\end{align}
\label{Eq_SR}
\end{subequations}
From the dual of this SDP (see Ref.~\cite{Cavalcanti17}), one finds that SR actually coincides with the optimal steering fraction, a steering monotone (based on optimal steering inequalities) introduced in Ref.~\cite{HLL2016}.
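As an illustration, the SDP of Eq.~\eqref{Eq_SR} can be solved directly with off-the-shelf tools. The sketch below (our own example, assuming the Python package \texttt{cvxpy} together with any SDP solver it ships with) computes SR for the assemblage generated by the two-qubit maximally entangled state when Alice measures in the $\sigma_z$ and $\sigma_x$ bases; a strictly positive optimum certifies that the assemblage is steerable.
\begin{verbatim}
import itertools
import numpy as np
import cvxpy as cp

# Alice's projectors: x = 0 -> sigma_z basis, x = 1 -> sigma_x basis
kets = [[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
        [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]]
E = {(a, x): np.outer(kets[x][a], kets[x][a]) for a in range(2) for x in range(2)}

# for |Phi+> = (|00> + |11>)/sqrt(2): rho_{a|x} = Tr_A[(E_{a|x} (x) 1) rho_AB]
#                                              = E_{a|x}^T / 2
rho = {(a, x): E[(a, x)].T / 2 for a in range(2) for x in range(2)}

# one PSD variable rho_lambda per deterministic strategy lambda = (lambda_1, lambda_2)
lams = list(itertools.product(range(2), repeat=2))
sig = {lam: cp.Variable((2, 2), hermitian=True) for lam in lams}
cons = [sig[lam] >> 0 for lam in lams]
cons += [sum(sig[lam] for lam in lams if lam[x] == a) >> rho[(a, x)]
         for a in range(2) for x in range(2)]
obj = cp.Minimize(cp.real(sum(cp.trace(sig[lam]) for lam in lams)) - 1)
print(cp.Problem(obj, cons).solve())   # > 0  =>  {rho_{a|x}} is steerable
\end{verbatim}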
Finally, as remarked by Piani and Watrous~\cite{Piani15}, SR can be given an operational meaning in terms of the (relative) success probability of certain quantum information tasks (more on this below).

\subsection{The framework of assemblage moment matrices}\label{Sec_AMMs_formulation}

In a DI setting, single-partite probability distributions $P(a|x)$ and $P(b|y)$ {\em alone} cannot be used to provide nontrivial characterizations of the underlying devices. This is because, for one to arrive at any nontrivial statement, the observed correlation $\mathbf{P}_\text{obs}$ must also violate a Bell inequality~\cite{Brunner08,Scarani12}. Since single-partite probability distributions alone do not reveal any correlation between the measurement outcomes of distant parties, they cannot possibly violate any Bell inequality. Following this reasoning, it may seem that moment matrices associated with single-partite density matrices are likewise useless for DI characterizations. While this intuition is true for normalized single-partite density matrices, the same cannot be said of an assemblage, which consists of the subnormalized density matrices arising in a steering experiment. Specifically, for each combination of outcome $a$ and setting $x$, applying the local CP map of Eq.~\eqref{Eq:LocalMapping} to the conditional state $\rho_{a|x}$, cf. Eq.~\eqref{Eq_quantum_assemblage}, gives rise to a matrix of expectation values:
\begin{equation}
\begin{aligned}
\chi[\rho_{a|x},\{B_i\}] &= \Lambda_{\text{B}}(\rho_{a|x})\\
&= \sum_{ij}\ket{i}\!\bra{j}\tr[\rho_{a|x} B_j^\dagger B_i]\quad \forall\ a,x,
\end{aligned}
\label{Eq_AMM}
\end{equation}
where $\{B_i\}$ are again operators formed from products of elements of $\{\openone\}\cup\{E^\text{B}_{b|y}\}_{y,b}$. When the set $\{B_i\}$ involves operators that are at most $\ell$-fold products of Bob's POVM elements, the collection of matrices in Eq.~\eqref{Eq_AMM} is said~\cite{SLChen16} to constitute the \emph{assemblage moment matrices} (AMMs) of level $\ell$, and we denote each of them by $\chi^{(\ell)}[\rho_{a|x}]$. Indeed, as with the moment matrices introduced in Sec.~\ref{Sec_MM}, all entries of $\mathbf{P}_\text{obs}$ can be identified with entries in these single-partite moment matrices. For example, using Eq.~\eqref{Eq_quantum_assemblage} in Eq.~\eqref{Eq_AMM} and choosing an entry in $\chi^{(\ell)}[\rho_{a|x}]$ such that $B_i=B_j=B_j^2=E^\text{B}_{b|y}$ for some $b,y$ gives $\tr[\rho_{a|x} B_j^\dagger B_i]=P(a,b|x,y)$.

In a DI setting, neither the assemblage $\{\rho_{a|x}\}$ nor the measurement assemblage $\{E_{b|y}^\text{B}\}$ is known. Thus, apart from the few entries that can be estimated, each of these moment matrices is (largely) uncharacterized. Let us denote the corresponding AMM in this setting by $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]$ and the corresponding unknown entries collectively by $\{u_i^{(a,x)}\}$.\footnote{Although the known data in these AMMs are $\mathbf{P}_\text{obs}$, we shall write $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]$ instead of $\chi^{(\ell)}[\mathbf{P}_\text{obs}]$ to emphasize that the underlying moment matrices are induced by an assemblage. } The requirement that each $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]$ is a legitimate moment matrix, i.e., is of the form of Eq.~\eqref{Eq_AMM} while assuming Eq.~\eqref{Eq_quantum_assemblage}, then allows one to approximate algorithmically (from outside) the set of quantum correlations $\mathcal{Q}$, cf.
Eq.~\eqref{Eq:Quantum}. In addition, as with the moment matrices discussed in Sec.~\ref{Sec_MM}, in determining whether some given $\mathbf{P}_\text{obs}$ is quantum realizable, we may assume that all $\{E_{b|y}^\text{B}\}$ correspond to projective measurements, while the unobservable expectation values are real numbers (see Table~\ref{TB_AMM} for a summary of the various entries of $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]$).

\begin{table}[h!]
\centering
\caption{Elements of the moment matrix $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]$ constructed from Eq.~\eqref{Eq_AMM} with the simplification that all measurements are described by orthogonal projectors. }
\begin{tabular}{|c|c|}
\hline
elements & for $B_j^\dagger B_i$ \\
\hline \hline
0 & containing $E_{b|y}^\text{B} E_{b'|y}^\text{B}$ with $b\neq b'$ \\
\hline
$P_\text{obs}(a,b|x,y)$ & being $E_{b|y}^\text{B}$ \\
\hline
unknown $u_i\in\mathbb{R}$ & otherwise\\
\hline
\end{tabular}\label{TB_AMM}
\end{table}

As an explicit example, consider the $\ell=1$ AMMs with $n_y=n_b=2$, i.e., where $\{B_i\}=\{\openone,E_{1|1}^\text{B},E_{1|2}^\text{B}\}$. From Eq.~\eqref{Eq_AMM} we have that for each $a$ and $x$:
\begin{equation}
\begin{aligned}
&\chi^{(1)}[\rho_{a|x},\{B_i\}] =\\
&\begin{pmatrix}
\tr(\rho_{a|x}) & \tr(\rho_{a|x} E_{1|1}^\text{B}) & \tr(\rho_{a|x} E_{1|2}^\text{B})\\
\tr(\rho_{a|x} E_{1|1}^\text{B}) & \tr(\rho_{a|x} E_{1|1}^\text{B}) & \tr(\rho_{a|x}E_{1|1}^{\text{B}\dag} E_{1|2}^\text{B})\\
\tr(\rho_{a|x} E_{1|2}^\text{B}) & \tr(\rho_{a|x} E_{1|2}^{\text{B}\dag} E_{1|1}^\text{B}) & \tr(\rho_{a|x} E_{1|2}^\text{B})
\end{pmatrix}.
\end{aligned}
\label{Eq_AMM_level1}
\end{equation}
For DI characterizations, we then write this matrix (for a fixed value of $a$ and $x$) as:
\begin{equation}
\begin{aligned}
&\chi_{\mbox{\tiny DI}}^{(1)}[\rho_{a|x}] =\\
&\begin{pmatrix}
P_\text{obs}(a|x) & P_\text{obs}(a,1|x,1) & P_\text{obs}(a,1|x,2)\\
P_\text{obs}(a,1|x,1) & P_\text{obs}(a,1|x,1) & u_1^{(a,x)}\\
P_\text{obs}(a,1|x,2) & u_1^{(a,x)} & P_\text{obs}(a,1|x,2)
\end{pmatrix},
\end{aligned}
\label{Eq_DIAMM_level1}
\end{equation}
where we have made use of the simplification mentioned above and expressed the experimentally inaccessible expectation value as:
\begin{equation}
\tr(\rho_{a|x} E_{1|2}^{\text{B}\dag} E_{1|1}^\text{B}) = \tr(\rho_{a|x}E_{1|1}^{\text{B}\dag} E_{1|2}^\text{B}) =u_1^{(a,x)},
\end{equation}
with $u_1^{(a,x)}\in\mathbb{R}$ (see Ref.~\cite{Moroder13}).

\section{Device-independent applications}
\label{Sec:DI-App}

Having recalled from Ref.~\cite{SLChen16} the AMM framework, we are now in a position to explore it further for DI characterizations.

\subsection{Quantification of steerability}

As already noted in our previous work~\cite{SLChen16}, a DI lower bound on SR forms the basis of a couple of DI applications of the AMM framework. For completeness, and for comparison with the improved lower bound that we shall present in Sec.~\ref{Sec_DIIR}, we now explain how a DI lower bound on SR can be obtained by relaxing the optimization problem given in Eq.~\eqref{Eq_SR}, as proposed in Ref.~\cite{SLChen16}. To this end, let us emphasize once again that in the DI paradigm, one does not assume any knowledge (e.g., the Hilbert space dimension) of the quantum states $\rho_\lambda$ and $\rho_{a|x}$.
However, if the constraints of Eq.~\eqref{Eq_SR} hold, then even upon the application of the local CP map given in Eq.~\eqref{Eq_AMM}, the constraints---which demand the positivity of certain matrices---must still hold. At the same time, notice that each $\tr\left(\rho_\lambda\right)$ appearing in the objective function of Eq.~\eqref{Eq_SR} can still be identified with a specific entry of the AMM, denoted by $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{\lambda}]_\text{tr}$. For example, in the AMM given in Eq.~\eqref{Eq_AMM_level1}, the trace of the underlying state $\rho_{a|x}$ is given by the upper-left entry of the matrix. Putting all this together, we thus see that a DI lower bound on SR can be obtained by solving the following SDP:
\begin{subequations}\label{Eq_relax_SR}
\begin{align}
\min_{\{u_v\}} ~~&\left(\sum_{\lambda}\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{\lambda}]_\text{tr}\right)-1 \label{Eq_relax_SR1}\\
\text{s.t.}~~ &\sum_{\lambda}D(a|x,{\lambda})\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{\lambda}]\succeq \chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}] \quad\forall~a,x,\label{Eq_relax_SR2}\\
&\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{\lambda}]\succeq 0\quad \forall ~\lambda,\label{Eq_relax_SR3}\\
&\sum_a\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}] = \sum_a\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x'}] ~~\forall\, x\neq x',\label{Eq_relaxSR_nosig}\\
&\sum_a\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]_\text{tr}=1,\quad \chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]\succeq 0 ~~\forall ~a,x\label{Eq_relaxSR_posi},\\
&P(a,b|x,y)=P_{\mbox{\tiny obs}}(a,b|x,y)\quad\forall~ a,b,x,y.\label{Eq_prob_match}
\end{align}
\end{subequations}
As explained above, Eqs.~\eqref{Eq_relax_SR2} and \eqref{Eq_relax_SR3} follow by applying the CP map of Eq.~\eqref{Eq_AMM} to the constraints of Eq.~\eqref{Eq_SR}. By themselves, however, they do not guarantee the physical constraints (including normalization, positivity, and consistency) associated with the assemblage $\{\rho_{a|x}\}$, which thus have to be enforced separately in Eqs.~\eqref{Eq_relaxSR_nosig} and \eqref{Eq_relaxSR_posi}. Empirical observation enters at the level of the observed correlation in Eq.~\eqref{Eq_prob_match}, i.e., by matching entries in the AMM with the empirical data summarized in $\mathbf{P}_\text{obs}$. Instead of Eq.~\eqref{Eq_prob_match}, a (weaker) lower bound can also be obtained by imposing an equality constraint of the form $\sum_{a,b,x,y} \beta^{x,y}_{a,b} P(a,b|x,y) = \hat{I}_{\vec{\beta}}$, where $\hat{I}_{\vec{\beta}}$ is the observed value of a certain Bell function specified by the real coefficients $\beta^{x,y}_{a,b}$. Moreover, notice that if we have access to the observed probabilities $\mathbf{P}_\text{obs}$, the condition $\sum_a\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]_\text{tr}=1$ is automatically satisfied, as it amounts to the condition $\sum_a P(a|x)=1$; this is not the case if we have access only to the value of the Bell function $\hat{I}_{\vec{\beta}}$. Importantly, the constraints of Eqs.~\eqref{Eq_relaxSR_nosig} and \eqref{Eq_relaxSR_posi} do not necessarily single out $\{\rho_{a|x}\}$ as the underlying assemblage; neither do Eqs.~\eqref{Eq_relax_SR2} and \eqref{Eq_relax_SR3} entail the constraints of Eq.~\eqref{Eq_SR}. The above optimization problem is thus a relaxation of that given in Eq.~\eqref{Eq_SR}.
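To illustrate the relaxation, the sketch below (again our own minimal example, assuming \texttt{cvxpy}) evaluates the level-$1$ bound for the simplest scenario $n_x=n_y=n_a=n_b=2$, using the moment matrices of Eq.~\eqref{Eq_DIAMM_level1} and, as input data, the correlation $\mathbf{P}_\text{obs}$ obtained from the maximally entangled two-qubit state with CHSH-optimal measurements (outcomes are labeled $0,1$ here, so the pinned entries involve Bob's outcome $0$):
\begin{verbatim}
import itertools
import numpy as np
import cvxpy as cp

def proj(theta, a):   # qubit projectors for a measurement direction in the x-z plane
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    P = np.outer(v, v)
    return P if a == 0 else np.eye(2) - P

# P_obs(a,b|x,y) from |Phi+> with CHSH-optimal settings (Bell value 2*sqrt(2))
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)
thA, thB = [0.0, np.pi / 2], [np.pi / 4, -np.pi / 4]
P = {(a, b, x, y): np.trace(rho_AB @ np.kron(proj(thA[x], a),
                                             proj(thB[y], b))).real
     for a in range(2) for b in range(2) for x in range(2) for y in range(2)}
PA = {(a, x): P[(a, 0, x, 0)] + P[(a, 1, x, 0)] for a in range(2) for x in range(2)}

def amm(data=None):   # 3x3 level-1 moment matrix over {1, E_{0|0}^B, E_{0|1}^B}
    C = cp.Variable((3, 3), symmetric=True)
    cons = [C >> 0, C[0, 1] == C[1, 1], C[0, 2] == C[2, 2]]   # projector structure
    if data is not None:                                      # pin observable entries
        a, x = data
        cons += [C[0, 0] == PA[(a, x)],
                 C[1, 1] == P[(a, 0, x, 0)], C[2, 2] == P[(a, 0, x, 1)]]
    return C, cons

cons, chi, chil = [], {}, {}
for a, x in itertools.product(range(2), range(2)):
    chi[(a, x)], c = amm((a, x)); cons += c
lams = list(itertools.product(range(2), repeat=2))
for lam in lams:
    chil[lam], c = amm(); cons += c
cons += [sum(chil[lam] for lam in lams if lam[x] == a) >> chi[(a, x)]
         for a in range(2) for x in range(2)]
cons += [chi[(0, 0)][1, 2] + chi[(1, 0)][1, 2]        # no-signaling on the unknowns;
         == chi[(0, 1)][1, 2] + chi[(1, 1)][1, 2]]    # the data entries match already
obj = cp.Minimize(sum(chil[lam][0, 0] for lam in lams) - 1)
print(cp.Problem(obj, cons).solve())   # > 0: a DI certificate of steering
\end{verbatim}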
For concreteness, let us denote the optimum of Eq.~\eqref{Eq_relax_SR} by ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\mathbf{P}_{\text{obs}})$, and that obtained from some observed Bell-inequality violation by ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\hat{I})$; then
\begin{equation}\label{Eq:SRBounds}
{\rm SR}(\{\rho_{a|x}\}) \ge {\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\mathbf{P}_{\text{obs}}) \ge {\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\hat{I})
\end{equation}
for all $\ell\ge 1$, thus giving the desired DI lower bound on ${\rm SR}(\{\rho_{a|x}\})$ [see Sec.~\ref{Sec:CS16} and Ref.~\cite{Cavalcanti16} for alternative approaches to bounding ${\rm SR}(\{\rho_{a|x}\})$].

\subsection{Quantification of the advantage of quantum states in subchannel discrimination}

From Eq.~\eqref{Eq:SRBounds}, one can also quantitatively estimate the usefulness of certain steerable quantum states in a kind of subchannel discrimination problem (see Ref.~\cite{Piani15} and references therein). To this end, let $\hat{\Lambda}=\sum_a \Lambda_a$ be a quantum channel (a trace-preserving CP map) that can be decomposed into a collection of subchannels $\{\Lambda_a\}_a$, i.e., a family of CP maps $\Lambda_a$ that are each trace nonincreasing for all input states $\rho$. Following Ref.~\cite{Piani15}, we refer to this collection of subchannels as a {\em quantum instrument} $\mathcal{I}=\{\Lambda_a\}_a$. An example of $\mathcal{I}$ consists in performing a measurement on the input state, with ``$a$'' labeling the measurement outcome. In its primitive form, a subchannel discrimination problem concerns the following task: input a quantum state $\rho$ into the channel $\hat{\Lambda}$ and determine, in each trial, the actual evolution (described by $\Lambda_a$) that $\rho$ undergoes, by performing a measurement on $\Lambda_a[\rho]$. For an input quantum state $\rho$, if we denote by $\{G_{a}\}_a$ the POVM associated with the measurement on the output of the channel, then the probability of correctly identifying the subchannel $\Lambda_a$ is given by
\begin{equation}\label{Eq:ProbCorrect}
p_\checkmark(\mathcal{I}, \{G_{a'}\}_{a'}, \rho) := \sum_a \tr\left(G_a\Lambda_a[\rho]\right).
\end{equation}
For any given quantum instrument $\mathcal{I}$, the maximal probability of correctly identifying the subchannel is then obtained by maximizing the above expression over the input state $\rho$ and the POVM $\{G_{a'}\}_{a'}$, i.e.,
\begin{equation}\label{Eq:ProbNE}
p_\checkmark^{\text{NE}}(\mathcal{I}):=\max_{\rho,\{G_{a'}\}_{a'}}p_\checkmark(\mathcal{I}, \{G_{a'}\}_{a'}, \rho),
\end{equation}
where we use NE to signify ``no entanglement'' in the above guessing probability. In Refs.~\cite{Piani09,Piani15}, the authors considered a situation where the input to the channel is part of an entangled state ${\rho_{\mbox{\tiny AB}}}$ (B being the part that enters the channel) and where a measurement on the output $\mathbb{I}_\text{A}\otimes\Lambda^\text{B}_a[{\rho_{\mbox{\tiny AB}}}]$ is allowed.
Suppose now that the final measurement is restricted to be separable across A and B, but allowed to be coordinated by one-way classical communication~\cite{Piani15} (one-way LOCC) from B to A, i.e., taking the form
\begin{equation}\label{Eq:1-wayLOCC}
G_{a'} = \sum_x E_{a'|x}^{\text{A}}\otimes E_x^{\text{B}},
\end{equation}
where $E_{a'|x}^{\text{A}}\succeq 0$, $\sum_{a'}E_{a'|x}^{\text{A}}=\openone_\text{A}$ and $E_x^{\text{B}}\succeq 0$, $\sum_x E_x^{\text{B}}=\openone_\text{B}$. Then, it was shown~\cite{Piani15} that for any steerable quantum state ${\rho_{\mbox{\tiny AB}}}$, there always exists an instrument $\mathcal{I}=\{\Lambda_a\}_a$ such that the corresponding guessing probability---after optimizing over measurements of the form given in Eq.~\eqref{Eq:1-wayLOCC}---exceeds $p_\checkmark^{\text{NE}}(\mathcal{I})$. More precisely, let $\{G_{a'}\}_{a'}$ take the form of Eq.~\eqref{Eq:1-wayLOCC}. Then, for the initial state ${\rho_{\mbox{\tiny AB}}}$, the corresponding guessing probability (after optimizing over such measurements) is
\begin{equation}\label{Eq:Prob1way}
p^{\mbox{\tiny B$\rightarrow$A}}_\checkmark(\mathcal{I},{\rho_{\mbox{\tiny AB}}} ):= \max_{\{G_{a'}\}_{a'}} \sum_a \tr\left(G_a\,\mathbb{I}_\text{A}\otimes\Lambda^\text{B}_a[{\rho_{\mbox{\tiny AB}}}]\right).
\end{equation}
The advantage of a steerable state ${\rho_{\mbox{\tiny AB}}}$ over unentangled resources in the subchannel discrimination task can then be quantified via the ratio of their success probabilities. In Ref.~\cite{Piani15}, this ratio was shown to be closely related to ${\rm SR}^{\mbox{\tiny A$\rightarrow$B}}({\rho_{\mbox{\tiny AB}}})$, the steering robustness of the given \emph{quantum state} ${\rho_{\mbox{\tiny AB}}}$, defined as:
\begin{equation}
{\rm SR}^{\mbox{\tiny A$\rightarrow$B}}({\rho_{\mbox{\tiny AB}}}):=\sup_{\{E_{a|x}^\text{A}\}}{\rm SR}(\{\rho_{a|x}\}).
\label{Eq_SRofState}
\end{equation}
Explicitly, since~\cite{Piani15}
\begin{equation}\label{Eq:SRtoQITask}
\sup_\mathcal{I}\frac{p^{\mbox{\tiny B$\rightarrow$A}}_\checkmark(\mathcal{I}, {\rho_{\mbox{\tiny AB}}})}{p_\checkmark^{\text{NE}}(\mathcal{I})} = {\rm SR}^{\mbox{\tiny A$\rightarrow$B}}({\rho_{\mbox{\tiny AB}}})+1,
\end{equation}
and we can provide a DI lower bound on ${\rm SR}(\{\rho_{a|x}\})$ via Eq.~\eqref{Eq:SRBounds}, it follows from Eq.~\eqref{Eq_SRofState} that we can also estimate, in a DI manner, the advantage of the measured state over unentangled resources for the task of subchannel discrimination.

\subsection{Quantification of entanglement}
\label{Sec:DI-ER}

The possibility of lower bounding the entanglement of an underlying state in a DI setting was first demonstrated---using negativity~\cite{Vidal02} as the entanglement measure---in Ref.~\cite{Moroder13}. Subsequently, in Ref.~\cite{Toth15}, this possibility was extended to the linear entropy of entanglement. In this subsection, we discuss how such a quantification can also be achieved for the generalized robustness of entanglement~\cite{vidal1999,Steiner03}, defined as:
\begin{equation}
\begin{aligned}
{\rm ER}({\rho_{\mbox{\tiny AB}}}):= \min_{t, {\tau_{\mbox{\tiny AB}}}} & ~~t\geq 0\\
\text{s.t.}& ~~\frac{{\rho_{\mbox{\tiny AB}}} + t {\tau_{\mbox{\tiny AB}}}}{1+t}\quad \text{is separable},\\
& ~~{\tau_{\mbox{\tiny AB}}} \quad \text{is a quantum state}.
\end{aligned}
\label{Eq_ER}
\end{equation}

\subsubsection{Via the approach of AMM}

To obtain a DI lower bound on ER, we first recall that the set of unsteerable states (either from A to B or from B to A) is a strict superset of the set of separable states. Hence, it is evident from Eq.~\eqref{Eq_ER} that (see also Ref.~\cite{Piani15})
\begin{equation}
{\rm ER}({\rho_{\mbox{\tiny AB}}})\geq {\rm SR}({\rho_{\mbox{\tiny AB}}}):=\max\{{\rm SR}^{\mbox{\tiny A$\rightarrow$B}}({\rho_{\mbox{\tiny AB}}}),{\rm SR}^{\mbox{\tiny B$\rightarrow$A}}({\rho_{\mbox{\tiny AB}}})\}.
\label{Eq_ERgeqSR}
\end{equation}
It then immediately follows from Eqs.~\eqref{Eq:SRBounds} and \eqref{Eq_SRofState} that for any assemblage $\{\rho_{a|x}\}$ on Bob's side, any assemblage $\{\rho_{b|y}\}$ on Alice's side, or any correlation $\mathbf{P}_\text{obs}$ associated with these assemblages observed in a Bell experiment:
\begin{equation}\label{Eq_ERgeqSRDI}
\begin{split}
{\rm ER}({\rho_{\mbox{\tiny AB}}})&\geq \max\{{\rm SR}(\{\rho_{a|x}\}),{\rm SR}(\{\rho_{b|y}\})\}\\
&\ge \max\{{\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\mathbf{P}_{\text{obs}}),{\rm SR}_{\text{\tiny DI},\ell}^{\mbox{\tiny B$\rightarrow$A}}(\mathbf{P}_\text{obs})\},
\end{split}
\end{equation}
which gives the desired DI lower bounds on ${\rm ER}({\rho_{\mbox{\tiny AB}}})$.

\subsubsection{Via the approach of nonlocal robustness}\label{Sec:CS16}

In Ref.~\cite{Cavalcanti16}, Cavalcanti and Skrzypczyk introduced, for any given correlation $\{P(a,b|x,y)\}$, a quantifier of nonlocality by the name of \emph{nonlocal robustness}:
\begin{equation}\label{Eq_NR}
\begin{aligned}
{\rm NR}(\mathbf{P}):= \min_{r, \{ Q(a,b|x,y)\}} &\quad r\geq 0\\
\text{s.t.}& \quad\Big\{ \frac{P(a,b|x,y) + r Q(a,b|x,y)}{1+r} \Big\} \in \mathcal{L},\\
& \quad\{Q(a,b|x,y)\}\in \mathcal{Q},
\end{aligned}
\end{equation}
where $\mathcal{L}$ and $\mathcal{Q}$ are, respectively, the sets of Bell-local and quantum correlations. Moreover, they showed~\cite{Cavalcanti16} that the nonlocal robustness $\NR(\{P(a,b|x,y)\})$ of any correlation associated with an assemblage is a lower bound on the corresponding steering robustness, i.e.,
\begin{equation}
{\rm SR}(\{\rho_{a|x}\})\geq\NR(\{P(a,b|x,y)\}).
\label{Eq_SRgeqNR}
\end{equation}
Hence, by using the first inequality of Eq.~\eqref{Eq_ERgeqSR}, we see that a DI lower bound on ${\rm ER}({\rho_{\mbox{\tiny AB}}})$ can also be obtained by computing ${\rm NR}(\mathbf{P}_\text{obs})$.

\subsubsection{Via an MBLHG-based~\cite{Moroder13} approach}

For comparison, let us also mention here the possibility of bounding ${\rm ER}({\rho_{\mbox{\tiny AB}}})$ based on the approach of Moroder \emph{et al.}~\cite{Moroder13}, abbreviated as MBLHG (see Sec.~\ref{Sec_MM}). The idea is to first relax the separability constraint of Eq.~\eqref{Eq_ER} to the positive-partial-transposition constraint~\cite{Peres96,Horodecki96}, thereby making the optimum of the following SDP a lower bound on ${\rm ER}({\rho_{\mbox{\tiny AB}}})$:
\begin{equation}
\begin{aligned}
\min_{\omega_{\mbox{\tiny AB}}}& \quad\tr({\omega_{\mbox{\tiny AB}}})-1\\
\text{s.t.}& \quad \omega_{\mbox{\tiny AB}}^{\mbox{\tiny T}_A}\succeq 0, \quad {\omega_{\mbox{\tiny AB}}} \succeq {\rho_{\mbox{\tiny AB}}};
\end{aligned}
\label{Eq_ER_SDP}
\end{equation}
here, we use $O^{\mbox{\tiny T}_A}$ to denote the partial transposition of the operator $O$ with respect to the Hilbert space of A.
For a two-qubit or a qubit-qutrit state ${\rho_{\mbox{\tiny AB}}}$, the result of Horodecki~{\em et al.}~\cite{Horodecki96} implies that the value of ${\rm ER}({\rho_{\mbox{\tiny AB}}})$ computed from Eq.~\eqref{Eq_ER_SDP} is tight. Next, by applying the local mapping of Eq.~\eqref{Eq:LocalMapping} to the linear matrix inequality constraints of Eq.~\eqref{Eq_ER_SDP}, we obtain a further relaxation of Eq.~\eqref{Eq_ER}---and hence also a DI lower bound on ${\rm ER}({\rho_{\mbox{\tiny AB}}})$---by solving the following SDP:
\begin{equation}
\begin{aligned}
\min_{\chi[{\omega_{\mbox{\tiny AB}}}],\{u_i\}}& \quad\chi[{\omega_{\mbox{\tiny AB}}}]_{\tr} -1\\
\text{s.t.}& \quad \chi[{\omega_{\mbox{\tiny AB}}}]^{\mbox{\tiny T}_{\bar{A}}}\succeq 0, \quad \chi[{\omega_{\mbox{\tiny AB}}}] \succeq \chi[{\rho_{\mbox{\tiny AB}}}],\\
& \quad \chi[{\omega_{\mbox{\tiny AB}}}] \succeq 0,\quad \chi[{\rho_{\mbox{\tiny AB}}}] \succeq 0,\quad \chi[{\rho_{\mbox{\tiny AB}}}]_{\tr} = 1,\\
&\quad P(a,b|x,y)=P_{\mbox{\tiny obs}}(a,b|x,y)\quad\forall~ a,b,x,y,
\end{aligned}
\label{Eq_DIER_MM}
\end{equation}
where $\chi[\cdot]$ refers to a moment matrix of the form of Eq.~\eqref{Eq_MM1}, $\{u_i\}$ is the set of unknown moments in $\chi[{\rho_{\mbox{\tiny AB}}}]$, and the empirical observation enters, as with Eq.~\eqref{Eq_prob_match}, through the last line of equality constraints on the relevant entries of $\chi[{\rho_{\mbox{\tiny AB}}}]$. Note also that the second line of constraints on $\chi[{\rho_{\mbox{\tiny AB}}}]$ stems from the fact that we no longer assume anything about the underlying state ${\rho_{\mbox{\tiny AB}}}$ beyond constraints of the form of Eq.~\eqref{Eq_prob_match}. Hereafter, we denote the optimum of Eq.~\eqref{Eq_DIER_MM} by ${\rm ER}_{\text{\tiny DI},\ell}(\mathbf{P}_\text{obs})$. In Table~\ref{TB_DIER}, we summarize how a DI lower bound on ${\rm ER}({\rho_{\mbox{\tiny AB}}})$ can be obtained using the three approaches explained above.

\begin{center}
\begin{table}[h!]
\centering
\caption{Different approaches to the DI quantification of the generalized robustness of entanglement. Here and below, we use $^\dag$ to mark a new method introduced in the present work for bounding quantities of interest in a DI manner. }
\begin{tabular}{|c|c|}
\hline
method & bound relations \\
\hline \hline
MBLHG-based~\cite{Moroder13}$^\dag$ & ${\rm ER}({\rho_{\mbox{\tiny AB}}})\geq {\rm ER}_{\text{\tiny DI},\ell}[\mathbf{P}_{\text{obs}}]$ \\
\hline
CS~\cite{Cavalcanti16} & ${\rm ER}({\rho_{\mbox{\tiny AB}}})\geq {\rm SR}(\{\rho_{a|x}\})\geq \NR(\mathbf{P}_{\text{obs}})$ \\
\hline
CBLC~\cite{SLChen16} & ${\rm ER}({\rho_{\mbox{\tiny AB}}})\geq {\rm SR}(\{\rho_{a|x}\})\geq {\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\mathbf{P}_{\text{obs}}) $\\
\hline
\end{tabular}\label{TB_DIER}
\end{table}
\end{center}

\subsubsection{Some explicit examples}

To gain some insight into the tightness of the DI bounds provided by the aforementioned approaches, consider, for example, the isotropic states~\cite{Horodecki99}:
\begin{equation}\label{Eq:IsoStates}
\rId(v_d) = v_d\proj{\Phi^+_d} + (1-v_d)\frac{\openone}{d^2},\quad -\frac{1}{d^2-1}\leq v_d \leq 1,
\end{equation}
where $\ket{\Phi^+_d}=\frac{1}{\sqrt{d}}\sum_{i=1}^d \ket{i}\ket{i}$ is the $d$-dimensional maximally entangled state, and $\frac{\openone}{d^2}$ is the two-qudit maximally mixed state. It is known that these states are entangled if and only if $v_d>\frac{1}{d+1}$.
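Before examining the DI bounds, we note that the state-dependent bound of Eq.~\eqref{Eq_ER_SDP} is itself readily evaluated numerically. The sketch below (our own example, assuming a version of \texttt{cvxpy} that provides \texttt{partial\_transpose}) computes it for two-qubit isotropic states; by the tightness just mentioned and the entanglement threshold above, the optimum should vanish for $v_2\le 1/3$ and become strictly positive beyond it.
\begin{verbatim}
import numpy as np
import cvxpy as cp

d = 2
phi = np.zeros(d * d); phi[:: d + 1] = 1 / np.sqrt(d)   # |Phi+_d>
PHI = np.outer(phi, phi)

def er_ppt(v):   # PPT relaxation of ER, cf. the SDP above
    rho = v * PHI + (1 - v) * np.eye(d * d) / d ** 2     # isotropic state
    omega = cp.Variable((d * d, d * d), symmetric=True)
    cons = [omega >> rho,                                # omega >= rho
            cp.partial_transpose(omega, [d, d], 0) >> 0] # omega^{T_A} >= 0
    return cp.Problem(cp.Minimize(cp.trace(omega) - 1), cons).solve()

for v in (0.2, 1 / 3, 0.6, 1.0):
    print(v, er_ppt(v))
\end{verbatim}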
In Appendix~\ref{App:ERStates}, we show that the generalized robustness of entanglement for these states is:
\begin{equation}\label{Eq:ER-rhoIso}
{\rm ER}[\rId(v_d)]=\max\left\{0,\frac{d-1}{d}\left[(d+1)v_d-1\right]\right\}.
\end{equation}
To compare the efficiency of the three methods in lower bounding ${\rm ER}[\rId(v_d)]$ in a DI setting, we first consider $\rho_{\text{\tiny I},2}$ in conjunction with the optimal measurements with respect to the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality~\cite{Clauser69} (see, e.g., Chapter 6 and Appendix B.4.1 of Ref.~\cite{Liang:PhDthesis}), the $I_{3322}$ Bell inequality~\cite{Collins04_0}, and the elegant Bell inequality~\cite{Gisin:ManyQuestions} (see, e.g., Ref.~\cite{Christensen15}), respectively. The correlation $\mathbf{P}=\{P(a,b|x,y)\}$ obtained therefrom for each of these Bell scenarios is then fed, respectively, into the optimization problems of Eqs.~\eqref{Eq_relax_SR}, \eqref{Eq_NR}, and \eqref{Eq_DIER_MM} to obtain the corresponding DI lower bound on ${\rm ER}[\rho_{\text{\tiny I},2}(v_2)]$ (cf. Table~\ref{TB_DIER}). The {\em best} lower bounds obtainable from each approach are shown in Fig.~\ref{Fig_comparison_NR_and_DISR_Werner_state} (for the lower bounds obtained from each approach in each Bell scenario, see Fig.~\ref{Fig:FullDetails}).

\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{SimplifiedBoundsDIER_IsoQubitStates}
\caption{\label{Fig:ERBoundsSimplified} Certifiable DI lower bounds on the generalized robustness of entanglement (${\rm ER}$) for two-qubit isotropic states $\rho_{\text{\tiny I}, 2}(v_2)$ based on various Bell-inequality-violating correlations $\mathbf{P}$ obtained from these states using the three approaches discussed in Sec.~\ref{Sec:DI-ER} (see text and Fig.~\ref{Fig:FullDetails} for further details). Bounds obtained from the approach of MBLHG~\cite{Moroder13}, of AMM~\cite{SLChen16}, and of Ref.~\cite{Cavalcanti16} are marked, respectively, using triangles ({\color{magenta} $\triangledown$}), squares ({\color{blue} $\square$}), and crosses ($+$). For completeness, the actual value of ${\rm ER}[\rho_{\text{\tiny I}, 2}(v_2)]$ for each given value of the visibility $v_2$, cf. Eq.~\eqref{Eq:ER-rhoIso}, is also included as a (red) solid line. }
\label{Fig_comparison_NR_and_DISR_Werner_state}
\end{center}
\end{figure}

For visibilities below $v_2\approx 0.9314$, the lower bounds obtained from the approach of AMM~\cite{SLChen16} and from that of Ref.~\cite{Cavalcanti16} fit well with the expression $(\sqrt{2}v_2-1)(\sqrt{2}-1)$. For greater values of $v_2$, especially for $v_2\gtrsim0.9321$, the AMM-based lower bounds are somewhat tighter and fit nicely with the expression $2v_2-\sqrt{3}$. On the other hand, it is also clear from the figure that the lower bounds ${\rm ER}_{\text{\tiny DI},\ell}$ offered by the MBLHG-based approach---which are well represented by the expression\footnote{In general, since the correlation $\mathbf{P}$ employed for a particular value of $v_2\in[\tfrac{1}{\sqrt{2}},1]$ is a convex combination of the $\mathbf{P}$ for $v_2\approx\tfrac{1}{\sqrt{2}}$ and for $v_2=1$, the DI bounds on ${\rm ER}[\rId(v_d)]$ can be shown to be a convex function of $v_2$.} $\frac{\sqrt{2}v_2-1}{\sqrt{2}-1}$---considerably outperform the lower bounds obtained from the other two approaches.
As a second example, we consider the $d=3$ case of Eq.~\eqref{Eq:IsoStates} and the correlations leading to the optimal quantum violation of the $I_{2233}$ Bell inequality~\cite{Collins04_0} by these states. Our results are shown in Fig.~\ref{Fig_comparison_NR_DISR_qutrit_states_2233scenario}. Again, as in the case shown in Fig.~\ref{Fig:ERBoundsSimplified}, the AMM approach appears to offer somewhat tighter lower bounds than that of Ref.~\cite{Cavalcanti16}. Also, the MBLHG-based approach again appears to give a much better lower bound on ${\rm ER}[\rho_{\text{\tiny I},3}(v_3)]$ than the other two approaches.

\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{DIER_QutritIsotropic}
\caption{ Certifiable DI lower bounds on the generalized robustness of entanglement (${\rm ER}$) for two-qutrit isotropic states $\rho_{\text{\tiny I}, 3}(v_3)$ based on the maximally $I_{2233}$-Bell-inequality-violating correlations $\mathbf{P}$ obtained from these states using the three approaches discussed in Sec.~\ref{Sec:DI-ER}. The legends follow those of Fig.~\ref{Fig:ERBoundsSimplified}. In the legend, $\ell$ denotes the level of the SDP hierarchy involved in the computation; a $*$ is included as a superscript of $\ell$ whenever the next level of the hierarchy, $\ell+1$, gives the same SDP bound (within a numerical precision of the order of $10^{-6}$). }
\label{Fig_comparison_NR_DISR_qutrit_states_2233scenario}
\end{center}
\end{figure}

\subsection{Quantification of measurement incompatibility}\label{Sec_DIIR}

A collection of measurements, i.e., a measurement assemblage~\cite{Piani15} $\{E_{a|x}\}_{a,x}$, with $a$ denoting the output and $x$ the input, is said to be incompatible (not jointly measurable) whenever it cannot be written in the form
\begin{equation}
E_{a|x} = \sum_\lambda D(a|x,\lambda) G_\lambda\quad \forall \ a,x,
\end{equation}
where $G_\lambda \succeq 0$, $\sum_\lambda G_\lambda =\openone$, and $D(a|x,\lambda)$ can be chosen, without loss of generality, as $D(a|x,\lambda) =\delta_{a,\lambda_x}$ [cf. Eq.~\eqref{Eq_DefineSR} and the text thereafter]. In other words, a measurement assemblage is incompatible if there does not exist a single joint measurement $\{G_\lambda\}$ that provides the outcome probabilities of all the inputs. The use of incompatible measurements is necessary to observe both nonlocality~\cite{Wolf09} and steering~\cite{Uola14,Quint14}. Moreover, steering and incompatibility problems can be mapped from one into the other~\cite{Uola15}, thus suggesting a measure of incompatibility: the incompatibility robustness (IR) introduced in Ref.~\cite{Uola15}. In analogy with the steering robustness, IR may be computed by solving the following SDP:
\begin{equation}\label{eq:IR_SDP}
\begin{aligned}
&{\rm IR}(\{E_{a|x}\}) = \min_{\{\tilde{G}_\lambda\}} \frac{1}{d}\sum_\lambda \tr[\tilde{G}_\lambda]-1\\
\text{s.t.}~ &\sum_\lambda D(a|x,\lambda)\tilde{G}_\lambda \succeq E_{a|x} \quad \forall\,\, a,x,\\
& \tilde{G}_\lambda \succeq 0 \quad \forall\,\, \lambda,\\
&\sum_\lambda \tilde{G}_\lambda = \openone \frac{1}{d}\sum_\lambda \tr[\tilde{G}_\lambda],
\end{aligned}
\end{equation}
where $d$ is the dimension of the Hilbert space on which the $E_{a|x}$ act.
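Like Eq.~\eqref{Eq_SR}, the program of Eq.~\eqref{eq:IR_SDP} is directly amenable to standard SDP solvers. The sketch below (our own example, assuming \texttt{cvxpy}) evaluates it for the pair of sharp qubit measurements in the $\sigma_z$ and $\sigma_x$ bases; a strictly positive optimum certifies that the pair is not jointly measurable. Note that the trace of the identity constraint already forces $s = \frac{1}{d}\sum_\lambda \tr[\tilde{G}_\lambda]$, so the objective can simply be $s-1$.
\begin{verbatim}
import itertools
import numpy as np
import cvxpy as cp

d = 2
E = {(0, 0): np.diag([1.0, 0.0]), (1, 0): np.diag([0.0, 1.0]),     # sigma_z
     (0, 1): np.full((2, 2), 0.5), (1, 1): np.array([[0.5, -0.5],  # sigma_x
                                                     [-0.5, 0.5]])}

lams = list(itertools.product(range(2), repeat=2))   # lambda = (lambda_1, lambda_2)
G = {lam: cp.Variable((d, d), hermitian=True) for lam in lams}
s = cp.Variable()                                    # s = (1/d) sum_lam Tr[G_lam]
cons = [G[lam] >> 0 for lam in lams]
cons += [sum(G[lam] for lam in lams if lam[x] == a) >> E[(a, x)]
         for a in range(2) for x in range(2)]
cons += [sum(G[lam] for lam in lams) == s * np.eye(d)]
print(cp.Problem(cp.Minimize(s - 1), cons).solve())  # > 0: incompatible pair
\end{verbatim}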
In Ref.~\cite{SLChen16}, it has been proven that the steering robustness of a given assemblage $\{\rho_{a|x}\}$ is a lower bound on the incompatibility robustness of the steering-equivalent observables~\cite{Uola15} $B_{a|x}= \rho_B^{-\frac{1}{2}} \rho_{a|x} \rho_B^{-\frac{1}{2}}$, with $\rho_B=\sum_a \rho_{a|x}$,\footnote{In the case of a reduced state $\rho_B$ that is not of full rank, it suffices to project the observables onto its range, as discussed in Ref.~\cite{Uola15}. The same reasoning applies to the mapping between the two SDPs below.} which, in turn, is a lower bound on the incompatibility robustness of $\{E_{a|x}\}$, namely
\begin{equation}
{\rm IR}(\{E_{a|x}\})\geq {\rm IR}(\{B_{a|x}\})\geq {\rm SR}(\{\rho_{a|x}\}).
\end{equation}
The corresponding DI quantifier has been discussed in Ref.~\cite{SLChen16}. An analogous observation was made in Ref.~\cite{Cavalcanti16}, where Cavalcanti and Skrzypczyk also gave, in a DI manner, a lower bound on the degree of incompatibility, quantified by the incompatibility robustness of Alice's measurement assemblage $\{E_{a|x}^A\}$. In their work, they first introduced a modified quantifier of steerability, called the \emph{consistent steering robustness}, defined as:
\begin{equation}\label{Eq:SRc}
\begin{aligned}
&{\rm SR}^c(\{\rho_{a|x}\}) = \min_{t,\{\tau_{a|x}\},\{\sigma_\lambda\}} \quad t \geq 0\\
\text{s.t.}~ &\frac{\rho_{a|x} + t \tau_{a|x}}{1+t} = \sum_\lambda D(a|x,\lambda)\sigma_\lambda \quad \forall\,\, a,x,\\
&\{\tau_{a|x}\}\quad\text{is a valid assemblage},\\
&\sigma_\lambda \succeq 0 \quad \forall\,\, \lambda,\quad \sum_\lambda \tr(\sigma_\lambda) = 1,\\
&\sum_a\tau_{a|x} = \sum_a\rho_{a|x}\quad\forall\,\, x.
\end{aligned}
\end{equation}
Compared with Eq.~\eqref{Eq_DefineSR}, the consistent steering robustness involves the additional constraints $\sum_a\tau_{a|x}=\sum_a\rho_{a|x}$ for all $x$. The above problem can also be formulated as the following SDP [by setting $\tilde{\sigma}_\lambda=(1+t)\sigma_\lambda$ and noting the nonnegativity of $\tau_{a|x}$]:
\begin{equation}\label{Eq_SR_c}
\begin{aligned}
&{\rm SR}^c(\{\rho_{a|x}\}) = \min_{\{\tilde{\sigma}_\lambda\}}\quad \tr\sum_{\lambda}\tilde{\sigma}_\lambda - 1\\
\text{s.t.}~ &\sum_\lambda D(a|x,\lambda)\tilde{\sigma}_\lambda \succeq \rho_{a|x} \quad \forall\,\, a,x,\\
&\tilde{\sigma}_\lambda \succeq 0 \quad \forall\,\, \lambda,\\
&\sum_\lambda \tilde{\sigma}_\lambda = \tr\Big[\sum_\lambda \tilde{\sigma}_\lambda\Big] \cdot \sum_a\rho_{a|x} ~~\forall\,\, x.
\end{aligned}
\end{equation}
Following an argument analogous to that of Ref.~\cite{SLChen16}, one can straightforwardly prove that ${\rm SR}^c(\{\rho_{a|x}\}) = {\rm IR}(\{B_{a|x}\})$ for the steering-equivalent observables $\{B_{a|x}\}$. In fact, by a direct inspection of Eqs.~\eqref{eq:IR_SDP} and \eqref{Eq_SR_c}, one sees that the SDP for computing ${\rm IR}(\{B_{a|x}\})$, cf.~Eq.~\eqref{eq:IR_SDP}, can be transformed into the one for computing ${\rm SR}^c(\{\rho_{a|x}\})$, Eq.~\eqref{Eq_SR_c}, via the mapping $E_{a|x} \mapsto B_{a|x} = \rho_B^{-\frac{1}{2}} \rho_{a|x} \rho_B^{-\frac{1}{2}}$, the identification $\tilde{G}_\lambda = \rho_B^{-\frac{1}{2}} \tilde{\sigma}_\lambda \rho_B^{-\frac{1}{2}}$, and the fact that $\sum_{a} \rho_{a|x} =\rho_B$. To show the inverse transformation, it suffices to use the inverse of the above mappings.
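Numerically, the mapping to steering-equivalent observables is a one-liner, as the following NumPy sketch shows (our own helper; it assumes a full-rank $\rho_B$). Feeding the resulting $\{B_{a|x}\}$ into Eq.~\eqref{eq:IR_SDP} and $\{\rho_{a|x}\}$ into Eq.~\eqref{Eq_SR_c} then provides a direct numerical check of the equality ${\rm SR}^c(\{\rho_{a|x}\}) = {\rm IR}(\{B_{a|x}\})$.
\begin{verbatim}
import numpy as np

def steering_equivalent_observables(rho, n_a, n_x):
    """Map an assemblage rho[(a, x)] to B_{a|x} = rho_B^{-1/2} rho_{a|x} rho_B^{-1/2}."""
    rho_B = sum(rho[(a, 0)] for a in range(n_a))   # reduced state, independent of x
    w, V = np.linalg.eigh(rho_B)
    S = V @ np.diag(w ** -0.5) @ V.conj().T        # rho_B^{-1/2}; full rank assumed
    return {(a, x): S @ rho[(a, x)] @ S for a in range(n_a) for x in range(n_x)}
\end{verbatim}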
In order to provide a DI lower bound on ${\rm SR}^c(\{\rho_{a|x}\})$, the authors of Ref.~\cite{Cavalcanti16} introduced a nonlocality quantifier [for a given correlation $\mathbf{P}$] named the \emph{consistent nonlocal robustness} $\NR^\text{c}(\{P(a,b|x,y)\})$:
\begin{equation}
\begin{aligned}
&\NR^\text{c}(\{P(a,b|x,y)\}) = \min_{r,\{Q(a,b|x,y)\}} \quad r \geq 0\\
\text{s.t.}~ &\frac{P(a,b|x,y) + r Q(a,b|x,y)}{1+r} \\&= \sum_\lambda D(a|x,\lambda)D(b|y,\lambda)P(\lambda) \quad \forall\,\, a,b,x,y,\\
&\{Q(a,b|x,y)\}\in\mathcal{Q},\\
&Q(b|y)=P(b|y)\quad\forall\,\, b,y,
\label{Eq_DefineNRc}
\end{aligned}
\end{equation}
i.e., it quantifies the minimal weight of a quantum correlation $\{Q(a,b|x,y)\}$ that has to be mixed into $\{P(a,b|x,y)\}$ for the mixture to become Bell-local. Here, $\{Q(a,b|x,y)\}\in\mathcal{Q}$ means that $\{Q(a,b|x,y)\}$ has a quantum realization, cf. Eq.~\eqref{Eq:Quantum}, while the last set of constraints requires Bob's marginals of $\{Q(a,b|x,y)\}$ to coincide with those of $\{P(a,b|x,y)\}$, in analogy with the consistent steering robustness [see the last line of Eq.~\eqref{Eq:SRc}]. Since the quantum set $\mathcal{Q}$ is not easily characterized, one can instead consider a superset $\tilde{\mathcal{Q}}^{(\ell)}$ of $\mathcal{Q}$ given by the $\ell$-th level of the NPA hierarchy. In this way, one obtains a lower bound on $\NR^\text{c}(\{P(a,b|x,y)\})$ by solving the following SDP, which is reformulated from Eq.~\eqref{Eq_DefineNRc} [by setting $q(\lambda)=\tfrac{1+r}{r}P(\lambda)$]:
\begin{equation}
\begin{aligned}
&\NR^\text{c}_\ell(\{P(a,b|x,y)\}) = 1/s^*, \text{ with }\\
s^* = & \max_{\{q(\lambda)\},s} s\\
\text{s.t.}~~&s={\sum_\lambda q(\lambda)-1},\ s\geq 0,\\
\Bigg\{ &\sum_\lambda D(a|x,\lambda)D(b|y,\lambda)q(\lambda) - \\
&\left(\sum_\lambda q(\lambda)-1\right)\cdot P(a,b|x,y) \Bigg\}\in\tilde{\mathcal{Q}}^{(\ell)},\\
&\sum_\lambda D(b|y,\lambda)q(\lambda)=P(b|y)\cdot\sum_\lambda q(\lambda)\quad\forall\,\, b,y,\\
&q(\lambda) \geq 0 \quad\forall\,\, \lambda.
\end{aligned}
\label{LP_NRc}
\end{equation}
Using the above quantifiers, Cavalcanti and Skrzypczyk proved~\cite{Cavalcanti16} that
\begin{equation}
{\rm IR}(\{E_{a|x}\})\geq {\rm SR}^c(\{\rho_{a|x}\})\geq \NR^\text{c}(\{P(a,b|x,y)\}),
\label{IR_SRc_NRc}
\end{equation}
which allows one to estimate the degree of incompatibility of Alice's measurements from the observed data $\mathbf{P}_\text{obs}$, i.e., in a DI manner. Here, we compare our method of lower bounding the incompatibility robustness with that of Eq.~\eqref{IR_SRc_NRc} by considering the example of Ref.~\cite{Cavalcanti16}. That is, Alice and Bob share a pure partially entangled state
\begin{equation}
|\phi\rangle = \cos\theta|00\rangle+\sin\theta|11\rangle,\quad \theta\in(0,\pi/4].
\label{Eq_pure_entangled_state}
\end{equation}
For this state, the optimal measurements for Alice and Bob giving the maximal violation of the Bell-Clauser-Horne (CH) inequality~\cite{Clauser74} are known analytically (see, e.g., Ref.~\cite{Liang:PhDthesis}). One can then estimate the DI lower bounds on the incompatibility robustness of Alice's and Bob's measurements by using the different approaches above. The results are plotted in Fig.~\ref{Fig:IR}, together with our improved bound ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny c,A$\rightarrow$B}}$, which will be introduced below.
With some attention, one observes a small but noticeable gap (of the order of $10^{-3}$ or less) between ${\rm SR}_{\mbox{\tiny DI},\ell}$ and $\NR^\text{c}_\ell$ for some values of $\theta$, even though we already employed the $5$th level of AMM in our computation of ${\rm SR}_{\mbox{\tiny DI},\ell}$ (while the computation of ${\rm NR}^\text{c}$ was achieved using the $2$nd level of the NPA hierarchy).

\begin{figure*}
\begin{minipage}[c]{.49\textwidth}
\includegraphics[width=8cm]{DIIR_AtoB}
\text{(a) For Alice's measurement assemblage}
\label{Fig_comparison_DIIR0_Alice}
\end{minipage}
\begin{minipage}[c]{.49\textwidth}
\includegraphics[width=8cm]{DIIR_BtoA}
\text{(b) For Bob's measurement assemblage}
\label{Fig_comparison_DIIR_BtoA.eps}
\end{minipage}
\caption{\label{Fig:IR} Comparison of DI lower bounds on measurement incompatibility---as measured by the incompatibility robustness ${\rm IR}$---of the measurements employed in attaining the optimal Bell-CH inequality violation of pure (partially) entangled two-qubit states. The ${\rm IR}$ of the optimal measurement assemblage as a function of $\theta$ [cf. Eq.~\eqref{Eq_pure_entangled_state}] is marked with a (red) solid line. Following Ref.~\cite{Liang:PhDthesis}, we take the optimal measurements on Alice's side to be $\sigma_x$ and $\sigma_z$ [independent of $\theta$, see subplot (a)], while those on Bob's side are a pair of measurements that are orthogonal on the Bloch sphere at $\theta=\frac{\pi}{4}$ but gradually become aligned as $\theta$ decreases to 0 [see subplot (b)]. From the resulting optimal correlations $\mathbf{P}$, one can estimate, in a DI manner, ${\rm IR}(\{E_{a|x}^{\mbox{\tiny A}}\})$ or ${\rm IR}(\{E_{b|y}^{\mbox{\tiny B}}\})$ via the AMM approach ({\color{magenta} $\square$}), the Cavalcanti-Skrzypczyk approach ($\times$)~\cite{Cavalcanti16}, and the improved AMM approach ({\color{blue} $\triangledown$}) introduced in this work. For comparison, we have also included the actual values of ${\rm IR}$ and ${\rm SR}$ in each plot using, respectively, a red (upper) and a turquoise (lower) solid line. }
\end{figure*}

Such a gap may be explained by the fact that ${\rm SR}_{\mbox{\tiny DI},\ell}$ does not take into account the consistency condition $\sum_a \tau_{a|x}=\sum_a \rho_{a|x}$, which is present in some form in $\NR^\text{c}$ and provides a better lower bound on ${\rm IR}$. To improve our bound, we apply the AMM approach to ${\rm SR}^{\text{c}}$. The optimization problem of Eq.~\eqref{Eq_SR_c} then gets relaxed to
\begin{equation}
\begin{aligned}
\min_{\{u_v\}} ~~&\left(\sum_{\lambda}\chi_{\mbox{\tiny DI}}^{(\ell)}[\sigma_{\lambda}]_\text{tr}\right)-1\\
\text{s.t.}~~ &\sum_{\lambda}D(a|x,{\lambda})\chi_{\mbox{\tiny DI}}^{(\ell)}[\sigma_{\lambda}] \succeq \chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}] \quad\forall~a,x,\\
&\sum_\lambda\chi_{\mbox{\tiny DI}}^{(\ell)}[\sigma_{\lambda}] = \sum_\lambda\chi_{\mbox{\tiny DI}}^{(\ell)}[\sigma_{\lambda}]_{\text{tr}} \cdot \sum_a \chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}] \quad\forall~ x,\\
&\chi_{\mbox{\tiny DI}}^{(\ell)}[\sigma_{\lambda}]\succeq 0\quad \forall ~\lambda,\\
&\sum_a\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}] = \sum_a\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x'}] ~~\forall\, x\neq x',\\
&\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]\succeq 0 ~~\forall ~a,x,\\
&P(a,b|x,y)=P_{\mbox{\tiny obs}}(a,b|x,y)\quad\forall~ a,b,x,y.
\end{aligned} \label{Eq_SRDIc} \end{equation} This optimization problem, however, is not in the form of an SDP, since the third line contains constraints that are quadratic in the free variables. To circumvent this complication, we can relax the original problem by keeping, instead, only a subset of the original constraints, namely the entries \begin{equation} \sum_\lambda\Big[\chi_{\mbox{\tiny DI}}^{(\ell)}[\sigma_{\lambda}]\Big]_{ij} = \sum_\lambda\chi_{\mbox{\tiny DI}}^{(\ell)}[\sigma_{\lambda}]_{\text{tr}} \cdot \sum_a \Big[\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]\Big]_{ij} \quad\forall\,\, x, \label{Eq_SRDIc_constr} \end{equation} where $i,j$ are those indices for which $[\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{a|x}]]_{ij}=P(a,b|x,y)$ is experimentally accessible. With this replacement, Eq.~\eqref{Eq_SRDIc} becomes an SDP, and we refer to its solution as ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny c,A$\rightarrow$B}}(\mathbf{P}_{\text{obs}})$. Clearly, ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny c,A$\rightarrow$B}}(\mathbf{P}_{\text{obs}})$ is a lower bound on ${\rm SR}^{\text{c}}(\{\rho_{a|x}\})$, as it is obtained by solving a relaxation of the optimization problem of Eq.~\eqref{Eq_SRDIc}, and hence of Eq.~\eqref{Eq_SR_c}. At the same time, for any given level $\ell$, a straightforward comparison shows that the lower bound ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny c,A$\rightarrow$B}}(\mathbf{P}_{\text{obs}})$ obtained by solving Eq.~\eqref{Eq_SRDIc} (with the third line replaced in the manner mentioned above) provides an upper bound on ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\mathbf{P}_{\text{obs}})$, obtained by solving Eq.~\eqref{Eq_relax_SR}, thus giving: \begin{equation} {\rm IR}(\{E_{a|x}^{\mbox{\tiny A}}\}) \geq {\rm SR}^{\text{c}}(\{\rho_{a|x}\}) \geq {\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny c,A$\rightarrow$B}}(\mathbf{P}_{\text{obs}}) \ge {\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\mathbf{P}_{\text{obs}}). \label{LB_IR_AMM2} \end{equation} Table~\ref{TB_DIIRIW} summarizes the various approaches discussed above for the DI quantification of measurement incompatibility. From Fig.~\ref{Fig:IR}, we can see that ${\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny c,A$\rightarrow$B}}$ provides much better bounds (in some instances, even tight bounds) on ${\rm IR}$ compared with ${\rm SR}_{\mbox{\tiny DI},\ell}$ and $\NR^\text{c}_\ell$. {\color{nblack} On the other hand, it is also clear from the plots that, in these instances, ${\rm SR}_{\mbox{\tiny DI},\ell}$ already provides a tight bound on the underlying SR.} \begin{center} \begin{table}[h!] \centering \caption{Different methods that can be used to provide a DI quantification of measurement incompatibility.
} \begin{tabular}{|c|c|} \hline method & bound relations \\ \hline \hline C & $ {\rm IR}(\{E_{a|x}^{\mbox{\tiny A}}\}) \geq {\rm SR}^{\text{c}}(\{\rho_{a|x}\}) \geq {\rm NR}^\text{c}_\ell(\mathbf{P}_\text{obs}) $ \\ \hline CBL & $ {\rm IR}(\{E_{a|x}^{\mbox{\tiny A}}\}) \geq {\rm SR}(\{\rho_{a|x}\}) \geq {\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny A$\rightarrow$B}}(\mathbf{P}_{\text{obs}}) $\\ \hline modified CBL$^\dag$ & $ {\rm IR}(\{E_{a|x}^{\mbox{\tiny A}}\}) \geq {\rm SR}^{\text{c}}(\{\rho_{a|x}\}) \geq {\rm SR}_{\mbox{\tiny DI},\ell}^{\mbox{\tiny c,A$\rightarrow$B}}(\mathbf{P}_{\text{obs}}) $\\ \hline \end{tabular}\label{TB_DIIRIW} \end{table} \end{center} \section{Multipartite generalization and post-quantum steering} Evidently, the framework of AMM introduced in Sec.~\ref{Sec:AMM} can be generalized to a scenario with more than two parties. Below, we discuss this specifically for the tripartite scenario and explain how this leads to novel insights into the set of correlations characterized by the framework of AMM. \subsection{Steering in the tripartite scenario} Following Ref.~\cite{Sainz15}, let us consider a tripartite Bell-type experiment where only Charlie has access to trusted (i.e., well-characterized) measurement devices. If we denote the shared quantum state by ${\rho_{\mbox{\tiny ABC}}}$ and the local POVM elements acting on Charlie's subsystem by $E^\text{C}_{c|z}$, then the analog of Eq.~\eqref{Eq:Quantum} reads: \begin{equation}\label{Eq:Quantum3} P(a,b,c|x,y,z)\stackrel{\mathcal{Q}}{=}\tr\left({\rho_{\mbox{\tiny ABC}}}\,E^\text{A}_{a|x}\otimes E^\text{B}_{b|y}\otimes E^\text{C}_{c|z}\right), \end{equation} while that of Eq.~\eqref{Eq_quantum_assemblage} reads: \begin{equation}\label{Eq_quantum_assemblage3} \rho^\text{\tiny C}_{ab|xy} = \tr_\text{A,B}(E_{a|x}^\text{A}\otimes E^\text{B}_{b|y}\otimes\openone~{\rho_{\mbox{\tiny ABC}}})\quad \forall\,\, a,x, b,y. \end{equation} It is straightforward to see from Eq.~\eqref{Eq_quantum_assemblage3} that the assemblage $\{{\rho_{ab|xy}^{\mbox{\tiny C}}}\}_{a,b,x,y}$ (hereafter abbreviated as $\{{\rho_{ab|xy}^{\mbox{\tiny C}}}\}$) satisfies the positivity constraints and some no-signaling-like consistency constraints, i.e., \begin{equation}\label{Eq_valid_assemblage3} \begin{aligned} &{\rho_{ab|xy}^{\mbox{\tiny C}}}\succeq 0 \quad \forall\,\, a,b,x,y,\quad \tr\sum_{a,b}{\rho_{ab|xy}^{\mbox{\tiny C}}} =1,\\ &\sum_{a}{\rho_{ab|xy}^{\mbox{\tiny C}}} = \sum_{a}\rho_{ab|x'y}^\text{\tiny C} \quad\forall\, x, x', y,\\ &\sum_{b}{\rho_{ab|xy}^{\mbox{\tiny C}}} = \sum_{b}\rho_{ab|xy'}^\text{\tiny C} \quad\forall\, x, y, y',\\ &\sum_{a,b}{\rho_{ab|xy}^{\mbox{\tiny C}}} = \sum_{a,b}\rho_{ab|x'y'}^\text{\tiny C} \quad\forall\, x, x', y, y'. \end{aligned} \end{equation} As in the bipartite case, the assemblage $\{{\rho_{ab|xy}^{\mbox{\tiny C}}}\}$ is said to admit an LHS model from A and B to C if there exist a collection of normalized quantum states {\color{nblack} $\{\hat{\sigma}_\lambda\}$}, a probability distribution $P(\lambda)$, and response functions $P(a|x,\lambda)$ and $P(b|y,\lambda)$ such that {\color{nblack} ${{\rho_{ab|xy}^{\mbox{\tiny C}}} = \sum_\lambda P(a|x,\lambda)P(b|y,\lambda)P(\lambda) \hat{\sigma}_\lambda}$ } for all $a,b,x,y$. Otherwise, the assemblage is said to be steerable from A and B to C.
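To make the origin of the consistency constraints in Eq.~\eqref{Eq_valid_assemblage3} explicit, note that summing Eq.~\eqref{Eq_quantum_assemblage3} over $a$ and using the completeness relation $\sum_a E^\text{A}_{a|x}=\openone$ gives \begin{equation} \sum_{a}{\rho_{ab|xy}^{\mbox{\tiny C}}} = \tr_\text{A,B}\left(\openone\otimes E^\text{B}_{b|y}\otimes\openone~{\rho_{\mbox{\tiny ABC}}}\right), \end{equation} which is manifestly independent of $x$; the remaining consistency constraints follow in the same manner from $\sum_b E^\text{B}_{b|y}=\openone$.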
\subsection{AMMs in a tripartite scenario} \label{Sec:AMM3} To generalize the AMM framework to the aforementioned steering scenario, consider the analog of Eq.~\eqref{Eq:LocalMapping} that acts on Charlie's reduced state ${\rho_\ttC}$: \begin{equation}\label{Eq:CPmap:C} \Lambda_\text{\tiny C}({\rho_\ttC}) = \sum_n K_n{\rho_\ttC} K_n^\dag,\quad K_n = \sum_i |i\rangle_{\bar{\text{C}}\text{C}} \langle n|C_i, \end{equation} where $\{|i\rangle\}$ ($\{|n\rangle\}$) are orthonormal basis vectors for the output (input) Hilbert space $\bar{\text{C}}$ (C) and the $C_i$ are operators acting on C. Specifically, for each combination of outcomes $a,b$ and settings $x,y$, applying the local CP map of Eq.~\eqref{Eq:CPmap:C} to the conditional state ${\rho_{ab|xy}^{\mbox{\tiny C}}}$ gives rise to a matrix of expectation values: \begin{equation}\label{Eq_AMM3} \begin{aligned} \chi[{\rho_{ab|xy}^{\mbox{\tiny C}}},\{C_i\}] &= \Lambda_\text{\tiny C}({\rho_{ab|xy}^{\mbox{\tiny C}}})\\ &= \sum_{i,j}\ket{i}\!\bra{j}\tr[{\rho_{ab|xy}^{\mbox{\tiny C}}} C_j^\dagger C_i]\quad \forall\ a,b,x,y, \end{aligned} \end{equation} where $\{C_i\}$ are again operators formed from products of $\{\openone\}\cup\{E^\text{C}_{c|z}\}_{z,c}$. When $\{C_i\}$ involves operators that are at most $\ell$-fold products of Charlie's POVM elements, we say that the collection of matrices in Eq.~\eqref{Eq_AMM3} defines AMMs of level $\ell$, which we denote by $\chi^{(\ell)}[{\rho_{ab|xy}^{\mbox{\tiny C}}}]$. In a DI scenario, neither the assemblage $\{{\rho_{ab|xy}^{\mbox{\tiny C}}}\}$ nor the measurement assemblage $\{E_{c|z}^\text{\tiny C}\}$ is assumed to be known. Therefore, the level-$\ell$ AMMs corresponding to $\chi[\rho_{ab|xy},\{C_i\}]$ in the DI setting, which we denote by $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{ab|xy}^{\mbox{\tiny C}}]$, are not fully determined. Following a procedure analogous to that detailed in Sec.~\ref{Sec_AMMs_formulation}, one finds that the elements of $\chi_{\mbox{\tiny DI}}^{(\ell)}[\rho_{ab|xy}^{\mbox{\tiny C}}]$ fall into two categories: observable correlations, i.e., the conditional probabilities ${P_{\mbox{\tiny obs}}(abc|xyz)}$,\footnote{To save space, $P(a,b,c|x,y,z)$ is abbreviated as $P(abc|xyz)$ when there is no risk of confusion.} and unknown variables. As an example, consider the steering scenario with binary inputs and outputs on Charlie's side such that $C_i\in\{\openone,E_{1|1}^\text{C},E_{1|2}^\text{C}\}$.
Then, for all $a,b,x,y$, the first-level AMMs take the form \begin{equation} \begin{split} &\chi_{\mbox{\tiny DI}}^{(1)}[\rho_{ab|xy}^{\mbox{\tiny C}}] =\\ &\begin{pmatrix} \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}}) & \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|1}^\text{\tiny C}) & \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|2}^\text{\tiny C})\\ \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|1}^\text{\tiny C}) & \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|1}^\text{\tiny C}) & \tr({\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|2}^{\text{C}\dag} E_{1|1}^\text{\tiny C})\\ \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|2}^\text{\tiny C}) & \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|1}^{\text{C}\dag} E_{1|2}^\text{\tiny C}) & \tr( {\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|2}^\text{\tiny C}) \end{pmatrix}\\ &=\begin{pmatrix} P_{\mbox{\tiny obs}}(ab|xy) & P_{\mbox{\tiny obs}}(ab1|xy1) & P_{\mbox{\tiny obs}}(ab1|xy2)\\ P_{\mbox{\tiny obs}}(ab1|xy1) & P_{\mbox{\tiny obs}}(ab1|xy1) & u_1^{abxy}\\ P_{\mbox{\tiny obs}}(ab1|xy2) & u_1^{abxy} & P_{\mbox{\tiny obs}}(ab1|xy2) \end{pmatrix}, \end{split} \end{equation} where we have made use of the simplification mentioned in Sec.~\ref{Sec:AMM} and expressed the experimentally inaccessible expectation value as: \begin{equation} \tr({\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|2}^{\text{C}\dag} E_{1|1}^\text{C}) = \tr({\rho_{ab|xy}^{\mbox{\tiny C}}} E_{1|1}^{\text{C}\dag} E_{1|2}^\text{C}) =u_1^{abxy}, \end{equation} with $u_v^{abxy}\in\mathbb{R}$. \subsection{Correlations characterized by the AMM framework and post-quantum steering} In Ref.~\cite{SLChen16}, it was left as an open problem whether the set of correlations characterized by the AMM framework converges to the set of quantum distributions, i.e., the set of $\mathbf{P}$ that satisfy Born's rule. In this section, we show that in the tripartite scenario, the set of $\mathbf{P}$ allowed by demanding the positivity of AMMs---even in the limit of $\ell\to\infty$---generally does not reduce to the set of $\mathbf{P}$ that can be written in the form of Eq.~\eqref{Eq:Quantum3}. To this end, we recall from Ref.~\cite{Sainz17} that there exist assemblages $\{{\rho_{ab|xy}^{\mbox{\tiny C}}}\}$ satisfying Eq.~\eqref{Eq_valid_assemblage3} but not Eq.~\eqref{Eq_quantum_assemblage3} for any ${\rho_{\mbox{\tiny ABC}}}$ and any local POVMs $\{E^\text{A}_{a|x}\}$, $\{E^\text{B}_{b|y}\}$. The authors of Ref.~\cite{Sainz17} dubbed this phenomenon {\em post-quantum steering}. A simple example of this kind is given by ${\rho_{ab|xy}^{\mbox{\tiny C}}} = \frac{1}{4}[1-(-1)^{a+b+(x-1)(y-1)}]\hat{\rho}$, where $x,y\in\{1,2\}$, $a,b\in\{0,1\}$ and $\hat{\rho}$ is an arbitrary but normalized density operator. Since the resulting marginal distribution $P(a,b|x,y)$ is exactly that of a Popescu-Rohrlich box~\cite{Popescu1994}, we see that this assemblage cannot have a quantum realization. Now, note from our discussion in Sec.~\ref{Sec:AMM3} that if we start from an assemblage satisfying Eq.~\eqref{Eq_valid_assemblage3}, the resulting AMMs are always positive semidefinite, and hence are compatible with the physical requirements imposed on AMMs. However, as mentioned above, there exist assemblages $\{{\rho_{ab|xy}^{\mbox{\tiny C}}}\}$ satisfying Eq.~\eqref{Eq_valid_assemblage3} that are not quantum realizable.
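As a sanity check, the example above is easily verified numerically. The following minimal sketch (plain Python with \texttt{numpy}; the choice $\hat{\rho}=|0\rangle\langle 0|$ is ours and purely illustrative) confirms that the assemblage satisfies the constraints of Eq.~\eqref{Eq_valid_assemblage3} while its marginal correlation attains the algebraic maximum $|S|=4$ of the CHSH expression, as expected for a Popescu-Rohrlich box:
\begin{verbatim}
# Check of the post-quantum assemblage
# rho_{ab|xy} = 1/4*[1-(-1)^(a+b+(x-1)(y-1))]*rho_hat
import itertools
import numpy as np

rho_hat = np.diag([1.0, 0.0])   # illustrative choice |0><0|

def assemblage(a, b, x, y):
    w = 0.25 * (1 - (-1) ** (a + b + (x - 1) * (y - 1)))
    return w * rho_hat

# positivity/normalization and no-signaling-like consistency,
# cf. Eq. (valid_assemblage3)
for x, y in itertools.product((1, 2), repeat=2):
    tot = sum(assemblage(a, b, x, y)
              for a in (0, 1) for b in (0, 1))
    assert np.isclose(np.trace(tot), 1.0)
for y, b in itertools.product((1, 2), (0, 1)):
    assert np.allclose(
        sum(assemblage(a, b, 1, y) for a in (0, 1)),
        sum(assemblage(a, b, 2, y) for a in (0, 1)))

# marginal correlation P(ab|xy) = tr(rho_{ab|xy}) is a PR box
def E(x, y):   # correlator of the marginal distribution
    return sum((-1) ** (a + b) * np.trace(assemblage(a, b, x, y))
               for a in (0, 1) for b in (0, 1))

S = E(1, 1) + E(1, 2) + E(2, 1) - E(2, 2)
print("|CHSH| =", abs(S))   # 4.0 > 2*sqrt(2): no quantum realization
\end{verbatim}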
We thus see that the AMM framework in the tripartite scenario, as described in Sec.~\ref{Sec:AMM3}, can, at best, lead to a characterization of the set of post-quantum-steerable correlations, i.e., a {\em superset} of the correlations satisfying Eq.~\eqref{Eq:Quantum3} that also includes, e.g., non-signaling but stronger-than-quantum marginal distributions between A and B. On the other hand, it follows from the results of Refs.~\cite{Gisin89,Hughston93} that the phenomenon of post-quantum steering cannot occur in the bipartite scenario. Thus, the problem of whether the set of correlations characterized by the AMM framework leads to the set of quantum distributions remains open in the bipartite scenario. Likewise, if one {\color{nblack} considers} AMMs in a tripartite scenario based on one party steering the remaining two parties, the above argument {\color{nblack} does not apply either}. As such, the problem of whether one recovers---in the asymptotic limit---the quantum set, cf. Eq.~\eqref{Eq:Quantum3}, using the AMM framework remains open. \section{Concluding Remarks} \label{Sec:Conclusion} In this work, we have further explored and developed the AMM framework introduced in Ref.~\cite{SLChen16}. To begin with, we fleshed out the details of how a DI bound on the steering robustness (SR) provided by the AMM framework allows us to estimate the usefulness of an entangled state in the kind of subchannel discrimination problem discussed in Ref.~\cite{Piani15}. We then went on to compare the DI bound on the generalized robustness of entanglement provided by the AMM framework against that given by the approach of Cavalcanti and Skrzypczyk~\cite{Cavalcanti16}. Within our computational limit, the bounds of the AMM approach appear to be {\color{nblack} slightly tighter than (or at least as good as)} those from the latter approach. In the process, we also offered another means to bound the generalized robustness of entanglement from the data alone, via the approach of Moroder {\em et al.}~\cite{Moroder13}. This last set of DI bounds turned out to be much stronger than those offered by the other two approaches. In these comparisons, we considered the two-qudit isotropic states, for which we also evaluated the generalized robustness of entanglement explicitly (see Appendix~\ref{App:ERStates}). Next, we compared the DI bound on the incompatibility robustness (IR) given by the AMM framework against that of Ref.~\cite{Cavalcanti16}. In this case, the DI bounds offered by the AMM approach---based on bounding SR---do not perform as well as those of Ref.~\cite{Cavalcanti16}, which are based on bounding the underlying consistent steering robustness. Motivated by this difference, we then provided an alternative way to lower bound---in a DI manner---the consistent steering robustness via the AMM framework. This turned out to provide---compared with the approaches just mentioned---much tighter (and in some instances even tight) DI bounds on the underlying IR. Even then, let us note that, in general, a tight DI bound on the underlying IR does not guarantee the possibility to self-test the underlying measurements, as exemplified by the results of Ref.~\cite{Andersson2017}. On a related note, we demonstrated in Appendix~\ref{App:SW} how the AMM framework can be used to provide a DI lower bound on the steerable weight, and hence on the incompatibility weight---another measure of incompatibility between different measurements. We also briefly explored the framework in the tripartite scenario.
This led to the observation that the AMM framework generally does not characterize the set of quantum correlations, but rather the set of correlations for which the phenomenon of post-quantum steering is allowed. However, whether the set of correlations characterized by the AMM framework converges to the quantum set in the bipartite scenario, or in a multipartite scenario where one party tries to steer the states of the remaining parties, remains unsolved. \begin{acknowledgements} We are grateful to Daniel Cavalcanti and Paul Skrzypczyk for useful discussions and for sharing their computational results in relation to the plot shown in Fig.~\ref{Fig:IR}. This work is supported by the Ministry of Science and Technology, Taiwan (Grants No. 103-2112-M-006-017-MY4, 104-2112-M-006-021-MY3, 107-2112-M-006-005-MY2, and 107-2917-I-564-007 (Postdoctoral Research Abroad Program)), and by the FWF Project M~2107 (Meitner-Programm). \end{acknowledgements}
\section{Introduction} The mass of the top quark is a fundamental parameter of the Standard Model, since it enters the electroweak precision tests \cite{gfitter} and constrained the mass of the Higgs boson even before its actual discovery at the LHC. Moreover, the fact that the electroweak vacuum lies on the boundary between the stability and metastability regimes \cite{degrassi} depends on the actual values of the top and Higgs masses. This statement does, however, depend on the identification of the top-quark mass world average, i.e. $m_t=[173.34\pm 0.27{\rm (stat)} \pm 0.71{\rm (syst)}]$~GeV \cite{wave}, with the pole mass, and no extra uncertainty is included in the exploration of Ref.~\cite{degrassi}. In fact, any change of the central value or of the error on $m_t$ may affect the results in \cite{degrassi}, to the point of even moving the vacuum position inside the stability or instability regions. It is therefore of paramount importance to determine $m_t$ at the LHC with the highest possible precision and, above all, to estimate reliably all sources of uncertainty. The top-quark mass is determined by comparing experimental data with theory predictions: the extracted mass is the quantity $m_t$ in the calculation or in the Monte Carlo event generator employed to simulate top production and decay. In the following, I shall review the main methods used to reconstruct the top-quark mass at the LHC and discuss the theoretical and Monte Carlo uncertainties, paying special attention to the dependence on the event-generator $b$-fragmentation parameters. I shall finally make some concluding remarks. \section{Top-quark mass extraction at LHC} Top-quark mass determinations at hadron colliders are classified as standard or alternative measurements. Standard top-mass analyses adopt the template, matrix-element and ideogram methods (see, e.g., the analyses in \cite{atlas1,cms1}) and compare final-state distributions, associated with top-decay ($t\to bW$) products, such as the $b$-jet+lepton invariant mass in the dilepton channel, with the predictions yielded by the Monte Carlo codes. Event generators like the general-purpose HERWIG \cite{herwig} or PYTHIA \cite{pythia} simulate the hard-scattering process at leading order (LO) and multi-parton emissions in the soft or collinear approximation, while the interference between the top-production and decay stages is neglected (narrow-width approximation). More recent NLO+shower programs, such as MadGraph5$\_$aMC@NLO \cite{mcnlo} and POWHEG \cite{powheg}, implement NLO hard-scattering amplitudes, but still depend on HERWIG and PYTHIA for parton cascades and non-perturbative phenomena, such as hadronization or the underlying event. As a whole, standard top-quark mass determinations, since they are based on the reconstruction of the invariant mass of the top-decay products and rely on programs which factorize top production and decay, should lead to results close to the top-quark pole mass. However, as will be pointed out hereafter, a careful determination of the theoretical uncertainty, of both perturbative and non-perturbative origin, such as missing higher orders, width corrections and colour-reconnection effects, is compelling. Other strategies to measure $m_t$, making use of total or differential cross sections, endpoints, energy peaks or kinematic properties of $t\bar t$ final states, are traditionally called `alternative' measurements.
The total $t\bar t$ cross section was calculated in the NNLO+NNLL approximation \cite{alex} and allows a direct determination of the pole mass \cite{sigmaatl,sigmacms}, the mass definition used in the computation \cite{alex}. The errors in \cite{sigmaatl} and \cite{sigmacms} are larger than those in the standard methods; however, they are expected to decrease thanks to the higher statistics foreseen at the LHC Run II. Moreover, the dependence on the mass implemented in the Monte Carlo program, employed to obtain the acceptance, is very mild. The top pole mass was also extracted from the measurement of the $t\bar t+1$~jet cross section, more sensitive to $m_t$ than the inclusive $t\bar t$ rate \cite{atlttj,cmsttj}. In Ref.~\cite{ttj}, the NLO $t\bar tj$ cross section was calculated through the POWHEG-BOX, using the pole mass, and matched to PYTHIA. Reference~\cite{fuster} computed instead the NLO $t\bar tj$ rate in terms of the $\overline{\rm MS}$ mass and compared the result with the LHC measurements: the values of the pole and $\overline{\rm MS}$ masses, extracted by following the methods in \cite{ttj} and \cite{fuster}, are nonetheless in agreement. Other proposed methods to reconstruct $m_t$ rely on kinematic properties of top-decay final states. It was found that the peak of the energy of the $b$-jet in top decay at LO is independent of the boost from the top to the laboratory frame, as well as of the production mechanism \cite{roberto}. The CMS Collaboration measured the top mass from the $b$-jet energy peak data at 8 TeV in \cite{bj}. The $b$-jet+lepton invariant-mass ($m_{b\ell}$) spectrum was used by CMS to reconstruct $m_t$ in the dilepton channel, by comparing the data with PYTHIA \cite{mbl}. The endpoints of distributions like $m_{b\ell}$, $\mu_{bb}$ and $\mu_{\ell\ell}$, where $\mu_{bb}$ and $\mu_{\ell\ell}$ are generalizations of the $b\bar b$ and $\ell^+\ell^-$ invariant masses in the dilepton channel, $b$ being a $b$-jet in top decay, were also explored to constrain $m_t$ \cite{end}. Since $b$-flavoured jets can be calibrated directly from data, Monte Carlo uncertainties in the endpoints are mostly due to colour reconnection. Finally, purely leptonic observables in the dilepton channel, such as the Mellin moments of lepton energies or transverse momenta, were proposed to measure $m_t$, as they do not require the reconstruction of the top quarks \cite{frix}. Such quantities exhibit rather small hadronization effects, but they are sensitive to the production mechanism, to the Lorentz boost from the top rest frame to the laboratory frame, as well as to higher-order corrections. Preliminary analyses have been carried out in \cite{cmslep} (CMS, based on LO MadGraph) and \cite{nisius} (ATLAS, based on the MCFM NLO parton-level code \cite{mcfm}) and are expected to be improved by matching NLO amplitudes with shower/hadronization generators. \section{Theory and Monte Carlo uncertainties in the top-mass extraction} In Ref.~\cite{wave}, where the extraction of the world average is described, the theory uncertainty accounts for about 540 MeV of the overall 710 MeV systematics. In particular, Ref.~\cite{wave} distinguishes the contributions due to Monte Carlo generators, radiation effects, colour reconnection and parton distribution functions (PDFs). The Monte Carlo systematics is due to the differences in the implementation of parton showers, matrix-element matching, hadronization and the underlying event in the various programs available to describe top-quark production and decay.
There is no unique way to estimate this uncertainty, though: one can either compare two different generators or choose a code and explore how its predictions fare with respect to variations of the parameters. For example, in \cite{wave}, CDF compares HERWIG and PYTHIA, while D0 uses ALPGEN+PYTHIA and ALPGEN+HERWIG \cite{alpgen}; both Tevatron experiments use MC@NLO to gauge the overall impact of NLO corrections. At the LHC, ATLAS compares MC@NLO with POWHEG for the NLO contributions and PYTHIA with HERWIG for shower and hadronization; CMS instead confronts MadGraph with POWHEG. The radiation uncertainty gauges the effect of initial- and final-state radiation on the top mass and is typically obtained by varying in suitable ranges the relevant parameters of the parton-shower generators. Concerning PDFs, the strategies to gauge the induced error on $m_t$ differ among the experiments, although common trends are the use of two different sets, or of a given set with different parametrizations. Colour reconnection is another source of error on $m_t$, accounting for about 310 MeV in \cite{wave}: the very fact that, for example, a bottom quark in top decay ($t\to bW$) can be colour-connected to an initial-state antiquark does not have its counterpart in $e^+e^-$ annihilation, and therefore its modelling in Monte Carlo event generators may need retuning at hadron colliders. Moreover, this phenomenon is an irreducible uncertainty in the interpretation of the measured mass as a pole mass. Investigations on the impact of colour reconnection on $m_t$ were undertaken in \cite{spyros,corc1}, in the frameworks of PYTHIA and HERWIG, respectively. In particular, Ref.~\cite{corc1} addresses this issue by simulating fictitious top-flavoured hadrons in HERWIG and comparing final-state distributions, such as the $BW$ invariant mass, with standard $t\bar t$ events. In fact, in the top-hadron case, assuming $T$ decays according to the spectator model, the $b$ quark is forced to connect with the spectator or with antiquarks in its own shower, namely $b\to bg$, followed by $g\to q\bar q$, and colour reconnection is suppressed. Furthermore, the analysis \cite{corc1} may also serve to address the relation between the measured mass, often called `Monte Carlo' mass, and the pole mass, since the mass of a $T$-hadron can be related to any top-quark mass definition by means of lattice QCD, potential models or Non-Relativistic QCD. More recently, work has been carried out to assess the dependence of the top-quark mass, extracted by means of the Mellin moments ${\cal M}_n$ of some variables related to $B$-hadrons in top decays, on the Monte Carlo shower and hadronization parameters \cite{cfk}, extending the investigation in \cite{mescia}, which studied only the $m_{B\ell}$ quantity. In fact, when addressing $B$-hadron rather than $b$-jet observables, one should deal with fragmentation uncertainties, rather than with the jet-energy scale, which enters measurements relying on $b$-jets. If ${\cal M}_1=\langle O\rangle$ is the average value of some observable $O$ and $\theta$ a generic generator parameter, one can write the following relations: \begin{equation} \frac{dm_t}{m_t}=\Delta_O^m\ \frac{d\langle O\rangle}{\langle O\rangle}\ \ ;\ \ \frac{d\langle O\rangle}{\langle O\rangle}= \Delta_\theta^O\ \frac{d\theta}{\theta}\ \Rightarrow\ \frac{dm_t}{m_t}=\Delta_\theta^m \frac{d\theta}{\theta}, \end{equation} where we defined $\Delta_\theta^m=\Delta_O^m\ \Delta_\theta^O$.
Therefore, if one requires, e.g., a relative error below $0.3\%$ on $m_t$, namely $dm_t/m_t<0.003$, one should also have $\Delta_\theta^m (d\theta/\theta)<0.003$. \begin{table*} \tiny \begin{centering} \begin{tabular}[t]{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$\mathcal{O}$} & \multirow{2}{*}{$\Delta_O^m$} & \multicolumn{9}{c|}{$\Delta_{\theta}^m$}\tabularnewline \cline{3-11} & & PSPLT(2) & QCDLAM & CLPOW & CLSMR(2) & CLMAX & RMASS(5) & RMASS(13) & VGCUT & VQCUT\tabularnewline \hline \hline $m_{B\ell}$ & 0.52 & 0.036(4) & -0.008(2) & -0.007(5) & 0.002(3) & -0.007(4) & 0.058(1) & 0.06(5) & 0.003(1) & -0.003(3)\tabularnewline \hline $p_{T,B}$ & 0.47 & 0.072(1) & -0.03(9) & -0.02(7) & 0.0035(5) & -0.03(5) & 0.11(9) & 0.12(5) & 0.0066(2) & -0.006(5)\tabularnewline \hline $E_{B}$ & 0.43 & 0.069(7) & -0.026(7) & -0.017(5) & 0.0038(9) & -0.01(2) & 0.12(1) & 0.12(2) & 0.006(2) & -0.007(5)\tabularnewline \hline $E_{\ell}$ & 0.13 & 0.0005(5) & -0.04(3) & 0.04(2) & -0.0002(2) & -0.004(4) & 0.008(3) & 0.008(2) & -0.002(5) & 0.008(2)\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tabhw} Dependence of $m_t$ on the HERWIG 6 shower and hadronization parameters.} \end{table*} In Table~\ref{tabhw}, we present the $\Delta$ factors $\Delta_O^m$ and $\Delta_\theta^m$, assuming that the top mass is extracted in the dilepton channel by means of the first Mellin moment of observables $O$, like the $B\ell$ invariant mass, the energy $E_B$ and the transverse momentum $p_{T,B}$ of $B$ hadrons in top decays, and the energy $E_\ell$ of charged leptons in $W$ decays. The $\theta$ entries in Table~\ref{tabhw} are parameters of the HERWIG 6 event generator, implementing the cluster hadronization model. In detail, PSPLT(2) is a parameter ruling the mass distribution of the decays of $b$-flavoured clusters, while CLMAX and CLPOW determine the highest allowed cluster mass. Furthermore, unlike Ref.~\cite{mescia}, which accounted only for cluster-hadronization parameters, Ref.~\cite{cfk} also investigates the dependence of top-quark mass observables on the following parameters: RMASS(5) and RMASS(13), the bottom and gluon effective masses, respectively, and the virtuality cutoffs, VQCUT for quarks and VGCUT for gluons, which are added to the parton masses in the shower. The impact of changing QCDLAM, the HERWIG parameter playing the role of an effective $\Lambda_{\rm QCD}$ in the shower definition of the strong coupling constant \cite{cmw}, is also examined. Overall, from Table~\ref{tabhw} one learns that, if one aims at $dm_t/m_t<0.003$, the parameters PSPLT(2), QCDLAM, CLPOW, CLMAX and the $b$-quark and gluon effective masses are to be known with a relative precision of 10\%. The dependence of $m_t$ on CLSMR(2) and the cutoffs VQCUT and VGCUT is instead very mild and, in principle, it would be sufficient to determine only the order of magnitude of such parameters to meet a 0.3\% goal on $m_t$. More details on the dependence of the top-quark mass on hadronization and shower parameters will soon be available in \cite{cfk}. Another recent investigation of the sensitivity of $m_t$ to the Monte Carlo modelling was carried out in Ref.~\cite{schwartz}, where the authors studied the PYTHIA uncertainty in $m_t$ in the lepton+jets channel. It was found that the error on the top mass can be significantly reduced if one calibrates the $W$ mass or applies the soft-drop jet grooming.
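Returning to Table~\ref{tabhw}, the propagation of uncertainties discussed above is easily made concrete. The following short script (plain Python; the $\Delta_\theta^m$ values are the central values of the $m_{B\ell}$ row of Table~\ref{tabhw}) converts each $\Delta_\theta^m$ factor into the maximum relative variation $d\theta/\theta$ of the corresponding HERWIG parameter that is still compatible with the $dm_t/m_t<0.003$ goal:
\begin{verbatim}
# Tolerated relative variation d(theta)/theta of each HERWIG parameter
# for dm_t/m_t < 0.3%, using dm_t/m_t = Delta_theta^m * d(theta)/theta.
# Central values of Delta_theta^m from the m_Bl row of Table 1.
target = 0.003
delta = {"PSPLT(2)": 0.036, "QCDLAM": -0.008, "CLPOW": -0.007,
         "CLSMR(2)": 0.002, "CLMAX": -0.007, "RMASS(5)": 0.058,
         "RMASS(13)": 0.06, "VGCUT": 0.003, "VQCUT": -0.003}

for name, d in sorted(delta.items(), key=lambda kv: -abs(kv[1])):
    print("%-10s tolerance on theta: %6.1f%%"
          % (name, 100.0 * target / abs(d)))
# e.g. RMASS(5) ~ 5%, PSPLT(2) ~ 8%, CLSMR(2) ~ 150%:
# consistent with the ~10% precision quoted in the text.
\end{verbatim}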
On top of the uncertainties in the $m_t$ determination at the LHC, there are long-standing theoretical issues affecting the accuracy of the top mass, namely the interpretation of the measured quantity in terms of the pole mass and the renormalon ambiguity of the pole mass: a review of such topics can be found in \cite{corc2}. In fact, Ref.~\cite{buten} compared PYTHIA with an NLO+NNLL SCET calculation for $e^+e^-\to t\bar t$ annihilation and calibrated the top mass used in the computation, the so-called MSR definition, to agree with the Monte Carlo 2-jettiness distribution. Within the error range, the mass parameter in PYTHIA is consistent with the fitted value of $m_{\rm MSR}(1~{\rm GeV})$, while the discrepancy with respect to the corresponding pole mass is $(0.57\pm 0.28)$~GeV. Ref.~\cite{groom} proposed instead the measurement of $m_t$ in $pp$ collisions by using boosted top jets with light soft-drop grooming, in lepton+jets and all-jet final states. The groomed top-jet mass spectrum was then calculated by resumming soft- and collinear-enhanced contributions in the NLL approximation and compared with the spectra yielded by PYTHIA, trying to calibrate the PYTHIA mass parameter to reproduce the resummed distribution. The result of this calibration is that the pole mass is about 400-700 MeV smaller than the tuned mass in the Monte Carlo generator. As for the renormalon ambiguity, two recent analyses, i.e. Refs.~\cite{nasben} and \cite{hpole}, estimated the uncertainty in the top pole mass due to renormalons, obtaining about 110 and 250 MeV, respectively: though differing by about a factor of 2, such results are both smaller than the current error on the measured top mass. \section{Conclusions} I discussed the main strategies to determine the top mass at the LHC, emphasizing the role played by the theory uncertainties. The so-called standard measurements, such as those relying on template, matrix-element or ideogram methods, are based on the reconstruction of the top-decay products, and therefore they yield results close to the top-quark pole mass: however, a careful exploration of the theory error is mandatory. In particular, the ongoing work in \cite{corc1}, studying fictitious top-flavoured hadrons, should shed light both on colour reconnection and on the relation between the measured mass and the pole mass. It will therefore be very interesting to compare the resulting uncertainties, according to the method in \cite{corc1}, with the errors obtained in \cite{buten,groom}, comparing resummations and event generators, as well as with the renormalon ambiguity gauged in \cite{nasben,hpole}. Among the alternative measurements, extracting $m_t$ by confronting the $t\bar t$ and $t\bar t j$ cross sections with NLO or NNLO calculations allows a clean extraction of the pole mass. The errors are substantially larger than in the standard measurements, but they are nonetheless expected to become much smaller once the LHC statistics increase. Other methods, employing endpoints, leptonic observables or observables like the $b$-jet+lepton mass distribution, are very interesting and worth developing further, since they are sensitive to different effects with respect to the standard determinations. For example, the endpoint method minimizes the impact of the Monte Carlo generators, while leptonic quantities do not need the reconstruction of the top quarks.
Particular attention was paid in this talk to the Monte Carlo uncertainty in the $m_t$ determination, namely the dependence of $m_t$ on the hadronization parameters, once it is measured from $B$-hadron observables, such as the $B\ell$ invariant mass or the $B$-energy and transverse-momentum spectra. I presented some results yielded by the HERWIG event generator, showing that most parameters are to be tuned with an accuracy of at least 10\%, for the sake of meeting a 0.3\% precision goal on $m_t$. More details on this investigation and on the dependence of $m_t$ on the parameters of PYTHIA, the other multi-purpose parton shower generator employed in the analysis, will soon be available in \cite{cfk}. It will be challenging to compare the hadronization uncertainties obtained in \cite{cfk} with those relying on NLO+shower generators, such as the recent $t\bar t$ POWHEG implementation presented in \cite{powtop}, accounting for non-resonant contributions and for the interference between top production and decay. Furthermore, at parton level, the full process $pp\to W^+W^-b\bar b\to (\ell^+\nu_\ell)(\ell^-\bar\nu_\ell)\,b\bar b$ was recently computed at NLO and compared with scenarios where NLO $t\bar t$ production is matched with different top-decay modelling, namely LO and NLO top decays in the narrow-width approximation, as well as parton showers \cite{gudrun}. It will certainly be very interesting to confront the approaches of Refs.~\cite{powtop} and \cite{gudrun}, and the induced error in the $m_t$ determination. In summary, the current world-average $m_t$ analysis exhibits an uncertainty of about 0.5\%, and higher accuracies are foreseen in the near future, thanks to the large statistics. Given the relevance of $m_t$ in the Standard Model, any progress in improving the present higher-order calculations and Monte Carlo event generators, for the sake of assessing reliably the theoretical error, including the interpretation of the measurements in terms of the pole mass, will therefore be especially desirable.
\section{Introduction} \vspace{-2mm} The term `quantification' was introduced by Forman~\cite{forman2008quantifying} and defined as the task of estimating the class-distribution of a categorical data-set. In a typical scenario, a classifier is trained on a set of labeled `source' samples, and applied to a new set of `target' samples. This is challenging in the presence of `data-set shifts'~\cite{moreno2012unifying}, where the underlying probability function of the source data differs from that of the target data. Contrary to classification, which is studied both in the presence and absence of domain shift, quantification is only of interest under data-set shift, since otherwise the class-distribution can be estimated directly from the labeled source data. Quantification is important for two principled reasons. First, the class-distribution itself, rather than the full labeling, is the desired end-product in many applications. This occurs for example in sampling surveys, where repeated quantification is required to assess spatial or temporal patterns~\cite{forman2008quantifying, sampling}. Second, in applications where a full labeling is required, the class-distribution of the target data can be used to re-calibrate a classifier trained on the source data~\cite{royer2015classifier, saerens2002adjusting}. Quantification has been studied under the class-distribution shift assumption~\cite{saerens2002adjusting, forman2008quantifying, du2014semi}, meaning that the source and target class-conditional distributions are the same~\cite{moreno2012unifying}. As we shall see, this is a rather strong assumption and may not hold in practice. In another line of work, domain adaptation (DA) has been studied under the assumption that the source and target class-conditional distributions are `similar'~\cite{jing_literature}. In this work, the class-distribution shift is often controlled for by experimenting on class-balanced data-sets~\cite{saenko_adapting}. We introduce two large-scale data-sets from marine ecology, where quantification arises naturally and where automation is imperative for ecological analysis. We evaluate the efficacy of several quantification methods from the literature. \vspace{-2mm} \subsection{Problem statement} \vspace{-2mm} Let $x\in \mathbb{R}^d$ be d-dimensional input samples, and $y \in \{1, \hdots, c\}$ class labels. We assume that a large number of labeled samples, $\{(x_i, y_i)\}_{i=1}^n$, covering all classes, are available in a source domain defined by some joint probability function $p(x, y)$. We also assume that a large number of unlabeled samples, $\{x_i\}_{i=1}^{n'}$, are available in a target domain, defined by a different probability function $p'(x, y) \neq p(x, y)$. The general goal of quantification is to estimate the probability distribution over classes in the target domain: $p'(y) \equiv q \in \mathbb{R}^c$. However, the problem statement as defined above is intractable if (a) the domain shift is arbitrary and (b) there are no labeled samples in the target domain. It is therefore typically studied under one of two relaxations: \begin{defn} \textbf{Unsupervised Quantification:} In unsupervised quantification, the data-set shift is assumed to be a pure class-distribution shift, i.e. $p(y) \neq p'(y)$, but $p(x|y) = p'(x|y)$. Alternatively, the data-set shift is assumed to be `small', and the unlabeled set of target samples, $\{x_i\}_{i=1}^{n'}$, is used to align the internal feature representation of a machine learning algorithm.
\end{defn} \begin{defn} \textbf{Supervised Quantification:} In supervised quantification, no explicit assumptions are made on the data-set shift, but it is assumed that a small number of labeled samples are available in the target domain, $\{(x_i, y_i)\}_{i=1}^b \in p'(x, y)$. \end{defn} For supervised quantification, we only consider methods where the labeled target samples are selected randomly, leaving the design of active sampling methods to future work. \vspace{-2mm} \subsection{Related work} \vspace{-2mm} For unsupervised quantification, a straightforward method is classify \& count~\cite{forman2008quantifying}, where a classifier, $f$, is trained using the source data and then used to estimate $\hat{q}_c = \frac{1}{n'} \sum_{i = 1}^{n'} \mathds{1} (f(x_i), c)$. The classifier, $f$, can also be adapted using unsupervised adaptation methods~\cite{Tzeng_ICCV2015,hal_domain}. Further, unsupervised quantification has been studied extensively under class-distribution shift~\cite{moreno2012unifying,forman2008quantifying, saerens2002adjusting}. This work follows one of two main strategies. The first, introduced by Saerens et al.~\cite{saerens2002adjusting} and refined by~\cite{du2014semi}, derives an EM algorithm for maximizing the likelihood of the target data, $p'(x)$, by iterative updates of $\hat{p}'(y)$ and $\hat{p}'(y|x)$. The second, discussed by several authors~\cite{forman2008quantifying, saerens2002adjusting, solow2001estimating, beijbom2014cost}, relies on the misclassification rates (confusion matrix) estimated on the source data to adjust the estimated counts on the target data. Forman extended this method with several heuristics, demonstrating a significant performance increase for binary quantification~\cite{forman2008quantifying}. For supervised quantification, simple random sampling~\cite{sampling} can be utilized to achieve an unbiased estimate of the class-distribution: $\hat{q}_c = \frac{1}{b} \sum_{i = 1}^b \mathds{1} (y_{s_i}, c)$, where $s$ is a vector of randomly permuted indices and $b$ the annotation budget. Simple random sampling does not utilize the classifier, $f$, but it can be incorporated using auxiliary sampling designs, through offset~\cite{beijbom2014cost} or ratio~\cite{royall1981empirical} estimators. In these methods, some property (e.g. bias) of the classifier is estimated from the labeled subset, and then used to adjust the prediction on the whole target set. Other methods include adapting the classifier to operate in the target domain, typically achieved by modifying the internal feature representation of the classifier~\cite{Tzeng_ICCV2015}. \vspace{-2mm} \section{Data-sets} \vspace{-2mm} We introduce two large-scale image data-sets from marine ecology. In both, repeated sets of collected survey images require quantification, and manual annotation is unfeasible due to the vast amounts of collected data. In both, a large set of labeled source images is available to train a classifier, and several randomly selected smaller sets are available for evaluation. Each such set is denoted a test `cell', and the goal is to achieve accurate quantification across all test cells. Domain shifts occur naturally in these data-sets, so that the class appearance, $p(x|y)$, and class-distributions, $p(y)$, vary across the test cells. However, the extent of these variations differs between the two data-sets, as discussed below. A data-set overview is given in \tblref{data-sets}, and all data can be downloaded from \url{www.eecs.berkeley.edu/~obeijbom/quantification_dataset.html}.
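Before describing the data-sets, we give, for concreteness, a minimal sketch of two of the unsupervised baselines reviewed above, which will also serve in the experiments below: classify \& count, and the EM prior-adjustment of Saerens et al.~\cite{saerens2002adjusting} applied to classifier posteriors computed under the source priors (plain Python with \texttt{numpy} assumed; variable names are ours):
\begin{verbatim}
# Two unsupervised quantification baselines, assuming numpy.
import numpy as np

def classify_and_count(posteriors):
    """q_c = fraction of target samples predicted as class c.
    posteriors: (n', c) array of p(y|x) for the target samples."""
    n, c = posteriors.shape
    return np.bincount(posteriors.argmax(axis=1), minlength=c) / n

def em_quantify(posteriors, source_prior, n_iter=100, tol=1e-8):
    """EM of Saerens et al.: maximize the target likelihood by
    alternating posterior re-weighting and prior re-estimation."""
    q = source_prior.copy()
    for _ in range(n_iter):
        w = posteriors * (q / source_prior)    # E-step
        w /= w.sum(axis=1, keepdims=True)
        q_new = w.mean(axis=0)                 # M-step
        if np.abs(q_new - q).max() < tol:
            break
        q = q_new
    return q
\end{verbatim}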
\vspace{-2mm} \subsection{Plankton population survey} \vspace{-2mm} \textbf{Background:} The Imaging Flow Cytobot (IFCB) is an \emph{in-situ} instrument for measuring plankton populations~\cite{olson2007submersible, ifcb_cvpr}. The IFCB is installed on an offshore tower, 4 m below water level, at the Martha's Vineyard Coastal Observatory, and collects images by automatically drawing seawater from the environment. From this stream of images ($\sim100$k day$^{-1}$), the ecologists need to quantify the daily plankton class-distribution. Manual annotation of the complete data stream is unfeasible, and is currently restricted to two randomly chosen hours each month. While insufficient for a complete ecological analysis, these randomly selected, fully annotated image-sets are ideal for the evaluation of quantification methods. \textbf{Details:} We formalize the Plankton Survey quantification benchmark as follows. All labeled IFCB data from 2006-2013 are considered as pertaining to the source domain. The 21 randomly selected hours of annotated data from 2014 are the test cells. Only classes with $>1000$ total samples are included in this benchmark, leaving around 3.3 million total labeled samples across $33$ classes. The Plankton Survey data-set is dominated by a class-distribution shift (Figs. \ref{fig:ifcb_class_counts}, \ref{fig:ifcb_appearance_shift}). \vspace{-2mm} \subsection{Coral reef survey} \vspace{-2mm} \textbf{Background:} The XL Catlin Seaview Survey (XL CSS) is an ambitious project to monitor the world's coral reefs~\cite{gonzalez2014catlin}. Using underwater scooters, 2 kilometers of reef-scape are imaged each dive, with approximately one image meter$^{-1}$. The XL CSS has surveyed the Great Barrier Reef, the Coral Sea, the Caribbean, the Coral Triangle, the Maldives, and Chagos, and captured over 1 million photographs. From this image set, ecologists are interested in quantifying the percent cover of key benthic substrates for each set of $30$ consecutive images~\cite{gonzalez2014catlin}. Percent cover for each image is estimated by classifying 50 patches, extracted at random row \& column locations in each image, as pertaining to one of $32$ classes~\cite{pante2012getting}, for a total of $\sim 1500$ patches across the $30$ images. Similarly to the Plankton Survey, we can think of these sets of patches as cells, each requiring quantification. Manual quantification of all cells is unfeasible: the images from the Caribbean alone would require 30 person-years to annotate~\cite{gonzalez2014catlin}. \textbf{Details:} We formalize the Coral Survey quantification benchmark as follows. A training-set of $324732$ annotated patches extracted from $1505$ images constitutes the source domain. In addition, $15$ randomly selected sets of $30$ consecutive images, each with $50$ annotated patches, are the test cells (\figref{map}). Both class appearance and class-distribution vary across the test cells (since they are drawn from different locations across the Caribbean), meaning that the data-set shifts are more complex than for the IFCB data-set (Figs. \ref{fig:css_class_counts}, \ref{fig:css_appearance_shift}). \vspace{-2mm} \begin{table}[t] \centering \caption{Data-set summary. n = number, avg. = average} \small \begin{tabular}{| l | c | c | c | c |} \hline Data-set & n train samples & n test cells & avg.
n samples cell$^{-1}$ & n classes \\ \hline \hline Plankton Survey & 3.3m & 21 & 14248 & 33 \\ \hline Coral Survey & 325k & 15 & 1480 & 32 \\ \hline \end{tabular} \label{tbl:data-sets} \end{table} \vspace{-2mm} \subsection{Performance evaluation} \vspace{-2mm} Let $q^{(m)} = P(y)$ be the normalized ground-truth class-distribution of sampling cell $m$: $q^{(m)} \in \mathbb{R}^c, \sum_{j=1}^c q^{(m)}_j = 1 ~ \forall m$, and $\hat{q}^{(m)}$ the estimated distribution. We measure the distance between $q^{(m)}$ and $\hat{q}^{(m)}$ using the Bray-Curtis distance, which is commonly used in ecology. For normalized class counts, it reduces to half the l1-norm: $h^{\mathrm{BC}}(q^{(m)}, \hat{q}^{(m)}) = \frac{|q^{(m)}-\hat{q}^{(m)}|_1}{2}$. The utility of a quantification method is evaluated by the average Bray-Curtis distance across the sampling cells. \vspace{-2mm} \section{Experiments} \vspace{-2mm} For all experiments, we train a Convolutional Neural Network, $f$, with the AlexNet architecture~\cite{krizhevsky2012imagenet} on the source data, using Caffe~\cite{jia2014caffe}. More details are given in~\cite{ifcb_cvpr, beijbom2014cost}. \textbf{Unsupervised quantification:} We evaluate four unsupervised quantification methods. Applying Classify \& Count is straightforward, and creates a natural baseline (\figref{res-unsuper}). The EM-algorithm of~\cite{saerens2002adjusting} is also evaluated (\figref{em_convergence}), along with the confusion-matrix (CM) correction method of~\cite{forman2008quantifying, solow2001estimating, saerens2002adjusting}. The latter method requires inverting the confusion matrix, which can be problematic for multi-class problems, since inversion requires full rank. We therefore apply the abundance correction for each class $i$ independently, by mapping the CM to a binary $2 \times 2$ matrix before inverting and estimating $\hat{q}_i$. We then normalize $\hat{q}$ so that $\sum_{j=1}^c \hat{q}_j = 1$. Finally, we use a recent unsupervised domain-adaptation method which adapts the source net to the unlabeled data from each cell~\cite{Tzeng_ICCV2015}. \begin{figure}[htb] \vspace{-2mm} \begin{center} \includegraphics[width=.35 \linewidth]{unsupervisedCoral_Survey.pdf} \hspace{10mm} \includegraphics[width=.35 \linewidth]{unsupervisedPlankton_Survey.pdf} \end{center}\vspace{-4mm} \caption{\textbf{Unsupervised results.} Quantification errors displayed as mean $\pm$ SE for Classify \& Count~\cite{forman2008quantifying}, the unsupervised Deep Transfer DA method of~\cite{Tzeng_ICCV2015}, distribution matching using the EM algorithm~\cite{saerens2002adjusting}, and correction using Confusion Matrix inversion~\cite{forman2008quantifying, solow2001estimating, saerens2002adjusting}.} \label{fig:res-unsuper} \vspace{-2mm} \end{figure} \textbf{Supervised quantification:} We investigate quantification performance for budgets of $10 < b < 150$ samples, which covers well the economically feasible range for the respective surveys. Random sampling is used to establish an upper bound on the error, and the offset estimator is included as a simple improvement~\cite{beijbom2014cost}. We also experimented with a ratio estimator~\cite{sampling}, but this is impractical for small sample sizes~\cite{beijbom2014cost}. Further, we used a DA baseline (`DA mix'). In this method, the classifier, $f$, was further fine-tuned on a mixture of $75\%$ source and $25\%$ target data (drawn from the $b$ samples) for $\sim3$ epochs. Finally, the supervised DA method of~\cite{Tzeng_ICCV2015} was evaluated.
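A minimal sketch of the per-class binary CM correction described above, together with the Bray-Curtis metric, is given below (\texttt{numpy} assumed; the uniform aggregation of the off-diagonal rates into a single false-positive rate, and the clipping/renormalization details, are simplifications of our own):
\begin{verbatim}
# Per-class binary confusion-matrix correction and Bray-Curtis metric.
import numpy as np

def cm_correct(raw, cm):
    """raw: (c,) classify & count estimate on a target cell.
    cm: (c, c) source confusion matrix, cm[i, j] = P(pred=i | true=j)."""
    c = len(raw)
    q = np.zeros(c)
    for i in range(c):
        tpr = cm[i, i]                      # P(pred=i | true=i)
        fpr = np.delete(cm[i], i).mean()    # simplification: plain mean
        q[i] = (raw[i] - fpr) / max(tpr - fpr, 1e-12)  # invert 2x2 CM
    q = np.clip(q, 0.0, None)
    return q / q.sum()                      # renormalize

def bray_curtis(q, q_hat):
    """Half the l1 distance between normalized class-distributions."""
    return 0.5 * np.abs(np.asarray(q) - np.asarray(q_hat)).sum()
\end{verbatim}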
\begin{figure}[htb] \begin{center} \includegraphics[width=.45 \linewidth]{supervisedCoral_Survey.pdf} \hspace{4mm} \includegraphics[width=.45 \linewidth]{supervisedPlankton_Survey.pdf} \end{center}\vspace{-4mm} \caption{\textbf{Supervised results.} Quantification errors displayed as mean $\pm$ SE for simple random sampling~\cite{sampling}, offset sampling~\cite{beijbom2014cost}, DA mix, and Deep Transfer DA~\cite{Tzeng_ICCV2015}.} \label{fig:res-super} \vspace{-2mm} \end{figure} \textbf{Results:} For unsupervised quantification, our results indicate that the appropriate method depends on the nature of the data-set shift. For the Plankton Survey, the EM and CM correction methods work well, significantly lowering the estimation errors (\figref{res-unsuper}). The EM method~\cite{saerens2002adjusting} outperformed the CM method~\cite{forman2008quantifying}, suggesting that this approach is more appropriate for a high number of classes. However, for the Coral Survey, where the class-distribution shift assumption is violated, the CM and EM corrections corrupt the results, and simply counting the raw classifications is preferable. The Deep Transfer DA method~\cite{Tzeng_ICCV2015} was able to capture the data-set shift for the Plankton Survey, but produced inferior quantification compared to the CM and EM methods. Also note that the quantification results are, in general, stronger for the Plankton Survey, since it is an easier classification task with a smaller data-set shift. For supervised quantification, the two DA methods outperformed the random sampling baselines, in particular for smaller annotation budgets (\figref{res-super}). The adaptation method of~\cite{Tzeng_ICCV2015} performed on par with DA mix. Among the random sampling methods, the offset estimator clearly outperformed simple random sampling on the Plankton Survey, but performed on par for the Coral Survey. This is expected, as the offset estimator performs better when the classification errors are small, which they are in the Plankton Survey (\figref{res-super};~\cite{beijbom2014cost}). \textbf{Discussion:} In pure class-distribution shift situations, as with the Plankton Survey, the EM algorithm of~\cite{saerens2002adjusting} worked well, achieving a mean Bray-Curtis distance of $4.7 \pm 3.2\%$. Achieving such accurate quantification through simple random sampling would require $b \approx 150$ samples. The DA mix method, which achieved $3.7 \pm 0.5\%$ at $b = 50$ and $4.1 \pm 0.4\%$ at $b = 25$, makes better use of the supervision. It is a compelling alternative overall, since it performed well also on the more challenging Coral Survey data. This is important since, in a real-world situation, one may not know \emph{a priori} what type of data-set shift to expect for the new target data. The fact that fine-tuning of a deep neural network can be achieved with such a small amount of target data ($25$ samples) is surprising, and deserves further investigation. Further, while the CM method presented here did not perform very strongly, Forman suggested several improvements for binary quantification~\cite{forman2008quantifying}. We were unable to generalize these to the multi-class case, but this deserves attention. Finally, we think active sampling methods offer much promise, with collected samples utilized either to correct the raw classification counts, or to fine-tune model parameters. \textbf{Acknowledgments:} This work was supported by the National Oceanic and Atmospheric Administration grant No.
NA10OAR4320156 and by the XL and Catlin Group Limited, Global Change Institute. We gratefully acknowledge the support of NVIDIA for their hardware donations. \bibliographystyle{plain}
\section{Introduction}\label{SecIntro} Risset and Mathews \cite{risset1969} were the first to highlight the fact that the spectral enrichment of brass sounds with increasing sound level is crucial to recognize these instruments. They included nonlinear distortion in their additive sound synthesis more than 10 years before acousticians began to focus on this phenomenon, and 25 years before its origin was understood. In 1980, Beauchamp \cite{beauchamp1980} stressed the fact that a linear model of the air column cannot explain brassy sounds. Since 1996, it has been well established that the spectacular spectral enrichment of loud brass sounds is mainly due to the nonlinear wave propagation inside the bore of the instrument \cite{hirschberg96b, gilbert96, rendon2013}. At extremely high sound levels, shock waves have been observed, but nonlinear distortion even at moderate sound levels can contribute significantly to the timbre of a brass instrument \cite{campbell2014, norman2010}. Considering nonlinear propagation is thus fundamental both for sound synthesis by physical modeling \cite{vergez2000a,msallam2000,helie2008a,Bilbao11} and to improve the understanding of musical instrument design \cite{myers2012,gilbert2008,chick2012a}. One must account for the nonlinear wave propagation of both outgoing and incoming pressure waves, and not only of the outgoing pressure wave, as is sometimes done to simplify the problem \cite{thompson2001}. Besides the nonlinear wave propagation, other mechanisms also need to be incorporated to describe the physics of brass instruments. First, one must handle a continuous variation of the cross section of the instrument with respect to space. Second and more challenging, one must handle the viscothermal losses resulting from the interaction between the acoustic field and the bore of the instrument. Gilbert and coauthors proposed an approach to handle these mechanisms in the periodic regime. The harmonic balance method has been applied to straight tubes \cite{Menguy00} or to tubes with varying cross section \cite{These_Menguy}. This approach resulted in the development of a simulation tool for brassiness studies \cite{gilbert2008}. The time domain offers a more realistic framework to simulate instruments in playing conditions; on the other hand, it introduces specific difficulties. Non-smooth (and possibly non-unique) waves are obtained, whose numerical approximation is not straightforward \cite{Godlewski96}. Moreover, the viscothermal losses introduce fractional derivatives in time \cite{Matignon-These,Matignon08}. These convolution products require storing the past values of the solution, which is highly demanding from a computational point of view. These features (nonlinear propagation, viscothermal losses, varying cross section) have been examined separately in the works of Bilbao \cite{Bilbao11,Bilbao13}. In particular, a discrete filter was used to simulate the memory effects due to the viscothermal losses in the linear regime. But to our knowledge, the full time-domain coupling between the nonlinear propagation, the varying cross section and the viscothermal losses has never been examined. Proposing a unified and efficient discretization of all these aspects is the first goal of this paper. Our second objective is to show how to couple the numerical model of the resonator to a classical one-mass model for the lips \cite{Elliott82}.
This coupling allows the full system to be simulated, including the instrument and the instrumentalist, both during steady states and transients. Emphasis is put throughout the paper on the choice of the numerical methods and on their validation. The paper is organized as follows. Section \ref{SecReso} is devoted to the modeling of the resonator. The acoustic propagation inside the bore of the instrument is described by outgoing and incoming nonlinear simple waves, which interact only at the extremities of the instrument \cite{gilbert2008}. A so-called diffusive approximation is introduced to provide an efficient discretization of the viscothermal losses. Then the equations are solved numerically by following a splitting strategy, which offers maximal computational efficiency: the propagative part is solved by a TVD scheme (standard in computational fluid dynamics), and the relaxation part is solved exactly. This approach is validated by a set of test-cases; an application to the determination of the input impedance in the linear case is proposed. Section \ref{SecExc} is devoted to the numerical modeling of the exciter. The coupling between the exciter (air blown through vibrating lips) and the resonator (the instrument) is explored in section \ref{SecExp} through various numerical experiments. They show the possibilities offered by the simulation tool developed, and they highlight the influence of nonlinear propagation on various aspects of the instrument behavior. Lastly, future lines of research are proposed in section \ref{SecConclu}. \section{Resonator}\label{SecReso} \subsection{Physical modeling}\label{SecResoPhys} \subsubsection{Notations}\label{SecResoPhysNot} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.55]{Pavillon_ogffle.eps} \caption{\label{FigGuide1D} One-dimensional acoustic tube of cross section area $S(x)$.} \end{center} \end{figure} A cylinder with radius $R$ depending on the abscissa $x$ is considered. The length of the cylinder is $D$ and its cross section is $S$ (figure~\ref{FigGuide1D}). The physical parameters are the ratio of specific heats at constant pressure and volume $\gamma$; the pressure at equilibrium $p_0$; the density at equilibrium $\rho_0$; the Prandtl number Pr; the kinematic viscosity $\nu$; and the ratio of shear and bulk viscosities $\mu_v/\mu$. One deduces the sound speed $a_0$, the sound diffusivity $\nu_d$ and the coefficient of dissipation in the boundary layer $C$: \begin{equation} \begin{array}{l} \displaystyle a_0=\sqrt{\frac{\textstyle \gamma\,p_0}{\textstyle \rho_0}}, \hspace{0.2cm} \nu_d=\nu\left(\frac{\textstyle 4}{\textstyle 3}+\frac{\textstyle \mu_v}{\textstyle \mu}+\frac{\textstyle \gamma-1}{\textstyle \mbox{Pr}}\right),\\ [8pt] \displaystyle C=1+\frac{\textstyle \gamma-1}{\textstyle \sqrt{\mbox{Pr}}}. \end{array} \label{Omega0} \end{equation} \subsubsection{Menguy-Gilbert model}\label{SecResoPhysEqu} The angular frequency of the disturbance is assumed to be below the first cut-off angular frequency ($\omega<\omega^*=\frac{1.84\,a_0}{R}$, where $R$ is the maximum radius), so that only the plane mode propagates and the one-dimensional assumption is satisfied \cite{Chaigne08}. Within the framework of weakly nonlinear propagation and assuming that $S$ varies smoothly with $x$, the wave fields are split into simple outgoing waves (denoted $+$) and incoming waves (denoted $-$) that do not interact during their propagation \cite{Hamilton98,These_Menguy}. Velocities along the $x$ axis are denoted $u^\pm$.
Pressure fluctuations associated with the simple waves are given by \begin{equation} p^\pm=\pm \rho_0\,a_0\,u^\pm. \label{SurP} \end{equation} According to the Menguy-Gilbert model, the evolution equations satisfied by the velocities are \begin{subnumcases}{\label{Chester}} \displaystyle \frac{\textstyle \partial u^\pm}{\textstyle \partial t} + \frac{\textstyle \partial}{\textstyle \partial x}\left(\pm au^\pm+b\frac{\textstyle (u^\pm)^2}{\textstyle 2}\right) \pm \frac{\textstyle a}{\textstyle S}\,\frac{\textstyle dS}{\textstyle dx}\,u^\pm \nonumber\\ \displaystyle \hspace{0.1cm}=\pm c\frac{\textstyle \partial^{-1/2}}{\textstyle \partial t^{-1/2}}\frac{\textstyle \partial u^\pm}{\textstyle \partial x}+d\frac{\textstyle \partial^2 u^\pm}{\textstyle \partial x^2},\quad 0<x<D,\label{Chester1}\\ [6pt] \displaystyle u^+(0,t)=u_0(t),\label{Chester2}\\ [6pt] \displaystyle u^-(D,t)=u^+(D,t),\label{Chester3} \end{subnumcases} with the coefficients \begin{equation} \hspace{-0.5cm} a=a_0,\quad b=\frac{\textstyle \gamma+1}{\textstyle 2},\quad c(x)=\frac{\textstyle C\,a_0 \sqrt{\nu}}{\textstyle R(x)},\quad d=\frac{\textstyle \nu_d}{\textstyle 2}. \label{CoeffsEDP} \end{equation} The Menguy-Gilbert equation (\ref{Chester1}) takes into account nonlinear advection (coefficients $a$ and $b$), viscothermal losses at the walls (coefficient $c$) and volumic dissipation (coefficient $d$) \cite{Chester64,Menguy00}. The operator $\frac{ \partial^{-1/2}}{ \partial t^{-1/2}}$ is the Riemann-Liouville fractional integral of order $1/2$. For a causal function $w(t)$, it is defined by \begin{equation} \begin{array}{lll} \displaystyle \frac{\textstyle \partial^{-1/2}}{\textstyle \partial t^{-1/2}}w(t)&=& \displaystyle \frac{\textstyle H(t)}{\textstyle \sqrt{\pi\,t}} \ast w,\\ &=& \displaystyle \frac{\textstyle 1}{\textstyle \sqrt{\pi}}\int_0^t(t-\tau)^{-1/2}\, w(\tau)\,d\tau, \end{array} \label{RiemannLiouville} \end{equation} where $\ast$ denotes the convolution product, and $H(t)$ is the Heaviside function \cite{Matignon08}. Each wave requires only one boundary condition. The condition for the outgoing wave (\ref{Chester2}) models the acoustic source, linked to the musician. The condition for the incoming wave (\ref{Chester3}) models the Dirichlet condition on the pressure at the bell. This condition is the unique coupling between the $+$ and $-$ waves.

\subsubsection{Dispersion analysis}\label{SecResoPhysDisp} Applying space and time Fourier transforms to (\ref{Chester1}) yields \begin{equation} \begin{array}{l} \displaystyle i\,d\,k^2\widehat{u^\pm}\pm \left(\left(a - c\chi(\omega)\right)\widehat{u^\pm} \pm \frac{\textstyle b}{\textstyle 2} \widehat {(u^\pm)^2}\right)k\\ [8pt] \displaystyle -\omega\,\widehat{u^\pm} \pm i\, \widehat{\frac{\textstyle a}{\textstyle S}\frac{\textstyle dS}{\textstyle dx}\,u^\pm}=0, \end{array} \label{DispersionGuide} \end{equation} where the hat denotes the transforms, $k$ is the wavenumber, and $\chi$ is the symbol of the half-order integral: \begin{equation} \chi(\omega)=\frac{\textstyle 1}{\textstyle \left(i\,\omega\right)^{1/2}}.
\label{ChiDF} \end{equation} In the case of a constant radius $R$, linear propagation ($b=0$), and no sound diffusivity ($d=0$), the phase velocity $\upsilon=\omega\,/\,\mbox{Re}(k)$ and the attenuation $\alpha=-\mbox{Im}(k)$ of an outgoing wave are deduced explicitly: \begin{equation} \begin{array}{l} \displaystyle \upsilon=\frac{\textstyle a^2\,\omega-a\,c\sqrt{2\,\omega}+c^2}{\textstyle \displaystyle a\,\omega-c\sqrt{\omega/2}},\\ [8pt] \displaystyle \alpha=\frac{\textstyle c}{\textstyle \sqrt{2}}\,\frac{\textstyle \omega^{3/2}}{\textstyle a^2\,\omega-a\,c\,\sqrt{2\,\omega}+c^2}. \end{array} \label{CelAttGuide} \end{equation} In the case where the viscosity is ignored ($c=d=0$), the phase velocity is equal to $a$, and no attenuation occurs. Otherwise, one has: \begin{equation} \begin{array}{llll} \displaystyle\upsilon(\omega)\mathop{\sim}\limits_{0} - c \sqrt{\frac{2}{\omega}}, &\quad &\quad &\displaystyle\lim_{\omega\rightarrow+\infty}\upsilon(\omega)=a,\\ [8pt] \displaystyle\alpha(0)=0, &\quad &\quad &\displaystyle\alpha(\omega)\mathop{\sim}\limits_{+\infty} \frac{c}{a^2} \sqrt{\frac{\omega}{2}}. \end{array} \label{PropertyGuide} \end{equation}

\subsection{Mathematical modeling}\label{SecResoMath} \subsubsection{Diffusive approximation}\label{SecResoMathDiff} The half-order integral (\ref{RiemannLiouville}) in (\ref{Chester1}) is non-local in time. It requires keeping the past history of the solution in memory, which is very costly in numerical computations. An alternative approach is followed here, based on the diffusive representation of the fractional integral. A change of variables yields \cite{Matignon-These,Diethelm08} \begin{equation} \frac{\textstyle \partial^{-1/2}}{\textstyle \partial t^{-1/2}}w(t)=\int_0^{+\infty}\phi(t,\theta)\,d\theta, \label{I12} \end{equation} where the memory variable $\phi$ \begin{equation} \phi(t,\theta)=\frac{\textstyle 2}{\textstyle \pi}\int_0^t e^{-(t-\tau)\,\theta^2}w(\tau)\,d\tau \label{Phi12} \end{equation} satisfies the ordinary differential equation \begin{equation} \left\{ \begin{array}{l} \displaystyle \frac{\partial \phi}{\partial t}=-\theta^2\,\phi+\frac{\textstyle 2}{\textstyle \pi}\,w,\\ [8pt] \phi(0,\theta)=0. \end{array} \right. \label{ODEI12} \end{equation} The diffusive representation (\ref{I12}) replaces the non-local term (\ref{RiemannLiouville}) by an integral over $\theta$ of functions $\phi(t,\theta)$, which are solutions of local-in-time equations. The integral (\ref{I12}) is then approximated by a quadrature formula \begin{equation} \frac{\textstyle \partial^{-1/2}}{\textstyle \partial t^{-1/2}}w(t)\simeq\sum_{\ell=1}^L\mu_{\ell}\,\phi(t,\theta_{\ell})=\sum_{\ell=1}^L\mu_{\ell}\,\phi_{\ell}(t), \label{RDI12} \end{equation} over $L$ quadrature points. Determining the quadrature weights $\mu_{\ell}$ and the nodes $\theta_{\ell}$ is crucial for the efficiency of the diffusive approximation and is discussed further in section \ref{SecResoNumQuad}.
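For completeness, the equivalence between (\ref{I12})-(\ref{Phi12}) and (\ref{RiemannLiouville}) can be checked directly: exchanging the order of integration and using the Gaussian integral $\int_0^{+\infty}e^{-s\,\theta^2}\,d\theta=\frac{\sqrt{\pi}}{2\sqrt{s}}$ with $s=t-\tau>0$ gives \begin{equation} \int_0^{+\infty}\phi(t,\theta)\,d\theta=\frac{\textstyle 2}{\textstyle \pi}\int_0^t w(\tau)\left(\int_0^{+\infty}e^{-(t-\tau)\,\theta^2}\,d\theta\right)d\tau=\frac{\textstyle 1}{\textstyle \sqrt{\pi}}\int_0^t (t-\tau)^{-1/2}\,w(\tau)\,d\tau, \end{equation} which is exactly the fractional integral (\ref{RiemannLiouville}).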
\subsubsection{First-order system}\label{SecResoMathSyst} The 1/2 integral in (\ref{Chester1}) is replaced by its diffusive approximation (\ref{RDI12}), together with the set of differential equations (\ref{ODEI12}) satisfied by the memory functions $\phi_{\ell}$. This yields the two systems for the $+$ and $-$ waves \begin{subnumcases}{\label{SystComplet}} \displaystyle \frac{\textstyle \partial u^\pm}{\textstyle \partial t} + \frac{\textstyle \partial}{\textstyle \partial x}\left(\pm a\,u^\pm+b\,\displaystyle \frac{\textstyle (u^\pm)^2}{\textstyle 2}\right)\pm\frac{\textstyle a}{\textstyle S}\,\frac{\textstyle dS}{\textstyle dx}\,u^\pm \nonumber\\ \displaystyle \hspace{0.3cm} =\pm c\sum_{\ell=1}^L\mu_{\ell}\phi^\pm_{\ell}+d\,\frac{\textstyle \partial^2 u^\pm}{\textstyle\partial x^2},\quad 0<x<D,\label{SystComplet1}\\ [6pt] \displaystyle \frac{\textstyle \partial \phi^\pm_{\ell}}{\textstyle \partial t}-\frac{\textstyle 2}{\textstyle \pi}\,\frac{\textstyle \partial u^\pm}{\textstyle \partial x}=-\theta_{\ell}^2\,\phi^\pm_{\ell},\,\ell=1,\dots,L,\label{SystComplet2}\\ [6pt] \displaystyle u^+(0,t)=u_0(t),\label{SystComplet3}\\ [6pt] \displaystyle u^-(D,t)=u^+(D,t),\label{SystComplet4}\\ [6pt] \displaystyle u^\pm(x,0)=v(x),\,\phi^\pm_\ell(x,0)=0, \,\ell=1,\dots,L.\label{SystComplet5} \end{subnumcases} The $(L+1)$ unknowns are gathered in the vectors ${\bf U}^\pm$: \begin{equation} {\bf U}^\pm=\left(u^\pm,\phi^\pm_1,\cdots,\,\phi^\pm_L\right)^T. \label{VecU} \end{equation} Then the systems (\ref{SystComplet}) are recast as \begin{equation} \hspace{-0.3cm} \frac{\textstyle \partial}{\textstyle \partial t}{\bf U}^\pm+\frac{\textstyle \partial}{\textstyle \partial x}{\bf F}^\pm({\bf U}^\pm)={\bf S}^\pm\,{\bf U}^\pm+{\bf G}\,\frac{\textstyle \partial^2}{\textstyle \partial x^2}{\bf U}^\pm, \label{SystHyper} \end{equation} where ${\bf F}^\pm$ are the nonlinear flux functions \begin{equation} \hspace{-0.8cm} {\bf F}^\pm({\bf U}^\pm)=\left(\pm au^\pm + b\frac{\textstyle (u^\pm)^2}{\textstyle 2},-\frac{\textstyle 2}{\textstyle \pi}u^\pm,\cdots,-\frac{\textstyle 2}{\textstyle \pi}u^\pm\right)^T \label{Fnonlin} \end{equation} and ${\bf G}$ is the $(L+1)\times (L+1)$ diagonal matrix $\mbox{diag}(d,\,0,\cdots,\,0)$. The relaxation matrices ${\bf S}^\pm$ include both a geometrical term, due to the variation of the cross section, and physical terms, related to the diffusive approximation of the viscothermal losses: \begin{equation} {\bf S}^\pm= \left( \begin{array}{cccc} \displaystyle \mp \frac{\textstyle a}{\textstyle S}\,\frac{\textstyle dS}{\textstyle dx} & \pm c\,\mu_1 & \cdots & \pm c\,\mu_L\\ 0 & -\theta_1^2 & & \\ \vdots & & \ddots & \\ 0 & & & -\theta_L^2 \end{array} \right). \label{MatS} \end{equation}

\subsubsection{Properties}\label{SecResoMathProp} In brass musical instruments, the volumic losses are negligible compared with the viscothermal losses \cite{Sugimoto91,Menguy00}. Consequently, the dynamics of the systems (\ref{SystHyper}) is essentially unchanged when taking ${\bf G}={\bf 0}$. In this case, the properties of the solutions rely on the Jacobian matrices ${\bf J}^\pm=\frac{\partial {\bf F}^\pm}{\partial {\bf U}^\pm}$. Some properties are listed here without proof; interested readers are referred to standard textbooks on hyperbolic systems for more details \cite{LeVeque92,Godlewski96}.
The eigenvalues $\lambda_j^\pm$ of ${\bf J}^\pm$ are real: \begin{equation} \lambda_1^\pm=\pm a+b\,u^\pm,\quad \lambda_j^\pm=0,\quad j=2,\dots,L+1. \end{equation} Assuming $\lambda_1^\pm \neq 0$, the matrices of eigenvectors ${\bf R}^\pm=({\bf r}^\pm_1|{\bf r}^\pm_2|...|{\bf r}^\pm_{L+1})$ are \begin{equation} {\bf R}^\pm= \left( \begin{array}{cccc} 1 & 0 & \cdots & 0\\ [6pt] \displaystyle \frac{-2}{\pi\left(\mp a+bu^\pm\right)} & 1 & \\ \vdots & & \ddots & \\ \displaystyle \frac{-2}{\pi\left(\mp a+bu^\pm\right)} & & & 1 \end{array} \right), \label{MatR} \end{equation} and they are invertible if $u^\pm \neq \pm a/b$, which is consistent with the assumption of weak nonlinearity: the systems (\ref{SystHyper}) are hyperbolic, but not strictly hyperbolic (multiple eigenvalues). The characteristic fields satisfy \begin{equation} \hspace{-0.5cm} {\bf \nabla \lambda_1}.{\bf r}_1^\pm=b\neq 0, \quad {\bf \nabla \lambda_j}.{\bf r}_j^\pm=0, \hspace{0.2cm} j=2,\dots,L+1, \end{equation} where the gradient is calculated with respect to each coordinate of $\bf U$ in (\ref{VecU}), as defined in \cite[p77]{Toro99}. Consequently, there exists one genuinely nonlinear wave (shock wave or rarefaction wave) and $L$ linearly degenerate waves (contact discontinuities). Moreover, the eigenvalues of the relaxation matrices ${\bf S}^\pm$ are \begin{equation} \kappa_1=\mp\frac{\textstyle a}{\textstyle S}\,\frac{\textstyle dS}{\textstyle dx}, \hspace{0.2cm} \kappa_{\ell+1}=-\theta_\ell^2,\quad \ell=1,\dots,L. \end{equation} Assuming that the quadrature nodes are sorted in increasing order ($\theta_1<\theta_2<\cdots<\theta_L$), the spectral radius of ${\bf S}^\pm$ is \begin{equation} \varrho({\bf S}^\pm)=\max\left(\max_{x\in[0,D]}\frac{a}{\textstyle S}\,\frac{\textstyle dS}{\textstyle dx}, \theta_L^2\right). \label{RayonSpectral} \end{equation} This quantity becomes large in the case of a rapidly-varying cross section, or for a large maximal quadrature node of the fractional integral (see the next section). Lastly, a Fourier analysis of (\ref{SystComplet}) leads to a dispersion relation similar to (\ref{DispersionGuide}): the symbol $\chi$ in (\ref{ChiDF}) need only be replaced by the symbol of the diffusive approximation \begin{equation} \tilde{{\chi}}(\omega)=\frac{\textstyle 2}{\textstyle \pi}\sum_{\ell=1}^L\frac{\textstyle \mu_{\ell}}{\textstyle \theta_{\ell}^2+i\,\omega}. \label{ChiAD} \end{equation}

\subsection{Numerical modeling}\label{SecResoNum} \subsubsection{Quadrature methods}\label{SecResoNumQuad} Basically, two strategies exist to determine the quadrature weights $\mu_{\ell}$ and nodes $\theta_{\ell}$ in (\ref{RDI12}), which are involved in the relaxation matrices (\ref{MatS}). The first strategy relies on Gaussian polynomials, for instance the modified Gauss-Jacobi polynomials \cite{Birk10,NRPAS}. This approach offers a solid mathematical framework, but a very low convergence rate is obtained \cite{Lombard14}. As a consequence, a large number $L$ of memory variables is required to describe the fractional integral by the quadrature formula (\ref{RDI12}), which penalizes the computational efficiency. An alternative approach is followed here, based on an optimization procedure applied to the symbols (\ref{ChiDF}) and (\ref{ChiAD}).
Given a number $K$ of angular frequencies $\omega_k$, the following cost function is introduced: \begin{equation} \begin{array}{l} {\cal J}\left({\{(\mu_\ell,\theta_\ell)\}}_\ell\,;L,K\right)= \displaystyle \sum_{k=1}^{K}\left|\frac{\textstyle {\tilde \chi}(\omega_k)}{\textstyle \chi(\omega_k)}-1\right|^2,\\ [8pt] \hspace{1cm} = \displaystyle \sum_{k=1}^{K}\left|\frac{\textstyle 2}{\textstyle \pi} \sum_{\ell=1}^{L}\mu_\ell\,\frac{\textstyle (i\omega_k)^{1/2}}{\theta_{\ell}^2+i\omega_k}-1\right|^2, \end{array} \label{Objective} \end{equation} to be minimized with respect to the parameters $\{(\mu_\ell,\theta_\ell)\}_\ell$ for $\ell=1,\dots,L$. A nonlinear optimization with the positivity constraints $\mu_{\ell}\geq 0$ and $\theta_{\ell}\geq 0$ is adopted. For this purpose, one implements the SolvOpt algorithm \cite{Rekik11,Blanc13}, based on Shor's iterative method \cite{Shor85}. The initial values in the optimization algorithm have to be chosen with care. This is done by using the coefficients obtained from the modified Gauss-Jacobi polynomials \cite{Birk10,Lombard14}. Finally, the angular frequencies $\omega_k$ in (\ref{Objective}) are uniformly spaced on a logarithmic scale over the optimization range $[\omega_{\min},\omega_{\max}]$: \begin{equation} \omega_k = \omega_{\min}\left( \frac{\omega_{\max}}{\omega_{\min}}\right)^{\!\frac{k-1}{K-1}},\hspace{0.5cm}k=1,\cdots,K. \label{OmegaK} \end{equation} The number $K$ of angular frequencies $\omega_k$ is chosen equal to the number $L$ of diffusive variables.

\subsubsection{Numerical scheme}\label{SecResoNumSchem} To perform the numerical integration of the systems (\ref{SystHyper}), a uniform spatial grid is introduced with step $\Delta x=D/N_x$, as well as a variable time step $\Delta t_n$ (denoted $\Delta t$ for the sake of simplicity). The approximation of the exact solution ${\bf U}^\pm(x_j = j\,\Delta x,\, t_n = t_{n-1}+\Delta t)$ is denoted ${\bf U}_j^{(n)\pm}$. A direct explicit discretization of (\ref{SystHyper}) is not optimal, since numerical stability requires \cite{These-Blanc} \begin{equation} \Delta t\leq \min\left(\frac{\Delta x}{a_{\max}^n},\frac{2}{\varrho({\bf S}^\pm)}\right), \label{CFLdirect} \end{equation} where $a_{\max}^n = \max_j |\pm a + b\, u_j^{(n)\pm} |$ is the maximum numerical velocity at time $t_n$. As shown in (\ref{RayonSpectral}), the spectral radius of the relaxation matrices ${\bf S}^\pm$ grows with the maximal quadrature node, which penalizes the standard Courant-Friedrichs-Lewy (CFL) stability condition (\ref{CFLdirect}). A more efficient strategy is adopted here. The equations (\ref{SystHyper}) are split into a propagative step \begin{equation} \frac{\textstyle \partial}{\textstyle \partial t}{\bf U}^\pm+\frac{\textstyle \partial}{\textstyle \partial x}{\bf F}^\pm({\bf U}^\pm)={\bf G}\,\frac{\textstyle \partial^2}{\textstyle \partial x^2}{\bf U}^\pm, \label{SplitPropa} \end{equation} and a relaxation step \begin{equation} \frac{\partial}{\partial t}{\bf U}^\pm={\bf S}^\pm\,{\bf U}^\pm. \label{SplitDiffu} \end{equation} The discrete operators associated with (\ref{SplitPropa}) and (\ref{SplitDiffu}) are denoted by ${\bf H}^\pm_a$ and ${\bf H}^\pm_b$, respectively.
Strang splitting is then used to advance from $t_n$ to $t_{n+1}$, by solving successively (\ref{SplitPropa}) and (\ref{SplitDiffu}) with adequate time steps \cite{LeVeque92}: \begin{equation} \begin{array}{lll} \displaystyle {\bf U}_{j}^{(1)\pm} &= &{\bf H}^\pm_{b}\left(\frac{\Delta t}{2}\right)\,{\bf U}_{j}^{(n)\pm},\\ [8pt] \displaystyle {\bf U}_{j}^{(2)\pm} &= &{\bf H}^\pm_{a}\left(\Delta t\right)\,{\bf U}_{j}^{(1)\pm},\\ [8pt] \displaystyle {\bf U}_{j}^{(n+1)\pm} &= &{\bf H}^\pm_{b}\left(\frac{\Delta t}{2}\right)\,{\bf U}_{j}^{(2)\pm}. \end{array} \label{AlgoSplitting} \end{equation} Equation (\ref{SplitPropa}), corresponding to the propagative part, is solved with a second-order TVD (Total Variation Diminishing) scheme for hyperbolic equations \cite{Godlewski96}: \begin{equation} \begin{array}{lll} \displaystyle {\bf U}_j^{(n+1)\pm} &=& \displaystyle {\bf U}_j^{(n)\pm} -\frac{\Delta t}{\Delta x} ({\bf F}_{j+1/2}^{\pm}- {\bf F}_{j-1/2}^{\pm})\\ [8pt] \displaystyle &+&\displaystyle {\bf G} \frac{\Delta t}{\Delta x^2}( {\bf U}_{j+1}^{(n)\pm}-2 {\bf U}_j^{(n)\pm}+ {\bf U}_{j-1}^{(n)\pm}), \end{array} \label{TVD} \end{equation} where ${\bf F}^\pm_{j\pm1/2}$ are the numerical flux functions associated with (\ref{Fnonlin}). Defining the discrete P\'eclet number Pe \begin{equation} \mbox{Pe}=a_{\max}^{n}\,\frac{\textstyle \Delta x}{\textstyle 2\,d}\gg 1, \label{Peclet} \end{equation} the discrete operator ${\bf H}^\pm_{a}$ in (\ref{TVD}) is stable if \cite{Sousa03} \begin{equation} \varepsilon=\frac{a_{\max}^{n}\,\Delta t}{\Delta x}\leq \left(1+\frac{\textstyle 1}{\textstyle \mbox{Pe}}\right)^{-1} \approx 1-\frac{\textstyle 1}{\textstyle \mbox{Pe}} \approx 1, \label{CFL} \end{equation} which recovers the optimal CFL condition. Therefore, thanks to the splitting strategy, the value of $\Delta t$ is no longer penalized by the spectral radius of ${\bf S}^\pm$. The relaxation equation (\ref{SplitDiffu}) is solved exactly: \begin{equation} {\bf H}^\pm_b\left({\textstyle\frac{\Delta t}{2}}\right)\,{\bf U}^\pm_j = \exp\left({\textstyle{\bf S}^\pm\,\frac{\Delta t}{2}}\right)\,{\bf U}^\pm_j, \label{SplitDiffuExp} \end{equation} with the matrix exponential deduced from (\ref{MatS}) \begin{equation} \begin{array}{l} \hspace{-0.8cm} e^{{\bf S}^\pm \tau}=\\ \displaystyle \hspace{-1cm} \left( \begin{array}{cccc} \displaystyle e^{-\Omega^\pm \tau} & \delta_1^\pm\left(e^{-\Omega^\pm \tau}-e^{-\theta_1^2 \tau}\right) & \cdots & \delta_L^\pm\left(e^{-\Omega^\pm \tau}-e^{-\theta_L^2 \tau}\right)\\ 0 & e^{-\theta_1^2 \tau} & & \\ \vdots & & \ddots & \\ 0 & & & e^{-\theta_L^2 \tau} \end{array} \right), \end{array} \label{MatExpS} \end{equation} and the coefficients ($\ell=1,\dots,L$) \begin{equation} \Omega^\pm=\pm \frac{a}{S}\frac{dS}{dx},\hspace{0.5cm} \delta_\ell^\pm=\pm \frac{\textstyle c\,\mu_\ell}{\textstyle \theta_\ell^2 - \Omega^\pm}. \end{equation} This relaxation step is unconditionally stable. Without viscothermal losses ($c=0$), the matrix exponential (\ref{MatExpS}) degenerates towards the scalar $e^{-\Omega^\pm \tau}$. The physically-realistic case $\frac{dS}{dx}>0$ yields a decreasing amplitude of $u^+$ as $x$ increases; conversely, it yields an increasing amplitude of $u^-$ as $x$ decreases. The Jacobian matrices and the relaxation matrices do not commute: ${\bf J}^\pm{\bf S}^\pm \neq {\bf S}^\pm{\bf J}^\pm$. Consequently, the splitting (\ref{AlgoSplitting}) is second-order accurate \cite{LeVeque92}. It is stable under the CFL condition (\ref{CFL}).
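To make the implementation concrete, a minimal Python sketch of one split time step is given below for the outgoing wave, with the unknowns stored as an array of shape $(L+1)\times N_x$ (row 0 for $u^+$, rows 1 to $L$ for the memory variables). It is a simplified transcription, not the code used here: for brevity, a first-order Rusanov flux stands in for the second-order TVD flux of (\ref{TVD}), the boundary cells are left to the conditions (\ref{SystComplet3})-(\ref{SystComplet4}), and $\theta_\ell^2\neq\Omega^+$ is assumed.
\begin{verbatim}
import numpy as np

def relaxation_half_step(u, phi, tau, Omega, c, mu, theta):
    # Exact solve of dU/dt = S U over tau, from the matrix exponential;
    # Omega(x) = (a/S) dS/dx for the outgoing wave, c(x) the loss coefficient.
    eO = np.exp(-Omega*tau)                              # shape (Nx,)
    eT = np.exp(-(theta**2)[:, None]*tau)                # shape (L, 1)
    delta = c*mu[:, None]/((theta**2)[:, None] - Omega)  # shape (L, Nx)
    u_new = eO*u + np.sum(delta*(eO - eT)*phi, axis=0)
    return u_new, eT*phi

def propagative_step(U, dt, dx, a, b, d):
    # Finite-volume update of the conservative part, with the volumic
    # dissipation acting on u only (first-order Rusanov flux).
    F = np.empty_like(U)
    F[0] = a*U[0] + 0.5*b*U[0]**2                        # Burgers-type flux
    F[1:] = -(2.0/np.pi)*U[0]                            # flux of each phi_l
    s = np.abs(a + b*U[0])                               # |lambda_1|
    sm = np.maximum(s[:-1], s[1:])                       # interface speed
    Fi = 0.5*(F[:, :-1] + F[:, 1:]) - 0.5*sm*(U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= dt/dx*(Fi[:, 1:] - Fi[:, :-1])
    Un[0, 1:-1] += d*dt/dx**2*(U[0, 2:] - 2.0*U[0, 1:-1] + U[0, :-2])
    return Un

def strang_step(U, dt, dx, a, b, c, d, Omega, mu, theta):
    # Half relaxation, full propagation, half relaxation.
    U[0], U[1:] = relaxation_half_step(U[0], U[1:], 0.5*dt, Omega, c, mu, theta)
    U = propagative_step(U, dt, dx, a, b, d)
    U[0], U[1:] = relaxation_half_step(U[0], U[1:], 0.5*dt, Omega, c, mu, theta)
    return U
\end{verbatim}
In practice, $\Delta t$ is recomputed at each step from (\ref{CFL}) before calling such a routine.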
\subsection{Validation}\label{SecResoVal} \subsubsection{Configuration}\label{SecResoValConf} \begin{table}[htbp] \begin{center} {\renewcommand{\arraystretch}{1.2} \renewcommand{\tabcolsep}{0.2cm} \begin{tabular}{cccccc} \hline $\gamma$ & $p_0$ (Pa) & $\rho_0$ (kg/m$^{3}$) & Pr & $\nu$ (m$^2$/s) & $\mu_v/\mu$ \\ \hline 1.403 & $ 10^5$ & 1.177 & 0.708 & $1.57\cdot 10^{-5}$ & 0.60 \\ \hline \end{tabular}} \end{center} \caption{\label{TabParam}Physical parameters of air at $15\,^{\circ}\mathrm{C}$.} \end{table} In all the tests, one considers a circular tube of length $D=1.4$~m, with an entry radius $R(0)=7$~mm. The physical parameters given in table \ref{TabParam} are used to determine the coefficients (\ref{CoeffsEDP}) in (\ref{SystComplet1}). The tube is discretized on $N_x$ points in space, the value of $N_x$ being specified for each test case. At each iteration, the time step $\Delta t$ is deduced from the condition (\ref{CFL}), where the CFL number is $\varepsilon=0.95$.

\subsubsection{Diffusive approximation}\label{SecResoValDA} \begin{figure}[htbp] \begin{center} \begin{tabular}{c} (i)\\ \includegraphics[scale=1.2]{OptimV.eps} \\ (ii)\\ \includegraphics[scale=1.2]{OptimErreur.eps} \end{tabular} \caption{\label{FigDispersionOpti} Approximation of the fractional integral. (i) Phase velocity of the Menguy-Gilbert model (\ref{Chester}) and of the diffusive model (\ref{SystComplet}). The horizontal dotted line denotes the sound velocity $a_0$. (ii) Error $\left|\, {\tilde \chi (\omega)}/{\chi(\omega)}-1\right|$. Vertical dotted lines show the limits of the range $[\omega_{\min},\omega_{\max}]$ where the diffusive approximation is optimized.} \end{center} \end{figure} The first test investigates the accuracy of the diffusive approximation in modeling the fractional viscothermal losses (section \ref{SecResoNumQuad}). The nonlinearity and the volumic attenuation are neglected ($b=0$, $d=0$), and the radius $R$ is constant. The tube is discretized on $N_x=200$ points in space. Based on the discussion of section \ref{SecResoNumQuad}, a comparison is performed between the modified Gauss-Jacobi quadrature and the optimized quadrature. The reference solution is the phase velocity of the linear Menguy-Gilbert model (\ref{CelAttGuide}), where the symbol $\chi$ is given by (\ref{ChiDF}); the phase velocity of the diffusive model relies instead on the symbol (\ref{ChiAD}). Figure~\ref{FigDispersionOpti}-(i) compares these phase velocities, using $L=6$ memory variables. Large errors are obtained when the modified Gauss-Jacobi quadrature is used. On the contrary, the agreement between the exact and approximate phase velocities is far better when the constrained optimization is used. In this latter case, the optimization range $\left[\omega_{\min},\omega_{\max}\right]$ is set to $[10^2,10^4]$~rad/s. From now on, the constrained optimization is chosen. To assess more clearly the error induced by the optimization (\ref{Objective}), figure~\ref{FigDispersionOpti}-(ii) displays the modeling error $\left|\, {\tilde \chi (\omega)}/{\chi(\omega)}-1\right|$ on a logarithmic scale. The optimization of the coefficients $(\mu_\ell,\theta_\ell)$ is performed with different numbers of memory variables $L$. The error decreases approximately by a factor of 10 when $L$ is doubled. In the following numerical experiments, the viscothermal losses are accounted for by $L=6$ memory variables, optimized over the frequency range $\left[\omega_{\min},\omega_{\max}\right] = [10^2,10^4]$~rad/s.
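As an indication of how such coefficients can be obtained, the sketch below minimizes the cost function (\ref{Objective}) in Python. It only illustrates the principle: scipy's bound-constrained L-BFGS-B routine replaces the SolvOpt algorithm actually used, and the crude log-spaced initial nodes replace the Gauss-Jacobi initialization.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

L = 6                                  # number of memory variables (K = L)
w_min, w_max = 1.0e2, 1.0e4            # optimization range [rad/s]
omega = w_min*(w_max/w_min)**(np.arange(L)/(L - 1.0))

def cost(p):
    # Squared deviation of chi_tilde(omega_k)/chi(omega_k) from 1.
    mu, theta = p[:L], p[L:]
    ratio = (2.0/np.pi)*np.sum(
        mu[:, None]*np.sqrt(1j*omega)/(theta[:, None]**2 + 1j*omega),
        axis=0)
    return np.sum(np.abs(ratio - 1.0)**2)

p0 = np.concatenate([np.ones(L), np.sqrt(omega)])   # crude initial guess
res = minimize(cost, p0, method="L-BFGS-B", bounds=[(0.0, None)]*(2*L))
mu_opt, theta_opt = res.x[:L], res.x[L:]
\end{verbatim}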
\subsubsection{Nonlinear propagation}\label{SecResoValPropaNL} \begin{figure}[htbp] \begin{center} \begin{tabular}{c} (i)\\ \includegraphics[scale=1.3]{Creneau1.eps} \\ (ii)\\ \includegraphics[scale=1.3]{Creneau2.eps} \\ (iii)\\ \includegraphics[scale=1.3]{Creneau3.eps} \end{tabular} \caption{\label{FigPorteNL} Nonlinear wave propagation. (i) Initial data $v(x)$. (ii) Comparison between the exact and numerical values of the outgoing velocity at $t\approx 0.88$~ms. (iii) Same at $t\approx 2.8$~ms.} \end{center} \end{figure} The second test concerns the modeling of nonlinear waves by the TVD scheme (\ref{TVD}). For this purpose, the losses are neglected ($c=0$, $d=0$), and the radius of the tube is constant. The forcing in (\ref{SystComplet3}) is zero. The initial data (\ref{SystComplet5}) is a rectangular pulse with a $20$~m/s amplitude and a wavelength $\lambda=0.03$~m. The tube is discretized on $N_x=1000$ points in space. Figure~\ref{FigPorteNL} displays the numerical solution and the exact solution at various instants. The latter is derived from the elementary solutions of the Riemann problem \cite{LeVeque92}. In (ii), one observes an outgoing shock wave followed by a rarefaction wave. In (iii), the rarefaction has reached the shock. In each case, agreement is observed between the numerical and analytical results, despite the non-smoothness of the solution. In particular, the shock propagates at the correct speed, which shows that the Rankine-Hugoniot condition is correctly taken into account.

\subsubsection{Linear propagation with a varying cross section area}\label{SecResoValVar} \begin{figure}[htbp] \begin{center} \begin{tabular}{c} (i)\\ \includegraphics[scale=1.4]{SecVar1.eps} \\ (ii)\\ \includegraphics[scale=1.4]{SecVar2.eps} \end{tabular} \caption{\label{FigSecVarLin} Tube with exponentially-varying cross section area (\ref{SectionVar}). Snapshots of the exact and numerical velocity of the outgoing wave, at $t\approx 1.2$~ms (i) and $t\approx 3.5$~ms (ii).} \end{center} \end{figure} The third test focuses on a variable cross section area $S(x)$, with a radius varying exponentially from $R(0)=7$~mm to $R(D)=2\,R(0)$: \begin{equation} \displaystyle S(x)=\pi\,\left(R(0)\,2^{x/D}\right)^2, \quad 0\leq x\leq D. \label{SectionVar} \end{equation} The tube is discretized on $N_x=200$ points in space. Linear propagation is assumed ($b=0$), and the dissipation effects are neglected ($c=0$, $d=0$). Only the outgoing wave is considered. The discretization of the variable radius involves the relaxation parts of the splitting (\ref{AlgoSplitting}): only the component $e^{-\Omega^+\tau}$ in (\ref{MatExpS}) is nonzero. The exciting source in (\ref{SystComplet3}) is a smooth combination of truncated sinusoidal wavelets: \begin{equation} \hspace{-0.4cm} u_0(t)=\left\{ \begin{array}{ll} \displaystyle V\sum_{m=1}^4 a_m \sin\,(b_m\,\omega_c\,t) &\mbox{if }\, 0\leq t\leq {\textstyle \frac{\displaystyle 1}{\displaystyle f_c}},\\ 0 &\mbox{else}, \end{array} \right. \label{Source} \end{equation} with amplitude $V=20$~m/s, central frequency $f_c={\omega_c}/{2\,\pi}=1$~kHz and coefficients $b_m=2^{m-1}$, $a_1=1$, $a_2=-21/32$, $a_3=63/768$ and $a_4=-1/512$. The exact solution is straightforwardly deduced from the method of characteristics: \begin{equation} u^+(x,t)=\exp\left(-\Omega^+\frac{x}{a}\right)\,u_0\left(t-\frac{x}{a}\right). \label{ExactSecVar} \end{equation} Figure~\ref{FigSecVarLin} displays snapshots of the velocity $u^+$ at two successive instants. Agreement between the numerical and theoretical results is obtained. As deduced from (\ref{ExactSecVar}), the amplitude of the wave decreases as the wavefront advances.
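For reference, the source wavelet (\ref{Source}) and the characteristics solution (\ref{ExactSecVar}) are short enough to be transcribed directly. The Python sketch below assumes the exponential bore (\ref{SectionVar}), for which $\Omega^+=\frac{a_0}{S}\frac{dS}{dx}=2\ln 2\,a_0/D$ is constant, and the air parameters of table \ref{TabParam}.
\begin{verbatim}
import numpy as np

gamma, p0, rho0 = 1.403, 1.0e5, 1.177       # air at 15 C
a0 = np.sqrt(gamma*p0/rho0)                 # sound speed
D = 1.4                                     # tube length [m]
Omega = 2.0*np.log(2.0)*a0/D                # (a/S) dS/dx, constant here

def u0(t, V=20.0, fc=1.0e3):
    # Smooth combination of truncated sinusoidal wavelets.
    wc = 2.0*np.pi*fc
    amps = (1.0, -21.0/32.0, 63.0/768.0, -1.0/512.0)
    s = sum(a_m*np.sin(2.0**m*wc*t) for m, a_m in enumerate(amps))
    return np.where((t >= 0.0) & (t <= 1.0/fc), V*s, 0.0)

def u_plus_exact(x, t):
    # Damped transport along the characteristics x = a0 t + const.
    return np.exp(-Omega*x/a0)*u0(t - x/a0)
\end{verbatim}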
\subsubsection{Simulation of an input impedance}\label{SecResoValImp} \begin{figure}[h!] \begin{center} \includegraphics[scale=1.2]{PropaAR.eps} \caption{Snapshots of the outgoing and incoming velocity at two different times, where the propagation is linear and the tube has a constant cross section area.\label{FigPropaAR} } \end{center} \end{figure} \begin{figure}[h!] \begin{center} \begin{tabular}{c} (i)\\ \includegraphics[scale=0.99]{Zmod.eps} \\ (ii)\\ \includegraphics[scale=0.99]{Zarg.eps} \end{tabular} \caption{Input impedance $Z$: modulus (i) and phase (ii). Comparison between simulated and exact values. \label{FigImpedance} } \end{center} \end{figure} The coupling between the advection and the diffusive approximation of the viscothermal losses is studied here, as well as the interaction between the simple waves (\ref{SystComplet4}). A constant radius $R=7$~mm is considered. Linear wave propagation is assumed ($b=0$). The tube is discretized using $N_x=1000$ points in space. The exciting source in (\ref{SystComplet3}) is the wavelet (\ref{Source}), with the same central frequency $f_c$ and amplitude $V$ as in section \ref{SecResoValVar}. When the outgoing wave ``+" reaches the end of the tube ($x=D$), the incoming wave ``$-$" is generated and propagates towards decreasing $x$. The velocity is displayed at two different times in figure \ref{FigPropaAR}. Due to the viscothermal losses, the amplitude of the wave diminishes during the simulation. Moreover, after 5.6~ms of propagation, the waveform is no longer symmetric, which illustrates the dispersive nature of the propagation. To compute the input impedance, a finer grid with $N_x=2000$ points is used: this large number of discretization points is required to obtain a sufficient frequency resolution. A receiver at $x=0$ records the pressure $p^-(0,t_n)$. The outgoing pressure $p^+(0,t_n)$ is known, corresponding to the exciting source. Fourier transforms in time of $p^\pm$ yield an estimate of the input impedance $Z$ of the tube: \begin{equation} Z(\omega)=Z_c\,\frac{\textstyle 1+r(\omega)}{\textstyle 1-r(\omega)}, \label{Impedance} \end{equation} with \begin{equation} Z_c=\frac{\textstyle \rho_0\,a_0}{\textstyle S},\qquad r(\omega)=\frac{\textstyle {\widehat{p^-}}(0,\omega)}{\textstyle {\widehat{p^+}}(0,\omega)}. \label{Ratio} \end{equation} Figure~\ref{FigImpedance} shows the modulus and the phase of the input impedance deduced from the numerical simulations. These quantities are compared with their analytical approximation given by \cite{Chaigne08} \begin{equation} \hspace{-0.5cm} Z=i\,Z_c\tan(kD),\quad k=\frac{\textstyle \omega}{\textstyle a_0}-i\,(1+i)\,3\cdot 10^{-5}\frac{\textstyle \sqrt{f}}{\textstyle R}. \label{Ztheo} \end{equation} Excellent agreement is observed, except near zero frequency. These small differences are due to the spectrum of the wavelet (\ref{Source}), which vanishes at $f=0$~Hz; this results in numerical inaccuracies in the ratio (\ref{Ratio}).
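The post-processing leading to figure \ref{FigImpedance} amounts to a few lines; the sketch below assumes that $p^\pm(0,t)$ have been resampled on a uniform time grid of step {\tt dt} before the Fourier transforms are applied.
\begin{verbatim}
import numpy as np

def input_impedance(p_plus, p_minus, dt, rho0, a0, S):
    # Z(omega) from the outgoing/incoming pressures recorded at x = 0.
    r = np.fft.rfft(p_minus)/np.fft.rfft(p_plus)   # reflection coefficient
    f = np.fft.rfftfreq(len(p_plus), dt)           # frequencies [Hz]
    Zc = rho0*a0/S                                 # characteristic impedance
    return f, Zc*(1.0 + r)/(1.0 - r)
\end{verbatim}
The inaccuracy near $f=0$ mentioned above shows up as an ill-conditioned ratio where both spectra vanish.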
\subsubsection{Complete Menguy-Gilbert model}\label{SecResoValTotal} \begin{figure}[htbp] \begin{center} \includegraphics{Sismo.eps} \caption{\label{FigSismo} Complete model of the resonator: nonlinear wave propagation, viscothermal losses, volumic dissipation, variable cross section. Time histories of the velocity at four receivers along the exponential horn.} \end{center} \end{figure} As a last experiment, we take into account all the effects in (\ref{Chester}). The exciting source is (\ref{Source}). The tube is discretized using $N_x=300$ points in space. The velocity is recorded at four receivers uniformly distributed along the exponential horn. Figure~\ref{FigSismo} displays the time history of the velocity $u=u^++u^-$ at these receivers. In each case, the velocity $u^+$ of the outgoing wave is recorded first (up to $t\approx 5$~ms), followed by the velocity $u^-$ of the incoming wave reflected by the end of the tube. As $x$ increases, the amplitude of $u^+$ decreases, due to three factors: the emergence of shocks, the intrinsic losses, and the increase of the cross section area (see section \ref{SecResoValVar}). On the contrary, the amplitude of $u^-$ increases as the location of the receivers decreases, from $x=1.2$~m down to $x=0$~m. This perhaps counter-intuitive observation is explained as follows: in the direction of propagation of the incoming wave (decreasing $x$), the cross section of the guide decreases. This results in an increasing amplitude, exceeding the effects of the losses and of the nonlinearity. Moreover, at each receiver, $u^-$ appears to be less distorted than $u^+$. Indeed, due to the boundary condition (\ref{SystComplet4}), the incoming wave $u^-$ experiences nonlinear effects which balance those experienced by the outgoing wave $u^+$ (see (\ref{SystComplet1})). The balance is complete in a lossless description and if no shock occurs on $u^+$. Here, a closer view of the figure would reveal that a shock does occur before $x=1.2$~m. Lastly, at each receiver, one notices that $u^-$ has a smaller amplitude than $u^+$. This is due to two causes. First, the losses act on both $u^+$ and $u^-$, hence their effects are cumulative. Secondly, a shock is a dissipative phenomenon, so that $u^-$ would have a smaller amplitude than $u^+$ whenever a shock occurs on $u^+$ (which is the case here), even if the viscothermal and volumic losses were ignored.

\section{Exciter}\label{SecExc} \subsection{Physical modeling}\label{SecExcPhys} \begin{figure}[htbp] \begin{center} \includegraphics[scale=1.2]{SystemeLevres.eps} \caption{\label{FigLevres} One-mass model for the lips.} \end{center} \end{figure} The musician's lips are modeled by a one-mass mechanical oscillator at the entry of the resonator \cite{Elliott82}. Only the vertical displacement of the top lip is modeled; the interaction with the static bottom lip is ignored. The top lip is modeled by a thin rigid rectangular plate of height $h$ and width $l$. It makes an angle $\varphi$ with the horizontal $x$-axis, so that the projected surface of the lip on the vertical axis is \begin{equation} \displaystyle A=h\,l\sin\varphi. \label{SurfProj} \end{equation} A spring with stiffness $k$ and a damper with coefficient $r$ are attached to the lip of mass $m$ (figure~\ref{FigLevres}).
The pressure in the musician's mouth is $p_m(t)$; the acoustic pressure $p_e$ at the entry of the resonator ($x=0$) depends upon the opening $y$ of the lips and upon the time $t$: \begin{equation} \begin{array}{lll} p_e(y,t)&=& p_e^+(y,t)+p_e^-(y,t),\\ [8pt] &=& p^+(0,t)+p^-(0,t). \end{array} \label{Pe} \end{equation} The balance of forces yields the ordinary differential equation satisfied by $y$: \begin{subnumcases}{\label{OscMeca}} \displaystyle m\ddot{y}+r\dot{y}+k(y-y_\textit{eq})=f(y,t),\label{OscMeca1}\\ [6pt] \displaystyle y(0)=y_0,\hspace{0.5cm}\dot{y}(0)=y_1,\label{OscMeca3} \end{subnumcases} where $y_\textit{eq}$ is the equilibrium position of the free oscillator, \begin{equation} f(y,t)=A\,(p_m(t)-p_e(y,t))\label{AeroForce} \end{equation} is the aeroacoustic force applied to the lip, and $(y_0,y_1)$ are the initial conditions. The flow is assumed to be stationary, incompressible, laminar and inviscid in the musician's mouth and under the lip. Consequently, Bernoulli's equation and the conservation of mass can be applied. The sudden cross section variation behind the lip creates a turbulent jet which dissipates all its kinetic energy, without pressure recovery in the mouthpiece \cite{hirschberg95}. It follows \cite{McIntyre83, These_Vergez} \begin{equation} \begin{array}{l} \displaystyle \hspace{-1cm} p_e(y,t) = \\ [8pt] \hspace{-1cm} \left\{\begin{array}{l} \displaystyle 2 p^-_e -\frac{\xi}{2}\psi y\left(\psi y-\sqrt{\psi^2 y^2 + 4\left| p_m - 2 p^-_e \right|} \right) \mbox{if } y > 0,\\ \\ \displaystyle 2 p^-_e \mbox{ else.} \end{array}\right. \end{array} \label{ExciterDepNL} \end{equation} The coefficients in (\ref{ExciterDepNL}) are \begin{equation} \hspace{-0.8cm} \xi(y,t)=\mbox{sgn}(p_m(t)-p_e(y,t))=\mbox{sgn}(p_m(t)-2p^-_e(y,t)) \label{CoeffsCoupl1} \end{equation} and \begin{equation} \psi = l\,Z_c \, \sqrt{\frac{2}{\rho_0}}=l\sqrt{2\,\rho_0}\frac{a_0}{S(0)}. \label{CoeffsCoupl2} \end{equation}

\subsection{Numerical modeling}\label{SecExcNum} \subsubsection{Numerical scheme}\label{SecExcNumScheme} The numerical integration of (\ref{OscMeca}) relies on a variable time step $\Delta t_n$, denoted $\Delta t$ for the sake of simplicity; as shown further in section \ref{SecExpAlgo}, it is the time step used for the wave propagation in the resonator. The approximation of the exact solution $y(t_n)$ is denoted $y_n$. Similarly, $\dot{y}(t_n)$ and $\ddot{y}(t_n)$ are approximated by $\dot{y}_n$ and $\ddot{y}_n$, respectively. The Newmark method is applied to (\ref{OscMeca}). This method relies upon two coefficients $\beta$ and $\eta$, and is second-order accurate in the case of a linear forcing. The values $\beta=1/4$ and $\eta=1/2$ lead to an unconditionally stable method \cite{Newmark59}. The Newmark method is written in predictor-corrector form. The predicted values of $y_{n+1}$ and $\dot{y}_{n+1}$ are computed from the known values at time $t_n$: \begin{equation} \begin{array}{l} \displaystyle \tilde{y}_{n+1} = y_n + \Delta t\,\dot{y}_n + (1-2\beta)\,\frac{{\Delta t}^2}{2}\,\ddot{y}_n,\\ \\ \displaystyle \tilde{\dot{y}}_{n+1} = \dot{y}_n + (1-\eta)\,\Delta t\,\ddot{y}_n. \end{array} \label{NewmarkP} \end{equation} The corrected values at time $t_{n+1}$ are \begin{equation} \begin{array}{l} \displaystyle y_{n+1} = \tilde{y}_{n+1} + \beta\,{\Delta t}^2\,\ddot{y}_{n+1},\\ \\ \displaystyle \dot{y}_{n+1} = \tilde{\dot{y}}_{n+1} + \eta\,\Delta t\,\ddot{y}_{n+1} . \end{array} \label{NewmarkC} \end{equation} To compute (\ref{NewmarkC}), one needs $\ddot{y}_{n+1}$.
For this purpose, the corrected values (\ref{NewmarkC}) are injected into (\ref{OscMeca1}), yielding the displacement of the lip at time $t_{n+1}$: \begin{equation} \begin{array}{l} \displaystyle \hspace{-0.5cm} y_{n+1}=\tilde{y}_{n+1}\\ [8pt] \displaystyle +\beta\,\Delta t^2\,\frac{f\left(y_{n+1},t_{n+1}\right)-r\,\tilde{\dot{y}}_{n+1}-k\,(\tilde{y}_{n+1}-y_\textit{eq})}{m+r\,\eta\,\Delta t+k\,\beta\,{\Delta t}^2}. \end{array} \label{Ynp1} \end{equation} The aeroacoustic force $f(y,t)$ in (\ref{AeroForce})-(\ref{ExciterDepNL}) depends nonlinearly upon $y$. Consequently, the displacement $y_{n+1}$ in (\ref{Ynp1}) is the solution of the fixed-point equation \begin{equation} g(z)=z, \label{GzZ} \end{equation} with \begin{equation} \begin{array}{l} \displaystyle \hspace{-0.5cm} g(z)=\tilde{y}_{n+1}\\ [8pt] \displaystyle+\beta\,\Delta t^2\,\frac{f\left(z,t_{n+1}\right)-r\,\tilde{\dot{y}}_{n+1}-k\,(\tilde{y}_{n+1}-y_\textit{eq})}{m+r\,\eta\,\Delta t+k\,\beta\,{\Delta t}^2}. \end{array} \label{FixedPoint} \end{equation} A fixed-point method is used to solve (\ref{FixedPoint}). It is initialized by $y_n$, and iterated until the relative variation in (\ref{FixedPoint}) does not exceed $10^{-13}$. At each step of the fixed-point method, one takes $p_e^-(z,t_{n+1})=p_0^{(n+1)-}$: this value of the incoming pressure at node 0 and time $t_{n+1}$ is known from the propagation step in the resonator (section \ref{SecResoNumSchem}). The coefficient $\xi$ in (\ref{ExciterDepNL}) follows from (\ref{CoeffsCoupl1}). Once the displacement $y_{n+1}$ is known, the acceleration is updated based on (\ref{NewmarkC}): \begin{equation} \ddot{y}_{n+1}=\frac{y_{n+1}-\tilde{y}_{n+1}}{\beta\,\Delta t^2}. \label{UpdateNewmark} \end{equation} The velocity $\dot{y}_{n+1}$ is deduced from (\ref{NewmarkC}) and (\ref{UpdateNewmark}).

\subsubsection{Validation}\label{SecExcNumVal} \begin{figure}[htbp] \begin{center} \begin{tabular}{c} (i)\\ \includegraphics[scale=1.0]{OscillationsLin.eps} \\ (ii)\\ \includegraphics[scale=1.0]{SchemasOrdreLin.eps} \end{tabular} \caption{\label{FigOrdreSchema} Numerical resolution of the ordinary differential equation (\ref{OscMeca}) with a linear step forcing. (i): time histories of the numerical and exact solutions; (ii): convergence measurements.} \end{center} \end{figure} No closed-form solution of (\ref{OscMeca}) is known in general. To assess the accuracy of the Newmark method, we therefore consider the linear case of a step forcing: $f(y,t)=H(t)$. The parameters are those of table~\ref{TabParam2}, with the initial conditions $y_0=y_\textit{eq}=0$~m and $y_1=0$~m/s. The numerical solution is computed over $N_t = 64$ time steps, up to 10~ms (here $\Delta t = 10/64$~ms is constant). Figure~\ref{FigOrdreSchema}-(i) compares the Newmark solution to the exact one. For completeness, the solution obtained with the backward Euler method is also displayed. Agreement is obtained between the Newmark solution and the exact one; on the contrary, the Euler solution suffers from a large numerical dissipation. Convergence measurements are performed by considering various numbers of time steps, from $N_t = 32$ to $N_t = 8192$, and by computing the numerical solution up to 10~ms. The errors between the numerical solutions and the exact solution are displayed in figure~\ref{FigOrdreSchema}-(ii) on a log-log scale. Second-order accuracy is obtained with Newmark's method, whereas only first-order accuracy is obtained with Euler's method.
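A minimal Python sketch of one Newmark step, including the fixed-point iteration, is given below; the function {\tt force} passed as an argument is assumed to evaluate the aeroacoustic force (\ref{AeroForce})-(\ref{ExciterDepNL}) with the incoming pressure $p_0^{(n+1)-}$ frozen at its known value.
\begin{verbatim}
def newmark_step(y, yd, ydd, dt, m, r, k, y_eq, force, t_next,
                 beta=0.25, eta=0.5, tol=1.0e-13, it_max=200):
    # One step for m y'' + r y' + k (y - y_eq) = f(y, t).
    y_p = y + dt*yd + (1.0 - 2.0*beta)*0.5*dt**2*ydd     # predictors
    yd_p = yd + (1.0 - eta)*dt*ydd
    den = m + r*eta*dt + k*beta*dt**2
    z = y                                                # start from y_n
    for _ in range(it_max):
        g = y_p + beta*dt**2*(force(z, t_next)
                              - r*yd_p - k*(y_p - y_eq))/den
        if abs(g - z) <= tol*max(abs(g), 1.0e-30):       # relative variation
            z = g
            break
        z = g
    ydd_n = (z - y_p)/(beta*dt**2)                       # corrected acceleration
    yd_n = yd_p + eta*dt*ydd_n                           # corrected velocity
    return z, yd_n, ydd_n
\end{verbatim}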
\section{Numerical experiments}\label{SecExp} \subsection{Summary of the algorithm}\label{SecExpAlgo} Here we sum up the coupling between the resonator and the exciter. Time-marching from time $t_n$ to $t_{n+1}$ proceeds as follows ($i=0,\cdots,N_x$): \begin{enumerate} \item Resonator \begin{enumerate} \item computation of the outgoing and incoming velocities $u_{i>0}^{(n+1)+}$ and $u_{i<N_x}^{(n+1)-}$ using the numerical scheme (\ref{AlgoSplitting}); \item computation of the incoming pressure at the input of the instrument $p_0^{(n+1)-}$ according to (\ref{SurP}); \item update of $u_{N_x}^{(n+1)-}$ at $x=D$, according to the reflection condition (\ref{SystComplet4}). \end{enumerate} \item Exciter \begin{enumerate} \item computation of the lip opening $y_{n+1}$ in (\ref{NewmarkC}), based on the Newmark method and on the pressure at the entry of the resonator $p_e^{n+1}$ (\ref{ExciterDepNL}); \item computation of the outgoing pressure at the resonator's entry $p_0^{(n+1)+}=p_e^{n+1}-p_0^{(n+1)-}$ (\ref{Pe}), where $p_0^{(n+1)-}$ is known from step 1-(b) of the algorithm; \item update of the forcing source $u_0^{(n+1)+}$ in the resonator (\ref{SystComplet3}), based on $p_0^{(n+1)+}$ and (\ref{SurP}). \end{enumerate} \item Incrementation \begin{enumerate} \item computation of the time step $\Delta t$, according to the CFL condition (\ref{CFL}); \item assignment $n\gets n+1$. \end{enumerate} \end{enumerate}

\subsection{Configuration}\label{SecExpConf} \begin{table}[htbp] \caption{\label{TabParam2}Physical and geometrical parameters of the lips.} \begin{center} {\renewcommand{\arraystretch}{1.3} \renewcommand{\tabcolsep}{0.3cm} \begin{tabular}{c|c|c} \hline $m$ (kg) & $k$ (N/m) & $r$ (N.s/m) \\ \hline $1.78\cdot 10^{-4}$ & $1278.8$ & $\sqrt{m\, k}/4$ \\ \hline\hline $l$ (m) & $A$ (m$^2$) & $p_m$ (Pa) \\ \hline $10^{-2}$ & $10^{-4}$ & $20\cdot 10^{3}$ \\ \hline\hline $y_0$ (m) & $y_1$ (m/s) & $y_{\it eq}$ (m)\\ \hline $4\cdot 10^{-3}$ & $-4$ & $5\cdot 10^{-4}$ \\ \hline \end{tabular}} \end{center} \end{table} The wave propagation is described by the complete Menguy-Gilbert model in a resonator with a constant radius $R=7$~mm. The distance $D=1.4$~m from the input to the output of the resonator is discretized on $N_x=100$ points. The parameters of the lip model are given in table \ref{TabParam2}. These parameters are taken from various publications \cite{adachi2, vilain2003a} and adjusted by trial and error until self-oscillations are obtained. The output of the model is the acoustic velocity at the end of the tube, $u(D,t)=u^+(D,t)+u^-(D,t)$. Considering that the open end of the cylinder radiates as a monopole, $u(D,t)$ is converted into $p_{rec}(t)$, the pressure measured at an arbitrary distance $D_{rec}=10$~m from the output of the cylinder, through the relation: \begin{equation} p_{rec}(t) = \frac{\rho_0\,S}{4\,\pi\,D_{rec}} \frac{\partial u}{\partial t}(D,t). \end{equation} This model could obviously be refined: since radiation only acts as a boundary condition, it is completely independent of the propagation inside the waveguide, which is the focus of this paper.

\subsection{Results}\label{SecExpRes} Various numerical experiments are carried out in order to assess the influence of the nonlinear wave propagation on the behavior of the model. Simulations are carried out in Scilab and last 18 minutes for each computed second on a mid-range laptop (Intel Core i7, 2.4 GHz, 8 GB RAM, 2011).
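For completeness, the conversion of $u(D,t)$ into $p_{rec}(t)$, together with the uniform resampling applied before any analysis (described just below), can be sketched in a few lines; the routine assumes that the variable-step time axis produced by the simulation is stored as an array {\tt t}.
\begin{verbatim}
import numpy as np

def received_pressure(t, uD, rho0, S, D_rec=10.0, fs=44100.0):
    # Monopole radiation: p_rec proportional to du/dt at the open end,
    # after linear resampling of the variable-step signal at fs.
    t_uni = np.arange(t[0], t[-1], 1.0/fs)
    u_uni = np.interp(t_uni, t, uD)
    p_rec = rho0*S/(4.0*np.pi*D_rec)*np.gradient(u_uni, 1.0/fs)
    return t_uni, p_rec
\end{verbatim}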
Time domain signals $p_{rec}(t)$ presented in the following figures are normalized by their maximal value. Originally computed with a variable time step, they are then resampled at a frequency $f_{s}=44.1$~kHz using linear interpolation. In figure~\ref{f:RampDesc_sig}, the blowing pressure $p_m(t)$ decreases over 4~s, from $p_m=20$~kPa down to $p_m=0$~kPa. Two regimes are considered: linear wave propagation ($b=0$) and nonlinear wave propagation. The recorded pressure $p_{rec}$ displays radically different time envelopes in the two cases (left column, middle and bottom): a shorter attack transient in the linear case, and an extinction threshold occurring at a lower $p_m$ in the nonlinear case. In the linear case, the signal is more symmetric with respect to zero. Moreover, the regime is slightly quasi-periodic in the nonlinear case at high oscillating amplitudes. A closer view of the signals (right column) reveals typical waveforms with sharp peaks in the nonlinear case. \begin{figure} \includegraphics[width=1.05\columnwidth]{RampDesc_sig.eps} \caption{Progressive decrease of the blowing pressure, from $p_m=20$~kPa down to $p_m=0$~kPa. Left: time histories of $p_m$ (top), $p_{rec}/\!\max |p_{rec}|$ with linear wave propagation (middle), and with nonlinear wave propagation (bottom). Right: zoom on a few periods of $p_{rec}/\!\max |p_{rec}|$ in the linear and nonlinear cases. \label{f:RampDesc_sig}} \end{figure} \begin{figure} \includegraphics[width=1.05\columnwidth]{RampDesc_ind.eps} \caption{Descriptors calculated with the MIR Toolbox \cite{mirtoolbox} from the time domain signals presented in figure~\ref{f:RampDesc_sig}. Top: playing frequency; bottom: spectral centroid. \label{f:RampDesc_ind}} \end{figure} Based on these data, two descriptors are computed in the frequency domain and displayed in figure~\ref{f:RampDesc_ind}: the playing frequency (top) and the spectral centroid (bottom). During an initial phase, the playing frequencies differ significantly between the linear and nonlinear cases: the instrument plays at higher frequencies if nonlinear propagation is taken into account, by up to $45$~Hz around $t=0.25$~s (+157 cents, i.e. between a semitone and a whole tone). The influence of nonlinear propagation on the playing frequency vanishes around $t=0.35$~s. It is worth noting that even at high oscillating amplitude, a negligible difference is observed after $t=0.35$~s: only 3 cents around $t=0.5$~s. It is also striking that the attack time (defined here as the time for the signal to reach its maximum amplitude from $t=0$~s) is almost twice as long in the linear case as in the nonlinear case ($0.35$~s versus $0.18$~s, see figure \ref{f:RampDesc_sig}). However, the time during which the playing frequency varies significantly is much longer in the nonlinear case ($0.35$~s versus $0.24$~s, see figure \ref{f:RampDesc_ind}). The influence of the nonlinear propagation on the playing frequency had already been highlighted numerically in the case of the trombone \cite{msallam00}, but only for steady-state regimes, with lower blowing pressures and a simplified model of the nonlinear propagation. Deviations of less than 5 cents at weak dynamics were reported, in agreement with our observations. But the picture is very different during the transient phase, as explained above. The bottom picture of figure~\ref{f:RampDesc_ind} confirms that the nonlinear propagation is associated with an enrichment of the sound spectrum towards high frequencies.
Indeed, the spectral centroid is up to three times higher than in the case of linear propagation. Moreover, in the case of linear propagation, the spectral centroid is nearly constant, which suggests that the nonlinearity due to the exciter (\ref{ExciterDepNL}) cannot explain the spectral enrichment features of brassy sounds. On the contrary, in the case of nonlinear propagation, the spectral centroid is a monotonic function of the oscillating amplitude. Another numerical experiment is carried out by linearly increasing $p_m$ from $p_m=0$~kPa to $p_m=20$~kPa during 5~s. Time domain signals are presented in figure~\ref{f:RampMont_sig.eps} (left column). The most striking feature is the fact that the oscillation threshold is shifted towards higher values of $p_m$ when the nonlinear propagation is ignored ($b=0$ in (\ref{SystComplet1})). One should preferably speak of a ``dynamic oscillation threshold" \cite{bergeot2013a} instead of an ``oscillation threshold", since these observations are made while the bifurcation parameter (here the blowing pressure $p_m$) is being varied in time. This result is surprising at first glance, since the consequences of the nonlinear propagation are expected to vanish at the oscillation threshold, where $b\frac{(u^\pm)^2}{2} \ll a|u^\pm|$ in (\ref{SystComplet1}). However, the behavior of dynamic bifurcation thresholds can be counterintuitive, even when considering small perturbations \cite{bergeot2013b}. The so-called bifurcation delay observed here is around $0.2$~s, which corresponds to a pressure difference of around $800$~Pa. In the right column of figure~\ref{f:RampMont_sig.eps}, the spectrograms of the time signals highlight two major features: first, the signal calculated with nonlinear propagation has a much more broadband structure; secondly, its spectral content evolves with the amplitude of the signal much more than under the linear propagation hypothesis. This is consistent with experimental observations on brass instruments \cite{hirschberg96b, rendon2013}. \begin{figure} \includegraphics[width=1.05\columnwidth]{RampMont_sig.eps} \caption{Progressive increase of the blowing pressure from $p_m=0$~kPa to $p_m=20$~kPa during $5$~s (only a zoom is shown between $t=3.75$~s and $t=5$~s). Left: time histories of $p_m$ (top), $p_{rec}/\!\max |p_{rec}|$ with linear wave propagation (middle) and with nonlinear wave propagation (bottom). Right: spectrogram of $p_{rec}/\!\max |p_{rec}|$ in the case of linear (middle) and nonlinear wave propagation (bottom). \label{f:RampMont_sig.eps}} \end{figure} In a third experiment, a constant blowing pressure $p_m=20$~kPa is considered. The stiffness $k$ of the lip model follows a symmetric decrease / increase during 6~s between $k=3000$~N.m$^{-1}$ and $k=100$~N.m$^{-1}$, as shown in figure~\ref{f:FreqVarTemps} (top). As expected, the model plays on different periodic regimes (corresponding to the $2^{nd}$ to the $6^{th}$ registers), the frequencies of which are displayed in the bottom picture. The most striking result is that, for the parameter values chosen for the simulation, the lowest register is not playable in the case of nonlinear propagation. A closer view reveals that, for each register, the frequency jumps to the neighboring registers (lower and upper) do not occur at the same thresholds. Concerning the playing frequencies, the differences may be weak but clearly audible, and they are all the larger as the playing frequency
(i.e. the register) is low: up to $10$~cents on the $6^{th}$ register, up to $11.5$~cents on the $5^{th}$, up to $16$~cents on the $4^{th}$, and up to $36$~cents on the $3^{rd}$. The playing frequency is always lower in the case of nonlinear propagation when $k$ is decreased. When $k$ is increased, for each register, the playing frequency is lower in the case of nonlinear propagation during the first half of the register. However, since it increases faster than in the case of linear propagation, the playing frequency in the nonlinear case becomes higher in the second half of the register. Here again, nonlinear propagation appears to have a noticeable effect during the transients of a control parameter. In order to highlight hysteresis effects, the same data are plotted with respect to the resonance frequency of the lip model in figure~\ref{f:FreqVarFreq}, for linear (left) and nonlinear propagation (right). Hysteresis in such experiments is known to result from two mechanisms: the coexistence of stable periodic regimes, and the variation with time of the bifurcation parameter (dynamic bifurcations). The left part of the figure shows familiar simulation results \cite{silva2014a}. Considering the nonlinear propagation does not alter the hysteresis significantly, except for the $3^{rd}$ register (the lowest one in the right picture). In this case, a larger hysteresis is observed with nonlinear propagation, which is possibly linked to the fact that the model failed to produce the $2^{nd}$ register. \begin{figure} \includegraphics[width=1.05\columnwidth]{FreqVarTemps.eps} \caption{Symmetric decrease / increase of the stiffness of the lip model between $k=3000$~N.m$^{-1}$ and $k=100$~N.m$^{-1}$. The blowing pressure is constant: $p_m=20$~kPa. Top: time history of $k$. Bottom: time history of the playing frequency with linear wave propagation (blue) and under the nonlinear wave propagation hypothesis (red). \label{f:FreqVarTemps}} \end{figure} \begin{figure} \includegraphics[width=1.05\columnwidth]{FreqVarFreq.eps} \caption{Same data as in figure~\ref{f:FreqVarTemps}. The playing frequency is plotted with respect to the lip resonance frequency $\frac{1}{2\pi}\sqrt{\frac{k}{m}}$, with linear propagation ($b=0$ in (\ref{SystComplet1})) (left) and with nonlinear propagation (right). \label{f:FreqVarFreq}} \end{figure}

\section{Conclusion}\label{SecConclu} A time-domain numerical modeling of brass instruments has been proposed. The propagation of outgoing and incoming nonlinear acoustic waves has been considered, taking into account the viscothermal losses at the boundaries of the resonator. The coupling with a lip model has also been included, enabling the simulation of self-sustained oscillations in brass instruments. The resulting software has been extensively validated, and preliminary applications to configurations of interest in musical acoustics have been demonstrated. In its current form, our simulation tool can be used to investigate various open questions in acoustics. The first one concerns the frequency response of a nonlinear acoustic resonator, which has already been the subject of experimental and theoretical works \cite{Ilinskii98,Hamilton01}. For this purpose, the methodology followed to determine the linear impedance (section \ref{SecResoValImp}) can be adapted to the nonlinear regime. A second application is to study numerically the threshold of oscillations in brass instruments.
Based on a modal representation of the field in the resonator, Floquet theory can be applied in the linear regime \cite{Ricaud09}. But to our knowledge, no results are available when the nonlinearity of the wave propagation is taken into account. On the contrary, the numerical tool does not suffer from such a limitation. The physical modeling also has to be improved. In particular, considering simple outgoing and incoming waves is a crude assumption in a tube with a variable cross section. In the linear regime of propagation, the Webster-Lokshin wave equation provides a more realistic framework \cite{Haddar10}. The extension of this equation to the nonlinear regime of propagation has been considered by Bilbao and Chick \cite{Bilbao13}, but without the viscothermal losses. The derivation of the full bidirectional system, incorporating nonlinear wave propagation and viscothermal losses in a tube of variable cross section, is a subject of current research.
\section{Introduction} \label{Intro} One of the most important questions in a gravitational theory (such as GR) and in relativistic astrophysics is the gravitational collapse of a massive star under its own gravity at the end of its life cycle: a process in which a sufficiently massive star, on exhausting its nuclear fuel, undergoes a continual gravitational collapse without achieving an equilibrium state \cite{joshb}. According to the singularity theorems in GR \cite{sintheo}, the spacetimes describing the solutions of Einstein's field equations in a typical collapse scenario would inevitably admit singularities\footnote{These are the spacetime events where the metric tensor is undefined or is not suitably differentiable, the curvature scalars and densities are infinite, and the existing physical framework breaks down \cite{hawsin}.}. These theorems are based on three main assumptions under which the existence of a spacetime singularity is foretold in the form of geodesic incompleteness of the spacetime. The first premise is a suitable causality condition that ensures a physically reasonable global structure of the spacetime. The second premise is an energy condition that requires the positivity of the energy density in the classical regime, as seen by a local observer. The third one demands that gravity be strong enough that trapped surface\footnote{A trapped surface is a closed two-surface on which both in-going as well as out-going light signals normal to it are necessarily converging \cite{FROLOV}.} formation must occur during the dynamical evolution of a continual gravitational collapse. \par The first detailed treatment of the gravitational collapse of a massive star, within the framework of GR, was published by Oppenheimer and Snyder \cite{OS}. They concluded that the gravitational collapse of a spherically symmetric homogeneous dust cloud would end in a black hole. Such a black hole is characterized by the presence of a horizon which covers the spacetime singularity. This scenario provides the basic motivation for the physics of black holes and for the cosmic censorship conjecture (CCC) \cite{CCC}. This conjecture states that the spacetime singularities that develop in a gravitational collapse scenario are necessarily covered by event horizons, thus ensuring that the collapse end-product is a black hole only. As neither a proof nor a stringent mathematical formulation of the CCC has been available so far, a great deal of effort has been devoted over the past decades to the detailed study of various collapse settings in GR, in order to extend our understanding of this phenomenon. \par While black hole physics has given rise to several interesting theoretical as well as astrophysical advances, it is necessary, however, to investigate more realistic collapse settings in order to put black hole physics on a firm footing. This is because the OS model is rather idealized, and pressures as well as inhomogeneities within the matter distribution would play an important role in the collapse dynamics of any realistic stellar object. It is therefore of significant importance to broaden the study of gravitational collapse to more realistic models in order to address the following question: in what ways can the final outcome depart from that of the homogeneous dust cloud collapse?
Within this context, several gravitational collapse settings have been investigated over the past years which exhibit the occurrence of naked singularities\footnote{In this case, the horizons are delayed or fail to form during the collapse process, as governed by the internal dynamics of the collapsing object. The super-dense regions are then visible to external observers, and a naked singularity forms \cite{joshb}.}. Work along this line has been reported in the literature within a variety of models; among them we mention the role of inhomogeneities within the matter distribution on the final fate of gravitational collapse \cite{Joshi-inhom}, collapse of a perfect fluid with heat conduction \cite{heat-con}, effects of shear on the collapse end-product \cite{shear}, and the collapse process in the context of different gravitational theories \cite{alt-col} (see also \cite{REC} for recent reviews). \par On the other hand, though GR has emerged as a highly successful theory of gravitation, it suffers from the occurrence of spacetime singularities under physically reasonable conditions. It is therefore reasonable to seek alternative theories of gravitation with geometrical attributes that are not present in GR. This allows for the inclusion of more realistic matter fields within the structure of stellar objects, in order to cure the singularity problem. In this regard, since realistic stars are made up of fermions, it would be difficult to ignore the role of the intrinsic angular momentum (spin) of fermions in collapse studies. As we shall see, the inclusion of the spin of fermions, and thus of its possible effects on the collapse dynamics, could be of significant importance, especially at the late stages of the collapse, where these effects could act against the gravitational attraction and ultimately balance it. In such a scenario the collapse may no longer terminate in a spacetime singularity; it is instead replaced by a bounce, a point at which the contraction of the matter cloud stops and an expanding phase begins. However, if the spin effects are explicitly present, then GR is no longer the relevant theory to describe the collapse dynamics. In GR, the energy-momentum tensor couples to the metric, while in the presence of fermions, the intrinsic angular momentum is expected to couple to a geometrical quantity related to the rotational degrees of freedom of the spacetime, the so-called spacetime torsion. This is not possible within GR, so one is compelled to modify the theory in order to introduce torsion and relate it to the spin degrees of freedom of fermions. This point of view suggests a spacetime manifold which is non-Riemannian. One such framework, within which the inclusion of the spin effects of fermions can be worked out and non-trivial dynamical consequences extracted, is the Einstein-Cartan (EC) theory \cite{ECT,ECT1,ECT2,ECT3}. Within this context, many cosmological models have been found in which the unphysical big bang singularity is replaced with a bounce at a finite value of the scale factor \cite{spin-bounce,POPPRD}. From another perspective, recent research has shown that in the final stages of a typical collapse scenario, where a high-energy regime governs, the effects of quantum gravity would regularize the singularity that occurs in the classical model \cite{Bojowald-PRL-2005}.
In cosmological settings, it has been shown that non-perturbative quantum geometric effects in loop quantum cosmology replace the classical singularity by a quantum bounce in the high-energy regime, where the loop quantum modifications are dominant \cite{SVV}. However, since the full quantum theory of gravity has not yet been discovered, it is well motivated to investigate the effects on the final state of collapse of the repulsive spin interactions of fermions, which are physically better founded and observationally confirmed. The organization of this paper is as follows: In Sec. \ref{EC} we give a brief review of the field equations in EC theory and the phenomenological Weyssenhoff model. In Sec. \ref{SSR}, we study the collapse dynamics in the presence of spin effects and the possibility of singularity removal. Finally, conclusions are drawn in Sec. \ref{con}. \section{Einstein-Cartan theory}\label{EC} As we know, the dynamics of the gravitational field, i.e., the metric field, in GR is described by the Hilbert-Einstein action, whose Lagrangian is linear in the curvature scalar. Contrary to GR, in gravity with torsion there is considerable freedom in constructing the dynamical scheme, since many more invariants can be built from the torsion and curvature tensors. Two classes of models have attracted the most attention, namely the EC theory and the quadratic theories. In this work we are interested in the former, for which the action integral is given by \begin{equation} S=\int d^4x\sqrt{-g} \left\{\f{-\hat{R}}{2\kappa}+\mathcal{L}_m\right\}, \label{action} \end{equation} where $\kappa=8\pi G$ (we set $c=1$) is the gravitational coupling constant, $\mathcal L_m$ is the Lagrangian of the material fields and $\hat{R}$ is the EC curvature scalar constructed out of the general asymmetric connection $\hat{\Gamma}^{\alpha}_{~\mu\nu}$, i.e., the connection of the Riemann-Cartan manifold. The torsion tensor $T^{\alpha}_{~\mu\nu}$ is defined as the antisymmetric part of the affine connection, \begin{equation}\label{TT} T^{\mu}_{~\alpha\beta}=\f{1}{2}\left[\hat{\Gamma}^{\mu}_{~\alpha\beta}-\hat{\Gamma}^{\mu}_{~\beta\alpha}\right]. \end{equation} From the metricity condition, $\hat{\nabla}_{\alpha}g_{\mu\nu}=0$, we can find the affine connection as \begin{equation}\label{AFC} \hat{\Gamma}^{\mu}_{~\alpha\beta}=\big\{^{\,\mu} _{\alpha\beta}\big\}+K^{\mu}_{~\alpha\beta}, \end{equation} where the first part is the Christoffel symbol and the second part is the contorsion tensor, defined as \begin{equation}\label{contortion} K^{\mu}_{~\alpha\beta}=T^{\mu}_{~\alpha\beta}+T_{\alpha\beta}^{~~\,\mu}+ T_{\beta\alpha}^{~~\,\mu}. \end{equation} Extremizing the total action with respect to the contorsion tensor gives the Cartan field equation as \begin{equation}\label{FEEC} T^{\alpha}_{~\mu\beta}-\delta^{\alpha}_{\,\beta}T^{\gamma}_{~\,\mu\gamma}+\delta^{\alpha}_{\,\mu}T^{\gamma}_{~\,\beta\gamma}=-\f{1}{2}\kappa\tau_{\mu\beta}^{~~\alpha}, \end{equation} where $\tau^{\mu\alpha\beta}=2\left(\delta\mathcal L_m/\delta K_{\mu\alpha\beta}\right)/\sqrt{-g}$ is the spin angular momentum tensor \cite{ECT3}. It is worth noting that the equation governing the torsion tensor is purely algebraic, i.e., the torsion is not allowed to propagate beyond the matter distribution as a torsion wave or through any interaction of non-vanishing range \cite{ECT3}; therefore it can only be nonzero inside material bodies.
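To make the algebraic character of (\ref{FEEC}) fully explicit, it may be helpful to record a step not spelled out above (our addition): contracting (\ref{FEEC}) over $\alpha$ and $\beta$ relates the traces of the torsion and spin tensors as \begin{equation} T^{\gamma}_{~\mu\gamma}=\f{\kappa}{4}\tau_{\mu\beta}^{~~\beta}, \end{equation} so that whenever the spin tensor is traceless, as will be the case for the Weyssenhoff fluid under the Frenkel condition introduced below, the torsion is traceless as well and (\ref{FEEC}) inverts directly to $T^{\alpha}_{~\mu\beta}=-\f{1}{2}\kappa\tau_{\mu\beta}^{~~\alpha}$.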
Varying the total action with respect to the metric tensor leads to Einstein\rq{}s field equation with additional terms on the curvature side that are quadratic in the torsion tensor \cite{Venzo}. Substituting for the torsion tensor from equation (\ref{FEEC}) into these terms, we get the combined field equations as \cite{ECT3,POPPRD,NPOP,Venzo} \begin{eqnarray}\label{COMFE} G_{\mu\beta}\left(\{\}\right)=\kappa\left({\mathcal T}_{\mu\beta}+\Sigma_{\mu\beta}\right), \end{eqnarray} where ${\mathcal T}_{\mu\beta}=2\left(\delta\mathcal L_m/\delta g^{\mu\beta}\right)/\sqrt{-g}$ is the dynamical energy-momentum tensor and $\Sigma_{\mu\beta}$ can be considered as representing the contribution of an effective spin-spin interaction \cite{ECT3}, i.e., the product terms \begin{eqnarray}\label{SSCI} \Sigma_{\mu\beta}&=&\f{1}{2}\kappa\bigg[\tau_{\mu\alpha}^{~~~\alpha}\tau_{\beta\gamma}^{~~~\!\!\gamma}-\tau_{\mu}^{~\alpha\gamma}\tau_{\beta\gamma\alpha}-\tau_{\mu}^{~\alpha\gamma}\tau_{\beta\alpha\gamma}\nonumber\\&+&\f{1}{2}\tau^{\alpha\gamma}_{~~~\mu}\tau_{\alpha\gamma\beta}+\f{1}{4}g_{\mu\beta}\left(2\tau_{\alpha\gamma\epsilon}\tau^{\alpha\epsilon\gamma} -2\tau_{\alpha~\gamma}^{~\gamma}\tau^{\alpha\epsilon}_{~~~\epsilon} +\tau^{\alpha\gamma\epsilon}\tau_{\alpha\gamma\epsilon}\right)\bigg]. \end{eqnarray} \par It is now clear that the second term on the right-hand side of (\ref{COMFE}) represents a correction (though a very weak one at ordinary densities, as this term carries a factor of $\kappa^2$) to the dynamical energy-momentum tensor, which takes into account the spin contributions to the geometry of the manifold\footnote{We note that if the spin is switched off, the field equation (\ref{COMFE}) reduces to the ordinary Einstein\rq{}s field equation.}. However, the spin corrections are significant only at the late stages of the gravitational collapse of a compact object, where super-dense regions of extreme gravity are involved. There is therefore good motivation to investigate the collapse process of material fluid sources which are endowed with spin. Let us now apply equation (\ref{COMFE}) to estimate the influence of spin in the case of the Weyssenhoff fluid, which generalizes the perfect fluid of GR to the case of non-vanishing spin. This model of the fluid was first studied by Weyssenhoff and Raabe \cite{W1947} and extended by Obukhov and Korotky in order to build cosmological models based on the EC theory \cite{KCQG1987}. In this paper we employ an ideal Weyssenhoff fluid, a continuous medium whose elements are characterized by the intrinsic angular momentum (spin) of the particles. In this model the spin density is described by the second-rank antisymmetric tensor $S_{\mu\nu}=-S_{\nu\mu}$. The spin tensor for the Weyssenhoff fluid is then postulated to be \begin{equation}\label{FC} \tau_{\mu\nu}^{~~\alpha}=S_{\mu\nu}{\rm U}^{\alpha}, \end{equation} where ${\rm U}^{\alpha}$ is the four-velocity of the fluid element. The Frenkel condition, which arises from varying the Lagrangian of the sources \cite{KCQG1987}, requires $S^{\mu\nu}{\rm U}_{\nu}=0$. This condition further restricts the torsion tensor to be traceless. From the microscopic viewpoint, a randomly oriented gas of fermions is the source of the spacetime torsion. However, we have to treat this issue from a macroscopic perspective, which means that a suitable spacetime averaging has to be performed.
In this respect, the average of the spin density tensor vanishes, $\langle S_{\mu\nu} \rangle=0$ \cite{ECT3,G1986}. But even though this term vanishes at the macroscopic level, the square of the spin density tensor, $S^2=\f{1}{2}\langle S_{\mu\nu}S^{\mu\nu}\rangle$, contributes to the total energy-momentum tensor. Taking these considerations into account, the relations (\ref{COMFE})-(\ref{FC}) then give the Einstein\rq{}s field equation with spin correction terms \cite{ECT3,NPOP} \begin{equation}\label{EFESSP} G_{\mu\nu}=\kappa\left(\rho+p-\f{\kappa}{2}S^2\right){\rm U}_{\mu}{\rm U}_{\nu}-\kappa\left(p-\f{\kappa}{4}S^2\right)g_{\mu\nu}, \end{equation} where $\rho$ and $p$ are the usual energy density and pressure of the perfect fluid, satisfying a barotropic equation of state $p=w\rho$. Thus, the EC equation for such a spin fluid is equivalent to the Einstein\rq{}s equation for a perfect fluid with effective energy density $\rho_{\rm eff}=\rho-\f{\kappa}{4}S^2$ and effective pressure $p_{\rm eff}=p-\f{\kappa}{4}S^2$. This estimate shows that the contribution of the spin of fermions to the gravitational interaction is negligible at normal matter densities (e.g., in the early stages of the collapse process), while in the late stages of the collapse, where one encounters ultra-high energy densities, it is the spin contribution that decides the final fate of the collapse scenario. This is the subject of our next discussion. \section{Spin effects on the collapse dynamics and singularity removal} \label{SSR} The study of the gravitational collapse of a compact object and its importance in relativistic astrophysics goes back to the work of Datt \cite{Datt-ZPhys} and OS \cite{OS} (see also \cite{PEDFAB} for a pedagogical discussion). Their model, which simplifies the complexity of such an astrophysical scenario, describes the gravitational collapse of a homogeneous dust cloud with no rotation and internal stresses in the framework of GR. Taking the interior geometry of the collapsing object to be that of the FLRW metric, they investigated the dynamics of the continual gravitational collapse of such a matter distribution under its own weight and showed that, for an observer comoving with the fluid, the radius of the star shrinks to zero and the energy density diverges in a finite proper time. For this idealized model, in which a black hole develops as the collapse end-state, the only evolving portion of the spacetime is the interior of the collapsing object, while the exterior spacetime remains that of the Schwarzschild solution with a dynamical boundary. However, in more realistic scenarios, the dynamical evolution of a collapse setting would be significantly different in the later stages of the collapse, where inhomogeneities are introduced in the densities and pressures. These effects could alter the dynamics of the horizons and, consequently, the fate of the collapse scenario \cite{PSMASAR}. \par As we shall see in this section, spin effects within more realistic stellar collapse models could considerably affect the collapse dynamics. To this end, the matter content of the collapsing object is taken to be a homogeneous and isotropic Weyssenhoff fluid that collapses under its own gravity.
We then parametrize the interior line element as \begin{equation}\label{FRWL} ds^2=dt^2-\f{a(t)^2dr^2}{1-kr^2}-R^2(t,r)d\Omega^2, \end{equation} where $R(t,r)=ra(t)$ is the physical radius of the collapsing star, $a(t)$ is the scale factor, $k$ is a constant related to the curvature of the spatial metric, and $d\Omega^2$ is the standard line element on the unit two-sphere. The field equations then read \begin{align}\label{FE12} \left\{\begin{array}{l} \left(\f{\dot{a}}{a}\right)^2+\f{k}{a^2}=\f{8\pi G}{3}\rho-\f{(4\pi G)^2}{3}S^2, \\ \\ \f{\ddot{a}}{a}=-\f{4\pi G}{3}(\rho+3p)+\f{2}{3}(4\pi G)^2S^2. \end{array}\right. \end{align} The contracted Bianchi identities give rise to the continuity equation \begin{equation}\label{ConEq} \dot{\rho}_{\rm eff}=-3\f{\dot{a}}{a}(\rho_{\rm eff}+p_{\rm eff}), \end{equation} whence we have \begin{equation}\label{KAS} \dot{\rho}=-3\f{\dot{a}}{a}(\rho+p),~~~(S^2\dot{)}=-6\f{\dot{a}}{a}S^2. \end{equation} The first of the above equations gives \begin{equation}\label{RHOA} \rho=\rho_i \left(\frac{a}{a_i}\right)^{-3(1+w)}, \end{equation} where $\rho_i$ and $a_i$ are the energy density and the scale factor at the initial epoch. A suitable averaging procedure leads to the following relation between the spin squared and the energy density \cite{NPPL1983}: \begin{equation}\label{S2} S^2= \f{\hbar^2}{8}A_w^{-\f{2}{1+w}}\rho^{\f{2}{1+w}}, \end{equation} where $A_w$ is a dimensional constant that depends on the equation of state parameter. We note that substituting (\ref{RHOA}) into the above expression leads to $S^2 \propto a^{-6}$, which is nothing but the solution of the second part of (\ref{KAS}). The field equations can then be re-written as \begin{align}\label{FERE} \left\{\begin{array}{l} \left(\f{\dot{a}}{a}\right)^2+\f{k}{a^2}=2Ca^{-3(1+w)}-Da^{-6}, \\ \\ \f{\ddot{a}}{a}=-C(1+3w)a^{-3(1+w)}+2Da^{-6}, \end{array}\right. \end{align} where $ C=\f{4\pi G}{3}\rho_i {a_i}^{3(1+w)}$ and $D=\f{(4\pi G)^2}{24}\hbar^2 A_w^{-\f{2}{1+w}}\rho_i^{\f{2}{1+w}}a_i^6$. Next we proceed to study the collapse evolution for different values of the spatial curvature. We assume that the star begins its contraction phase from an initially static configuration, i.e., $\dot{a}(t_i)=0$, where $t_i$ is the initial time at which the collapse commences. Thus, from the first part of (\ref{FERE}) we find \begin{equation}\label{k} k=\left[\f{2C}{D}-a_i^{3(w-1)}\right]Da_i^{-(1+3w)}. \end{equation} Depending on the sign of the expression in square brackets, the constant $k$ may be positive, negative or zero. Therefore we may write \begin{align}\label{signk} \left\{\begin{array}{l} k>0~~~~~~~~\f{2C}{D}>a_i^{3(w-1)}, \\ \\ k\leq0~~~~~~~~\f{2C}{D}\leq a_i^{3(w-1)}. \end{array}\right. \end{align} Let us consider the dust fluid ($w=0$), for which the case $k=0$ clearly yields an expanding solution. For $k<0$ the collapse velocity would be non-real, which is physically implausible \cite{SAS-PTP-88}. Thus the only remaining case is $k>0$, for which we investigate the collapse dynamics for large and small values of the scale factor, i.e., the early and late stages of the collapse process, respectively. In the early stages of the collapse, the spin contribution is negligible and thus the first part of (\ref{FERE}) can be approximated as \begin{equation}\label{FEAP} \dot{a}^2\cong-k+\f{2C}{a}.
\end{equation} Performing the transformation $ad\xi=\sqrt{k}dt$, we get the solution \begin{align}\label{kpl} \left\{\begin{array}{l} a(\xi)=\f{C}{k}(1\pm \cos(\xi)), \\ \\ t(\xi)=\f{C}{k^{\f{3}{2}}}(\xi \pm \sin(\xi))+t_i, \\ \end{array}\right. \end{align} where, according to equation (\ref{FEAP}), $a_i\cong2C/k$. In the later stages of the collapse, where the scale factor has become small enough that the spin effects are dominant, the $k$ term in the first Friedmann equation of (\ref{FERE}) can be neglected compared to the rest. We then get \begin{equation}\label{FERTorres} \dot{a}^2\cong\f{2C}{a}-\f{D}{a^4}, \end{equation} for which the solution is given by \begin{equation}\label{kps} a(t)=\left\{a_s^3+\f{9}{2}C(t-t_s)^2-\sqrt{18C}(t-t_s)\sqrt{a_s^3-\f{D}{2C}}\right\}^{\f{1}{3}}, \end{equation} where $t_s>t_i$ is the time at which the small-scale-factor regime begins, at a finite value of the scale factor $a_s<a_i$. The solution (\ref{kps}) exhibits a bounce at a finite time, say $t=t_b$, where the collapse halts ($\dot{a}(t_b)=0$) at a minimum value of the scale factor given by \begin{eqnarray}\label{amin} a_{{\rm min}}=\left[\f{D}{2C}\right]^{\f{1}{3}}=\left[\f{\pi G\hbar^2\rho_i}{4A_0^2}\right]^{\f{1}{3}}a_i. \end{eqnarray} We note that for $a>a_{{\rm min}}$, equation (\ref{kps}) is always real. For a physically reasonable collapse setting, the weak energy condition (WEC) must be satisfied. This condition states that for any non-spacelike vector field $V^{\alpha}$, $T_{\alpha\beta}^{\rm{eff}}V^\alpha V^{\beta}\geq0$, which for our model amounts to $\rho_{\rm{eff}}\geq0$ and $\rho_{\rm{eff}}+p_{\rm{eff}}\geq0$. The first inequality requires $\rho\geq2\pi G S^2$, while the latter implies $\rho\geq4\pi G S^2$. The first inequality, with the use of (\ref{S2}), gives \begin{equation}\label{WEC} \rho\leq\f{4A_0^2}{\pi G \hbar^2}=\rho_i\left(\f{a_i}{a_{\rm{min}}}\right)^3, \end{equation} whereby, considering (\ref{RHOA}), we arrive at $a/a_{\rm{min}}\geq1$. Since the scale factor never reaches values smaller than $a_{\rm{min}}$, this inequality always holds, implying that the positivity of the energy density is satisfied. Moreover, the second inequality, with similar calculations for dust, tells us that $a/a_{\rm{min}}\geq 2^{\f{1}{3}}$. This means that in the later stages of the collapse, governed by the spin-dominated regime, the WEC is violated. Such a violation of the weak energy condition can be compared to models where quantum effects in the collapse scenario have been discussed \cite{Daniele-PRD-2013}. In brief, we have WEC violation in the interval \begin{equation}\label{WECC} a_{{\rm min}}\le a < 2^{\f{1}{3}}a_{{\rm min}}. \end{equation} \subsection{Numerical analysis} \label{NA} In order to get a better understanding of the situation, we perform a numerical simulation of the time behavior of the scale factor, the collapse velocity, the acceleration, and the Kretschmann scalar, by solving the second part of (\ref{FERE}) numerically and taking the first part as the initial constraint. The left panel of figure \ref{scf} shows that if the spin effects are neglected, the collapse process terminates in a spacetime singularity (dashed curve), where the scale factor vanishes in a finite amount of comoving time. As the full curve shows, the scale factor begins its evolution from the same initial value but deviates from the singular curve as the collapse advances. It then reaches a minimum value ($a_{{\rm min}}$) at the bounce time, $t=t_b$, after which the collapsing phase turns into an expansion.
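As an illustration of the numerical procedure just described, the following minimal sketch (our own, assuming Python with NumPy and SciPy; the parameter values match those quoted in the figure captions below) integrates the second part of (\ref{FERE}) for dust, starting from rest so that the first part fixes $k=2C-D$ for $a_i=1$: \begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

C, D, a_i = 1.11, 0.08, 1.0        # dust (w = 0), values as in the figures

def rhs(t, y):
    a, adot = y
    # second part of the field equations: a'' = -C/a^2 + 2 D/a^5
    return [adot, -C/a**2 + 2.0*D/a**5]

# start at rest: a(t_i) = a_i, adot(t_i) = 0; the first (constraint)
# equation then gives k = 2C - D for a_i = 1
sol = solve_ivp(rhs, (0.0, 2.5), [a_i, 0.0], rtol=1e-10, atol=1e-12)

a_bounce = sol.y[0].min()          # bounce radius from the integration
a_min = (D/(2.0*C))**(1.0/3.0)     # small-a estimate, which neglects k
print(a_bounce, a_min)
\end{verbatim} For these parameter values the bounce radius found numerically is slightly larger than the small-$a$ estimate (\ref{amin}), since the latter drops the curvature term, but both confirm that the scale factor never vanishes.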
The scale factor never vanishes and hence the spacetime is regular throughout the contracting and expanding phases. \begin{figure} \begin{center} \includegraphics[scale=0.335]{sfn.eps} \includegraphics[scale=0.335]{adotn.eps} \caption {Time behavior of the scale factor (left panel) and the collapse velocity (right panel) for $C=1.11$, $a_i=1$ and $\dot{a}(t_i)=0$, with $D=0.08$ (full curve) and $D=0.0$ (dashed curve).}\label{scf} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.33]{addotn.eps} \includegraphics[scale=0.34]{Kn.eps} \caption {Time behavior of the collapse acceleration (left panel) and the Kretschmann scalar (right panel) for $C=1.11$, $a_i=1$ and $\dot{a}(t_i)=0$, with $D=0.08$ (full curve) and $D=0.0$ (dashed curve).}\label{scf1} \end{center} \end{figure} The diagram for the speed of collapse verifies this behavior (see the full curve in the right panel of figure \ref{scf}): the collapse begins at rest, with the speed changing sign from negative to positive values at the bounce time. The behavior of the collapse acceleration gives us more interesting results. We see that $\ddot a$ changes its sign at two inflection points (see the left panel of figure \ref{scf1}), in such a way that for $t<t_{{\rm 1inf}}$ the collapse undergoes an accelerated contracting phase ($\ddot a<0$ and $\dot a<0$). For the time interval $t_{{\rm 1inf}}<t<t_b$, the collapse process experiences a decelerated contracting phase, where $\ddot a>0$ and $\dot a<0$. After the bounce occurs, the scenario enters an inflationary (accelerated) expanding regime until the second inflection point is reached, i.e., $\ddot a>0$ and $\dot a>0$ for $t_b<t<t_{{\rm 2inf}}$. Finally, a decelerating expanding phase commences once the collapse acceleration passes through its second inflection point (the same cosmological scenario has been discussed in \cite{G1986}). The collapsing cloud then disperses at later times. Concomitantly, the Kretschmann scalar increases toward a maximum value but remains finite during the whole process of contraction and expansion (see the full curve in the right panel of figure \ref{scf1}), signaling the avoidance of a spacetime singularity. \par Now, what happens to the formation of the apparent horizon during the entire evolution of the collapsing cloud, and, in particular, is the bounce visible or not? In order to answer this question, we proceed by recasting the metric (\ref{FRWL}) into the double-null form \begin{equation}\label{DN} ds^2=-2d\zeta^+d\zeta^{-}+R^2d\Omega^2, \end{equation} with the null one-forms defined as \begin{eqnarray}\label{NOF} d\zeta^+\!\!\!&=&\!\!\!-\f{1}{\sqrt{2}}\left[dt-\f{a}{\sqrt{1-kr^2}}dr\right],\nonumber\\ d\zeta^-\!\!\!&=&\!\!\!-\f{1}{\sqrt{2}}\left[dt+\f{a}{\sqrt{1-kr^2}}dr\right]. \end{eqnarray} From the above expressions we can easily find the null vector fields \begin{eqnarray}\label{NVF} \partial_+=\f{\partial}{\partial\zeta^+}\!\!\!&=\!\!\!&-\sqrt{2}\left[\partial_t-\f{\sqrt{1-kr^2}}{a}\partial_r\right],\nonumber\\ \partial_-=\f{\partial}{\partial\zeta^-}\!\!\!&=\!\!\!&-\sqrt{2}\left[\partial_t+\f{\sqrt{1-kr^2}}{a}\partial_r\right]. \end{eqnarray} The condition for radial null geodesics, $ds^2=0$, leaves us with two kinds of null geodesics, characterized by $\zeta^+={\rm constant}$ and $\zeta^-={\rm constant}$. The expansions along these two congruences are given by \begin{equation}\label{exp} \theta_{\pm}=\f{2}{R}\partial_{\pm}R.
\end{equation} In a spherically symmetric spacetime, the Misner-Sharp quasi-local mass, which is the total mass within the radial coordinate $r$ at time $t$, is defined as \cite{MSMASS} \begin{eqnarray}\label{MSM} m(t,r)&=&\f{R(t,r)}{2}\left(1+g^{\mu\nu}\partial_{\mu}R(t,r)\partial_{\nu}R(t,r)\right)\nonumber\\ &=&\f{R(t,r)}{2}\left(1+\f{R(t,r)^2}{2}\theta_+\theta_-\right). \end{eqnarray} Therefore, it is the ratio $2m(t,r)/R(t,r)=\dot{R}(t,r)^2+kr^2$ that controls the formation or otherwise of trapped surfaces, so that the apparent horizon, defined as the outermost boundary of the trapped surfaces, is given by the condition $\theta_{+}\theta_{-}=0$, or equivalently $2m(t,r_{{\rm ah}}(t))=R(t,r_{{\rm ah}}(t))$. The equation for the apparent horizon curve then reads \begin{equation}\label{AH} R(t,r_{{\rm ah}}(t))^{-2}=\left(\f{\dot a(t)}{a(t)}\right)^2+\f{k}{a(t)^2}, \end{equation} or, by virtue of the first part of (\ref{FERE}), \begin{equation}\label{rah} r_{{\rm ah}}(a(t))=\left[\f{2C}{a(t)}-\f{D}{a(t)^{4}}\right]^{-\f{1}{2}}, \end{equation} which gives the time at which the shell labeled by $r$ becomes trapped. Figure \ref{scf2} shows the time behavior of the apparent horizon: in the presence of spin effects, the apparent horizon decreases (see the full curve) for a while, to a minimum value reached at the first inflection point ($\ddot{a}(t_{{\rm 1inf}})=0$). It then increases to a finite maximum at the bounce time and returns in the post-bounce regime to the same minimum value, where the acceleration vanishes for the second time. The apparent horizon goes to infinity at later times. In order to find the minimum value of the apparent horizon curve, we can easily extremize equation (\ref{AH}) to get \begin{equation}\label{EXTRAH} a_{\star}=\left(\f{2D}{C}\right)^{\f{1}{{3}}}. \end{equation} Therefore there exists a minimum radius \begin{equation}\label{MINr} r_{{\rm min}}=r_{{\rm AH}}(a_{\star})=\f{1}{\sqrt{2}a_i}\left(\f{\hbar}{\pi G\rho_i A_0}\right)^{\f{1}{3}}, \end{equation} such that if the boundary of the collapsing object is taken as $r_b<r_{{\rm min}}$, the apparent horizon does not develop throughout the collapsing and expanding phases, and therefore the bounce is uncovered. From equation (\ref{AH}) we can also deduce that, since the collapse velocity is bounded, the apparent horizon never converges to zero, implying that there exists a minimum radius (the dashed red curve) below which no horizon would form to meet the boundary of the collapsing object. However, the apparent horizon does not diverge at the bounce, since $k>0$, and the extent to which it can grow depends on the initial configuration of the stellar object. When the spin effects are absent, the apparent horizon decreases monotonically (see the dashed curve) to finally cover the resulting singularity; in that case no minimum boundary of the collapsing cloud can be found for which the formation of the horizon is prevented. \begin{figure} \begin{center} \includegraphics[scale=0.4]{rahn.eps} \caption {Time behavior of the apparent horizon curve for $a_i=1$, $\dot{a}(t_i)=0$ and $C=1.11$, with $D=0.08$ (full curve) and $D=0$ (dashed curve).}\label{scf2} \end{center} \end{figure} The existence of a minimum value for the boundary implies that there exists a minimum value for the total mass contained within the collapsing cloud. Let us be more precise.
Using equation (\ref{MSM}), we can re-write the dynamical interior field equations (\ref{FERE}) as \begin{eqnarray}\label{REFES12} \partial_r m(t,r)&=&4\pi G\rho_{{\rm eff}}R(t,r)^2\partial_r R(t,r),\nonumber\\ \partial_t m(t,r)&=&-4\pi Gp_{{\rm eff}}R(t,r)^2\partial_t R(t,r), \end{eqnarray} whereby integration of the first part gives \begin{equation}\label{mtr} m(t,r)=\f{4\pi G}{3}r^3\rho_i a_i^3\left[1-\f{\pi G \hbar^2}{4A_0^2}\rho_i \left(\f{a_i}{a(t)}\right)^3\right]. \end{equation} The above expression, together with (\ref{EXTRAH}) and (\ref{MINr}), gives the threshold mass confined within the radius $r_{{\rm min}}$ as \begin{equation}\label{THMA} m_{\star}=m(a_{\star},r_{{\rm min}})=\f{\hbar}{\sqrt{2}A_0}. \end{equation} Thus, if the total mass is chosen so that $m<m_{\star}$, there is not enough mass within the collapsing cloud at the later stages of the collapse to trap light, and as a result the formation of the apparent horizon is avoided. Furthermore, the time derivative of the mass function, obtained using the second part of (\ref{REFES12}), \begin{equation}\label{TDMASS} \partial_t m(t,r)=\f{\pi^2 G^2 \hbar^2\rho_i^2 a_i^6}{A_0^2a(t)^4}r^3\dot{a}(t), \end{equation} is negative throughout the contracting phase. This may be interpreted as mass being thrown away from the stellar object as the bounce time is approached. At the bounce, $m(a_{{\rm min}},r)=0$, which can be viewed as the complete evaporation of the collapsing cloud. After this time, when the expanding phase begins, the ejected mass may be regained, since $\dot{m}|_{(t>t_b)}>0$. We note that such a behavior is due to the homogeneity of the model, since all the shells of matter collapse or expand simultaneously. For the dust fluid considered here, the exterior region of the star can be modeled by the Schwarzschild spacetime in the early stages of the collapse, since the spin effects are then negligible. However, as the collapse advances, the mass profile is no longer constant, owing to the negative pressure that originates from the spin contribution. Hence, at the very late stages of the collapse, the Schwarzschild spacetime may not be a suitable candidate for the matching process; instead, the interior region should be smoothly matched to an exterior generalized Vaidya metric \cite{Santos-1985}. Let us consider the exterior spacetime in retarded null coordinates, \begin{equation} \label{EXO} ds_{{\rm out}}^2=f(u,r_v)du^2+2dudr_v-r_v^2(d\theta^2+\sin^2\theta d\phi^2), \end{equation} where $f(u,r_v)=1-2{\rm M}(r_v,u)/r_v$, with ${\rm M}(r_v,u)$ being the Vaidya mass. We label the exterior coordinates as $\left\{X_{{\rm out}}^{\mu}\right\}\equiv\left\{u,r_v,\theta,\phi\right\}$, where $u$ is the retarded null coordinate labeling different shells of radiation and $r_v$ is the Vaidya radius. The above metric is to be matched, through the timelike hypersurface $\Sigma$ given by the condition $r=r_b$, to the interior line element (\ref{FRWL}). The interior coordinates are labeled as $\left\{X_{{\rm in}}^{\mu}\right\}\equiv\left\{t,r,\theta,\phi\right\}$. The induced metrics from the interior and exterior spacetimes close to $\Sigma$ then read \begin{equation}\label{INMI} ds_{\Sigma{\rm in}}^2=dt^2-a^2(t)r_{b}^2(d\theta^2+\sin^2\theta d\phi^2), \end{equation} and \begin{equation} \label{INMEX} ds_{\Sigma{{\rm out}}}^2=\left[f\big(u(t),r_v\big)\dot{u}^2+2\dot{r}_v\dot{u}\right]dt^2-r_v^2(t)(d\theta^2+\sin^2\theta d\phi^2).
\end{equation} Matching the induced metrics gives \begin{equation}\label{MINMET} f\dot{u}^2+2\dot{r}_v\dot{u}=1,~~~~r_v(t)=r_ba(t). \end{equation} The unit vector fields normal to the interior and exterior hypersurfaces are \begin{eqnarray}\label{UNVF} n^{{\rm in}}_{\mu}&=&\left[0,\f{a(t)}{\sqrt{1-kr^2}},0,0\right],\nonumber\\ n^{{\rm out}}_{\mu}&=&\frac{1}{\left[f(u,r_v)\dot{u}^2+2\dot{r}_v\dot{u}\right]^{\frac{1}{2}}}\left[-\dot{r}_v,\dot{u},0,0\right]. \end{eqnarray} The extrinsic curvature tensors for the interior and exterior spacetimes are given by \begin{equation}\label{EXCIN} K^{{\rm in}}_{ab}=-n_{\mu}^{{\rm in}}\left[\frac{\partial^2X_{{\rm in}}^{\mu}}{\partial y^a\partial y^b}+\hat{\Gamma}^{\mu\,{\rm in}}_{\,\nu\sigma}\frac{\partial X_{{\rm in}}^{\nu}}{\partial y^a}\f{\partial X_{{\rm in}}^{\sigma}}{\partial y^b}\right], \end{equation} and \begin{equation}\label{EXCOUT} K^{{\rm out}}_{ab}=-n_{\mu}^{{\rm out}}\left[\frac{\partial^2X_{{\rm out}}^{\mu}}{\partial y^a\partial y^b}+\big\{^{\,\mu}_{\nu\sigma}\big\}^{\rm out}\frac{\partial X_{{\rm out}}^{\nu}}{\partial y^a}\f{\partial X_{{\rm out}}^{\sigma}}{\partial y^b}\right], \end{equation} respectively, where $y^a=\{t,\theta,\phi\}$ are coordinates on the boundary. We note that in computing the components of the extrinsic curvature of the interior spacetime, the general affine connection should be utilized. However, from equations (\ref{AFC})-(\ref{FEEC}), together with (\ref{FC}), we see that the affine connection depends linearly on the spin density tensor. Therefore, after a suitable spacetime averaging, only the Christoffel symbols remain to be used in (\ref{EXCIN}). The non-vanishing components of the extrinsic curvature tensors then read \begin{eqnarray}\label{EXCCS} K_{tt}^{{\rm in}}&=&0,~~~~~~K^{\!\theta\,{\rm in}}_{\,\,\theta}=K^{\!\phi\,{\rm in}}_{\,\,\phi}=\frac{\sqrt{1-kr_b^2}}{r_ba(t)},\nonumber\\ K_{tt}^{{\rm out}}&=&-\frac{\dot{u}^2\left[ff_{,r_v}\dot{u}+f_{,u}\dot{u}+3f_{,r_v}\dot{r}_v\right] +2\left(\dot{u}\ddot{r}_v-\dot{r}_v\ddot{u}\right)}{2\left(f\dot{u}^2+2\dot{r}_v\dot{u}\right)^{\frac{3}{2}}},\nonumber\\ K^{\!\theta\,{\rm out}}_{\,\,\theta}&=&K^{\!\phi\,{\rm out}}_{\,\,\phi}=\frac{f\dot{u}+\dot{r}_v}{r_v\sqrt{f\dot{u}^2+2\dot{r}_v\dot{u}}}. \end{eqnarray} Matching the components of the extrinsic curvatures on the boundary gives \begin{eqnarray} &&f\dot{u}+\dot{r}_v=\sqrt{1-kr_b^2},\label{matchinout1}\\ &&\dot{u}^2\left[ff_{,r_v}\dot{u}+f_{,u}\dot{u}+3f_{,r_v}\dot{r}_v\right] +2\left(\dot{u}\ddot{r}_v-\dot{r}_v\ddot{u}\right)=0.\label{matchinout2}\nonumber\\ \end{eqnarray} A straightforward but lengthy calculation reveals that (\ref{matchinout2}) results in $f(r_v,u)=f(r_v)$ on the boundary \cite{Hussain-PRD-2011}. Furthermore, from (\ref{matchinout1}) and the first part of (\ref{MINMET}) we get \begin{equation}\label{rvdot} \dot{r}_v=-(1-f-kr_b^2)^{\f{1}{2}}, \end{equation} whence, using the second part of (\ref{MINMET}), we readily arrive at the equality \begin{equation}\label{EQMM} {\rm M}(r_v)=m(t,r_b). \end{equation} Thus, from the exterior viewpoint, equation (\ref{THMA}) implies that there exists a mass threshold such that, for mass distributions below it, the apparent horizon fails to intersect the surface boundary of the collapsing cloud. Moreover, in view of (\ref{mtr}), we observe that as the scale factor increases in the post-bounce regime, the second term decreases and vanishes at late times.
This leaves us with a Schwarzschild exterior spacetime with a constant mass. \section{Concluding remarks} \label{con} We have studied the gravitational collapse of a massive star whose matter content is a homogeneous Weyssenhoff fluid, in the context of EC theory. Such a fluid is a perfect fluid with spin correction terms that arise from the intrinsic angular momentum of the fermionic particles within a real star. The main objective of this paper was to show that, contrary to the OS model, if the spin contributions of the matter sources are included in the gravitational field equations, the collapse scenario does not necessarily end in a spacetime singularity. The spin effects are negligible at the early stages of the collapse, but as the collapse proceeds they play a significant role in its final fate. This situation can be compared to the singularity removal for a FLRW spacetime in the very early universe, as we go {\it backwards-in-time} \cite{spin-bounce,POPPRD}. We showed that, in contrast to the homogeneous dust collapse, which leads inevitably to the formation of a spacetime singularity, the occurrence of such a singularity is avoided, and instead a bounce occurs at the end of the contracting phase. The evolution of the star passes through four phases, two of which lie in the contracting regime and two in the post-bounce regime. While in the homogeneous dust case without spin correction terms the singularity is necessarily dressed by an event horizon, the formation of such a horizon can always be prevented here by suitably choosing the surface boundary of the collapsing star. This signals that there exists a critical threshold value for the mass content, below which no horizon would form. The same picture can be found in \cite{BMMM}, where a non-minimal coupling of gravity to fermions is allowed. Besides the model presented here, non-singular scenarios have been reported in the literature within various models, such as $f(R)$ theories of gravity in the Palatini \cite{fr} and metric \cite{npcb} formalisms, non-singular cosmological settings in the presence of a spinning fluid in the context of EC theory \cite{bretchet}, bouncing scenarios in brane models \cite{brb}, and modified Gauss-Bonnet gravity \cite{mgbb} (see also \cite{repb} for a recent review). While spacetime singularities could generically occur as the end-product of a continual gravitational collapse, it is widely believed that in the very final stages of the collapse, where the scales are comparable to the Planck length and extreme gravity regions are dominant, quantum corrections could generate a strong negative pressure in the interior of the cloud and finally resolve the classical singularity \cite{QGRS}. Finally, as we close this paper, it is worth pointing out that quantum effects due to particle creation could possibly avoid cosmological \cite{pfprd} as well as astrophysical singularities \cite{BACK}. \section{Acknowledgments} The authors would like to sincerely thank the anonymous referee for constructive and helpful comments that improved the original manuscript.
\section{Introduction} \label{sec:1} With the rapid development of computational fluid dynamics (CFD) techniques and their engineering and scientific applications, the need for accurate and efficient simulations of fluid flows has become vitally important. The lattice Boltzmann (LB) method has proved especially advantageous in simulating a variety of fluid flows, including complex flows such as multiphase and multicomponent systems, porous media flows, and turbulence~\cite{mcnamara1988use,benzi1992lattice,lallemand2021lattice}. Its firm basis, resulting from a particular discretization of the Boltzmann equation, and its ability to accommodate considerations beyond hydrodynamics, such as the higher-order kinetic moments, have contributed to many refinements of this approach. The LB method is naturally parallelizable, flexible in adopting models from kinetic theory, and facilitates boundary condition implementations on Cartesian grids with relative ease, all of which have paved the way for its growing number of applications. Briefly, the method involves tracking the spatial and temporal evolution of the distribution of the particle populations due to collisions and advection along characteristic discrete velocity directions referred to as a lattice. The collision step is often represented by a model involving relaxation to certain equilibria, either directly involving the distribution functions~\cite{qian1992lattice} or their raw moment~\cite{d2002multiple}, central moment~\cite{geier2006cascaded} or cumulant~\cite{geier2015cumulant} representations, performed using a single relaxation time~\cite{qian1992lattice} or multiple relaxation times~\cite{d2002multiple}, or by a model that is compliant with certain notions of entropy~\cite{karlin1999perfect}. The asymptotic continuum limit of such collide-and-stream steps, on a lattice satisfying the necessary symmetry and isotropy considerations, corresponds to the dynamics of the fluid flow represented by the Navier-Stokes (NS) equations. The LB method typically uses uniform Cartesian grids, resulting, for example, from the choice of a square lattice in two dimensions (2D). Real-world problems are often dominated by inhomogeneous and anisotropic flows, including wall-bounded shear flows. For example, in turbulent boundary layers or flow through channels or ducts, the eddy sizes are markedly different in different coordinate directions and progressively increase in the direction normal to the wall. Similar situations arise in simulating flows within enclosures with high geometric skewness, characterized by large disparities in the length scales or the aspect ratios. Thus, the use of square/cubic grids to solve such problems of practical interest is associated with high computational costs, both in terms of time and memory, which can be orders of magnitude more than those of efficient algorithms based on nonuniform, stretched grids. Hence, it becomes highly important to develop more efficient approaches that use grids which naturally accommodate the spatial variations in the flow features. As a result, much attention has been paid to extending LB schemes to non-uniform grids. While unstructured grids or schemes involving interpolations on stretched grids can be used in this regard (see, e.g., the monograph~\cite{kruger2017lattice} for a survey of such methods), they can result in complicated implementations or introduce additional numerical dissipation~\cite{lallemand2000theory}.
On the other hand, one of the hallmarks of the LB methods is that the perfect lock-step advection, or streaming, used in their standard formulations incurs minimal overall dissipation while maintaining the simplicity of the implementation. For more efficiently simulating anisotropic flows while preserving this important feature, it becomes natural to utilize rectangular lattices rather than square lattices in 2D. Thus, significant effort has been directed towards developing LB methods using rectangular lattice grids during the last decade, following the initial investigation in this direction by Koelman~\cite{koelman1991simple}. For example, Hegele \emph{et al}~\cite{hegele2013rectangular}, Peng \emph{et al}~\cite{peng2016lattice}, and Wang \emph{et al}~\cite{wang2019simulating} presented rectangular LB algorithms based on a single relaxation time (SRT) model via an extended lattice set, corrections to the equilibrium distribution functions, and counteracting source terms, respectively, to recover the NS equations. Moreover, rectangular LB schemes based on raw moments using multiple relaxation times (MRT) were developed by Bouzidi \emph{et al}~\cite{bouzidi2001lattice} and Zhou \emph{et al}~\cite{zhou2012mrt}; these were analyzed by Peng \emph{et al}~\cite{peng2016hydrodynamically}, who constructed an improved LB scheme on a rectangular grid with the necessary correction terms that is consistent with the NS equations. However, many of these methods involved cumbersome implementations, complicated expressions for the corrections, and numerical stability issues when the grid aspect ratio of the rectangular lattice (defined later) is significantly different from unity (i.e., for strong grid stretching in one of the directions relative to the other) or for simulating flows with relatively low viscosities or high Reynolds numbers. On the other hand, recognizing that the use of central moments, which naturally preserves the Galilean invariance of those moments independently supported by the lattice, can significantly improve the stability and accuracy when compared to the use of raw moments~\cite{geier2006cascaded,asinari2008generalized,premnath2009incorporating,premnath2011three,ning2016numerical,de2017non,de2017nonorthogonal,fei2017consistent,fei2018three,Hajabdollahi201897,HAJABDOLLAHI2018838,chavez2018improving,hajabdollahi2019cascaded,fei2020mesoscopic,hajabdollahi2021central,adam2019numerical,adam2021cascaded}, we recently constructed a rectangular central moment LB method (RC-LBM)~\cite{yahia2021central}, which was then further extended to three dimensions with an improved implementation strategy~\cite{yahia2021three}. While the original central moment LB scheme was constructed using an orthogonal moment basis~\cite{geier2006cascaded}, Geier \emph{et al.}~\cite{geier2015cumulant} in 2015 provided a detailed discussion of the role of the moment basis in their development of a cumulant LB method; they also constructed a variety of collision models, including those based on raw moments, central moments and cumulants using a non-orthogonal moment basis, presented in the various appendices of~\cite{geier2015cumulant}. Moreover, the numerical stability advantages of such a non-orthogonal moment basis relative to the orthogonal moment basis were demonstrated via a linear stability analysis in~\cite{dubois2015stability}.
Besides, earlier studies on cascaded LB schemes performed mathematical analyses and demonstrated consistency with the Navier-Stokes equations using such a simpler basis~\cite{asinari2008generalized, premnath2009incorporating}. The use of non-orthogonal central moments in algorithmic implementations of LB schemes was later adopted in Refs.~\cite{de2017non,de2017nonorthogonal}. Hence, in contrast to the prior rectangular LB schemes, the RC-LBM used a natural non-orthogonal moment basis and a matching principle to construct the equilibria involving higher-order velocity terms, resulting in a simpler and significantly more robust implementation. In Ref.~\cite{yahia2021three}, we also explicitly demonstrated the computational advantages of using a rectangular lattice in lieu of a square lattice in solving inhomogeneous and anisotropic flows. Moreover, the central moment LB method on a cuboid lattice presented in Ref.~\cite{yahia2021three} is modular in construction, thereby allowing ready extension of existing algorithms on a cubic lattice to cuboid lattices, and provides a unified formulation with corrections that are applicable to a wide variety of the standard collision models. Since the LB methods, which are time-marching and weakly compressible flow solvers, represent the fluid motion in the incompressible limit asymptotically, the smaller the Mach number used in simulations, the better is their accuracy, which also enhances their numerical stability. However, a general issue in CFD, including for the LB methods, when computing flows by reducing the Mach number to relatively small values is the associated increase in stiffness, which causes slower convergence to the steady state. This is due to the large condition number of the evolution equations, resulting from the wide contrast between the flow speeds and the acoustic speeds in such cases. One approach to reduce the number of steps needed for convergence is to precondition the system of flow equations, wherein such disparities between the characteristic speeds are reduced at the cost of sacrificing temporal accuracy. As shown by Turkel (see, e.g.,~\cite{turkel1987preconditioned,turkel1999preconditioning}), this has been accomplished in the context of classical CFD methods by solving the so-called preconditioned NS equations, which involve an adjustable preconditioning parameter. Guo \emph{et al}~\cite{guo2004preconditioned} introduced the first preconditioned LB scheme using an SRT model, which was then extended to an MRT model involving raw moments and forcing terms by Premnath \emph{et al}~\cite{premnath2009steady}. Izquierdo and Fueyo~\cite{izquierdo2009optimal} demonstrated optimal preconditioning of an MRT-LB scheme, while Meng \emph{et al}~\cite{meng2018preconditioned} introduced a preconditioned MRT-LB algorithm for simulations of steady two-phase flows in porous media. More recently, Hajabdollahi and Premnath~\cite{hajabdollahi2019improving} developed a cascaded central moment LB scheme for the simulation of the preconditioned NS equations, which was then further improved in Ref.~\cite{Hajabdollahi201897} by eliminating the non-Galilean invariant cubic velocity errors that depend on the preconditioning parameter, with significant reductions in the number of steps for convergence demonstrated for a variety of flows. Moreover, we note that a preconditioned SRT-LB approach based on a finite-volume discretization on unstructured grids was recently developed and studied by Walsh and Boyle~\cite{walsh2020preconditioned}.
In general, however, prior investigations constructed preconditioned LB algorithms on square lattices; rectangular LB schemes for the solution of the preconditioned NS equations that maintain the collide-and-stream steps with perfect lock-step advection have not yet been discussed in the literature. The development and analysis of such preconditioned LB schemes on rectangular lattices could enable convergence acceleration for inhomogeneous and anisotropic flows, thereby further improving the computational efficiency achieved with the use of rectangular lattice grids; this is the focus of this paper. In this work, we aim to construct a new preconditioned rectangular central moment lattice Boltzmann method (referred to as the PRC-LBM in what follows). In this regard, we employ a simpler non-orthogonal moment basis, and the central moment equilibria are constructed by matching with those of the continuous Maxwell distribution, with appropriate modifications in order to consistently recover the preconditioned NS equations. By performing a Chapman-Enskog analysis, we will derive corrections to the equilibria that eliminate the truncation errors due to grid anisotropy, as well as the non-Galilean invariant cubic velocity terms arising from aliasing effects on the standard D2Q9 lattice, which appear in the emergent equations in the asymptotic limit when compared to the preconditioned NS equations. The resulting corrections will be shown to depend on the preconditioning parameter, the grid aspect ratio, and the normal components of the velocity gradient tensor, where the latter will be expressed in terms of second-order non-equilibrium moments, allowing their computation locally without using finite-difference approximations. It may be noted that in our previous 2D rectangular LBM~\cite{yahia2021central}, the transformation matrices for the mappings between the distribution functions and raw moments, which depend on the grid aspect ratio, are constructed to separate the trace of the second-order moments from its other components, which allows independent specification of the bulk and shear viscosities. By contrast, in this paper, following our recent work in Ref.~\cite{yahia2021three}, the PRC-LBM segregates the bulk viscosity from the shear viscosity only within the step involving the relaxation of moments to the preconditioned equilibria under collision, and the pre- and post-collision mapping matrices involve a simpler moment basis and account for the grid aspect ratio only via certain diagonal scaling matrices. This results in a modular implementation of the PRC-LBM, and its formulation involves simpler expressions for the necessary corrections and the transport coefficients. Moreover, we note that while all the prior rectangular LB schemes (e.g.,~\cite{peng2016hydrodynamically,yahia2021central}) indicated that the speed of sound should be adjusted to accommodate its variation with the grid aspect ratio when compared to the speed of sound for the square lattice, they did not provide any rationale or explicit formulas to accomplish this, other than some tabulated data. In this work, we provide physical arguments to consistently obtain the speed of sound for the rectangular lattice, with its explicit parametrization by the grid aspect ratio.
Finally, we will perform numerical studies to demonstrate the accuracy of the PRC-LBM and the reductions in the number of steps for convergence to the steady state, for simulations of selected cases of inhomogeneous and anisotropic flows and for different choices of the preconditioning parameter and the grid aspect ratio. This paper is organized as follows. The following section (Sec.~\ref{sec:2}) discusses a consistent approach for the selection of the speed of sound for LB schemes based on rectangular lattice grids. Next, in Sec.~\ref{sec:3}, we present a Chapman-Enskog analysis of the preconditioned non-orthogonal moment LB formulation on a rectangular D2Q9 lattice and derive the correction terms necessary to eliminate the truncation errors due to grid anisotropy and the non-Galilean invariant velocity terms arising from aliasing effects. Such corrections are shown to be parameterized by the grid aspect ratio, the preconditioning parameter, and the velocity gradients, where the latter are obtained locally from non-equilibrium moments. The construction of the preconditioned rectangular central moment LBM for an efficient implementation is discussed in Sec.~\ref{sec:4}, with the attendant algorithmic details of the PRC-LBM provided in~\ref{sec:algorithmic-details-PRC-LBM}. Then, in Sec.~\ref{sec:5}, we present numerical results for some case studies involving anisotropic and inhomogeneous shear flows, validating the accuracy of the PRC-LBM and demonstrating convergence acceleration via preconditioning on rectangular lattice grids for various characteristic parameters. Moreover, comparisons between the preconditioned rectangular central moment LBM (PRC-LBM) and another formulation involving a preconditioned rectangular raw moment LBM are made in Sec.~\ref{sec:comparisonsbetweenmodels}. The conclusions of this work are highlighted in Sec.~\ref{sec:6}. \section{Selection of Speed of Sound on Rectangular Lattice Grid for Physical Consistency} \label{sec:2}\par Before discussing the preconditioned rectangular central moment LB scheme and its analysis, we present a general physical consideration for the selection of the speed of sound on rectangular lattice grids and its relation to the sound speed of the usual square lattice. For the D2Q9 \emph{square} lattice shown in Fig.~\ref{fig:lattice}(a), with a lattice spacing $\Delta x$ and a time step $\Delta t$ resulting in the particle speed $c=\Delta x/\Delta t$, it is well known, based on considerations of isotropy and Galilean invariance, that the optimal value of the speed of sound $c_{s*}$ is given by \begin{equation}\label{Eq:1} c_{s*}= \frac{1}{\sqrt{3}}c = \frac{1}{\sqrt{3}}\frac{\Delta x}{\Delta t}. \end{equation} Thus, $c_{s*}=1/\sqrt{3}$ in the usual lattice units (i.e., when $\Delta x=\Delta t=1.0$). For the two possible arrangements of the D2Q9 \emph{rectangular} lattice shown in Figs.~\ref{fig:lattice}(b) and ~\ref{fig:lattice}(c), we can generally define a \emph{grid aspect ratio} $r$ representing the ratio of the grid spacing in the $y$ direction to that in the $x$ direction, i.e., $r=\Delta y/\Delta x$.
\begin{figure}[H] \centering \includegraphics[width=0.8\textwidth] {lattice} \caption{Two-dimensional, nine-velocity (D2Q9) lattice with different possible arrangements based on the grid aspect ratio $r$.} \label{fig:lattice} \end{figure} Denoting the particle speeds in the $x$ and $y$ directions, respectively, as $c_x=\Delta x/\Delta t = c$ and $c_y=\Delta y/\Delta t$, it follows that $c_x=c$ and $c_y=r \;c$. From these two particle speeds, one may introduce two possible values of the speed of sound, $c_{sx}$ and $c_{sy}$, in the two coordinate directions: $c_{sx}=c_{s*}$ and $c_{sy}=r \;c_{s*}$. However, it can be readily established from a Chapman-Enskog analysis that the pressure field is related to the local density times the square of the speed of sound, which reflects the equation of state of the weakly compressible athermal fluid motion represented by the LB method. Since the pressure field is an isotropic, scalar quantity, rectangular LB schemes need to use only one of the possible values of the speed of sound to satisfy physical consistency. Moreover, given that the speed of sound is a fraction of the particle speed, and in order to be consistent with the Courant-Friedrichs-Lewy condition, we prescribe that the effective speed of sound on the rectangular lattice be chosen as the one that provides the \emph{minimum} value among the two possibilities, i.e., $c_s= \mbox{min}(c_{sx}, c_{sy})$, so that it picks the more limiting case. In other words, our selection procedure for the speed of sound on the rectangular lattice, $c_s$, relative to that for the corresponding square lattice, $c_{s*}$, can be written as \begin{equation}\label{eq:2} c_s= q \; c_{s*}, \quad q= \mbox{min}(1,r). \end{equation} Thus, if $r<1$, $c_s= r c_{s*}$ (see Fig.~\ref{fig:lattice}(b)), and when $r>1$, $c_s= c_{s*}$ (see Fig.~\ref{fig:lattice}(c)). Moreover, when $r=1$, it naturally recovers the optimal value of $1/\sqrt{3}$ used for the square lattice. Thus, Eq.~\eqref{eq:2} automatically adapts the sound speed according to the grid aspect ratio $r$, unlike previous LB schemes based on the rectangular lattice (e.g.,~\cite{peng2016hydrodynamically,yahia2021central}), where no such expressions are provided. Also, noting that a Chapman-Enskog analysis relates the kinematic viscosity of the fluid $\nu$ to a relaxation parameter and the square of the speed of sound $c_s^2$ (see the next section), it follows that $\nu$ is then parameterized by $q^2$, which facilitates maintaining numerical stability self-consistently as the grid aspect ratio $r$ is varied with the use of rectangular lattice grids. Finally, we mention here that the specification of the Mach number $\mbox{Ma}$ for rectangular LB simulations should be based on Eq.~\eqref{eq:2}, i.e., for any characteristic flow speed $U$, $\mbox{Ma}=U/c_s= U/(q c_{s*})= \mbox{Ma}_*/q$, where $\mbox{Ma}_*$ is the Mach number used for the square lattice. We will formulate our central moment LB scheme on a D2Q9 rectangular lattice grid, where the particle velocity components $e_x$ and $e_y$ in the directions $x$ and $y$ (see Fig.~\ref{fig:lattice}), following the definition of the grid aspect ratio $r$, can be written as \begin{subequations} \begin{eqnarray} \ket{e_{x}} &=& (0,1,0,-1,0,1,-1,-1,1)^\dag, \label{eq:3a}\\ \ket{e_{y}} &=& (0, 0, r, 0, -r, r, r, -r, -r)^\dag,\label{eq:3b} \end{eqnarray} \end{subequations} where $\ket{\cdot}$ denotes a column vector based on the standard `ket' notation and $\dag$ refers to taking the transpose.
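To illustrate the above prescription concretely, a minimal sketch (our own, in Python; the function name and the sample values are merely illustrative) that adapts the sound speed via Eq.~\eqref{eq:2} and builds the particle velocities of Eqs.~\eqref{eq:3a} and~\eqref{eq:3b} reads: \begin{verbatim}
import numpy as np

CS_STAR = 1.0/np.sqrt(3.0)          # square-lattice sound speed, in lattice units

def rectangular_d2q9(r):
    """Aspect-ratio-adapted sound speed and D2Q9 particle velocities."""
    q = min(1.0, r)                 # c_s = q c_s*, with q = min(1, r)
    c_s = q*CS_STAR
    e_x = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1], dtype=float)
    e_y = r*np.array([0, 0, 1, 0, -1, 1, 1, -1, -1], dtype=float)
    return c_s, e_x, e_y

c_s, e_x, e_y = rectangular_d2q9(r=0.5)
U = 0.02                            # an illustrative characteristic flow speed
Ma = U/c_s                          # Ma = Ma_*/q on the rectangular lattice
\end{verbatim} Setting $r=1$ in this sketch recovers $c_s=1/\sqrt{3}$ and the standard square-lattice D2Q9 velocity set.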
We will also need the following 9-dimensional vector in defining the moment basis in the next section: \begin{eqnarray} \ket{1} = (1,1,1,1,1,1,1,1,1)^\dag. \label{eq:4} \end{eqnarray} \section{Chapman-Enskog Analysis of the Preconditioned LBE on a Rectangular Lattice Grid: Isotropy Corrections, Macroscopic Flow Equations, and Local Expressions for Velocity Gradients}\label{sec:3} In the following, we will construct a preconditioned rectangular central moment LBM on the D2Q9 rectangular lattice with a grid aspect ratio-adapted speed of sound to solve the following preconditioned NS equations: \begin{subequations}\label{eq:pNSE} \begin{eqnarray}\label{eq:5} &\partial_t \rho + \bm{\nabla}\cdot \left( \rho \bm{u}\right) = 0, \end{eqnarray} \begin{eqnarray}\label{eq:6} &\partial_t \left( \rho \bm{u}\right)+\bm{\nabla}\cdot\left( \cfrac{\rho \bm{u} \bm{u}}{\gamma}\right)= -\cfrac{1}{\gamma}\bm{\nabla}{p}+\cfrac{1}{\gamma}\; \bm{\nabla}\cdot \tensor{\tau}+\cfrac{\bm{F}}{\gamma}, \end{eqnarray} \end{subequations} where $\gamma$ is the preconditioning parameter used to achieve convergence acceleration to the steady state, $\bm{u}$ and $\rho$ are the fluid velocity and density, respectively, $p=\gamma c_s^2 \rho$ is the pressure field, $\tensor{\tau}=\rho \nu (\bm{\nabla}\bm{u}+ (\bm{\nabla}\bm{u})^\dag)$ is the viscous stress tensor, and $\bm{F}$ is the body force. \subsection{Moment basis and definitions of central moments and raw moments} In this regard, as in our previous work~\cite{yahia2021central,yahia2021three}, we employ a linearly independent set of non-orthogonal basis vectors for the moments, which are chosen specifically to allow for the separation of the isotropic parts from the non-isotropic parts of the second order moments for an independent specification of the transport coefficients (i.e., the shear and bulk viscosities). For the D2Q9 lattice, such basis vectors are defined using a combination of the monomials of the type $\ket{e_x^m e_y^n}$, where $m$ and $n$ are integer exponents, as follows: \begin{equation} \label{eq:7} \tensor{T}= \Big[\ket{T_{0}},\ket{T_{1}},\ket{T_{2}},\ldots,\ket{T_{8}} \Big]^{\dag}, \end{equation} where \begin{align}\label{eq:8} &\ket{T_0}=\ket{1}, \qquad \ket{T_1}=\ket{e_x},\qquad \ket{T_2}=\ket{e_y}, \qquad \ket{T_3}=\ket{e_x^2 +e_y^2},\qquad \ket{T_4}=\ket{e_x^2-e_y^2}, \nonumber\\ &\ket{T_5}=\ket{{e_x} {e_y}},\qquad\ket{T_6}=\ket{e_x^2 {e_y}},\qquad \ket{T_7}=\ket{{e_x} e_y^2},\qquad \ket{T_8}=\ket{e_x^2 e_y^2}. \end{align} Then, defining the sets of discrete distribution functions $\mathbf{f}$, their equilibria $\mathbf{f}^{eq}$, and the source terms $\mathbf{S}$, which represent the effect of the body force $\bm{F}=(F_x,F_y)$ on the fluid motion, respectively, as \begin{subequations} \begin{eqnarray}\label{eq:8A} &\mathbf{f}=\left(f_{0},f_{1},f_{2},\ldots,f_{8}\right)^{\dag}, \quad \mathbf{{f}}^{eq}=\left({f}_{0}^{eq},{f}_{1}^{eq},{f}_{2}^{eq},\ldots,{f}_{8}^{eq}\right)^{\dag},\quad &\mathbf{S}=\left({S}_{0},{S}_{1},{S}_{2},\ldots,{S}_{8}\right)^{\dag}, \end{eqnarray} \end{subequations} we can then express their \emph{raw moments} of order ($m+n$), i.e., $k_{mn}^\prime$, $k_{mn}^{eq\prime}$, and $\sigma_{mn}^\prime$, respectively, as \begin{subequations}\label{eq:9A} \begin{eqnarray} &k_{mn}^\prime= \sum_{\alpha=0}^{8} f_{\alpha} \;e_{\alpha x}^m e_{\alpha y}^n,\\ &k_{mn}^{eq\prime}= \sum_{\alpha=0}^{8} f_{\alpha}^{eq} \;e_{\alpha x}^m e_{\alpha y}^n,\\ &\sigma_{mn}^\prime= \sum_{\alpha=0}^{8} S_{\alpha} \;e_{\alpha x}^m e_{\alpha y}^n.
\end{eqnarray} \end{subequations} Similarly, we can write the corresponding \emph{central moments} $k_{mn}$, $k_{mn}^{eq}$ and $\sigma_{mn}$, respectively, by subtracting the fluid velocity ($u_x,u_y$) from the particle velocities ($e_{\alpha x}, e_{\alpha y}$) as follows: \begin{subequations}\label{eq:9B} \begin{eqnarray} &k_{mn}= \sum_{\alpha=0}^{8} f_{\alpha} \;(e_{\alpha x} -u_x)^m (e_{\alpha y} -u_y)^n, \\ &k_{mn}^{eq}= \sum_{\alpha=0}^{8} f_{\alpha}^{eq} \;(e_{\alpha x} -u_x)^m (e_{\alpha y} -u_y)^n,\\ &\sigma_{mn}= \sum_{\alpha=0}^{8} S_{\alpha} \;(e_{\alpha x} -u_x)^m (e_{\alpha y} -u_y)^n. \end{eqnarray} \end{subequations} For convenience, we collect the various raw moments supported by the lattice set in view of the moment basis given in Eq.~\eqref{eq:8} in the form of the following 9-dimensional vectors: \begin{subequations} \label{eq:9AA} \begin{eqnarray} &\mathbf{n}=\left(k_{00}^\prime, k_{10}^\prime, k_{01}^\prime, k_{20}^\prime+ k_{02}^\prime, k_{20}^\prime - k_{02}^\prime, k_{11}^\prime, k_{21}^\prime, k_{12}^\prime, k_{22}^\prime\right)^{\dag},\\ &\mathbf{n}^{eq}=\left(k_{00}^{eq \prime}, k_{10}^{eq \prime}, k_{01}^{eq \prime}, k_{20}^{eq \prime}+ k_{02}^{eq\prime}, k_{20}^{eq\prime} - k_{02}^{eq\prime}, k_{11}^{eq\prime}, k_{21}^{eq\prime}, k_{12}^{eq\prime}, k_{22}^{eq\prime}\right)^{\dag},\label{eq:basemomenteqvector}\\ &\mathbf{\Psi}=\Big(\sigma_{00}^\prime, \sigma_{10}^\prime, \sigma_{01}^\prime, \sigma_{20}^\prime+ \sigma_{02}^\prime, \sigma_{20}^\prime- \sigma_{02}^\prime, \sigma_{11}^\prime, \sigma_{21}^\prime, \sigma_{12}^\prime, \sigma_{22}^\prime\Big)^{\dag}. \end{eqnarray} \end{subequations} Then, the mappings between the various raw moments and the distribution functions can be compactly expressed via the matrix $\tensor{T}$ as \begin{eqnarray} \label{eq:9} &\mathbf{n}= \tensor{T} \mathbf{f}, \quad \mathbf{n}^{eq}= \tensor{T} \mathbf{f^{eq}}, \quad \mathbf{\Psi}= \tensor{T}\mathbf{S}. \end{eqnarray} Here, it should be mentioned that we use the combinations of the second order moment basis $\ket{e_x^2 +e_y^2}$ and $\ket{e_x^2 -e_y^2}$ to retain the flexibility of an independent specification of the bulk and shear viscosities, and the matrix $\tensor{T}$ as formulated above then facilitates the demonstration of the consistency of our approach with the preconditioned NS equations and the derivation of the required attendant corrections in the remainder of this section. However, in the actual implementation of the algorithm in the next section (see Sec.~\ref{sec:4}), we introduce the effects equivalent to the independent evolution of the moments related to $\ket{e_x^2 +e_y^2}$ and $\ket{e_x^2 -e_y^2}$ only within the sub-step involving the relaxations under collision and not for performing the mappings between the distribution functions and moments. \subsection{Preconditioned Lattice Boltzmann Equation} Next, it is important to note that the use of the rectangular lattice would result in an anisotropic form of the viscous stress tensor dependent on the grid aspect ratio $r$. Such spurious effects, along with the truncation errors arising from the non-Galilean invariant aliasing effects on the D2Q9 lattice dependent on the cubic velocity terms and the preconditioning parameter, need to be eliminated via certain counteracting corrections, which appear in the evolution of the non-equilibrium part of the second order moments (see Ref.~\cite{yahia2021central}).
Since, by construction, the non-equilibrium second order central moments are identical to those of the raw moments, it suffices to perform a consistency analysis and derive the necessary correction terms based on the simpler preconditioned rectangular raw moment MRT formulation of the lattice Boltzmann equation (MRT-LBE), written in a compact matrix-vector form as \begin{equation}\label{eq:10} \mathbf{f} (\bm{x}+\mathbf{e}\Delta t, t+\Delta t)- \mathbf{f} (\bm{x},t) = \tensor{T}^{-1} \Big[ \tensor{\Lambda} \; \left(\; \mathbf{n}^{eq}-\mathbf{n} \;\right) + \left(\tensor{I} - \frac{\tensor{\Lambda}}{2}\right) \mathbf{\Psi}\Delta t \Big], \end{equation} where $\tensor{I}$ is an identity matrix of dimension $9\times 9$ and $\tensor{\Lambda}$ is a diagonal matrix holding the relaxation parameters given by \begin{equation}\label{eq:11} \tensor{\Lambda} = \mbox{diag} \; \big( 0, 0, 0,\omega_3, \omega_4,\omega_5, \omega_6, \omega_7, \omega_8 \big). \end{equation} The solution of this LBE (Eq.~\eqref{eq:10}) yields the distribution functions $f_\alpha=f_\alpha(\bm{x},t+\Delta t)$, whose leading order moments then provide the hydrodynamic fields as \begin{equation}\label{eq:hydrodynamicfields} \rho =\sum_{\alpha=0}^{8} f_{\alpha}, \qquad \rho \bm{u} =\sum_{\alpha=0}^{8} f_{\alpha} \bm{e}_{\alpha} + \frac{\bm{F}}{2\gamma}\Delta t, \qquad p= \gamma c_s^2 \rho. \end{equation} The key issue here is the specification of the moment equilibria components appearing in $\mathbf{n}^{eq}$ in Eq.~\eqref{eq:10} so that the preconditioned NS equations (Eq.~\eqref{eq:pNSE}) can be recovered consistently on a rectangular lattice grid. \subsection{Preconditioned Equilibria and Sources: Raw Moments and Central Moments} \par In this regard, our starting point is the matching of the components of the discrete raw moment equilibria supported by the D2Q9 lattice with those following from the continuous Maxwell distribution, where the speed of sound is based on Eq.~\eqref{eq:2}. Then, we account for the modifications needed for recovering the preconditioned NS equations, which were obtained in an earlier analysis performed in Ref.~\cite{Hajabdollahi201897} in the case of the \emph{square} lattice. Using these as our initial formulation, we can then write the leading terms of the raw moment equilibria as \begin{align}\label{eq:12} &k_{00}^{eq\prime}=\rho, \quad\quad k_{10}^{eq\prime}=\rho u_x,\quad\quad k_{01}^{eq\prime}=\rho u_y,\nonumber\\ &k_{20}^{eq\prime}=q^2 c_{s*}^2\rho+\frac{\rho u_x^2}{\gamma} ,\quad\quad\qquad k_{02}^{eq\prime}=q^2c_{s*}^2\rho+\frac{\rho u_y^2}{\gamma}, \quad\quad\quad k_{11}^{eq\prime}=\frac{\rho u_x u_y}{\gamma},\nonumber \\ &k_{21}^{eq\prime}=\rho \left(q^2 c_{s*}^2+ \frac{u_x^2}{\gamma^2} \right)u_y,\quad\quad k_{12}^{eq\prime}=\rho \left(q^2 c_{s*}^2+ \frac{u_y^2}{\gamma^2}\right)u_x,\quad k_{22}^{eq\prime}=\rho q^4 c_{s*}^4 + \rho q^2 c_{s*}^2\left( u_x^2+ u_y^2 \right)+ \rho u_x^2 u_y^2. \end{align} The expressions in Eq.~\eqref{eq:12} need to be corrected further to consistently recover the preconditioned NS equations on \emph{rectangular} lattice grids, which will be accomplished later in this section.
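As a transcription aid, the following minimal Python sketch (a hypothetical helper of our own, not the authors' code) collects the raw moment equilibria of Eq.~\eqref{eq:12}; for $\gamma=1$ and $r=1$ it reduces to the standard raw-moment equilibria on the square lattice:
\begin{verbatim}
import numpy as np

def raw_moment_equilibria(rho, ux, uy, gamma, r, cs_star2=1.0/3.0):
    """Leading preconditioned raw-moment equilibria, Eq. (12);
    q = min(1, r) is the sound-speed scaling factor of Eq. (2)."""
    c2 = min(1.0, r)**2 * cs_star2             # q^2 c_{s*}^2
    return np.array([
        rho,                                   # k00'
        rho * ux,                              # k10'
        rho * uy,                              # k01'
        c2 * rho + rho * ux**2 / gamma,        # k20'
        c2 * rho + rho * uy**2 / gamma,        # k02'
        rho * ux * uy / gamma,                 # k11'
        rho * (c2 + ux**2 / gamma**2) * uy,    # k21'
        rho * (c2 + uy**2 / gamma**2) * ux,    # k12'
        c2**2 * rho + c2 * rho * (ux**2 + uy**2)
        + rho * ux**2 * uy**2,                 # k22'
    ])
\end{verbatim}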
Moreover, the raw moment equilibria of the source terms also need to be scaled appropriately by the preconditioning parameter $\gamma$ as follows~\cite{premnath2009steady,Hajabdollahi201897}: \begin{align}\label{eq:13} &\sigma_{00}^{\prime}=0,\quad \sigma_{10}^{\prime}=\frac{F_x}{\gamma},\quad \sigma_{01}^{\prime}=\frac{F_y}{\gamma},\qquad \sigma_{20}^{\prime}= \frac{2F_x u_x}{\gamma^2}, \quad \sigma_{02}^{\prime}=\frac{2F_y u_y}{\gamma^2},\quad \sigma_{11}^{\prime}=\frac{ \left(F_xu_y+F_yu_x \right)}{\gamma^2}, \end{align} and $\sigma_{mn}^\prime=0$ \;if $(m+n)\ge 3$. Then, the full set of preconditioned discrete central moment equilibria supported by the D2Q9 lattice can be obtained from the corresponding raw moment equilibria given in Eq.~\eqref{eq:12} via binomial transformations as \begin{align}\label{eq:14} &k_{00}^{eq}=\rho, \qquad k_{10}^{eq}=0,\qquad k_{01}^{eq}=0,\nonumber\\ &k_{20}^{eq}=q^2 c_{s*}^2\rho+\left(\frac{1}{\gamma}-1\right)\rho u_x^2 ,\qquad k_{02}^{eq}=q^2 c_{s*}^2\rho+\left(\frac{1}{\gamma}-1\right)\rho u_y^2,\qquad k_{11}^{eq}=\left(\frac{1}{\gamma}-1\right)\rho u_x u_y,\nonumber\\ &k_{21}^{eq}=\left( \frac{1}{\gamma^2}-\frac{3}{\gamma}+2 \right)\rho u_x^2 u_y, \qquad k_{12}^{eq}=\left( \frac{1}{\gamma^2}-\frac{3}{\gamma}+2 \right)\rho u_x u_y^2 ,\qquad k_{22}^{eq}=q^4 c_{s*}^4\rho. \end{align} Note that since the fourth order component of the equilibrium central moment $k_{22}^{eq}$ does not appear at the leading order in the Chapman-Enskog analysis of the preconditioned NS equations, for simplicity, we set it as $k_{22}^{eq}=q^4 c_{s*}^4\rho$ following our previous work~\cite{yahia2021central}. Similarly, the central moment components of the source terms follow from the corresponding raw moments in Eq.~\eqref{eq:13} via binomial expansions as \begin{align}\label{eq:13-central} &\sigma_{00}=0,\quad \sigma_{10}=\frac{F_x}{\gamma},\quad \sigma_{01}=\frac{ F_y}{\gamma},\nonumber\\ &\sigma_{20} = \left(\frac{1}{\gamma^2}-\frac{1}{\gamma}\right)2F_x u_x, \quad \sigma_{02}=\left(\frac{1}{\gamma^2}-\frac{1}{\gamma}\right)2F_y u_y,\quad \sigma_{11}=\left(\frac{1}{\gamma^2}-\frac{1}{\gamma}\right)\left(F_xu_y+F_yu_x \right), \end{align} and $\sigma_{mn}=0$ \;if $(m+n)\ge 3$. An alternative approach to implementing body forces in central moment LB schemes has been proposed by Fei and Luo~\cite{fei2017consistent,fei2018three}. It has been used for various applications, including thermal and multiphase flows, and involves including the effect of the body force on the higher order central moments. By contrast, the forcing scheme given above and others such as in~\cite{geier2015cumulant,hajabdollahi2018symmetrized} involve the effect of body forces up to the second order moments, and are constructed to recover the hydrodynamics (Navier-Stokes equations) as prescribed by the Chapman-Enskog analysis. \subsection{Chapman-Enskog Analysis: Identification of Truncation Errors due to Grid Anisotropy, Preconditioning, and Non-Galilean Invariance from Aliasing Effects on the D2Q9 Rectangular Lattice}\par Next, we will perform a Chapman-Enskog (C-E) analysis~\cite{chapman1990mathematical} in order to determine the truncation errors arising from grid anisotropy with the use of the rectangular lattice and from the non-Galilean invariant (GI) cubic velocity terms due to the aliasing effects manifesting as a result of the discreteness of the D2Q9 lattice.
This would be carried out following the approach taken in our previous works~\cite{premnath2009incorporating,Hajabdollahi201897,yahia2021central}. First, expanding the moments about their equilibria and the time derivative by means of a multiple time scale expansion, we write \begin{equation}\label{eq:C-Eexpansion} \mathbf{n}= \sum_{j=0}^{\infty} \epsilon^{j} \mathbf{n}^{(j)}, \quad \partial_t= \sum_{j=0}^{\infty} {\epsilon}^{j} {\partial_{t_j}}, \end{equation} where $\epsilon= \Delta t$ represents the perturbation parameter, which serves in what follows to delineate the terms of different orders. Substituting the above equation in Eq.~\eqref{eq:10}, rewriting its left side via a Taylor series expansion, and converting the resulting expression in terms of moments using $\mathbf{f}= \tensor{T}^{-1} \mathbf{n}$, we obtain the evolution equations of the moments at different orders of $\epsilon$, i.e., $O(\epsilon^k)$, where $k=0,1$, and $2$, as \begin{subequations}\label{eq:16} \begin{eqnarray} &O (\epsilon^0 ): \mathbf{n}^{(0)} = \mathbf{n}^{eq}, \label{eq:16a}\\ &O (\epsilon^1 ): \left(\partial_{t_0} + \tensor{E}_i \partial_i\right) \mathbf{n}^{(0)} = - \tensor{\Lambda} \; \mathbf{n}^{(1)}+\mathbf{\Psi}, \label{eq:16b} \\ &O (\epsilon^2 ): \partial_{t_1} \; \mathbf{n}^{(0)} + \left(\partial_{t_0} + \tensor{E}_i \partial_i\right) \;\left[ \tensor{I} - \frac{\tensor{\Lambda}} {2} \right] \mathbf{n}^{(1)} = - \tensor{\Lambda} \; \mathbf{n}^{(2)}, \label{eq:16c} \end{eqnarray} \end{subequations} where $\tensor{E}_i= \tensor{T} \;( \mathbf{e}_i \; \tensor{I})\tensor{T}^{-1}$ and $ \mathbf{e}_i=\ket{e_{i}}$, $ i \in (x,y)$. Then, substituting the raw moment equilibria and the source terms shown in Eqs.~\eqref{eq:12} and \eqref{eq:13}, respectively, into Eq.~\eqref{eq:16b}, the components of the $O(\epsilon)$ moment system that are relevant in recovering the preconditioned hydrodynamics can be written as \begin{subequations} \begin{eqnarray}\label{eq:17a} &\partial_{t_0}\rho + \partial_x \rho u_x + \partial_y \rho u_y = 0, \end{eqnarray} \begin{eqnarray}\label{eq:17b} &\partial_{t_0}\rho u_x + \partial_x (\rho q^2 c_{s*}^2 +\rho u_x^2/\gamma) + \partial_y (\rho u_x u_y/\gamma) = F_x/\gamma, \end{eqnarray} \begin{eqnarray}\label{eq:17c} &\partial_{t_0}\rho u_y + \partial_x (\rho u_x u_y/\gamma)+ \partial_y (\rho q^2 c_{s*}^2 + \rho u_y^2/\gamma) = F_y/\gamma, \end{eqnarray} \begin{eqnarray}\label{eq:17d} & \partial_{t_0}\left[2 \rho q^2 c_{s*}^2 + \rho( u_x^2+ u_y^2)/\gamma\right]+ \partial_x \left[(1 + q^2 c_{s*}^2)\rho u_x + \rho u_x u_y^2/\gamma^2 \right] + \partial_y \left[(r^2 + q^2 c_{s*}^2)\rho u_y + \rho u_x^2 u_y/\gamma^2 \right] = \nonumber \\ & - \omega_3\; n_3^{(1)}+ 2 \left( F_x u_x + F_y u_y \right)/\gamma^2, \end{eqnarray} \begin{eqnarray}\label{eq:17e} & \partial_{t_0}\left[\rho (u_x^2 - u_y^2)/\gamma\right]+ \partial_x \left[(1 - q^2 c_{s*}^2)\rho u_x - \rho u_x u_y^2/\gamma^2\right] + \partial_y \left[(-r^2 + q^2 c_{s*}^2)\rho u_y +\rho u_x^2 u_y/\gamma^2\right] = \nonumber \\ &- \omega_4\; n_4^{(1)}+ 2 \left(F_x u_x - F_y u_y\right)/\gamma^2, \end{eqnarray} \begin{eqnarray}\label{eq:17f} & \partial_{t_0}(\rho u_x u_y/\gamma)+ \partial_x \left[ q^2 c_{s*}^2 \rho u_y + \rho u_x^2 u_y/\gamma^2 \right] + \partial_y \left[q^2 c_{s*}^2 \rho u_x +\rho u_x u_y^2/\gamma^2 \right] = \nonumber \\ &-\omega_5\; n_5^{(1)}+ \left(F_x u_y+ F_y u_x \right)/\gamma^2.
\end{eqnarray} \end{subequations} Similarly, the $O(\epsilon^2)$ evolution equations for the conserved moments at the slower time scale $t_1$ read from Eq.~\eqref{eq:16c} as \begin{subequations} \begin{eqnarray} &\partial_{t_1}\rho=0, \label{eq:18a}\\& \partial_{t_1}\left(\rho u_x\right)+\partial_x \left[\dfrac{1}{2}\left(1-\dfrac{\omega_3}{2}\right) n_3^{(1)}+\dfrac{1}{2}\left(1-\dfrac{\omega_4}{2}\right)n_4^{(1)}\right] +\partial_y \left[\left(1-\dfrac{\omega_5}{2}\right)n_5^{(1)}\right]=0, \label{eq:18b}\\ &\partial_{t_1}\left(\rho u_y\right)+\partial_x \left[\left(1-\dfrac{\omega_5}{2}\right) n_5^{(1)}\right]+\partial_y \left[\dfrac{1}{2}\left(1-\dfrac{\omega_3}{2}\right)n_3^{(1)}-\dfrac{1}{2}\left(1-\dfrac{\omega_4}{2}\right)n_4^{(1)}\right]=0. \label{eq:18c} \end{eqnarray} \end{subequations} The above Eqs.~\eqref{eq:18a}-\eqref{eq:18c} depend on the components of the non-equilibrium moments $n_3^{(1)}$, $n_4^{(1)}$ and $n_5^{(1)}$, which can be obtained from Eqs.~(\ref{eq:17d})-(\ref{eq:17f}). Hence, rewriting Eqs.~(\ref{eq:17d})-(\ref{eq:17f}) to express the non-equilibrium moments as \begin{subequations} \begin{align} &n_3^{(1)}= \frac{1}{\omega_3} \Big\{-\partial_{t_0}\left[ 2 q^2 c_{s*}^2 \rho + \rho u_x^2/\gamma + \rho u_y^2/\gamma \right] - \partial_x \left[(1+q^2 c_{s*}^2)\rho u_x\right]- \partial_x (\rho u_x u_y^2/\gamma^2)- \partial_y \left[(r^2 +q^2 c_{s*}^2)\rho u_y\right]\nonumber\\ &\qquad\qquad - \partial_y (\rho u_x^2 u_y/\gamma^2)+ 2(F_x u_x + F_y u_y)/\gamma^2 \Big\}, \label{eq:20a}\\ &n_4^{(1)}= \frac{1}{\omega_4} \Big\{-\partial_{t_0}[(\rho u_x^2-\rho u_y^2)/\gamma] - \partial_x \left[(1- q^2 c_{s*}^2)\rho u_x\right]+ \partial_x(\rho u_x u_y^2/\gamma^2) + \partial_y \left[(r^2-q^2 c_{s*}^2)\rho u_y\right]\nonumber\\ &\qquad\qquad - \partial_y(\rho u_x^2 u_y/\gamma^2) + 2(F_x u_x - F_y u_y)/\gamma^2 \Big\}, \label{eq:20b}\\ &n_5^{(1)}= \frac{1}{\omega_5} \Big\{-\partial_{t_0}(\rho u_x u_y/\gamma) - \partial_x (q^2 c_{s*}^2 \rho u_y)- \partial_x (\rho u_x^2 u_y/\gamma^2)- \partial_y (q^2 c_{s*}^2\rho u_x)- \partial_y(\rho u_x u_y^2/\gamma^2) + (F_x u_y + F_y u_x)/\gamma^2 \Big\}.\label{eq:20c} \end{align} \end{subequations} Clearly, the second order non-equilibrium moments $n_3^{(1)}$, $n_4^{(1)}$ and $n_5^{(1)}$ involve terms related to the non-GI cubic velocity errors, whose prefactors depend on the preconditioning parameter $\gamma$, and grid-anisotropy error terms that depend on the grid aspect ratio $r$, in addition to the physically consistent terms that contribute towards the viscous stress tensor. Denoting the truncation errors related to the grid anisotropy as $E_{js}$ and the non-GI cubic velocity terms as $E_{jg}$ for $j=3,4$ and $5$, and after replacing the time derivatives appearing in Eqs.~\eqref{eq:20a}-\eqref{eq:20c} in terms of the spatial derivatives of the conserved moments via Eqs.~\eqref{eq:17a}-\eqref{eq:17c}, we can then simplify the resulting equations by retaining terms up to $O(u_i^3)$ (see Refs.~\cite{Hajabdollahi201897} and~\cite{yahia2021central} for details).
Then, the final expressions for the second order non-equilibrium moment components on the rectangular lattice for our LB formulation can be written as follows: \begin{subequations}\label{eq:2ndordernoneqmmomentswitherrorterms} \begin{eqnarray} n_3^{(1)} &=& -\frac{2 q^2 c_{s*}^2}{\omega_3} \rho \bm{\nabla}\cdot \bm{u}+ E_{3g}+ E_{3s},\\ n_4^{(1)} &=&-\frac{2 q^2 c_{s*}^2 }{\omega_4} \rho \left(\partial_x u_x - \partial_y u_y \right)+ E_{4g}+ E_{4s},\\ n_5^{(1)} &=& -\frac{q^2 c_{s*}^2\rho}{\omega_5} \left(\partial_x u_y + \partial_y u_x \right)+ E_{5g}, \end{eqnarray} \end{subequations} where the expressions for the truncation errors due to grid anisotropy, $E_{3s}$ and $E_{4s}$, and the non-GI cubic velocity aliasing errors, $E_{3g}$, $E_{4g}$, and $E_{5g}$, read as \begin{subequations}\label{eq:truncationerrors-gridanisotropy} \begin{eqnarray} E_{3s}&=& \cfrac{1}{\omega_3} (3 q^2 c_{s*}^2 -1)\rho \partial_x u_x+ \cfrac{1}{\omega_3} (3 q^2 c_{s*}^2 -r^2)\rho \partial_y u_y, \label{eq:truncationerrors3s}\\ E_{4s}&=& \dfrac{1}{\omega_4} (3 q^2 c_{s*}^2 -1)\rho \partial_x u_x- \dfrac{1}{\omega_4} (3 q^2 c_{s*}^2 -r^2)\rho \partial_y u_y,\label{eq:truncationerrors4s} \end{eqnarray} \end{subequations} and \begin{subequations}\label{eq:truncationerrors-nonGI} \begin{eqnarray} E_{3g} &=& \cfrac{1}{\omega_3} \left[(2/\gamma+1)q^2 c_{s*}^2 -1 \right] u_x \partial_x \rho+ \cfrac{1}{\omega_3} \left[(2/\gamma+1) q^2 c_{s*}^2 -r^2 \right] u_y \partial_y \rho + \cfrac{M_3}{\omega_3} \; \partial_x u_x+ \cfrac{N_3}{\omega_3}\; \partial_y u_y,\label{eq:truncationerrors3g}\\ E_{4g} &=& \dfrac{1}{\omega_4} \left[ (2/\gamma+1)q^2 c_{s*}^2 -1 \right] u_x \partial_x \rho- \dfrac{1}{\omega_4} \left[(2/\gamma+1)q^2 c_{s*}^2 -r^2 \right] u_y \partial_y \rho + \dfrac{M_4}{\omega_4}\partial_x u_x+ \dfrac{N_4}{\omega_4}\partial_y u_y,\label{eq:truncationerrors4g}\\ E_{5g} &=& \frac{1}{\omega_5}(1/\gamma-1) q^2 c_{s*}^2 \left(u_x \partial_y \rho + u_y \partial_x \rho\right) + \frac{1}{\omega_5} (1/\gamma^2-1/\gamma) \rho u_x u_y \left( \partial_x u_x+ \partial_y u_y\right),\label{eq:truncationerrors5g} \end{eqnarray} \end{subequations} where the prefactors $M_3$, $N_3$, $M_4$ and $N_4$ appearing in Eqs.~\eqref{eq:truncationerrors3g} and \eqref{eq:truncationerrors4g} can be expressed as \begin{subequations}\label{eq:prefactors} \begin{eqnarray} M_3 &=& \rho \left[(4/\gamma^2-1/\gamma) u_x^2 + (1/\gamma^2-1/\gamma) u_y^2\right],\label{eq:prefactors1}\\ N_3 &=& \rho \left[(4/\gamma^2-1/\gamma) u_y^2 + (1/\gamma^2-1/\gamma) u_x^2\right],\label{eq:prefactors2}\\ M_4 &=& \rho \left[(4/\gamma^2-1/\gamma) u_x^2 - (1/\gamma^2-1/\gamma) u_y^2\right],\label{eq:prefactors3}\\ N_4 &=& \rho \left[-(4/\gamma^2-1/\gamma)u_y^2 + (1/\gamma^2-1/\gamma) u_x^2\right].\label{eq:prefactors4} \end{eqnarray} \end{subequations} Recognizing that $q=\mbox{min}\{r,1\}$, it is evident that the various truncation errors given above are dependent on the preconditioning parameter $\gamma$ and the grid aspect ratio $r$, and hence they need to be eliminated.
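As a quick consistency check on these expressions, the following minimal Python sketch (illustrative only) evaluates the grid-anisotropy errors $E_{3s}$ and $E_{4s}$ of Eqs.~\eqref{eq:truncationerrors3s} and~\eqref{eq:truncationerrors4s}; with the optimal $c_{s*}^2=1/3$, their prefactors $(3q^2c_{s*}^2-1)\rho$ and $(3q^2c_{s*}^2-r^2)\rho$ reduce to $(q^2-1)\rho$ and $(q^2-r^2)\rho$, which vanish simultaneously only when $r=1$:
\begin{verbatim}
CS_STAR2 = 1.0 / 3.0

def anisotropy_errors(rho, dx_ux, dy_uy, r, omega3, omega4):
    """Grid-anisotropy truncation errors E_3s and E_4s for given
    diagonal velocity gradients dx_ux and dy_uy."""
    q2 = min(1.0, r)**2
    a = (3.0 * q2 * CS_STAR2 - 1.0) * rho   # (q^2 - 1) rho when c_s*^2 = 1/3
    b = (3.0 * q2 * CS_STAR2 - r*r) * rho   # (q^2 - r^2) rho
    return (a*dx_ux + b*dy_uy) / omega3, (a*dx_ux - b*dy_uy) / omega4

# Both errors vanish identically on the square lattice (r = 1) ...
print(anisotropy_errors(1.0, 0.1, -0.2, r=1.0, omega3=1.6, omega4=1.6))
# ... but not on a rectangular one, hence the corrections derived next.
print(anisotropy_errors(1.0, 0.1, -0.2, r=0.5, omega3=1.6, omega4=1.6))
\end{verbatim}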
\subsection{Corrections via Extended Moment Equilibria for Elimination of Truncation Errors due to Grid Anisotropy, Preconditioning, and Aliasing Effects}\label{subsec:correctionsextendemomentequilibria} In this regard, we propose the extended moment equilibria $\mathbf{n}^\mathit{eq,eff}$ given by \begin{eqnarray}\label{eq:30} \mathbf{n}^\mathit{eq,eff}= \mathbf{n}^{eq}+ \Delta t \mathbf{n}^{eq(1)}, \end{eqnarray} where $\mathbf{n}^{eq(1)}$ are the corrections made to the base moment equilibria $\mathbf{n}^{eq}$ introduced in Eqs.~\eqref{eq:basemomenteqvector} and~\eqref{eq:12}. As shown in Eqs.~\eqref{eq:2ndordernoneqmmomentswitherrorterms}, \eqref{eq:truncationerrors-gridanisotropy}, and \eqref{eq:truncationerrors-nonGI}, the truncation errors exist in the evolution of the second order moments $n_3$, $n_4$ and $n_5$, and involve spatial derivatives of the velocity and density fields. Recognizing this fact, for the purpose of consistently recovering the preconditioned NS equations on a rectangular lattice grid, it suffices to introduce corrections to only the second order components of the moment equilibria, for which we write the following expressions: \begin{eqnarray}\label{eq:31} n^{eq(1)}_j = \begin{cases} \theta_{3x} \partial_x u_x + \theta_{3y} \partial_y u_y + \lambda_{3x} \partial_x \rho + \lambda_{3y} \partial_y \rho & \quad j =3\\ \theta_{4x} \partial_x u_x - \theta_{4y} \partial_y u_y + \lambda_{4x} \partial_x \rho + \lambda_{4y} \partial_y \rho & \quad j =4\\ \theta_{5x} \partial_x u_x + \theta_{5y} \partial_y u_y + \lambda_{5x} \partial_x \rho + \lambda_{5y} \partial_y \rho & \quad j =5\\ 0 & \quad \mbox{otherwise}.\\ \end{cases} \end{eqnarray} Here, $\theta_{jx}$, $\theta_{jy}$, $\lambda_{jx}$, $\lambda_{jy}$, where $j=3,4,$ and $5$, are the unknown coefficients, which will be determined by carrying out a modified C-E expansion that includes the extended moment equilibria given in Eq.~\eqref{eq:30}. Thus, replacing the expansions appearing in Eq.~\eqref{eq:C-Eexpansion} with \begin{eqnarray} \label{eq:C-Eexpansion1} \mathbf{n} &=& \mathbf{n}^\mathit{eq,eff}+\epsilon \mathbf{n}^{(1)} + \epsilon^2 \mathbf{n}^{(2)}+ \ldots = \mathbf{n}^{(0)}+\underline{\epsilon \mathbf{n}^{eq(1)}} +\epsilon \mathbf{n}^{(1)}+ \epsilon^2 \mathbf{n}^{(2)}+ \ldots \nonumber\\ \partial_t&=&\partial_{t_0} +\epsilon \partial_{t_1} + \epsilon^2 \partial_{t_2}+ \ldots, \end{eqnarray} and performing the same steps that follow Eq.~\eqref{eq:C-Eexpansion}, but now using Eq.~\eqref{eq:C-Eexpansion1}, the evolution of the moment system at various orders of $\epsilon$ in Eq.~\eqref{eq:16} modifies to the following by accounting for the presence of the corrections $\mathbf{n}^{eq(1)}$: \begin{subequations} \begin{eqnarray}\label{eq:16-extendedequilibria} &O (\epsilon^0 ): \mathbf{n}^{(0)} = \mathbf{n}^\mathit{eq}, \label{eq:16-extendedequilibria-a}\\ &O (\epsilon^1 ): \left({\partial_{t_0}} + \tensor{E}_i \partial_i\right) \mathbf{n}^{(0)} = - \tensor{\Lambda} \left[ \mathbf{n}^{(1)}- \underline{\mathbf{n}^{eq(1)}}\right] +\mathbf{\Psi}, \label{eq:16-extendedequilibria-b} \\ &O (\epsilon^2 ): {\partial_{t_1}} \mathbf{n}^{(0)} + \left({\partial_{t_0}} + \tensor{E}_i \partial_i\right) \left[\left( \tensor{I} - \frac{\tensor{\Lambda}} {2} \right)\mathbf{n}^{(1)}\right] + \left({\partial_{t_0}} + \tensor{E}_i \partial_i\right) \underline{\left[ \frac{\tensor{\Lambda}} {2} \mathbf{n}^{eq(1)}\right]}= - \tensor{\Lambda}\mathbf{n}^{(2)}.
\label{eq:16-extendedequilibria-c} \end{eqnarray} \end{subequations} In view of the derivation given in the previous section and the changes appearing in Eq.~\eqref{eq:16-extendedequilibria-b} relative to Eq.~\eqref{eq:16b}, the second order non-equilibrium moments for the rectangular lattice with preconditioning in Eq.~\eqref{eq:2ndordernoneqmmomentswitherrorterms} modify to \begin{subequations}\label{eq:2ndordernoneqmmomentswitherrorterms-extended} \begin{eqnarray} n_3^{(1)} &=& -\frac{2 q^2 c_{s*}^2}{\omega_3} \rho \bm{\nabla}\cdot \bm{u}+ E_{3g}+ E_{3s}+\underline{n_3^{eq(1)}}, \label{eq:2ndordernoneqmmomentswitherrorterms-extended1}\\ n_4^{(1)} &=&-\frac{2 q^2 c_{s*}^2 }{\omega_4} \rho \left(\partial_x u_x - \partial_y u_y \right)+ E_{4g}+ E_{4s}+\underline{n_4^{eq(1)}}, \label{eq:2ndordernoneqmmomentswitherrorterms-extended2}\\ n_5^{(1)} &=& -\frac{q^2 c_{s*}^2\rho}{\omega_5} \left(\partial_x u_y + \partial_y u_x \right)+ E_{5g}+\underline{n_5^{eq(1)}}, \label{eq:2ndordernoneqmmomentswitherrorterms-extended3} \end{eqnarray} \end{subequations} where the error terms $E_{3g}$, $E_{3s}$, $E_{4g}$, $E_{4s}$, and $E_{5g}$ are given in the previous section in Eqs.~\eqref{eq:truncationerrors-gridanisotropy} and~\eqref{eq:truncationerrors-nonGI}. Now, in order to derive explicit formulas for the corrections $n_3^{eq(1)}$, $n_4^{eq(1)}$ and $n_5^{eq(1)}$, we need certain constraint relationships between them and the error terms. These follow from combining Eq.~\eqref{eq:16-extendedequilibria-b} and $\epsilon$ times Eq.~\eqref{eq:16-extendedequilibria-c} and using the expressions for the non-equilibrium moments in Eq.~\eqref{eq:2ndordernoneqmmomentswitherrorterms-extended} along with $\partial_t=\partial_{t_0} + \epsilon \partial_{t_1}$, so as to obtain the effective evolution equations for the conserved moments. These equations include both the truncation error terms identified earlier and the unknown corrections, whose combined effects are set to zero so that the evolution equations correspond to the preconditioned NS equations; see, e.g., Refs.~\cite{Hajabdollahi201897,yahia2021central,yahia2021three} for the details of these steps. Writing the truncation error terms compactly in the form of a vector $\mathbf{\Xi}$ as \begin{equation}\label{eq:errorvector} \mathbf{\Xi}=\left(\varphi_{0},\varphi_{1},\varphi_{2},\dots,\varphi_{8}\right)^{\dag}, \end{equation} where \begin{eqnarray}\label{eq:errorvector1} \varphi_j = \begin{cases} E_{3s}+ E_{3g}& \quad j =3\\ E_{4s}+ E_{4g}& \quad j =4\\ E_{5g}& \quad j =5\\ 0 & \quad \mbox{otherwise}, \end{cases} \end{eqnarray} the necessary constraint equation between the vector of moment corrections $\mathbf{n}^{eq(1)}$, whose functional forms with unknown coefficients are given in Eq.~\eqref{eq:31}, and the above vector holding the truncation errors $\mathbf{\Xi}$ (see Eqs.~\eqref{eq:errorvector} and~\eqref{eq:errorvector1}) reads as \begin{equation}\label{eq:constraint-errors-corrections} \mathbf{n}^{eq(1)}+ \left( \tensor{I} - \frac{\tensor{\Lambda}} {2} \right)\mathbf{\Xi}=0, \end{equation} which, in component form, becomes \begin{equation}\label{eq:constraint-errors-corrections-component-form} n_j^{eq(1)}+ \left( 1 - \frac{\omega_j} {2} \right)(E_{js}+ E_{jg})=0, \quad j=3,4,5.
\end{equation} Evaluating Eq.~\eqref{eq:constraint-errors-corrections-component-form} using Eqs.~\eqref{eq:31},~\eqref{eq:errorvector} and~\eqref{eq:errorvector1} for $j=3, 4$ and $5$, respectively, we get \begin{eqnarray*} &&\theta_{3x} \partial_x u_x + \theta_{3y} \partial_y u_y + \lambda_{3x} \partial_x \rho + \lambda_{3y} \partial_y \rho = -\left( 1 - \dfrac{\omega_3} {2} \right)E_{3g} -\left( 1 - \dfrac{\omega_3} {2} \right) E_{3s} \\ &&= -\left(\dfrac{1}{\omega_3}- \dfrac{1}{2}\right) \left[M_3 +(3 q^2 c_{s*}^2 -1)\rho \right] \partial_x u_x -\left(\dfrac{1}{\omega_3}- \dfrac{1}{2}\right) \left[N_3 +(3 q^2 c_{s*}^2 -r^2)\rho \right]\partial_y u_y \\ &&- \left(\dfrac{1}{\omega_3}- \dfrac{1}{2}\right) \left[\left(\dfrac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -1 \right] u_x \partial_x \rho- \left(\dfrac{1}{\omega_3}- \dfrac{1}{2}\right) \left[\left(\dfrac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -r^2 \right]u_y \partial_y \rho, \end{eqnarray*} \begin{eqnarray*} &&\theta_{4x} \partial_x u_x - \theta_{4y} \partial_y u_y + \lambda_{4x} \partial_x \rho + \lambda_{4y} \partial_y \rho = -\left( 1 - \dfrac{\omega_4} {2} \right)E_{4g} -\left( 1 - \dfrac{\omega_4} {2} \right) E_{4s} \\ &&= -\left(\dfrac{1}{\omega_4}- \dfrac{1}{2}\right) \left[M_4 +(3 q^2 c_{s*}^2 -1)\rho \right] \partial_x u_x -\left(\dfrac{1}{\omega_4}- \dfrac{1}{2}\right) \left[N_4 -(3 q^2 c_{s*}^2 -r^2)\rho \right]\partial_y u_y \\ &&- \left(\dfrac{1}{\omega_4}- \dfrac{1}{2}\right) \left[\left(\dfrac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -1 \right] u_x \partial_x \rho+ \left(\dfrac{1}{\omega_4}- \dfrac{1}{2}\right) \left[\left(\dfrac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -r^2 \right]u_y \partial_y \rho, \end{eqnarray*} \begin{eqnarray*} &&\theta_{5x} \partial_x u_x + \theta_{5y} \partial_y u_y + \lambda_{5x} \partial_x \rho + \lambda_{5y} \partial_y \rho = -\left( 1 - \dfrac{\omega_5} {2} \right)E_{5g} \\ &&= -\left(\dfrac{1}{\omega_5}- \dfrac{1}{2}\right) \left(\dfrac{1}{\gamma^2}-\dfrac{1}{\gamma} \right) \rho u_x u_y (\partial_x u_x +\partial_y u_y) -\left(\dfrac{1}{\omega_5}- \dfrac{1}{2}\right) \left(\dfrac{1}{\gamma}-1 \right) q^2 c_{s*}^2 \left(u_y \partial_x \rho +u_x \partial_y \rho\right). \end{eqnarray*} Comparing the terms involving the spatial gradients of the same type of quantity on each side of the above three equations, we finally get the coefficients for the correction terms in the second order moment equilibria as \begin{subequations}\label{eq:38} \begin{eqnarray} \theta_{3x}&=& -\Big[ M_3 + \left(3 q^2 c_{s*}^2- 1\right)\rho \Big] \left( \frac{1}{\omega_3}- \frac{1}{2}\right),\\ \theta_{3y}&=& -\Big[ N_3 + \left(3 q^2 c_{s*}^2- r^2\right)\rho \Big] \left(\frac{1}{\omega_3}- \frac{1}{2}\right),\\ \lambda_{3x}&=& - \left[\left(\frac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -1 \right] \left(\frac{1}{\omega_3}- \frac{1}{2}\right) u_x,\\ \lambda_{3y}&=& - \left[\left(\frac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -r^2 \right]\left(\frac{1}{\omega_3}- \frac{1}{2}\right) u_y, \end{eqnarray} \end{subequations} \begin{subequations}\label{eq:39} \begin{eqnarray} \theta_{4x}&=& -\Big[ M_4 + \left(3 q^2 c_{s*}^2- 1\right)\rho \Big] \left( \frac{1}{\omega_4}- \frac{1}{2}\right),\\ \theta_{4y}&=& +\Big[ N_4 - \left(3 q^2 c_{s*}^2- r^2\right)\rho \Big] \left(\frac{1}{\omega_4}- \frac{1}{2}\right),\\ \lambda_{4x}&=& - \left[\left(\frac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -1 \right] \left(\frac{1}{\omega_4}- \frac{1}{2}\right) u_x,\\ \lambda_{4y}&=& + \left[\left(\frac{2}{\gamma}+1 \right) q^2 c_{s*}^2 -r^2 \right]\left(\frac{1}{\omega_4}- \frac{1}{2}\right) u_y, \end{eqnarray} \end{subequations} \begin{subequations}\label{eq:40} \begin{eqnarray} \theta_{5x}&=& -\left(\frac{1}{\gamma^2}-\frac{1}{\gamma} \right) \left( \frac{1}{\omega_5}- \frac{1}{2}\right)\rho u_x u_y ,\\ \theta_{5y}&=& -\left(\frac{1}{\gamma^2}-\frac{1}{\gamma} \right) \left( \frac{1}{\omega_5}- \frac{1}{2}\right)\rho u_x u_y,\\ \lambda_{5x}&=& - \left(\frac{1}{\gamma}-1 \right) \left(\frac{1}{\omega_5}- \frac{1}{2}\right) q^2 c_{s*}^2 u_y,\\ \lambda_{5y}&=& - \left(\frac{1}{\gamma}-1 \right) \left(\frac{1}{\omega_5}- \frac{1}{2}\right) q^2 c_{s*}^2 u_x. \end{eqnarray} \end{subequations} These expressions (Eqs.~\eqref{eq:38}-\eqref{eq:40}), together with Eqs.~\eqref{eq:30} and~\eqref{eq:31}, are among the main results of this work that contribute towards formulating a new preconditioned LB approach on a rectangular lattice grid. The above choices for the moment equilibria corrections, which depend on both the grid aspect ratio $r$ and the preconditioning parameter $\gamma$, ensure that the resulting algorithm using a rectangular lattice represents the preconditioned NS equations, with the shear viscosity $\nu$ and bulk viscosity $\xi$ satisfying the following relationships among the various model parameters: \begin{equation}\label{eq:transport-coefficients} \nu =\gamma q^2 c_{s*}^2 \left( \frac{1}{\omega_j}- \frac{1}{2} \right)\Delta t,\quad j=4,5, \;\; \quad \xi = \gamma q^2 c_{s*}^2 \left( \frac{1}{\omega_3}- \frac{1}{2} \right)\Delta t, \end{equation} where the optimal value of $c_{s*}^2$ is $1/3$, and the emergent pressure field $p$ is given by $p=\gamma q^2 c_{s*}^2 \rho$. We emphasize here that the simple expressions given in Eq.~\eqref{eq:transport-coefficients} self-consistently parameterize the transport coefficients in terms of $q$, which is given in Eq.~\eqref{eq:2}, and maintain the desired numerical stability in LB simulations using rectangular lattice grids. Unlike in previous works (see e.g.,~\cite{peng2016lattice,peng2016hydrodynamically,zong2016designing}), there is no need to rely on trial and error to adjust the speed of sound when a rectangular lattice is used and the grid aspect ratio is varied to any desired value. \subsection{Strain rate tensor components based on non-equilibrium moments} We will now show that the diagonal components of the strain rate tensor, $\partial_x u_x$ and $\partial_y u_y$, which appear in the moment equilibria corrections given in Eqs.~\eqref{eq:30},~\eqref{eq:31} and Eqs.~\eqref{eq:38}-\eqref{eq:40}, can be computed locally via second-order non-equilibrium moments.
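Before doing so, we note that the correction coefficients obtained above are purely algebraic functions of the local fields and the model parameters. A minimal Python sketch (with hypothetical helper names of our own, not the authors' code) that collects Eqs.~\eqref{eq:38}-\eqref{eq:40} together with the prefactors of Eq.~\eqref{eq:prefactors} is given below; as a check, for $r=1$ and $\gamma=1$, all the $\lambda$ coefficients vanish and only the cubic-velocity parts of the $\theta$ coefficients survive:
\begin{verbatim}
CS_STAR2 = 1.0 / 3.0

def correction_coefficients(rho, ux, uy, r, gamma, omega3, omega4, omega5):
    """Coefficients of the second order moment equilibria corrections,
    Eqs. (38)-(40), with M3, N3, M4, N4 from Eq. (prefactors)."""
    qc2 = min(1.0, r)**2 * CS_STAR2          # q^2 c_{s*}^2
    g1, g2 = 1.0 / gamma, 1.0 / gamma**2
    M3 = rho * ((4*g2 - g1)*ux**2 + (g2 - g1)*uy**2)
    N3 = rho * ((4*g2 - g1)*uy**2 + (g2 - g1)*ux**2)
    M4 = rho * ((4*g2 - g1)*ux**2 - (g2 - g1)*uy**2)
    N4 = rho * (-(4*g2 - g1)*uy**2 + (g2 - g1)*ux**2)
    h3, h4, h5 = (1/omega3 - 0.5), (1/omega4 - 0.5), (1/omega5 - 0.5)
    return {
        'theta3x': -(M3 + (3*qc2 - 1.0)*rho) * h3,
        'theta3y': -(N3 + (3*qc2 - r*r)*rho) * h3,
        'lambda3x': -((2*g1 + 1.0)*qc2 - 1.0) * h3 * ux,
        'lambda3y': -((2*g1 + 1.0)*qc2 - r*r) * h3 * uy,
        'theta4x': -(M4 + (3*qc2 - 1.0)*rho) * h4,
        'theta4y': +(N4 - (3*qc2 - r*r)*rho) * h4,
        'lambda4x': -((2*g1 + 1.0)*qc2 - 1.0) * h4 * ux,
        'lambda4y': +((2*g1 + 1.0)*qc2 - r*r) * h4 * uy,
        'theta5x': -(g2 - g1) * h5 * rho * ux * uy,
        'theta5y': -(g2 - g1) * h5 * rho * ux * uy,
        'lambda5x': -(g1 - 1.0) * h5 * qc2 * uy,
        'lambda5y': -(g1 - 1.0) * h5 * qc2 * ux,
    }
\end{verbatim}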
First, using Eqs.~\eqref{eq:2ndordernoneqmmomentswitherrorterms-extended1} and~\eqref{eq:2ndordernoneqmmomentswitherrorterms-extended2}, and simplifying via Eq.~\eqref{eq:constraint-errors-corrections-component-form}, we obtain \begin{eqnarray*} n_3^{(1)}&=& -\dfrac{ 2 q^2 c_{s*}^2}{\omega_3} \rho \left(\partial_x u_x + \partial_y u_y \right)+ \dfrac{\omega_3}{2} E_{3g}+ \dfrac{\omega_3}{2} E_{3s},\\ n_4^{(1)}&=& -\dfrac{ 2 q^2 c_{s*}^2}{\omega_4} \rho \left(\partial_x u_x - \partial_y u_y \right)+ \dfrac{\omega_4}{2} E_{4g}+ \dfrac{\omega_4}{2} E_{4s}. \end{eqnarray*} Then, substituting for $E_{3s}$, $E_{4s}$, $E_{3g}$, and $E_{4g}$ using Eqs.~\eqref{eq:truncationerrors-gridanisotropy} and \eqref{eq:truncationerrors-nonGI} in the last two equations and rearranging them leads to \begin{eqnarray}\label{eq:41} &\left[-\dfrac{2 q^2 c_{s*}^2}{\omega_3}\rho+ \dfrac{M_3}{2} + \dfrac{1}{2}(3 q^2 c_{s*}^2-1) \rho \right] \partial_x u_x + \left[-\dfrac{2 q^2c_{s*}^2}{\omega_3}\rho + \dfrac{N_3}{2}+ \dfrac{1}{2}(3 q^2 c_{s*}^2-r^2) \rho \right] \partial_y u_y \nonumber \\ &=n_3^{(1)}-\dfrac{1}{2} \left[ \left(\dfrac{2}{\gamma}+1\right) q^2 c_{s*}^2-1 \right] u_x \partial_x \rho - \dfrac{1}{2} \left[ \left(\dfrac{2}{\gamma}+1\right) q^2 c_{s*}^2-r^2 \right] u_y \partial_y \rho, \end{eqnarray} \begin{eqnarray}\label{eq:42} &\left[-\dfrac{2 q^2 c_{s*}^2}{\omega_4}\rho+ \dfrac{M_4}{2} + \dfrac{1}{2}(3 q^2 c_{s*}^2-1) \rho \right] \partial_x u_x + \left[+\dfrac{2 q^2c_{s*}^2}{\omega_4}\rho + \dfrac{N_4}{2}- \dfrac{1}{2}(3 q^2 c_{s*}^2-r^2) \rho \right] \partial_y u_y \nonumber \\ &=n_4^{(1)}-\dfrac{1}{2} \left[ \left(\dfrac{2}{\gamma}+1\right) q^2 c_{s*}^2-1 \right] u_x \partial_x \rho + \dfrac{1}{2} \left[ \left(\dfrac{2}{\gamma}+1\right) q^2 c_{s*}^2-r^2 \right] u_y \partial_y \rho. \end{eqnarray} Based on Eqs.~\eqref{eq:41} and \eqref{eq:42}, the required local expressions for the diagonal components of the strain rate tensor $\partial_x u_x$ and $\partial_y u_y$ can be obtained. In this regard, we first introduce the following intermediate variables \begin{subequations}\label{eq:43} \begin{eqnarray} A&=& \cfrac{1}{2} \left[\left(\frac{2}{\gamma}+1\right) q^2 c_{s*}^2 - 1 \right] u_x,\qquad B = \cfrac{1}{2} \left[\left(\frac{2}{\gamma}+1\right) q^2 c_{s*}^2 - r^2 \right] u_y,\\ e_{3\rho}&=& -A \partial_x \rho - B \partial_y \rho,\qquad\qquad\qquad e_{4\rho}= -A \partial_x \rho + B \partial_y \rho, \end{eqnarray} \end{subequations} where the density gradients $\partial_x \rho$ and $\partial_y \rho$ may be obtained via an isotropic finite-difference scheme. The non-equilibrium moments $n_3^{(1)}$ and $n_4^{(1)}$ appearing in Eqs.~\eqref{eq:41} and \eqref{eq:42} can be computed using either raw moments or central moments as \begin{eqnarray*} n_3^{(1)}&=& \left(k_{20}^\prime + k_{02}^\prime \right)- \left(k_{20}^{eq \prime} + k_{02}^{eq\prime} \right)= \left(k_{20} + k_{02} \right)- \left(k_{20}^{eq} + k_{02}^{eq} \right) \nonumber\\ \quad &=& \left(k_{20} + k_{02} \right)- 2 q^2 c_{s*}^2\rho- \left(\frac{1}{\gamma}-1\right) (u_x^2 +u_y^2)\rho,\\ n_4^{(1)}&=& \left(k_{20}^\prime - k_{02}^\prime \right)- \left(k_{20}^{eq \prime} - k_{02}^{eq\prime} \right)= \left(k_{20} - k_{02} \right)- \left(k_{20}^{eq} - k_{02}^{eq} \right) \nonumber\\ \quad &=& \left(k_{20} - k_{02} \right)- \left(\frac{1}{\gamma}-1\right) (u_x^2 -u_y^2)\rho.
\end{eqnarray*} Based on these considerations, we can conveniently identify the right and left sides of Eqs.~\eqref{eq:41} and \eqref{eq:42} by further introducing the following additional intermediate variables \begin{subequations}\label{eq:45} \begin{eqnarray} R_3 &=& n_3^{(1)}+ e_{3\rho} = k_{20} + k_{02}- 2 q^2 c_{s*}^2\rho- \left(\frac{1}{\gamma}-1\right) (u_x^2 +u_y^2)\rho +e_{3\rho},\\ R_4 &=& n_4^{(1)}+ e_{4\rho} = k_{20} - k_{02} - \left(\frac{1}{\gamma}-1\right) (u_x^2 -u_y^2)\rho + e_{4\rho}, \end{eqnarray} \end{subequations} and \begin{subequations}\label{eq:46} \begin{eqnarray} &&C_{3x}= \left[-\frac{2 q^2 c_{s*}^2}{\omega_3}\rho + \frac{M_3}{2} + \frac{1}{2}(3 q^2 c_{s*}^2-1)\rho \right],\quad C_{3y} = \left[-\frac{2 q^2 c_{s*}^2}{\omega_3}\rho + \frac{N_3}{2} + \frac{1}{2}(3 q^2 c_{s*}^2-r^2)\rho \right],\\ &&C_{4x}= \left[-\frac{2 q^2 c_{s*}^2}{\omega_4} \rho + \frac{M_4}{2} + \frac{1}{2}(3 q^2 c_{s*}^2-1)\rho \right],\quad C_{4y} = \left[+\frac{2 q^2 c_{s*}^2}{\omega_4} \rho + \frac{N_4}{2} - \frac{1}{2}(3 q^2 c_{s*}^2-r^2)\rho \right], \end{eqnarray} \end{subequations} where $M_3$, $N_3$, $M_4$ and $N_4$ are given in Eq.~\eqref{eq:prefactors}. Then, Eqs.~\eqref{eq:41} and \eqref{eq:42} can be more compactly written as \begin{subequations}\label{eq:47} \begin{eqnarray} C_{3x}\partial_x u_x + C_{3y}\partial_y u_y &=& R_3,\\ C_{4x}\partial_x u_x + C_{4y}\partial_y u_y &=& R_4. \end{eqnarray} \end{subequations} Solving the last two equations, we finally get the required local expressions for the diagonal parts of the strain rate tensor as follows: \begin{equation}\label{eq:48} \partial_x u_x = \frac{\left[ C_{4y}R_3 - C_{3y}R_4 \right]}{\left[ C_{3x} C_{4y} - C_{4x} C_{3y} \right]},\qquad\qquad \partial_y u_y = \frac{1}{C_{3y}} \left[ R_3 - C_{3x} \partial_x u_x \right]. \end{equation} For completeness, we note that a similar relation for the off-diagonal component ($\partial_x u_y+ \partial_y u_x$) follows from combining Eqs.~\eqref{eq:2ndordernoneqmmomentswitherrorterms-extended3} and~\eqref{eq:constraint-errors-corrections-component-form} and then simplifying via Eq.~\eqref{eq:truncationerrors-nonGI}. \section{Preconditioned Rectangular Central Moment Lattice Boltzmann Method (PRC-LBM)}\label{sec:4} In this section, we will present a robust and efficient implementation of a LB algorithm on rectangular lattice grids for solving the preconditioned NS equations, using and extending the results of the C-E analysis performed in the last section. In this regard, we note that the effect of the moment basis $\tensor{T}$ as defined in Eqs.~\eqref{eq:7} and~\eqref{eq:8} will be equivalently utilized in our implementation in a more modular fashion, so that LB schemes based on the square lattice can be readily extended to rectangular lattice grids along with preconditioning and the necessary corrections. This involves using a simpler re-defined moment basis in conjunction with diagonal scaling matrices based on the grid aspect ratio for performing the pre- and post-collision transformations between the raw moments and distribution functions, segregating the evolution of the trace of the second order moments from the others to achieve independent variations of the bulk and shear viscosities~\cite{geier2015cumulant}, and accounting for the corrections that eliminate the grid-anisotropy and non-GI truncation errors, only within the collision step under moment relaxations (see Ref.~\cite{yahia2021three}).
In other words, the linear combinations of moments, where required, are considered only for performing the collision step and not for any mappings. This represents an improvement over the implementations discussed in all previous 2D rectangular LB schemes for the solution of the NS equations, including our recent work~\cite{yahia2021central}, and is consistent with our more recent 3D formulation~\cite{yahia2021three}, but extended here with a preconditioning strategy for convergence acceleration. A similar approach based on a natural independent moment set without involving the mixed moments has also been used in previous works~\cite{fei2018modeling,fei2018cascaded} via a block diagonal relaxation matrix~\cite{asinari2008generalized} in the context of LB formulations using a square lattice. \subsection{Reformulation of the Preconditioned Rectangular Raw Moment LBE} Thus, we first introduce a moment basis $\tensor{Q}$, which, unlike $\tensor{T}$ in Eqs.~\eqref{eq:7} and~\eqref{eq:8}, does not contain any combinations of the basis vectors, but only the set of bare basis vectors for the D2Q9 lattice: \begin{eqnarray}\label{eq:Q-momentbasis} \tensor{Q}=\Big[ \; \ket{1}, \ket{e_x}, \ket{e_y},\ket{e_x^2},\ket{e_y^2},\ket{{e_x} {e_y}}, \ket{e_x^2 e_y}, \ket{e_x e_y^2}, \ket{e_x^2 e_y^2}\; \Big] ^{\dag}, \end{eqnarray} where $\ket{e_x}$, $\ket{e_y}$ and $\ket{1}$ are given in Eqs.~\eqref{eq:3a}-\eqref{eq:3b} and \eqref{eq:4}, respectively. Hence, $\tensor{Q}$ depends on the grid aspect ratio $r$. We can relate this moment basis for a \emph{rectangular} lattice $\tensor{Q}$ to an equivalent moment basis for a \emph{square} lattice $\tensor{P}$ given by \begin{eqnarray}\label{eq:P-momentbasis} \tensor{P}=\Big[ \; \ket{1}, \ket{\bar{e}_x}, \ket{\bar{e}_y},\ket{\bar{e}_x^2},\ket{\bar{e}_y^2},\ket{{\bar{e}_x} {\bar{e}_y}}, \ket{\bar{e}_x^2 \bar{e}_y}, \ket{\bar{e}_x \bar{e}_y^2}, \ket{\bar{e}_x^2 \bar{e}_y^2}\; \Big] ^{\dag}, \end{eqnarray} where the particle velocity components of the square lattice $\ket{\bar{e}_x}$ and $\ket{\bar{e}_y}$ are given as \begin{subequations} \begin{eqnarray*} \ket{\bar{e}_{x}} &=& (0,1,0,-1,0,1,-1,-1,1)^\dag, \label{eq:3a-square}\\ \ket{\bar{e}_{y}} &=& (0, 0, 1, 0, -1, 1, 1, -1, -1)^\dag.\label{eq:3b-square} \end{eqnarray*} \end{subequations} Evidently, the two moment basis matrices $\tensor{Q}$ and $\tensor{P}$ can be readily related via a diagonal scaling matrix $\tensor{S}$ as \begin{equation}\label{eq:QandPrelation} \tensor{Q}= \tensor{S} \tensor{P}, \end{equation} where $\tensor{S}$ reads as \begin{equation}\label{eq:S-matrix} \tensor{S} = \mbox{diag}{\begin{bmatrix}\;1 & 1 & r & 1 & r^2 & r & r & r^2 & r^2 \; \end{bmatrix}}. \end{equation} Importantly, from Eq.~\eqref{eq:QandPrelation}, the matrix inverse of $\tensor{Q}$ follows directly from the inverse of $\tensor{P}$ for the square lattice, which is quite straightforward to obtain, and the inverse of the scaling matrix $\tensor{S}$ via \begin{equation}\label{eq:QandPrelation-inverse} \tensor{Q}^{-1}= \tensor{P}^{-1}\tensor{S}^{-1}, \end{equation} where $\tensor{S}^{-1}$ is obtained from Eq.~\eqref{eq:S-matrix}, which, being a diagonal matrix, is inverted by simply taking the reciprocal of each of its elements, i.e., \begin{equation}\label{eq:S-matrix-inverse} \tensor{S}^{-1} = \mbox{diag}{\begin{bmatrix}\;1 & 1 & r^{-1} & 1 & r^{-2} & r^{-1} & r^{-1} & r^{-2} & r^{-2} \; \end{bmatrix}}.
\end{equation} In other words, $\tensor{Q}^{-1}$ for the rectangular lattice is easy to obtain from the corresponding $\tensor{P}^{-1}$ by appropriate scalings of the latter's elements based on the grid aspect ratio $r$. By contrast, since $\tensor{T}$ is defined using combinations of the basis vectors, its inverse $\tensor{T}^{-1}$ involves cumbersome expressions with parameterizations based on $r$. This fact confers a significant advantage to using $\tensor{Q}$ (and its inverse) rather than $\tensor{T}$ in performing the mappings between moments and distribution functions~\cite{yahia2021three}, and it is thus adopted in designing our LB algorithm in what follows. However, as mentioned earlier, the effect of such combinations should still be accounted for in the evolution of the moments, which we accomplish by formally introducing a matrix $\tensor{B}$ in \begin{equation}\label{eq:TandQrelation} \tensor{T}= \tensor{B} \tensor{Q}. \end{equation} Thus, $\tensor{B}$ expresses the combinations of the moments (for the second order components $\ket{e_x^2+e_y^2}$ and $\ket{e_x^2-e_y^2}$ in the case of the D2Q9 lattice), which will be effectively introduced in the LB scheme only in the evolution of the corresponding combinations of moments under collision, and not for the mappings. For this purpose, using the moment basis defined by $\tensor{Q}$, we can then define a set of bare moments $\mathbf{m}$ from the distribution functions $\mathbf{f}$ (and vice versa) using \begin{eqnarray}\label{eq:f-Q-mappings} \mathbf{m}= \tensor{Q} \mathbf{f}, \qquad \mathbf{f}={\tensor{Q}}^{-1} \mathbf{m}, \end{eqnarray} where $\mathbf{m}$ is given by \begin{equation}\label{baremoments-m} \mathbf{m}=\left(k_{00}^\prime,k_{10}^\prime,k_{01}^\prime,k_{20}^\prime,k_{02}^\prime, k_{11}^\prime, k_{21}^\prime, k_{12}^\prime, k_{22}^\prime\right)^{\dag}, \end{equation} and similarly for the sets of raw moment equilibria and the source terms, respectively, via $\mathbf{m}^{eq}= \tensor{Q} \mathbf{f}^{eq}$ and $\mathbf{\Phi}= \tensor{Q} \mathbf{S}$ as \begin{subequations} \begin{eqnarray}\label{baremoments-meq-phi} \mathbf{m}^{eq}&=&\left(k_{00}^{eq\prime},k_{10}^{eq\prime},k_{01}^{eq\prime},k_{20}^{eq\prime},k_{02}^{eq\prime}, k_{11}^{eq\prime}, k_{21}^{eq\prime}, k_{12}^{eq\prime}, k_{22}^{eq\prime}\right)^{\dag},\\ \mathbf{\Phi}&=&\left(\sigma_{00}^\prime,\sigma_{10}^\prime,\sigma_{01}^\prime,\sigma_{20}^\prime,\sigma_{02}^\prime, \sigma_{11}^\prime, \sigma_{21}^\prime, \sigma_{12}^\prime, \sigma_{22}^\prime\right)^{\dag}. \end{eqnarray} \end{subequations} These represent the simpler bare-moment versions of the combined-moment quantities given in Eq.~\eqref{eq:9AA} (which followed from Eq.~\eqref{eq:9}). From the above developments, by exploiting the properties of the various matrices introduced and rearranging, the preconditioned raw-moment MRT-LBE in Eq.~\eqref{eq:10} can be rewritten in the following equivalent form (see Ref.~\cite{yahia2021three} for details): \begin{equation}\label{eq:10-equivalent} \mathbf{f} (\bm{x}+\mathbf{e}\Delta t, t+\Delta t) = \tensor{P}^{-1}\tensor{S}^{-1} \Big[\mathbf{m} + \tensor{B}^{-1}\tensor{\Lambda}\;\left(\; \tensor{B}\mathbf{m}^{eq}-\tensor{B}\mathbf{m} \;\right) + \tensor{B}^{-1}\left(\tensor{I} - \frac{\tensor{\Lambda}}{2}\right) \tensor{B}\mathbf{\Phi}\Delta t \Big].
\end{equation} This equation (Eq.~\eqref{eq:10-equivalent}) can be more conveniently represented by splitting it into the following sequence of sub-steps that are amenable to implementation: \begin{eqnarray} \mathbf{m}&=&\tensor{S}\tensor{P}\mathbf{f},\nonumber\\ \tilde{\mathbf{m}}&=&\mathbf{m} + \tensor{B}^{-1}\left\{\tensor{\Lambda}\;\left(\; \tensor{B}\mathbf{m}^{eq}-\tensor{B}\mathbf{m} \;\right) + \left(\tensor{I} - \frac{\tensor{\Lambda}}{2}\right) \tensor{B}\mathbf{\Phi}\Delta t\right\},\nonumber\\ \tilde{\mathbf{f}} (\bm{x},t) &=& \tensor{P}^{-1}\tensor{S}^{-1}\tilde{\mathbf{m}},\nonumber\\ \mathbf{f} (\bm{x}+\mathbf{e}\Delta t, t+\Delta t)&=&\tilde{\mathbf{f}} (\bm{x},t).\label{eq:LBErawmomentrectangularlattice} \end{eqnarray} Here, we emphasize that $\tensor{P}$ and $\tensor{P}^{-1}$ perform the transformations between the distribution functions and raw moments in the same way as for the usual \emph{square} lattice using the non-orthogonal moment basis; $\tensor{S}$ and $\tensor{S}^{-1}$ reflect the simple scalings of the raw moments, before and after collision, respectively, by factors based on the grid aspect ratio and the order of the moment; and $\tensor{B}$ and $\tensor{B}^{-1}$ represent, respectively, the combining of the moments prior to their relaxations under collision with the addition of the source terms, and their subsequent segregation. Equation~\eqref{eq:LBErawmomentrectangularlattice} expresses the preconditioned rectangular LBM based on \emph{raw moments}. As such, this scheme, which includes both of the numerical enhancement features, viz., preconditioning and rectangular lattice grids together, even in the context of raw moments, is new and suitable for implementation. Nevertheless, a number of prior studies (see e.g.,~\cite{ning2016numerical,chavez2018improving,hajabdollahi2019cascaded,hajabdollahi2020local,hajabdollahi2021central,adam2019numerical,adam2021cascaded}), including those involving rectangular/cuboid lattices~\cite{yahia2021central,yahia2021three}, have demonstrated that constructing LB schemes involving the relaxations of \emph{central moments} under collision offers significant improvements in numerical stability over those based on raw moments. Hence, in this work we will only implement and perform a numerical study on the generalization of the above developments to central moments, viz., the preconditioned rectangular central moment LBE, which will be discussed next and followed by a summary of its algorithmic steps.
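As an illustration of these sub-steps, a minimal Python sketch of the pre-streaming (collision) update of Eq.~\eqref{eq:LBErawmomentrectangularlattice} may be written as follows (an illustrative sketch with helper names of our own; the equilibria $\mathbf{m}^{eq}$ and sources $\mathbf{\Phi}$, with the corrections of Sec.~\ref{sec:3}, are assumed to be provided, and the final streaming step simply shifts the returned distributions along the lattice links):
\begin{verbatim}
import numpy as np

def bare_basis_square():
    """Non-orthogonal bare moment basis P of the square D2Q9 lattice."""
    ex = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1], dtype=float)
    ey = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1], dtype=float)
    rows = [np.ones(9), ex, ey, ex**2, ey**2, ex*ey,
            ex**2 * ey, ex * ey**2, ex**2 * ey**2]
    return np.stack(rows)

def scaling_matrix(r):
    """Diagonal scaling S of Eq. (S-matrix), so that Q = S P."""
    return np.diag([1.0, 1.0, r, 1.0, r*r, r, r, r*r, r*r])

def combine_matrix():
    """B combines the second order moments into k20'+k02' and k20'-k02'."""
    B = np.eye(9)
    B[3, 3], B[3, 4] = 1.0, 1.0
    B[4, 3], B[4, 4] = 1.0, -1.0
    return B

def collide_raw_moments(f, m_eq, Phi, r, Lambda, dt=1.0):
    """Pre-streaming update of the preconditioned rectangular
    raw-moment scheme; Lambda is the diagonal relaxation matrix."""
    P, S, B = bare_basis_square(), scaling_matrix(r), combine_matrix()
    m = S @ P @ f                                    # bare raw moments
    relax = Lambda @ (B @ m_eq - B @ m)              # relax combined moments
    source = (np.eye(9) - 0.5 * Lambda) @ (B @ Phi) * dt
    m_post = m + np.linalg.solve(B, relax + source)  # segregate post-collision
    return np.linalg.solve(P, np.linalg.solve(S, m_post))  # back to f
\end{verbatim}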
\subsection{Formulation of the Preconditioned Rectangular Central Moment LBE} For this purpose, we will utilize the independently supported bare central moments defined in Eq.~\eqref{eq:9B} for the D2Q9 lattice and collect them in the form of the following vectors: \begin{subequations}\label{eq:centralmoments-mc} \begin{eqnarray} \mathbf{m}^c&=&\left(k_{00},k_{10},k_{01},k_{20},k_{02}, k_{11}, k_{21}, k_{12}, k_{22}\right)^{\dag}, \\ \mathbf{m}^{c,eq}&=&\left(k_{00}^{eq},k_{10}^{eq},k_{01}^{eq},k_{20}^{eq},k_{02}^{eq}, k_{11}^{eq}, k_{21}^{eq}, k_{12}^{eq}, k_{22}^{eq}\right)^{\dag},\\ \mathbf{\Phi}^c&=&\left(\sigma_{00},\sigma_{10},\sigma_{01},\sigma_{20},\sigma_{02}, \sigma_{11}, \sigma_{21}, \sigma_{12}, \sigma_{22}\right)^{\dag}. \end{eqnarray} \end{subequations} Now, the raw moments defined in Eq.~\eqref{eq:9A} can be related to the central moments in Eq.~\eqref{eq:9B} via straightforward binomial expansions involving the former in combinations with monomials of the fluid velocity components at different orders (of the form $u_x^p u_y^q$). Thus, the mappings from the raw moments to central moments (and vice versa) can be formally expressed as \begin{eqnarray}\label{eq:m-mc-F-mappings} \mathbf{m}^c= \tensor{F} \mathbf{m}, \qquad \mathbf{m}={\tensor{F}}^{-1} \mathbf{m}^c, \end{eqnarray} where $\tensor{F}$ is referred to as the frame transformation matrix reflecting the binomial transforms of moments at different orders supported by the D2Q9 lattice. Such a formulation representing the transformations between the raw moments and central moments for the complete set supported by the lattice in the form of a shift matrix was first introduced by Fei and Luo in~\cite{fei2017consistent,fei2018three}. It is given by \begin{equation} \label{eq:Fmatrix} \tensor{F} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\[4pt] -u_x & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\[4pt] -u_y & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\[4pt] u_x ^2 +u_y ^2 & -2 u_x & -2 u_y & 1 & 0 & 0 & 0 & 0 & 0 \\[4pt] u_x ^2 -u_y ^2 & -2 u_x & 2 u_y & 0 & 1 & 0 & 0 & 0 & 0 \\[4pt] u_x u_y & -u_y & -u_x & 0 & 0 & 1 & 0 & 0 & 0 \\[4pt] -u_x ^2 u_y & 2 u_x u_y & u_x ^2 & - \frac{1}{2} u_y & - \frac{1}{2} u_y & -2 u_x & 1 & 0 & 0 \\[4pt] -u_x u_y ^2 & u_y ^2 & 2 u_x u_y & - \frac{1}{2} u_x & \frac{1}{2} u_x & -2 u_y & 0 & 1 & 0 \\[4pt] u_x^2 u_y ^2 & -2 u_x u_y ^2 & -2 u_x^2 u_y & \frac{1}{2} (u_x^2+u_y^2) & \frac{1}{2} (u_y^2 -u_x^2) & 4 u_x u_y & -2 u_y & -2 u_x & 1 \end{bmatrix}. \end{equation} Also, as noted in Ref.~\cite{yahia2021central}, its inverse ${\tensor{F}}^{-1}$ can be read off directly from the elements of ${\tensor{F}}={\tensor{F}}(u_x,u_y)$ with minor changes by exploiting the following property that exists for such transforms: ${\tensor{F}}^{-1}={\tensor{F}}(-u_x,-u_y)$. Thus, both of them are naturally lower triangular matrices.
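For reference, a direct transcription of this frame transformation matrix into Python (illustrative only) reads:
\begin{verbatim}
import numpy as np

def frame_matrix(ux, uy):
    """Frame transformation (shift) matrix F of Eq. (Fmatrix); per the
    property noted above, its inverse is frame_matrix(-ux, -uy)."""
    return np.array([
        [1, 0, 0, 0, 0, 0, 0, 0, 0],
        [-ux, 1, 0, 0, 0, 0, 0, 0, 0],
        [-uy, 0, 1, 0, 0, 0, 0, 0, 0],
        [ux**2 + uy**2, -2*ux, -2*uy, 1, 0, 0, 0, 0, 0],
        [ux**2 - uy**2, -2*ux, 2*uy, 0, 1, 0, 0, 0, 0],
        [ux*uy, -uy, -ux, 0, 0, 1, 0, 0, 0],
        [-ux**2 * uy, 2*ux*uy, ux**2, -0.5*uy, -0.5*uy, -2*ux, 1, 0, 0],
        [-ux * uy**2, uy**2, 2*ux*uy, -0.5*ux, 0.5*ux, -2*uy, 0, 1, 0],
        [ux**2 * uy**2, -2*ux * uy**2, -2*ux**2 * uy,
         0.5*(ux**2 + uy**2), 0.5*(uy**2 - ux**2), 4*ux*uy, -2*uy, -2*ux, 1],
    ], dtype=float)
\end{verbatim}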
Then, by an analogy with Eq.~\eqref{eq:10-equivalent}, we can write the following preconditioned rectangular central moment LBE by involving relaxations of the central moments $\mathbf{m}^c$ under collision (rather than the raw moments $\mathbf{m}$) and including the additional transforms between them and the raw moments (via $\tensor{F}$ and $\tensor{F}^{-1}$)~\cite{yahia2021three}: \begin{equation}\label{eq:10-equivalent-centralmoment} \mathbf{f} (\bm{x}+\mathbf{e}\Delta t, t+\Delta t) = \tensor{P}^{-1}\tensor{S}^{-1}\tensor{F}^{-1} \Big[\mathbf{m}^c + \tensor{B}^{-1}\tensor{\Lambda}\;\left(\; \tensor{B}\mathbf{m}^{c,eq}-\tensor{B}\mathbf{m}^c \;\right) + \tensor{B}^{-1}\left(\tensor{I} - \frac{\tensor{\Lambda}}{2}\right) \tensor{B}\mathbf{\Phi}^c\Delta t \Big]. \end{equation} It may be noted that Eq.~(\ref{eq:10-equivalent-centralmoment}) has some similarities with the formulations presented by Luo and collaborators~\cite{fei2017consistent,fei2018three}, where the cascaded central moment LB method was presented in a generalized MRT framework on a square lattice. Our notations follow from those presented in an earlier work of the second author~\cite{premnath2009incorporating} for the frame transformation matrix $\tensor{F}$. This matrix, as shown in Eq.~(\ref{eq:Fmatrix}), is identical to the shift matrix used in~\cite{fei2017consistent,fei2018three}. Nevertheless, there are some key differences: Eq.~(\ref{eq:10-equivalent-centralmoment}) also involves the forward and inverse scaling transformations (via diagonal matrices) related to the grid aspect ratio $r$ to accommodate the use of a rectangular lattice in a modular fashion. Moreover, for the collision step, Refs.~\cite{fei2017consistent,fei2018three} combine the relaxation parameters of the second order moments similarly to that in~\cite{asinari2008generalized}. By contrast, here, to execute the collision step, the second order moments are combined prior to collision, relaxed at independent rates to their equilibria (with appropriate corrections based on $r$ and $\gamma$), and then segregated post-collision. Such a strategy for performing the collision was presented by Geier \emph{et al.}~\cite{geier2015cumulant}, and the present work can be considered as an extension of such an approach for performing flow simulations on rectangular lattices with preconditioning. Also, it should be noted that not all collision models admit interpretations based on matrices. For example, the highly sophisticated and nonlinear cumulant LB scheme cannot be represented in the form of matrices. Hence, the algorithms (including the special cases for raw moments and central moments) presented in~\cite{geier2015cumulant} are given only as a series of sub-steps, and no matrices are utilized in this regard. Thus, to maintain the generality of our approach, in what follows, we will represent our PRC-LBM in the form of a sequence of operations that conveys the essence of Eq.~(\ref{eq:10-equivalent-centralmoment}).
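Anticipating the sub-step summary given next, a minimal Python sketch of the corresponding collision update in central-moment space, reusing the hypothetical helpers \texttt{bare\_basis\_square}, \texttt{scaling\_matrix}, \texttt{combine\_matrix}, and \texttt{frame\_matrix} from the sketches above, reads:
\begin{verbatim}
import numpy as np

def collide_central_moments(f, mc_eq, Phi_c, r, ux, uy, Lambda, dt=1.0):
    """Pre-streaming update of the PRC-LBM; mc_eq and Phi_c are the
    central moment equilibria and sources (Eqs. (14), (13-central),
    with the corrections of Sec. 3 added to the second order terms)."""
    P, S, B = bare_basis_square(), scaling_matrix(r), combine_matrix()
    mc = frame_matrix(ux, uy) @ S @ P @ f            # bare central moments
    relax = Lambda @ (B @ mc_eq - B @ mc)            # relax combined moments
    source = (np.eye(9) - 0.5 * Lambda) @ (B @ Phi_c) * dt
    mc_post = mc + np.linalg.solve(B, relax + source)
    m_post = frame_matrix(-ux, -uy) @ mc_post        # F^{-1} = F(-ux, -uy)
    return np.linalg.solve(P, np.linalg.solve(S, m_post))
\end{verbatim}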
This last equation (Eq.~\eqref{eq:10-equivalent-centralmoment}) can then be more conveniently split up into various sub-steps, which results in the following preconditioned rectangular central moment LBM or PRC-LBM: \begin{eqnarray} \mathbf{m}^c&=&\tensor{F}\tensor{S}\tensor{P}\mathbf{f},\nonumber\\ \tilde{\mathbf{m}}^c&=&\mathbf{m}^c + \tensor{B}^{-1}\left\{\tensor{\Lambda}\;\left(\; \tensor{B}\mathbf{m}^{c,eq}-\tensor{B}\mathbf{m}^c \;\right) + \left(\tensor{I} - \frac{\tensor{\Lambda}}{2}\right) \tensor{B}\mathbf{\Phi}^c\Delta t\right\},\nonumber\\ \tilde{\mathbf{f}} (\bm{x},t) &=& \tensor{P}^{-1}\tensor{S}^{-1}\tensor{F}^{-1}\tilde{\mathbf{m}}^c,\nonumber\\ \mathbf{f} (\bm{x}+\mathbf{e}\Delta t, t+\Delta t)&=&\tilde{\mathbf{f}} (\bm{x},t).\label{eq:LBEcentralmomentrectangularlattice} \end{eqnarray} This PRC-LBM is a numerically more robust formulation than its raw moment counterpart given earlier in Eq.~\eqref{eq:LBErawmomentrectangularlattice}. The algorithmic details of the PRC-LBM that facilitate its implementation are discussed in~\ref{sec:algorithmic-details-PRC-LBM}.
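To convey the structure of these sub-steps compactly, the following Python/NumPy sketch implements the collision portion of Eq.~(\ref{eq:LBEcentralmomentrectangularlattice}) at a single lattice node. It is schematic: the arguments \texttt{P}, \texttt{S}, \texttt{B}, \texttt{Lam}, \texttt{m\_c\_eq}, and \texttt{Phi\_c} are placeholders standing for the matrices $\tensor{P}$, $\tensor{S}$, $\tensor{B}$, $\tensor{\Lambda}$ and the corrected central moment equilibria and sources defined in the preceding sections (whose explicit forms are not repeated here), and the streaming sub-step, which acts on the full grid, is omitted:
\begin{verbatim}
import numpy as np

def prc_lbm_collide(f, P, S, F_u, B, Lam, m_c_eq, Phi_c, dt=1.0):
    # Collision sub-steps of the PRC-LBM at one node.
    # f      : (9,) distribution functions
    # P, S   : preconditioning and grid-aspect-ratio scaling matrices (diagonal)
    # F_u    : frame transformation matrix F(ux, uy) at the local fluid velocity
    # B      : matrix that combines/segregates the second order moments
    # Lam    : diagonal matrix of moment relaxation parameters
    # m_c_eq : (9,) corrected central moment equilibria (depend on gamma and r)
    # Phi_c  : (9,) central moments of the source terms
    m_c = F_u @ S @ P @ f                  # raw -> scaled -> central moments
    I = np.eye(9)
    Binv = np.linalg.inv(B)                # in practice precomputed once
    m_c_post = m_c + Binv @ (Lam @ (B @ m_c_eq - B @ m_c)
                             + (I - 0.5*Lam) @ (B @ Phi_c) * dt)
    # map back; note F^{-1}(ux, uy) = F(-ux, -uy) and P, S are diagonal,
    # so no linear solves are actually required in an implementation
    return np.linalg.inv(P) @ np.linalg.inv(S) @ np.linalg.inv(F_u) @ m_c_post
\end{verbatim}
In an actual implementation these matrix products are unrolled into closed-form expressions for efficiency; the matrix form above is meant only to mirror the sequence of operations.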
\section{Results and discussion} \label{sec:5} We will now discuss some case studies based on the PRC-LB algorithm for simulations of shear flows at various characteristic parameters, which provide its numerical validation against certain benchmark problems and demonstrate the significant advantages of combining the rectangular lattice grid and preconditioning over the LB scheme based on the square lattice without preconditioning. In this regard, as noted at the end of~\ref{sec:algorithmic-details-PRC-LBM}, in what follows, the no-slip boundary condition for the moving walls, which generate the shear flows, is accounted for via the momentum augmented half-way bounce back approach, including the parametrization for the grid aspect ratio of the rectangular lattice given in our recent work~\cite{yahia2021central}. \subsection{2D Shear Flows in Lid-driven Square Cavity using PRC-LBM: Validation} First, we will assess the accuracy of the PRC-LBM for the simulation of the classical flow within a \emph{square} cavity of side $H$, whose top surface moves at a constant velocity $U$, setting up flow patterns that depend on the Reynolds number $\mbox{Re}=UH/\nu$. We performed simulations at Reynolds numbers of~$\mbox{Re}=100$, $1000$, and $3200$ at a fixed Mach number~$\mbox{Ma}=0.05$. A rectangular lattice with a grid aspect ratio of $r=0.5$ and a grid resolution of $N_x\times N_y=200\times 400$ is employed, setting the preconditioning parameter $\gamma=0.1$ in our algorithm given in~\ref{sec:algorithmic-details-PRC-LBM}. The numerical results of the horizontal and vertical components of the velocity profiles along the centerlines of the cavity predicted by the PRC-LBM at the above three choices of $\mbox{Re}$ are compared against the benchmark numerical solutions of Ghia \emph{et al.}~\cite{ghia1982high} in Fig.~\ref{fig:Restreams}. It is evident that the PRC-LBM results are in very good agreement with the benchmark data.
\begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$\mbox{Re}=100$ ] { \includegraphics[width=.4\textwidth] {uvplotRe100} \label{fig:1a} } \subfloat[$\mbox{Re}=1000$] { \includegraphics[width=.4\textwidth] {uvplotRe1000} \label{fig:1b} } \\ \subfloat[$\mbox{Re}=3200$ ] { \includegraphics[width=.4\textwidth] {uvplotRe3200} \label{fig:1c} } \advance\leftskip0cm \caption{The components of the velocity profiles $u(y)$ and $v(x)$ along the vertical and horizontal centerlines of a square cavity, i.e., $x=H/2$ and $y=H/2$ respectively, computed using the PRC-LBM on a rectangular lattice grid of aspect ratio $r=0.5$ with the preconditioning parameter $\gamma=0.1$ at \mbox{Ma}$=0.05$ for Reynolds numbers of (a)~\mbox{Re}$=100$, (b)~\mbox{Re}$=1000$, (c)~\mbox{Re}$=3200$ and compared with the benchmark numerical solutions of Ref.~\cite{ghia1982high} (symbols).} \label{fig:Restreams} \end{figure} \subsection{2D Shear Flows in Lid-driven Shallow and Deep Cavities using PRC-LBM: Validation and Convergence Acceleration} Next, we will demonstrate the accuracy and computational advantages of using the PRC-LBM for computing anisotropic and inhomogeneous shear flows inside \emph{rectangular} cavities of length $L$ and height $H$, characterized by the geometric aspect ratio $\mbox{AR}=H/L$. As shown in Fig.~\ref{fig:schematicavity}, we consider two cases: (a) a shallow cavity with aspect ratio $\mbox{AR}<1$ and (b) a deep cavity with aspect ratio $\mbox{AR}>1$. In each case, the flow is set up by the motion of the upper lid with a velocity $U$ in the positive $x$ direction, which generates vortices that differ in size and shape based on the Reynolds number $\mbox{Re}=UH/\nu$. The confinement effect characterized by the aspect ratio $\mbox{AR}$ results in characteristic flow scales, or spatial gradients of the velocity, that can differ between the coordinate directions, which can be more naturally and efficiently resolved by using a rectangular lattice grid. In our previous work, we illustrated the benefits of using the rectangular lattice over the square lattice for simulating such flows within shallow cavities, with fewer grid nodes required for the former~\cite{yahia2021central}. In the current study, we aim to show the further improvements from utilizing preconditioning with rectangular lattice grids for convergence acceleration of flows to their steady states, resulting in dramatic cumulative advantages over simulating such flows using the square lattice without preconditioning. \begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[Shallow cavity ($\mbox{AR}<1$)] { \includegraphics[width=0.4\textwidth]{shallowcavity} \label{fig:schematicshallowcavity} } \subfloat[Deep cavity ($\mbox{AR}>1$)] { \includegraphics[width=0.25\textwidth]{deepcavity} \label{fig:schematicdeepcavity} } \\ \advance\leftskip0cm \caption{Schematic arrangements of the flows inside 2D (a) shallow and (b) deep cavities of dimensions $L\times H$.} \label{fig:schematicavity} \end{figure} In this regard, first, simulations of the flow inside a shallow rectangular cavity of aspect ratio $\mbox{AR}=0.25$ at a Reynolds number~$\mbox{Re}=100$ and Mach number~$\mbox{Ma}=0.06$ are performed using a rectangular lattice with grid aspect ratio $r=\Delta y/\Delta x=0.2$.
If $N_x$ and $N_y$ are the numbers of grid nodes resolving the cavity in the $x$ and $y$ directions, respectively, the grid spacings in the respective directions satisfy $\Delta x=L/N_x$ and $\Delta y=H/N_y$, or $r=\mbox{AR}\,N_x/N_y$. Thus, $N_x=(r/\mbox{AR})N_y$ in the case of the rectangular lattice and $N_x=(1/\mbox{AR})N_y$ for the square lattice, where $r=1$. Choosing $N_y=125$ results in $100\times 125$ as the total number of grid nodes for the rectangular lattice, while even a somewhat smaller value of $N_y=100$ requires a total of $400\times 100$ grid nodes for the square lattice, which is significantly more than the former. Thus, if the computed results in each case are in agreement with one another, this, by itself, is a saving in memory storage and computational cost by a factor of over $3$ in using the rectangular lattice.
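These bookkeeping relations are simple enough to be spelled out explicitly; the following few lines of Python (added purely as an illustration of the counts just quoted) reproduce the grid sizes and the resulting saving factor:
\begin{verbatim}
# Grid sizing for a cavity of geometric aspect ratio AR = H/L resolved by a
# lattice of grid aspect ratio r = dy/dx, using Nx = (r/AR)*Ny.
AR = 0.25
Ny_rect, r = 125, 0.2
Nx_rect = int(round(r/AR*Ny_rect))     # 100 (rectangular lattice)
Ny_sq = 100
Nx_sq = int(round(1.0/AR*Ny_sq))       # 400 (square lattice, r = 1)
saving = (Nx_sq*Ny_sq)/(Nx_rect*Ny_rect)
print(Nx_rect, Ny_rect, Nx_sq, Ny_sq, saving)   # 100 125 400 100 3.2
\end{verbatim}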
Then, the effect of different levels of preconditioning in the PRC-LBM is studied by considering $\gamma=1.0, 0.5, 0.1$ and $0.05$. Figures~\ref{ushallow_Re100} and \ref{vshallow_Re100} show comparisons of the centerline velocity components $u$ and $v$ computed using the PRC-LBM with $r=0.2$ and $100\times 125$ grid nodes for different $\gamma$ with the results obtained using the square lattice ($r=1$) with $400\times 100$ grid nodes and without preconditioning ($\gamma=1$). It can be seen that the PRC-LBM results are in remarkably good agreement, for the entire range of choices of $\gamma$, with the corresponding velocity profiles for the square lattice case. \begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$\mbox{AR}=0.25$, \mbox{Re}=100, $r=0.2$] { \includegraphics[width=0.45\textwidth]{uvelocityMa01} \label{ushallow_Re100} } \subfloat[$\mbox{AR}=0.25$, \mbox{Re}=100, $r=0.2$] { \includegraphics[width=0.45\textwidth]{vvelocityMa01} \label{vshallow_Re100} } \\ \advance\leftskip0cm \caption{The velocity profiles along the centerlines of a shallow rectangular cavity of aspect ratio $\mbox{AR}=0.25$ at a Reynolds number~$\mbox{Re}=100$ and Mach number~$\mbox{Ma}=0.06$ computed using the PRC-LBM with a grid resolution $100\times 125$ using the rectangular lattice of grid aspect ratio $r=0.2$ with different levels of preconditioning, i.e., $\gamma=1.0, 0.5, 0.1$ and $0.05$, and compared with the results obtained using a square lattice ($r=1.0$) with a grid resolution of $400\times 100$ at $\gamma=1.0$. (a) $u$ component of the velocity along the vertical centerline, and (b) $v$ component of the velocity along the horizontal centerline.} \label{fig:velocityprofile1} \end{figure} Then, in order to verify the benefits of utilizing the preconditioning procedure with the rectangular lattice, we study the convergence histories to the steady state using the PRC-LBM at $r=0.2$ for two values of the Mach number and different $\gamma$ in each case. Figure~\ref{histconverge} shows the convergence histories at $\mbox{Ma}=0.06$ with $\gamma=1, 0.5, 0.1$, and $0.08$, and Fig.~\ref{Mahistconverge} shows those at the smaller Mach number $\mbox{Ma}=0.01$ with $\gamma=1, 0.5, 0.1$, and $0.05$, where the residual global error of the $u$ velocity component is estimated under the second norm as $||u(t+20)-u(t)||_2$. At $\mbox{Ma}=0.06$, it takes about $610,000$ steps to reach the steady state without preconditioning ($\gamma=1$), while the PRC-LBM with $\gamma=0.08$ requires only about $24,000$ steps to reach a similar residual error, leading to a dramatic reduction in the number of steps for convergence by a factor of about $25$ in this case.
On the other hand, at the lower $\mbox{Ma}=0.01$, the PRC-LBM takes about $5,000,000$ steps without preconditioning, while only about $97,000$ with preconditioning (using $\gamma=0.05$), an even larger improvement corresponding to a reduction factor of about $51$. Clearly, at lower Mach numbers, the disparities between the flow speed and the sound speed are larger, and the associated higher stiffness is alleviated to a greater degree with preconditioning, which is consistent with previous investigations on square lattice grids (see e.g.,~\cite{guo2004preconditioned,hajabdollahi2019improving}). Noting that we have already reduced the computational cost by involving the rectangular lattice grids when compared to that using the square lattice, preconditioning the rectangular central moment LBM provides a further, i.e., cumulative improvement in solving steady state flow problems more efficiently. However, it should be noted that while using smaller values of $\gamma$ does favor faster convergence, its smallest possible value is limited by numerical stability considerations (see e.g.,~\cite{guo2004preconditioned,premnath2009steady,izquierdo2009optimal}). The optimal level of preconditioning is thus a compromise between convergence rate and stability. Typically, the minimum possible $\gamma$ is found to be proportional to the Mach number used in the simulations. \begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$\mbox{Ma}=0.06$, $r=0.2$] { \includegraphics[width=0.45\textwidth]{vhistMa06} \label{histconverge} } \subfloat[$\mbox{Ma}=0.01$, $r=0.2$] { \includegraphics[width=0.45\textwidth]{vhistM01shallow} \label{Mahistconverge} } \\ \advance\leftskip0cm \caption{Convergence histories to the steady state for simulations of flows within a shallow rectangular cavity of aspect ratio $\mbox{AR}=0.25$ at a Reynolds number~\mbox{Re}$=100$ using the PRC-LBM with a grid resolution of $100\times 125$ using a rectangular lattice of grid aspect ratio $r=0.2$ with different values of the preconditioning parameter $\gamma$ at (a) Mach number~\mbox{Ma}$=0.06$, and (b) Mach number~\mbox{Ma}$=0.01$.} \label{fig:2} \end{figure} Next, we simulate the flow inside a deep cavity ($H> L$) as shown in Fig.~\ref{fig:schematicdeepcavity} by considering $\mbox{AR}=2$ at $\mbox{Re}=100$ and $\mbox{Ma}=0.06$ using the PRC-LBM with $r=1.6$. Choosing $N_x=100$ for $r=1.6$ leads to $N_y=125$ for the rectangular lattice case, while for the square lattice case, with $N_x=100$, we need $N_y=200$, i.e., more grid nodes in the $y$ direction. Results shown in Fig.~\ref{fig:deepvelocityprofile} present comparisons of the centerline profiles of the components of the velocity across the deep cavity computed using the PRC-LBM with $\gamma=1.0, 0.5, 0.1$ and $0.05$ using $100\times 125$ rectangular grids with $r=1.6$ against those based on the square lattice ($r=1$) with $100\times 200$ grid nodes and $\gamma=1.0$. As in the shallow cavity case, it can be seen that the preconditioned central moment LBM, for all the choices of $\gamma$ and with fewer grid nodes, delivers solutions that are in very good agreement with those based on the square lattice.
\begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$\mbox{AR}=2$, $\mbox{Re}=100$, $r=1.6$] { \includegraphics[width=0.45\textwidth]{uvelocityAR2} \label{udeep_Re100} } \subfloat[$\mbox{AR}=2$, $\mbox{Re}=100$, $r=1.6$] { \includegraphics[width=0.45\textwidth]{vvelocityAR2} \label{vdeep_Re100} } \\ \advance\leftskip0cm \caption{The velocity profiles along the centerlines of a deep rectangular cavity of aspect ratio $\mbox{AR}=2$ at a Reynolds number~$\mbox{Re}=100$ and Mach number~$\mbox{Ma}=0.06$ computed using the PRC-LBM with a grid resolution $100\times 125$ using the rectangular lattice of grid aspect ratio $r=1.6$ with different levels of preconditioning, i.e., $\gamma=1.0, 0.5, 0.1$ and $0.05$, and compared with the results obtained using a square lattice ($r=1.0$) with a grid resolution of $100\times 200$ at $\gamma=1.0$. (a) $u$ component of the velocity along the vertical centerline, and (b) $v$ component of the velocity along the horizontal centerline.} \label{fig:deepvelocityprofile} \end{figure} Moreover, the convergence histories presented in Fig.~\ref{fig:deepvelocityhist} for deep cavity flow simulations at $\mbox{AR}=2$ and \mbox{Re}$=100$ using the PRC-LBM with $r=1.6$ and various levels of preconditioning again show a significant reduction in the number of steps for convergence -- for example, by a factor of $14$ with $\gamma=0.08$ for $\mbox{Ma}=0.06$, and a factor of $23$ with $\gamma=0.05$ for $\mbox{Ma}=0.02$, when compared to the corresponding cases without preconditioning. For the latter case with $\mbox{Ma}=0.02$, it may be noted that the choice $\gamma = 0.05$ is almost at the threshold of its smallest possible value dictated by stability considerations. As a result, it may lead to a transient effect, with the convergence history temporarily crossing over that of the case $\gamma = 0.1$ when the residual error is O($10^{-12}$). Nevertheless, when the residual error drops further down to O($10^{-13}$) or smaller, as expected, the convergence histories show that the case $\gamma = 0.05$ results in faster convergence to the steady state when compared to $\gamma = 0.1$. \begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$\mbox{Ma}=0.06$, $r=1.6$] { \includegraphics[width=0.45\textwidth]{vhistAR2Ma06} \label{udeephistMa06} } \subfloat[$\mbox{Ma}=0.02$, $r=1.6$] { \includegraphics[width=0.45\textwidth]{vhistAR2Ma02} \label{vdeephistMa02} } \\ \advance\leftskip0cm \caption{Convergence histories to the steady state for simulations of flows within a deep rectangular cavity of aspect ratio~$\mbox{AR}=2$ at a Reynolds number~\mbox{Re}$=100$ using the PRC-LBM with a grid resolution of $100\times 125$ using a rectangular lattice of grid aspect ratio $r=1.6$ with different values of the preconditioning parameter $\gamma$ at (a) Mach number~\mbox{Ma}$=0.06$, and (b) Mach number~\mbox{Ma}$=0.02$.} \label{fig:deepvelocityhist} \end{figure} As a last case study, we perform an investigation of the efficacy of the PRC-LBM for simulations of flows inside a deep cavity ($\mbox{AR}=2$) at a higher Reynolds number of $\mbox{Re}=1000$ at $\mbox{Ma}=0.06$, using a grid resolution of $N_x\times N_y=100\times 150$ corresponding to a grid aspect ratio of $r=1.33$. The results of the centerline velocity component profiles obtained using the PRC-LBM at $\gamma=1.0, 0.75, 0.5$ and $0.1$ are reported in Fig.~\ref{fig:deepvelocityRe1000} and compared against the square lattice results using $100\times 200$ grid nodes at $\gamma=1.0$.
Evidently, the rectangular LB scheme with all the choices of $\gamma$ yields solutions that are again in very good agreement with those based on the square lattice. Moreover, these results are further corroborated by the plots of the streamline contours at the two Reynolds numbers~$\mbox{Re}=100$ and $\mbox{Re}=1000$, simulated using the square lattice ($r=1$) and the rectangular lattice ($r=1.6$ for $\mbox{Re}=100$ and $r=1.33$ for $\mbox{Re}=1000$) and presented in Fig.~\ref{fig:streamlines}, which show the ability of the PRC-LBM to compute the flow patterns accurately. \begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$\mbox{AR}=2$, $\mbox{Re}=1000$, $r=1.33$] { \includegraphics[width=0.45\textwidth]{uprofdeepRe1000} \label{udeepMa06Re1000} } \subfloat[$\mbox{AR}=2$, $\mbox{Re}=1000$, $r=1.33$] { \includegraphics[width=0.45\textwidth]{vprofdeepRe1000} \label{vdeepMa06Re1000} } \\ \advance\leftskip0cm \caption{The velocity profiles along the centerlines of a deep rectangular cavity of aspect ratio $\mbox{AR}=2$ at a Reynolds number~$\mbox{Re}=1000$ and Mach number~$\mbox{Ma}=0.06$ computed using the PRC-LBM with a grid resolution $100\times 150$ using the rectangular lattice of grid aspect ratio $r=1.33$ with different levels of preconditioning, i.e., $\gamma=1.0, 0.75, 0.5$ and $0.1$, and compared with the results obtained using a square lattice ($r=1.0$) with a grid resolution of $100\times 200$ at $\gamma=1.0$. (a) $u$ component of the velocity along the vertical centerline, and (b) $v$ component of the velocity along the horizontal centerline.} \label{fig:deepvelocityRe1000} \end{figure} \begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$\mbox{Re}=100$, $\mbox{AR}=2$] { \includegraphics[width=0.45\textwidth]{PRCstreamlinesRe100} \label{streamlinesRe100} } \subfloat[$\mbox{Re}=1000$, $\mbox{AR}=2$] { \includegraphics[width=0.45\textwidth]{PRCstreamlinesRE1000} \label{streamsdeepRE1000} } \\ \advance\leftskip0cm \caption{Streamline contours of the flow field in a 2D rectangular deep cavity of aspect ratio $\mbox{AR}=2$ computed using the PRC-LBM with $\gamma=0.1$ on a rectangular lattice at (a) \mbox{Re}$=100$ using $r=1.6$, and (b) \mbox{Re}$=1000$ using $r=1.33$, and, in each case, compared with the results of the non-preconditioned LBM using the square lattice ($\gamma=1.0$ and $r=1$).} \label{fig:streamlines} \end{figure} Finally, the convergence histories presented in Fig.~\ref{fig:vhistdeepRe1000} for the deep cavity flow simulations using a rectangular lattice grid ($r=1.33$) at $\mbox{Re}=1000$ and $\mbox{Ma}=0.06$ show that it takes about $11,154,000$ steps to reach the steady state without preconditioning, while the PRC-LBM with preconditioning ($\gamma=0.1$) requires significantly fewer steps, about $583,000$, to reach similar residual errors, delivering an improvement by a factor of about $19$. In general, the higher $\mbox{Re}=1000$ case takes longer to reach the steady state when compared to the case at the lower $\mbox{Re}=100$ (shown in Fig.~\ref{fig:deepvelocityhist}), since the former is set up by reducing the fluid viscosity relative to the latter, resulting in slower diffusion of momentum and hence slower convergence. However, thanks to preconditioning, even at the higher $\mbox{Re}$, the rectangular central moment LBM is able to achieve substantial savings in the overall computational effort.
\begin{figure}[H] \centering \advance\leftskip-1.7cm \includegraphics[width=0.45\textwidth]{vhistdeepRe1000} \\ \advance\leftskip0cm \caption{Convergence histories to the steady state for simulations of flows within a deep rectangular cavity of aspect ratio $\mbox{AR}=2$ at a Reynolds number~$\mbox{Re}=1000$ using the PRC-LBM with a grid resolution of $100\times 150$ using a rectangular lattice of grid aspect ratio $r=1.33$ with different values of the preconditioning parameter $\gamma$ at Mach number~$\mbox{Ma}=0.06$.} \label{fig:vhistdeepRe1000} \end{figure} \section{Comparisons between Preconditioned Rectangular LB Formulations based on Raw Moments and Central Moments} \label{sec:comparisonsbetweenmodels} As noted in the introduction, no formulation other than the present work that combines both preconditioning and rectangular lattice grids is available in the literature. Nevertheless, it should be noted that the derivation presented in Sec.~\ref{sec:3} can be utilized to construct a preconditioned rectangular \emph{raw} moment LBM (referred to as the PRNR-LBM in what follows) by performing the collision step in terms of relaxations involving the raw moments, using the corrections to the equilibria based on $\gamma$ and $r$ given in Sec.~\ref{subsec:correctionsextendemomentequilibria}. The implementation of this strategy is summarized in Eq.~(\ref{eq:LBErawmomentrectangularlattice}). As such, this PRNR-LBM can be regarded as a special case of the PRC-LBM based on central moments. Here, we note that the single-relaxation-time (SRT) formulations are well known to have serious deficiencies when compared to other collision models. For example, they are significantly less stable in simulating flows at relatively low viscosities or large Reynolds numbers, even when compared to the approaches based on raw moments, and it is quite cumbersome to combine in them the use of both preconditioning and rectangular lattice grids. Given these issues, they are not given further consideration in this work, and we will focus our attention on making comparisons between the PRC-LBM and PRNR-LBM in what follows. First, we point out that, provided a given collision model yields numerically stable flow simulations, the convergence rate to the steady state is primarily influenced by the choice of the preconditioning parameter, and the use of two different collision models under otherwise similar conditions, such as the same choice of model parameters, is expected to result in a similar convergence speed. Figure~\ref{fig:vhistPRcPRNR} presents convergence histories for the PRC-LBM and PRNR-LBM for the simulation of flow within a shallow cavity with aspect ratio $\mbox{AR}=0.25$, $r=0.5$, $\mbox{Re}=100$, and $\mbox{Ma}=0.06$ for two different values of $\gamma$ (equal to $0.1$ and $0.5$). For these choices, both collision models result in the same convergence rate to the steady state for a fixed preconditioning parameter.
\begin{figure}[H] \centering \advance\leftskip-1.7cm \includegraphics[width=0.45\textwidth]{vhistPRCRNR} \\ \advance\leftskip0cm \caption{Convergence histories to the steady state for simulations of flows within a shallow rectangular cavity of aspect ratio $\mbox{AR}=0.25$ at a Reynolds number~$\mbox{Re}=100$ using the PRC-LBM and PRNR-LBM with a grid resolution of $100\times 50$ using a rectangular lattice of grid aspect ratio $r=0.5$ with two different values of the preconditioning parameter $\gamma$ at Mach number~$\mbox{Ma}=0.06$.} \label{fig:vhistPRcPRNR} \end{figure} On the other hand, we will now demonstrate that the advantage of performing the collision step in a frame of reference based on the local fluid velocity, as in the PRC-LBM, over using the PRNR-LBM, which involves the rest or lattice frame of reference, is related to the improved robustness or numerical stability of the computations. In this regard, we investigate the maximum Reynolds number achieved by the PRC-LBM and PRNR-LBM for simulating flows within a shallow cavity with an aspect ratio of $\mbox{AR}=0.25$ on a rectangular grid with $r=0.25$, by maintaining the lid velocity at a relatively small constant value of $U=0.02$ and reducing the shear viscosity of the fluid to the smallest possible value for which the computations remain numerically stable. The shear viscosity is varied by changing the relaxation parameters associated with the second order moments, while the relaxation parameters for the higher order moments are set to unity for simplicity in both collision models. Two different grid resolutions of $100\times100$ and $200\times200$ are considered, and in each case, three different choices of the preconditioning parameter $\gamma = 1.0$, $0.5$ and $0.2$ are used. The results of these stability tests are presented in Fig.~\ref{fig:numericalstabilitytest}. Clearly, even for the relatively low lid velocity considered, the preconditioned rectangular LBM using central moments is consistently more stable than the preconditioned rectangular LBM using raw moments, especially at smaller $\gamma$. These results extend those presented in Ref.~\cite{yahia2021central} by including preconditioning effects. Further improvements in numerical stability are possible when simulating shear flows with larger characteristic velocities. Moreover, as emphasized by various studies involving central moments in LBM (see e.g.,~\cite{hajabdollahi2019cascaded, hajabdollahi2021central}), the additional computational overhead of using central moments relative to raw moments is small, about 15\%, and comes with the benefit of stable simulations at significantly higher Reynolds numbers.
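The stability boundary itself can be located systematically. The following Python sketch illustrates one simple way to organize such a test; here \texttt{is\_stable} stands for a hypothetical user-supplied driver (not defined in this work) that runs the lid-driven cavity case at a prescribed Reynolds number, e.g., by lowering the viscosity at the fixed lid velocity $U=0.02$, and reports whether the computation remained stable:
\begin{verbatim}
def max_stable_reynolds(is_stable, re_lo=100.0, re_hi=1.0e6, tol=0.01):
    # Bisection search for the largest Reynolds number at which a given LB
    # configuration (collision model, grid, gamma) remains numerically
    # stable; is_stable(Re) is a user-supplied simulation driver.
    if not is_stable(re_lo):
        raise ValueError("unstable even at the lowest Reynolds number tested")
    while is_stable(re_hi):                  # expand the bracket if needed
        re_lo, re_hi = re_hi, 2.0*re_hi
    while (re_hi - re_lo)/re_lo > tol:       # bisect to the desired tolerance
        re_mid = 0.5*(re_lo + re_hi)
        if is_stable(re_mid):
            re_lo = re_mid
        else:
            re_hi = re_mid
    return re_lo
\end{verbatim}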
\begin{figure}[H] \centering \advance\leftskip-1.7cm \subfloat[$N_x \times N_y=100 \times 100$, $\mbox{AR}=0.25$] { \includegraphics[width=0.45\textwidth]{stabilityRe1} \label{numericalstabilitytest-lowresolution} } \subfloat[$N_x \times N_y=200 \times 200$, $\mbox{AR}=0.25$] { \includegraphics[width=0.45\textwidth]{stabilityRe2} \label{numericalstabilitytest-highresolution} } \\ \advance\leftskip0cm \caption{Comparison of the maximum Reynolds number $\mbox{Re}$ for numerical stability of the PRC-LBM and PRNR-LBM for simulations of flows within a shallow rectangular cavity of aspect ratio $\mbox{AR}=0.25$ with a fixed lid velocity of $0.02$ at different mesh resolutions with a grid aspect ratio of $r=0.25$ for three different choices of the preconditioning parameter ($\gamma = 1.0$, $0.5$ and $0.2$).} \label{fig:numericalstabilitytest} \end{figure} \section{Summary and Conclusions} \label{sec:6} In this paper, we have developed a new LB algorithm based on central moments involving a preconditioning strategy and a rectangular lattice grid, viz., the PRC-LBM, for efficient simulations of inhomogeneous and anisotropic flows. By including a preconditioning parameter $\gamma$ in its moment equilibria, and by augmenting its second order components via corrections that eliminate both the anisotropy associated with the rectangular lattice, characterized by the grid aspect ratio $r$, and the non-Galilean invariant velocity terms due to the aliasing effects on the D2Q9 lattice, it consistently represents the preconditioned Navier-Stokes equations. Such corrections to the equilibria, obtained via a Chapman-Enskog analysis, are related to the diagonal components of the velocity gradient tensor, which are expressed in terms of the non-equilibrium moments to facilitate local computations; their coefficients depend simultaneously on both $\gamma$ and $r$. In the construction of the PRC-LBM, unlike the prior rectangular LB formulations, we have used the natural, non-orthogonal moment basis with a physically consistent parametrization of the speed of sound based on $r$, and with $\gamma$-adjusted equilibria obtained through a matching principle based on the continuous Maxwell distribution. These result in simpler correction terms for using rectangular lattice grids in conjunction with preconditioning, which, while used here in terms of central moments for robustness, have general applicability for other collision models. Moreover, our algorithmic implementation is modular in nature, with a clear interpretation based on special matrices, and naturally extends to three dimensions using a cuboid lattice in solving the preconditioned Navier-Stokes equations. The PRC-LBM simulations of benchmark shear-driven flows within shallow and deep cavities with significant geometric anisotropy, at various Reynolds numbers and Mach numbers and with different values of $r$ and $\gamma$, validate the method for accuracy and show improvements in stability when compared to another implementation based on raw moments. Furthermore, we demonstrated significant reductions in the computational cost with the use of the PRC-LBM, via convergence acceleration to the steady states and reduced memory storage, when compared to the corresponding LB scheme using the square lattice without preconditioning. As final concluding remarks, we note the following.
Given the ubiquity of inhomogeneous and anisotropic flows, classical schemes for CFD invariably use stretched grids that adapt to the local flow conditions and rarely utilize uniform square or cubic grids. Thus, the use of square/cubic lattices in LB algorithms is far from optimal in the use of the overall computational resources in performing such flow simulations efficiently. Previous efforts in developing LB schemes based on rectangular lattices (e.g.,~\cite{bouzidi2001lattice, zhou2012mrt, peng2016lattice,peng2016hydrodynamically,zong2016designing}), which were generally based on orthogonal moment bases and non-optimal equilibria, were significantly restrictive in terms of implementation complexity, limited stability ranges, and the cumbersome approach involved in choosing the various model parameters. All these issues have been circumvented in our present rectangular LB formulation involving a non-orthogonal moment basis and the construction of equilibria based on a matching principle. It is modular in construction, in the sense that an LB code for the square lattice can be readily extended to the rectangular lattice with minor effort by making a few simple changes. As shown in the appendix on the algorithmic implementation, such changes include the use of pre-collision and post-collision grid aspect ratio-based scalings of the raw moments and the use of extended second order moment equilibria adjusted suitably based on the grid aspect ratio to recover the Navier-Stokes equations. These simple additions to existing square lattice-based LB codes to accommodate rectangular grids have been numerically demonstrated to result in savings of an order of magnitude or more in computational cost and memory when compared to the square lattice (consistent with Refs.~\cite{yahia2021central, yahia2021three}). Furthermore, the use of preconditioning with rectangular lattice grids results in significant convergence acceleration to the steady states, leading to a further reduction in the overall computational effort. Hence, the PRC-LBM represents an efficient approach for flow simulations. \section*{Acknowledgements} The first author (EY) thanks the Department of Mechanical Engineering at the University of Colorado Denver for financial support. The second author (KNP) acknowledges the support of the US National Science Foundation (NSF) under Grant CBET-1705630, as well as the NSF support for the development of a computer cluster infrastructure through the project ``CC* Compute: Accelerating Science and Education by Campus and Grid Computing'' under Award 2019089.
\begin{document} \title{\LARGE \bf Minimal area surfaces dual to Wilson loops and the Mathieu equation } \author{ Changyu Huang\thanks{E-mail: \texttt{[email protected]}} , Yifei He\thanks{E-mail: \texttt{[email protected]}} , Martin Kruczenski\thanks{E-mail: \texttt{[email protected]}} \\ Department of Physics and Astronomy, Purdue University, \\ 525 Northwestern Avenue, W. Lafayette, IN 47907-2036.} \maketitle \begin{abstract} The AdS/CFT correspondence relates Wilson loops in \N{4} SYM to minimal area surfaces in \adss{5}{5} space. Recently, a new approach to study minimal area surfaces in $\ads{3}\subset\ads{5}$ was discussed based on a Schroedinger equation with a periodic potential determined by the Schwarzian derivative of the shape of the Wilson loop.
Here we use the Mathieu equation, a standard example of a periodic potential, to obtain a class of Wilson loops such that the area of the dual minimal area surface can be computed analytically in terms of the eigenvalues of this equation. As opposed to previous examples, these minimal surfaces have an umbilical point (where the principal curvatures are equal) and are invariant under $\lambda$-deformations. In various limits they reduce to the single and multiply wound circular Wilson loops and to the regular light-like polygons studied by Alday and Maldacena. In this last limit, the periodic potential becomes a series of deep wells, each related to a light-like segment. Small corrections are described by a tight--binding approximation. In the circular limit they are well approximated by an expansion developed by A.~Dekel. In the particular case of no umbilical points they reduce to a previous solution proposed by J.~Toledo. The construction works both in Euclidean and Minkowski signature of \ads{3}. \end{abstract} \clearpage \newpage \section{Introduction} \label{intro} The AdS/CFT correspondence \cite{malda} is a duality between gauge theories and string theory. In the large $N$ limit \cite{largeN} and at strong 't Hooft coupling the strings can be treated semi-classically. Since the classical solution of the string equations of motion is given by an extremal surface, the study of minimal area surfaces in \ads{} space is crucial in exploring the correspondence. In particular, the relation between Wilson loops and minimal area surfaces ending at the boundary \cite{MRY} is an intense area of research \cite{cWL,WLref,WLint}. In the case of the single-contour, smooth Wilson loops that interest us in this paper, analytical results for the minimal area problem are known in the case of near circular Wilson loops \cite{SY,CHMS,Cagnazzo,Dekel}, based on perturbation theory, and for the case of surfaces with no umbilical points\footnote{An umbilical point is a point where both principal curvatures coincide; in terms of the Pohlmeyer reduction described below it is a zero of the analytic function $f(z)$.} \cite{IKZ,KZ,IK}, using Riemann theta functions along the lines of \cite{BB,BBook}, as well as a solution proposed by Toledo \cite{Toledo} using the Y-function method. For surfaces with umbilical points the only analytical results are for light-like Wilson loops with cusps \cite{cusp,scatampl,AMSV}. An exception is the circular Wilson loop, since the dual surface is a half-sphere and all points are umbilical (both curvatures equal). Finally, let us point out that these developments are related to integrability properties of the sigma model that also appear in the closed string case \cite{ClosedStrings}. Going back to the minimal area problem, in this paper we concentrate on the case of Euclidean surfaces embedded in $EAdS_3$ and $AdS_3$, considered as subspaces of \adss{5}{5}. They are dual to space-like Wilson loops embedded in a subspace $\mathbb{R}^2\subset \mathbb{R}^{3,1}$ or $\mathbb{R}^{1,1}\subset \mathbb{R}^{3,1}$. For concreteness, in the rest of the introduction we review the $EAdS_3$ case; the $AdS_3$ case is similar. A standard approach to the minimal area surface problem is to use the Pohlmeyer reduction \cite{Pohlmeyer}. First, the world-sheet is taken to be a unit disk and the induced metric is written in conformal coordinates $z=\sigma+i\tau$, namely in coordinates where it is conformally flat: \begin{equation} ds^2 = 4 e^{2\alpha} dz d\ensuremath{\bar{z}}\, .
\label{a1} \end{equation} for some function $\alpha(z,\ensuremath{\bar{z}})\in \mathbb{R}$. The crucial observation is that $\alpha(z,\ensuremath{\bar{z}})$ satisfies a generalized cosh-Gordon equation \begin{equation} \partial \bar{\partial} \alpha = e^{2\alpha} + f(z) \bar{f}(\bar{z}) e^{-2\alpha}\, , \label{a2} \end{equation} where $f(z)$ is a holomorphic function that depends on the shape of the Wilson loop. After solving this equation one can compute the regularized area by the formula \begin{equation} {\cal A}_f = -2\pi - 4 \int_D f\bar{f} e^{-2\alpha} d\sigma d\tau\, , \label{a3} \end{equation} where $D$ represents the unit disk $|z|\le 1$. This regularized area determines the expectation value of the corresponding Wilson loop in the field theory at large 't Hooft coupling. When $f(z)$ has no zeros in the world-sheet, analytical solutions to the cosh-Gordon equation can be found using Riemann theta functions \cite{IKZ,KZ,IK}. Alternatively, given a nowhere vanishing $f(z)$ one can solve a Y-system type of equation \cite{Toledo} and from there compute the area. In the case where $f(z)$ has zeros, the only known analytical results are for the Minkowski case, and then only for Wilson loops with light-like boundaries \cite{AMSV}. In this paper we find analytical results for the case where $f(z)=f_0 z^n$, namely $f(z)$ has a multiple zero and the Wilson loop is given by a smooth space-like curve with no cusps. In order to do that, we use an approach to the problem described in \cite{WLMA}, where it was argued that such surfaces can be studied by considering a Schroedinger equation defined on the Wilson loop. Indeed, consider for concreteness the case of the Poincar\'e patch of $EAdS_3$, whose boundary is $\mathbb{R}^2$, and suppose that such a Wilson loop is given by an arbitrary parameterization $X(s) = X_1(s)+i X_2(s)$, $s=0\ldots 2\pi$. Then one can define a Schroedinger equation \begin{equation} -\partial_s^2 \chi + V(s) \chi(s) =0\, , \label{a4} \end{equation} where $V(s)$ is a complex potential given by \begin{equation} V(s) = V_0(s) + i V_1(s) =-\ensuremath{\frac{1}{2}} \{X(s),s\} \, , \label{a5} \end{equation} where $\{X(s),s\}$ denotes the Schwarzian derivative. Up to global conformal transformations we can reconstruct the shape of the Wilson loop by considering two linearly independent solutions of such an equation and taking the ratio \begin{equation} X(s) = \frac{\chi_1(s)}{\chi_2(s)}\, . \label{a6} \end{equation} Such solutions are anti-periodic, as can be seen from their explicit form: \begin{equation} \chi_1(s) = \frac{X(s)}{\sqrt{\partial_s X(s)}}, \ \ \ \chi_2 = \frac{1}{\sqrt{\partial_s X(s)}}\, . \label{a7} \end{equation} The method proceeds by defining a generalized potential depending on a complex spectral parameter $\lambda$: \begin{equation} V(\lambda,s) = V_0(s) + \frac{i}{2} \left(\lambda+\frac{1}{\lambda}\right) V_1(s) + \ensuremath{\frac{1}{2}} \left(\lambda-\frac{1}{\lambda}\right) V_2(s) \, , \label{a8} \end{equation} where $V_2(s)$ is such that the solutions of the corresponding Schroedinger equation are anti-periodic for any value of the parameter $\lambda$. Since the solutions of a Schroedinger equation with a periodic potential are generically only quasi-periodic, this provides an infinite set of conditions that should be sufficient to determine the function $V_2(s)$. Once $V_2(s)$ is found, the area follows from a simple formula. In this paper we use this method to find new Wilson loops whose shape and area can be computed analytically. The shape of the dual surface can then be obtained numerically.
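As a simple illustration of the starting point of this method (added here for concreteness; it is not needed for the analytical results below), the potential $V(s)$ of eq.(\ref{a5}) can be evaluated numerically for any smooth closed contour using, e.g., spectral derivatives, with $\{X,s\}=X'''/X'-\frac{3}{2}\left(X''/X'\right)^2$:
\begin{verbatim}
import numpy as np

def schwarzian_potential(X):
    # V(s) = -(1/2){X(s), s} for a contour sampled uniformly on [0, 2*pi),
    # using FFT spectral derivatives (use an even N with negligible Nyquist
    # content for smooth contours).
    N = len(X)
    k = 1j*np.fft.fftfreq(N, d=1.0/N)      # d/ds in Fourier space
    Xh = np.fft.fft(X)
    X1 = np.fft.ifft(k*Xh)
    X2 = np.fft.ifft(k**2*Xh)
    X3 = np.fft.ifft(k**3*Xh)
    return -0.5*(X3/X1 - 1.5*(X2/X1)**2)

# sanity check: for the circle X(s) = exp(i s), {X, s} = 1/2, so V = -1/4
s = np.linspace(0.0, 2.0*np.pi, 256, endpoint=False)
assert np.allclose(schwarzian_potential(np.exp(1j*s)), -0.25)
\end{verbatim}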
\section{Euclidean \ads{3} case} In this case we use Poincar\'e coordinates $(X_1,X_2,Z)$ where $EAdS_3$ has the metric \begin{equation} ds^2 = \frac{dX_1^2 + dX_2^2 + dZ^2}{Z^2}\, . \label{b1} \end{equation} The boundary is at $Z=0$ and the Wilson loop is given by a curve \begin{equation} X(s) = X_1(s) + i X_2(s)\, , \label{b2} \end{equation} where $X$ is a complex coordinate. The world--sheet is parameterized by a complex coordinate $z=re^{i\theta}$, $|z|=r\le 1$. The boundary conditions for the surface are that $Z(e^{i\theta})=0$ and $X(e^{i\theta})=X(s(\theta))$ for some reparameterization $s(\theta)$. The method to find minimal area surfaces that we proposed in \cite{WLMA} is based on obtaining anti-periodic solutions of a Schroedinger equation with a periodic potential. The prototypical periodic Schroedinger equation is the well-known Mathieu equation \cite{Mathieu}: \begin{equation} \partial_u^2 \chi(u) + (a-2q\cos 2u) \chi(u) =0 \, , \label{b3} \end{equation} where $q$ is a parameter and $a$ plays the role of the energy eigenvalue. It has quasi-periodic solutions known as Floquet solutions \begin{equation} \chi_\nu(u+\pi) = e^{i\pi\nu} \chi_\nu(u)\, , \label{b4} \end{equation} where the Mathieu characteristic $\nu=\nu(a,q)$ is a function of the parameters $a$ and $q$. For given $q$ we can fix $\nu$ to any desired value by choosing an appropriate value of $a=a_\nu(q)$. It is therefore natural to look for minimal area surfaces based on the Mathieu equation. In order to do that, we write the potential in eq.(\ref{a8}) as \begin{equation} V(\lambda,s) = -\frac{1}{4} + 6 \beta_2 - \lambda f_0 e^{i(n+2) s} + f_0 \frac{1}{\lambda} e^{-i(n+2) s}\, , \label{b5} \end{equation} corresponding to the real and periodic functions \begin{equation} V_0(s) = -\frac{1}{4} + 6\beta_2, \ \ \ V_1(s) = -2f_0\sin(n+2)s, \ \ \ V_2(s) = -2f_0\cos(n+2)s\, . \label{b5a} \end{equation} The independent parameters are $n$, an even integer, and $f_0$, a real constant. The constant $\beta_2$, however, is not arbitrary. The reason is that, if this potential corresponds to solutions of the minimal area problem, then the solutions of the corresponding Schroedinger equation should be anti-periodic for any value of $\lambda$. Consider the case when $\lambda=e^{i\varphi}$. It is clear that the potential is the same up to a shift in the variable $s$, and therefore the periodicity of the solutions will be the same for any value of $\varphi$ and, by analyticity, for any complex value of $\lambda$. Therefore we only need to ensure that the solutions are anti-periodic for $\lambda=1$, which is easily done by choosing $\beta_2$ appropriately. Indeed, by defining \begin{equation} u(s)=\frac{(n+2)s+\varphi}{2}+\frac{\pi}{4}\, , \label{b6} \end{equation} the equation \begin{equation} -\partial_s^2 \chi(s) + V(\lambda,s) \chi(s) = 0\, , \label{b7} \end{equation} becomes the standard Mathieu equation with \begin{equation} a=\frac{1-24\beta_2}{(n+2)^2},\ q=\frac{4 i f_0}{(n+2)^2}\, . \label{b8} \end{equation} Now we look for a Floquet solution of the Mathieu equation with real characteristic exponent $\nu$, i.e. \begin{equation} \chi_\nu(u+\pi)=e^{\pi i\nu}\chi_\nu(u)\, , \label{b9} \end{equation} or equivalently \begin{equation} \chi_\nu(u) = e^{i\nu u}\, p_\nu(q,u)\, , \label{b10} \end{equation} where $p_\nu(q,u)$ is $\pi$-periodic in $u$.
Since we require $\chi(s)$ to be an anti-periodic function of $s=0\ldots 2\pi$, it follows that, when $\Delta s=2\pi$, \begin{equation} \begin{aligned} \Delta u &= (n+2) \pi\, , \\ (2k+1)\pi &= \nu \Delta u\, , \end{aligned} \quad\Rightarrow\quad \nu=\frac{2k+1}{n+2}\ \ \ (k\in \mathbb{Z})\, . \label{b11} \end{equation} Thus there is a discrete family of solutions labeled by $k\in\mathbb{Z}$. Such solutions, however, do not necessarily have one boundary. Later we will see that for $k=0$ they do (and also, in certain cases, for $k=n+1$). For each $\nu$ the Mathieu eigenvalue $a=a_\nu(q)$ can be determined\footnote{There are efficient numerical algorithms to obtain the eigenvalues; for example, Mathematica implements such a function $a_\nu(q)$.} and from there the constant \begin{equation} \beta_{2}=\frac{1}{24}\left[1-(n+2)^2a_\nu(q)\right]\, , \label{b12} \end{equation} that, as seen in the next section, gives the area of the minimal surface. The shape of the Wilson loop is given by the ratio of two independent solutions of the Mathieu equation \begin{equation} X(u)=\frac{\chi_\nu(u)}{\chi_{\nu}(-u)} = e^{2i\nu u}\,\frac{p_\nu(q,u)}{p_\nu(q,-u)} =\frac{Mc(u)+iMs(u)}{Mc(u)-iMs(u)}\, , \label{b13} \end{equation} where we used that replacing $u\rightarrow-u$ gives another solution of the Mathieu equation. For completeness, we also wrote the result in terms of the often used Mathieu sine and cosine, defined as the odd and even solutions respectively. All Mathieu functions are evaluated for $q$ given in eq.(\ref{b8}) and the eigenvalue $a=a_\nu(q)$. At this point it is useful to make contact with the Pohlmeyer reduction method. Indeed, from \cite{WLMA}, the potential is given by \begin{equation} V(\theta) = -\frac{1}{4} + 6\beta_2(\theta) - f(\theta) \lambda e^{2i\theta} +\frac{1}{\lambda} e^{-2i\theta} \bar{f}(\theta)\, , \label{b14} \end{equation} which means that, if we identify $s=\theta$, these solutions correspond to a holomorphic function \begin{equation} f(z) = f_0 z^n\, , \label{b15} \end{equation} which has no poles inside the disk, therefore justifying the choice $s=\theta$. The function $\beta_2(\theta)$ is constant and given by eq.\eqref{b12}. Replacing in the generalized cosh-Gordon equation (\ref{a2}), one can see that the equation is solved by a rotationally invariant function $\alpha(r)$ satisfying \begin{equation} \partial_r^2 \alpha(r) + \frac{1}{r} \partial_r \alpha(r) = 4 e^{2\alpha(r)} + 4 f_0^2 r^{2n} e^{-2\alpha(r)}\, . \label{b16} \end{equation} Near the boundary $r\rightarrow 1$ this equation implies that, in terms of the variable $\xi=1-r^2$, \begin{equation} \alpha = -\ln \xi + \beta_2 \xi^2 + \beta_2 \xi^3 + {\cal O}(\xi^4), \ \ \ (\xi=1-r^2\rightarrow 0)\, , \label{b17} \end{equation} as already derived in \cite{WLMA}, but with the observation that here $\beta_2$ is a constant independent of $\theta$. In the Minkowski case, such a function $f(z)= f_0 z^n$ was studied by Alday and Maldacena \cite{AM2}\footnote{They were called ``regular polygon'' solutions.}, where it was noticed that eq.(\ref{b16}) is equivalent to the Painleve III equation. However, only the case of an infinite world-sheet was considered, corresponding to a light-like Wilson loop with cusps. For the case of smooth Wilson loops, J. Toledo recently found one such example of a solution using his Y-system method \cite{Toledo}. Here it corresponds to the case $n=0$, where $f(z)$ does not vanish anywhere on the world-sheet.
On the other hand, the solutions presented here correspond to the case of smooth Wilson loops where $f(z)$ has zeros, a case where no exact results for the area were known before. This equation was also studied in \cite{Novokshenov} in relation to minimal area surfaces, but the surfaces considered there were different (multiple boundaries) and the area was not computed. Nevertheless, those results have some overlap with the Euclidean case considered in this paper. \section{Computation of the Area} To compute the area we can use simple integration by parts in eq.(\ref{a3}). In order to do that we observe that eq.(\ref{b16}) implies that \begin{equation} \partial _{r}\left[ r^2(\partial _{r}\alpha)^2 +2r\partial _{r} \alpha -4r^{2}e^{2\alpha} +4f_0^{2}r^{2n+2}e^{-2\alpha} \right] = 8(n+2)f_0^2r^{2n+1}\, e^{-2\alpha}\, . \label{c1} \end{equation} Thus \begin{equation} \begin{aligned} {\cal A}_f + 2\pi &= -4 \int_D f\bar{f} e^{-2\alpha}\, r dr\, d\theta = -8 \pi f_0^2 \int_0^1 r^{2n+1} e^{-2\alpha(r)} \, dr \\ &= - \frac{\pi}{n+2}\int_0^1 \partial _{r}\left[ r^2(\partial _{r}\alpha)^2 +2r\partial _{r} \alpha -4r^{2}e^{2\alpha} +4f_0^{2}r^{2n+2}e^{-2\alpha} \right]\,dr \\ &= \frac{-\pi}{(n+2)} \left. \left[ r^2(\partial _{r}\alpha)^2 +2r\partial_{r}\alpha -4r^{2}e^{2\alpha} \right]\right|_{r\rightarrow 1}\\ &= \frac{24\pi\beta_2}{n+2}\, , \end{aligned} \label{c2} \end{equation} where we used the behavior (\ref{b17}) of $\alpha$ at the world-sheet boundary. In this case $\beta_2<0$, implying that ${\cal A}_f<-2\pi$. Finally, using eq.(\ref{b12}), the area can be put in terms of the eigenvalues $a_\nu(q)$ of the Mathieu equation \begin{equation} {\cal A}_f = -2\pi +\frac{\pi}{n+2} - (n+2)\pi a_\nu(q)\, . \label{c3} \end{equation} This concludes the computation of the area.
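In practice, the eigenvalue $a_\nu(q)$ can be obtained without special-function libraries. The following Python/NumPy sketch (an illustration under the stated truncation, not the code used to produce any of the results quoted in this paper) computes $a_\nu(q)$ from the standard truncated Hill (tridiagonal) matrix and evaluates the area of eq.(\ref{c3}):
\begin{verbatim}
import numpy as np

def mathieu_a(nu, q, K=40):
    # Characteristic value a_nu(q): inserting the Floquet ansatz
    # chi = e^{i nu u} sum_k c_k e^{2iku} into Mathieu's equation gives
    # the tridiagonal system a c_k = (nu + 2k)^2 c_k + q (c_{k-1} + c_{k+1}).
    ks = np.arange(-K, K + 1)
    H = np.diag((nu + 2.0*ks)**2).astype(complex)
    H += q*(np.eye(2*K + 1, k=1) + np.eye(2*K + 1, k=-1))
    evals = np.linalg.eigvals(H)
    # pick the branch continuing from a = nu^2 at q = 0 (adequate for
    # moderate |q|; for larger |q| the branch should be tracked in q)
    return evals[np.argmin(np.abs(evals - nu**2))]

def area(n, f0, k=0):
    # Regularized area, eq. (c3), for f(z) = f0 z^n, nu = (2k+1)/(n+2)
    nu = (2.0*k + 1.0)/(n + 2.0)
    q = 4.0j*f0/(n + 2.0)**2
    a = mathieu_a(nu, q).real   # real for this purely imaginary q
    return -2.0*np.pi + np.pi/(n + 2.0) - (n + 2.0)*np.pi*a

print(area(2, 1e-8))   # f0 -> 0: recovers the circle, A_f -> -2 pi
\end{verbatim}
For $f_0\to 0$ this reproduces ${\cal A}_f\rightarrow -2\pi$, the single circular Wilson loop, consistent with the limits discussed below.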
However, it is important to point out that the fact that the integrand in (\ref{a3}) is a total derivative is not a coincidence but is quite generic. Indeed, in \cite{WLMA} and \cite{IK3}, using integration by parts, a formula for the area in terms of the Schwarzian derivative of the contour was given. It is valid whenever $f(z)$ has no zeros on the world-sheet, clearly not the case here since $f(z) = f_0 z^n$. The problem in using the formula is that, when $f(z)$ has zeros, the function $\sqrt{f}$ has cuts and the integration by parts in eq.(111) of \cite{IK3} gives rise to integrals around the cuts as in \cite{AMSV}. However, in the present case, when $n$ is even the function $\sqrt{f}$ is well--defined, thus we can define the function \begin{equation} W(z) = \int_0^z \sqrt{f(z')} dz' = \frac{2\sqrt{f_0}}{n+2}\, z^{\frac{n}{2}+1} \, , \label{c4} \end{equation} and use the same argument as in \cite{IK3}. It starts with the observation that the generalized cosh-Gordon equation implies \begin{equation} \begin{aligned} j &= j_z dz + j_{\ensuremath{\bar{z}}} d\ensuremath{\bar{z}} \, ,\\ j_z &= 4f\sqrt{\bar{f}} e^{-2\alpha} \, ,\\ j_{\ensuremath{\bar{z}}} &= \frac{2}{\sqrt{\bar{f}}}\left[\bar{\partial}^2\alpha-(\bar{\partial}\alpha)^2\right] \, ,\\ dj &= 0\, , \end{aligned} \end{equation} and therefore \begin{equation} \begin{aligned} {\cal A}_f + 2\pi &= - 4 \int_D f\bar{f} e^{-2\alpha} d\sigma d\tau \\ &= -\frac{i}{2} \int_D j\wedge d\bar{W} = \frac{i}{2}\int_D d(\bar{W}j) \\ &= \frac{i}{2}\oint_{\partial D} \bar{W} (j_z\,dz+j_{\ensuremath{\bar{z}}}\,d\ensuremath{\bar{z}}) \, .
\end{aligned} \label{c5} \end{equation} A caveat is that in \cite{IK3} the component $j_{\ensuremath{\bar{z}}}$ was defined with an extra $\{\bar{W},\ensuremath{\bar{z}}\}$ so that $j$ transforms properly as a one form. Here this is not possible since $\{W,z\}$ has a double pole at zero and therefore it would give an extra contribution to the contour integral. Continuing the reasoning along the lines of \cite{IK3}, the boundary behavior of $j_{\ensuremath{\bar{z}}}$ is given by \begin{equation} j_{\ensuremath{\bar{z}}} = \frac{1}{\sqrt{\bar{f}}}(12 \beta_2(\theta) e^{2i\theta}) + {\cal O}(\xi) \ \ \ \ \ (\xi=1-r^2\rightarrow 0)\, , \label{c6} \end{equation} whereas $j_z\rightarrow 0$ and therefore \begin{equation} {\cal A}_f = -2\pi + \frac{i}{2} \oint_{\partial D} \frac{\bar{W}}{\sqrt{\bar{f}}} 12\beta_2 e^{2i\theta} \partial_\theta\ensuremath{\bar{z}} \, d\theta\, . \label{c7} \end{equation} Replacing $W(z)$ from eq.\eqref{c4}, $f=f_0 z^n$ and $z=e^{i\theta}$, it follows that \begin{equation} {\cal A}_f= -2\pi + \frac{24\pi\beta_2}{n+2}\, , \label{c8} \end{equation} in perfect agreement with eq.(\ref{c2}). \section{Numerical checks, examples and limits of the solutions} The previous sections give analytic results for the shape of the Wilson loop (boundary curve) and for the area of the dual minimal area surface. They do not, however, give an analytic expression for the shape of the surface. In this section we use a numerical procedure to independently find these solutions (including the shape of the minimal surface), providing a numerical test of the previous results. In order to do so we first solve numerically the generalized cosh-Gordon equation for $\alpha$, eq.(\ref{a2}), and then the linear problem \cite{WLMA} for $\psi_{1,2}$ that leads to the shape of the surface and boundary contour. Finally, the integral in eq.(\ref{a3}) can be done numerically, providing a check of the previous results. With the choice $f=f_0z^n$, $f_0\in \mathbb{R}$, the generalized cosh-Gordon equation reads \begin{equation} \partial \bar{\partial} \alpha = e^{2\alpha} + f_0^2 |z|^{2n} e^{-2\alpha} \, , \label{d1} \end{equation} and therefore has solutions that depend only on the radial coordinate, $\alpha(r)$, where $z=r e^{i\theta}$. In this case the equation becomes \begin{equation} \frac{1}{4}\left[ \partial_r^2 \alpha + \frac{1}{r} \partial_r \alpha \right] = e^{2\alpha} + f_0^2 r^{2n} e^{-2\alpha}\, . \label{d2} \end{equation} For numerical purposes, it is convenient to define $r_0$ such that $f_0=r_0^{n+2}$ and then rescale $r= \frac{\tilde{r}}{r_0}$ and introduce $\tilde{\alpha}=\alpha-\ln r_0$ so that the equation becomes \begin{equation} \frac{1}{4}\left[ \partial_{\tilde{r}}^2 \tilde{\alpha} + \frac{1}{\tilde{r}} \partial_{\tilde{r}} \tilde{\alpha}\right] = e^{2\tilde{\alpha}} + \tilde{r}^{2n} e^{-2\tilde{\alpha}}\, . \label{d3} \end{equation} Now we choose a value of $\tilde{\alpha}(\tilde{r}=0)=\tilde{\alpha}_0$ and, using the boundary condition $\left.\partial_{\tilde{r}}\tilde{\alpha}\right|_{\tilde{r}=0}=0$, integrate the differential equation up to a value $\tilde{r}=r_0$ where $\tilde{\alpha}$ diverges. This allows us to find $r_0(\tilde{\alpha}_0)$; namely, for numerical purposes $\tilde{\alpha}_0$ is the parameter that defines the solution and $f_0$ is derived.
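A minimal sketch of this shooting procedure (an illustration assuming SciPy, using a small-$\tilde{r}$ series start to regularize the $1/\tilde{r}$ term; the series matching described below sharpens the estimate of $r_0$) could read:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def find_r0(alpha0, n=2, cap=10.0):
    # Integrate eq. (d3) from the center with alpha(0) = alpha0 and
    # alpha'(0) = 0; estimate the radius r0 at which alpha diverges,
    # detected here as alpha exceeding `cap`.
    def rhs(r, y):
        a, ap = y
        return [ap, -ap/r + 4.0*np.exp(2.0*a) + 4.0*r**(2*n)*np.exp(-2.0*a)]
    def blowup(r, y):
        return y[0] - cap
    blowup.terminal = True
    eps = 1.0e-6     # series start: alpha ~ alpha0 + e^{2 alpha0} r^2
    y0 = [alpha0 + np.exp(2.0*alpha0)*eps**2, 2.0*np.exp(2.0*alpha0)*eps]
    sol = solve_ivp(rhs, (eps, 10.0), y0, events=blowup,
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0] if sol.t_events[0].size else None

# e.g. find_r0(0.01) should approach the value r0 = 0.986067 quoted below
\end{verbatim}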
First, the linear problem \begin{eqnarray} \partial_{\tilde{r}} \psi_1 &=& \left(e^{i\theta+\tilde{\alpha}}-\tilde{r}^n e^{-\tilde{\alpha} - i (n+1) \theta}\right)\, \psi_2 \, , \\ \partial_{\tilde{r}} \psi_2 &=& \left(e^{-i\theta+\tilde{\alpha}}+\tilde{r}^n e^{-\tilde{\alpha}+i(n+1)\theta}\right)\, \psi_1\, , \label{d3a} \end{eqnarray} is easily solved numerically\footnote{Numerically, eq.(\ref{d3}) and the linear problem are solved simultaneously.} and the shape of the surface reconstructed by taking two linearly independent solutions $(\psi_1,\psi_2)$, $(\tilde{\psi}_1,\tilde{\psi}_2)$, for example by using the initial conditions $(\psi_1(0)=1, \psi_2(0)=0)$ and $(\tilde{\psi}_1(0)=0, \tilde{\psi}_2(0)=1)$. With those solutions, the following matrices can be constructed: \begin{equation} \mathbb{A} = \left(\begin{array}{cc} \psi_1 & \psi_2 \\ \tilde{\psi}_1 & \tilde{\psi}_2\end{array}\right), \ \ \ \ \mathbb{X} = \mathbb{A}\,\mathbb{A}^\dagger\, . \label{d3b} \end{equation} The Poincar\'e coordinates are identified, in terms of the components of $\mathbb{X}$, as \begin{equation} Z = \frac{1}{\mathbb{X}_{22}}, \ \ \ \ X = X_1+iX_2 = \frac{\mathbb{X}_{21}}{\mathbb{X}_{22}}\, . \label{d3c} \end{equation} This method is rather crude and can be much improved by solving the equation using series expansions around $\tilde{r}=0$ and $\tilde{r}=r_0$ and then matching the expansions at some intermediate point to determine the function $r_0(\tilde{\alpha}_0)$. For example, for $n=2$ the expansions read \begin{eqnarray} \tilde{\alpha} &=& \tilde{\alpha}_0 + e^{2\,\tilde{\alpha}_0} \tilde{r}^2 + \ensuremath{\frac{1}{2}} e^{4\,\tilde{\alpha}_0 } \tilde{r}^4+ \left( {\frac {{e^{6\,\tilde{\alpha}_0}}}{3}}+{\frac {{ e^{-2\,\tilde{\alpha}_0}}}{9}} \right) \tilde{r}^6+ \ldots\, , \\ \tilde{\alpha} &=& -\ln r_0- \ln \xi +\beta_2\, \xi^2+\beta_2\, \xi^3 + \frac{1}{10} \left( r_0^{20}+2 \beta_2^2+9 \beta_2 \right) \xi^4+ \ldots\, , \label{d4} \end{eqnarray} where $\xi=1-\frac{\tilde{r}^2}{r_0^2}$. The resulting shapes and values for the area match perfectly the results obtained from the Mathieu equation. To get an idea of the shape we present some typical results in figure \ref{SurfacesE}. \begin{figure} \centerline{\includegraphics[width=\textwidth]{SurfacesE.pdf}} \caption{Surface corresponding to $n=2$ and the parameters $[\tilde{\alpha}_0, r_0, \beta_2, {\cal A}_f]$ equal to (top to bottom): $[0.01, 0.986067, -0.0200117, -6.660397]$, $[-0.45, 1.4164114, -0.43083877, -14.404305]$, $[-0.95, 1.5079094, -1.3248754, -31.2565]$.} \label{SurfacesE} \end{figure} \subsection{Limits} Further, to get a better understanding of the solutions we consider several values of $n$ and sweep the values of $\tilde{\alpha}_0$ from minus to plus infinity. In the limit $\tilde{\alpha}_0\rightarrow \infty$ the exponential term $e^{-2\alpha}$ in the cosh-Gordon equation (\ref{d1}) becomes negligible and the solution approaches the one with $f(z)=0$, namely the circular Wilson loop. In the Mathieu equation $q\rightarrow 0$ and we can use standard techniques to approximate the solutions as deformations of the circle (see appendix). For finite values of $\tilde{\alpha}_0$ the Wilson loop resembles a smoothed-out regular polygon with $n+2$ sides. As $\tilde{\alpha}_0$ decreases, the shape of the Wilson loop changes until at a certain value there is a discontinuity where the Mathieu characteristic $\nu(a,q)$ jumps from $\nu=\frac{1}{n+2}$ to $\nu=2-\frac{1}{n+2}$ and the Wilson loop becomes multiply wound.
In the limit $\tilde{\alpha}_0\rightarrow -\infty$ it becomes a multiply wound circle with winding number $k=2n+3$. The constant $\beta_2$ takes the value $\beta_2=-\frac{1}{6}(n+1)(n+2)$ and the area is ${\cal A}_f=-2(2n+3)\pi$. Finally, it is also worth mentioning the limit $n\rightarrow \infty$, in which case the function $r^{2n}$ vanishes in the relevant region ($r<1$). In this case the solution approaches the circular Wilson loop. \subsection{Relation to the Painlev\'e III equation} We conclude this section by mentioning that, as noted in \cite{Novokshenov} for the Euclidean and in \cite{AM2} for the Minkowski case, the radial cosh/sinh-Gordon equation is equivalent to the Painlev\'e III equation. Indeed, the change of variables \begin{equation} \begin{aligned} y &=\frac{1}{f_0}\frac{1}{r^n} e^{2\alpha}\, , \\ t &=r^{n+2}\, , \end{aligned} \label{d5} \end{equation} reduces eq.(\ref{d2}) to the standard form \begin{equation} \frac{y^2}{t}\left(\frac{d}{d\ln t}\right)^2(\ln y) =tyy''+yy'-t(y')^2 = b y + a y^3 +t(\delta + \gamma y^4)\, , \label{d6} \end{equation} with parameters \begin{equation} b=a=\frac{8f_0}{(n+2)^2}=-2iq, \ \ \delta=\gamma=0\, . \label{d7} \end{equation} The function $y(t)$ has a singularity at $t=1$, corresponding to $r=1$, namely the world-sheet boundary. Using the expansion (\ref{b17}) and after the change of variables (\ref{d5}) it follows that \begin{equation} y(t) = \frac{i}{q}\frac{1}{(1-t)^2} + \frac{i}{12}\frac{1-4a_\nu(q)}{q} (1+(1-t)) + {\cal O}[(1-t)^2]\, , \label{d8} \end{equation} namely, the solution $y(t)$ has a double pole singularity at $t=1$, and the constant and linear coefficients of its Laurent expansion are determined by the eigenvalues $a_\nu(q)$ of the Mathieu equation. This provides an interesting relation between the Painlev\'e III and Mathieu equations, namely that the Mathieu eigenvalues control the behavior near the movable singularity \cite{Novokshenov}. \section{Comparison with wavy-line approximation} In the limit $f_0\rightarrow 0$ the solutions approach the circular Wilson loop. In the Mathieu equation this corresponds to the limit $q\rightarrow 0$, which can be studied by standard perturbative methods. In this section, however, we employ the method developed by A. Dekel in \cite{Dekel} to study an arbitrary near-circular Wilson loop, thereby providing a check of that method. The idea is to assume that the contour has the shape given by the ratio of Mathieu functions in eq.(\ref{b13}), expand it for small $q$, and use Dekel's method to compute the area, checking the result by comparison with eq.(\ref{c3}). We start by parameterizing the contour as a perturbation of a circular Wilson loop: \begin{equation} X(\theta)=e^{i\, s(\theta)+\xi(s(\theta))}\, . \label{e1} \end{equation} Here $s(\theta)$ is the correct parametrization, which we assume is unknown, and $\xi(s)$ to the third order of the perturbation takes the form (see appendix) \begin{equation} \xi(s)=a (-i q) \sin \left(\frac{s}{\nu }\right)+(-i q)^3 \left(b \sin \left(\frac{s}{\nu }\right)+c \sin \left(\frac{3 s}{\nu }\right)\right)\, , \label{e2} \end{equation} with \begin{equation} \begin{aligned} & a=\frac{\nu }{\nu ^2-1},\\ & b=-\frac{3 \nu \left(\nu ^2+5\right)}{16 \left(\nu ^2-4\right) \left(\nu ^2-1\right)^3}\, ,\\ & c=-\frac{\nu \left(5 \nu ^4+36 \nu ^2-89\right)}{48 \left(\nu ^2-9\right) \left(\nu ^2-4\right) \left(\nu ^2-1\right)^3}\, .
\end{aligned} \label{e3} \end{equation} If we let $\epsilon=-iq$, $p=\frac{1}{\nu}$, then the contour becomes \begin{equation} X(\theta)=e^{is(\theta)+\epsilon a \sin (p s(\theta ))+\epsilon^3 (b\sin (p s(\theta ))+c\sin (3p s(\theta ))) }\, . \label{e4} \end{equation} With such a parametrization, we can use the method introduced by Dekel in \cite{Dekel} for the area calculation of the minimal surface ending on $X(\theta)$. Here we briefly review the procedure. The expression for the regularized area is given in eq.\eqref{a3}. Therefore, we need to have $f(z)$ and solve the generalized cosh-Gordon equation for $\alpha(z,\bar{z})$ to calculate the area. These functions can be expanded as \begin{equation} \begin{aligned} f(z)&=\sum _{n=1}^{\infty}f_n(z)\epsilon^n\, ,\\ \alpha(z,\bar{z})&=\text{ln}(\frac{1}{1-z\bar{z}})+\sum _{n=2}^{\infty}\alpha_n(z,\bar{z})\epsilon^n\, . \end{aligned} \label{e6} \end{equation} Meanwhile, the correct parametrization $s(\theta)$ has the expansion \begin{equation} s(\theta)=\theta+\sum _{n=1}^{\infty} s_n(\theta)\epsilon^n\, . \label{e7} \end{equation} When $\epsilon=0$, $X(\theta)$, $s(\theta)$, $f(z)$ and $\alpha(z,\bar{z})$ reduce to the results of the circular Wilson loop. From the given boundary contour $X(s(\theta))$ we can first calculate the real and imaginary parts of the Schwarzian derivative expressed in terms of the unknown $s_n(\theta)$. Based on the relations \cite{WLMA} \begin{equation} \begin{aligned} &\text{Re}\{X(\theta), \theta\}=\frac{1}{2}-12\beta_2(\theta)\, ,\\ &\text{Im}\{X(\theta), \theta\}=-4\text{Im}(e^{2i\theta}f(\theta))\, , \end{aligned} \label{e7a} \end{equation} we then expand the LHS of the equations in powers of the parameter $\epsilon$ and extract $f(\theta)$ and $\beta_2(\theta)$ order by order. Next we plug $f(z)$ into the generalized cosh-Gordon equation to solve for $\alpha(z,\bar{z})$ and expand the solution around $r=1$ to get $\beta_2(\theta)$, which we compare with \eqref{e7a} to fix $s_n(\theta)$. In the end, we plug $s_n(\theta)$ back into $f(z)$, $\alpha(z,\bar{z})$ and \eqref{a3} to get the area. As it turns out, the term $c\sin(3 p\,s(\theta))$ in the contour parametrization given in eq.\eqref{e2} appears in the area only at order $\epsilon^{6}$ and higher. Besides, to order $\epsilon^{4}$, the $b\sin(p\,s(\theta))$ term appears in the area expression as a first order effect, i.e., if we parametrize the contour as \begin{equation} X(\theta)=e^{is(\theta)+\epsilon a' \sin (p\,s(\theta ))+O(\epsilon^3)}\, , \label{e8} \end{equation} and let $a'=a+b\epsilon^2$, the area calculation will give the same result as the one given by \eqref{e2} to order $\epsilon^{4}$. Since we are comparing the area with the one given by the Mathieu functions only to the fourth order of the perturbation, we will adopt the parametrization \eqref{e8} and make the replacement \begin{equation} a'=\frac{\nu }{\nu ^2-1}+\frac{3 \nu \left(\nu ^2+5\right) q^2}{16 \left(\nu ^2-4\right) \left(\nu ^2-1\right)^3}\, , \label{e9} \end{equation} at the end.
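As a quick consistency check of the conventions in eq.\eqref{e7a}, we recall that the Schwarzian derivative is taken here with the standard convention
\begin{equation}
\{X(\theta),\theta\} = \frac{X'''}{X'} - \frac{3}{2}\left(\frac{X''}{X'}\right)^2\, .
\end{equation}
For the unperturbed circle $X(\theta)=e^{i\theta}$ this gives $\{X,\theta\}=\frac{1}{2}$, consistent with \eqref{e7a} for $\beta_2=0$ and $f=0$ at $\epsilon=0$.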
Our calculation shows that the expansions of the functions have the following form \begin{equation} \begin{split} s(\theta)&=\theta-\frac{a'^2(p+5p^3)\text{sin}(2p\theta)}{8(-1+4p^2)} \epsilon^2+O(\epsilon^4)\, ,\\ f(z)&=-\frac{1}{4} i a' p \left(p^2-1\right)z^{p-2} \epsilon +\frac{3 i a'^3 p^3 \left(p^2-1\right) \left(5 p^2+1\right)z^{p-2} }{64 \left(4 p^2-1\right)}\epsilon ^3+O(\epsilon^4)\, ,\\ \alpha(r,\theta)&=\text{ln}(\frac{1}{1-r^2})+\frac{a'^2 p}{16 r^2 \left(r^2-1\right)}(4 r^2+4 r^4-p (p+1)^2 r^{2 p}\\&+(p+1)^2 (3 p-4) r^{2 p+2}-(p-1)^2 (3 p+4) r^{2 p+4}+(p-1)^2 p r^{2 p+6})\epsilon ^2\\&+O(\epsilon^4)\, .\\ \end{split} \label{e10} \end{equation} Finally, the area is given by: \begin{equation} {\cal A}_{f}=-2 \pi -\frac{1}{2} p \left(p^2-1\right)\pi a'^2 \epsilon ^2+\frac{ p \left(p^2-1\right) \left(23 p^4+p^2\right)\pi a'^4 }{32 \left(4 p^2-1\right)}\epsilon ^4+O(\epsilon^5)\, . \label{e11} \end{equation} Plugging \eqref{e9} into \eqref{e11}, and setting $\epsilon=-iq$, $p=\frac{1}{\nu}$, we obtain \begin{equation} {\cal A}_{f}=-2\pi+\frac{\pi q^2}{2 \nu (1-\nu ^2)}-\frac{\left(5 \nu ^2+7\right)\pi q^4}{32 \nu \left(4-\nu ^2\right) \left(1-\nu ^2\right)^3}+O(q^6)\, . \label{e12} \end{equation} This result agrees with the one we got from the Mathieu function calculation described in the appendix (see eq.\eqref{ap8}). Notice that, from eq.\eqref{b8}, $q$ is purely imaginary and therefore ${\cal A}_f<-2\pi$ (since $\nu=\frac{1}{n+2}<1$). \section{Minkowski case} It is interesting to consider the case of Minkowski signature since in that case the Wilson loops that we consider approach, in a limit, those used by Alday and Maldacena to compute scattering amplitudes \cite{AM1,AM2}. On the other hand, the methods we employ are similar to those of the Euclidean case. In Lorentzian signature, in Poincar\'e coordinates $(T,X,Z)$ the metric of \ads{3} is \begin{equation} ds^2 = \frac{-dT^2+dX^2+dZ^2}{Z^2}\, . \label{f1} \end{equation} The Wilson loop is given by a curve \begin{equation} x_+(s) = X(s) + T(s) , \ \ \ \ x_-(s)= X(s)-T(s)\, . \label{f2} \end{equation} The light-cone coordinates $x_\pm\in \mathbb{R}$ are introduced for convenience. However, it turns out to be even more convenient to use a conformal transformation and define complex coordinates \begin{equation} \hat{x}_{\pm}=\mp i \frac{1\pm ix_{\pm}}{1\mp ix_{\pm}}\, , \label{f3} \end{equation} with the property $|\hat{x}_\pm|=1$. Besides Poincar\'e coordinates, it is also useful to use global coordinates $(t,\phi,\rho)$ such that the metric is \begin{equation} ds^2 = -\cosh^2\!\rho\, dt^2 + d\rho^2 + \sinh^2\!\rho\, d\phi^2\, . \label{f4} \end{equation} The relation to Poincar\'e coordinates is better written using embedding coordinates $(X_{-1},X_0,X_1,X_2)$ satisfying $X_{-1}^2 + X_0^2 - X_1^2- X_2^2 = 1$ and related to the previous coordinates by \begin{equation} Z=\frac{1}{X_{-1}-X_2}\, ,\ X=\frac{X_1}{X_{-1}-X_2}\, ,\ T=\frac{X_0}{X_{-1}-X_2}\, , \label{f5} \end{equation} with the \ads{3} boundary located at $Z=0$, while the global coordinates are defined as \begin{equation} X_{-1}+iX_0=\cosh{\rho}\, e^{it}\, ,\ X_{1}+iX_2=\sinh{\rho}\, e^{i\phi}\, , \label{f6} \end{equation} with the boundary at $\rho\rightarrow\infty$. The boundary contour is given by a curve $(t(s),\phi(s))$ and is related to the coordinates $\hat{x}_\pm$ by \begin{equation} \hat{x}_\pm = e^{i(t\pm\phi)}\, .
\label{f7} \end{equation} Now we use the method described in \cite{WLMA} for the Euclidean case and generalized in \cite{IK3} for the Minkowski case. In this case, the Schroedinger equation associated with the Wilson loop shape reads \cite{IK3} \begin{equation} \begin{split} &-\partial_\theta^2 \chi(\theta) + V_{\lambda}(\theta) \chi(\theta)=0\, ,\\ &V_{\lambda}(\theta) = -\frac{1}{4} + 6\beta_2(\theta) + \frac{1}{\lambda} i f(e^{i\theta}) e^{2i\theta} - \lambda i\bar{f}(e^{-i\theta}) e^{-2i\theta}\, . \end{split} \label{f8} \end{equation} Given two linearly independent solutions $\chi_{1,2}^{\lambda}$, the shape of the boundary curve is determined as \begin{equation} \hat{x}_{\pm} = \frac{\chi_1^{\lambda=\pm1}}{\chi_2^{\lambda=\pm1}}\, . \label{f9} \end{equation} The solutions $\chi_{1,2}^{\lambda}$ should be chosen such that $|\hat{x}_\pm|=1$. Equivalently, we can choose real solutions $\tilde{\chi}^\lambda_{1,2}$ and define the boundary shape as $x_\pm=\frac{\tilde{\chi}_1^{\lambda=\pm1}}{\tilde{\chi}_2^{\lambda=\pm1}}$. In the case $f(z)=f_0 z^n$, and after taking $\lambda=e^{i\varphi}$ and defining a new coordinate \begin{equation} u=\frac{(n+2)\theta-\varphi}{2}+\frac{\pi}{4}\, , \label{f10} \end{equation} we obtain a Mathieu equation with parameters \begin{equation} a=\frac{1-24\beta_2}{(n+2)^2},\ q=\frac{4f_0}{(n+2)^2}\, . \label{f11} \end{equation} The construction requires the solutions $\chi_{1,2}$ to be anti-periodic. We can then write them in terms of Floquet solutions \begin{equation} \chi_\nu(u+\pi)=e^{i\nu\pi}\chi_\nu(u)\,,\ \ \ \ \nu=\frac{2k+1}{n+2}\ \ \ (k\in \mathbb{Z})\, . \label{f12} \end{equation} Since the parameters $(a,q)$ in eq.\eqref{f11} are real, the complex conjugate of a Floquet solution is another Floquet solution. Further, $u\rightarrow u+\frac{\pi}{2}$ is equivalent to replacing $\lambda \rightarrow -\lambda$; thus the shape of the Wilson loop can be written as \begin{eqnarray} \hat{x}_+ &=& \frac{\chi_\nu(u)}{(\chi_\nu(u))^*}\, , \\ \hat{x}_- &=& \frac{(\chi_\nu(u+\frac{\pi}{2}))^*}{\chi_\nu(u+\frac{\pi}{2})}\, , \label{f13} \end{eqnarray} which evidently satisfy $|\hat{x}_\pm|=1$. Now, for a given value of $\nu$ we find the corresponding value $a_\nu(q)$, and the constant $\beta_2$ in eq.\eqref{f8} follows as \begin{equation} \beta_{2}=-\frac{1}{24}\left[1-(n+2)^2 a_\nu(q)\right]\, . \label{f14} \end{equation} The area of the minimal surface ending on the boundary curve can be computed, as before, using integration by parts, and results in \begin{eqnarray} {\cal A}_f &=& -2\pi + 4 \int_D f\bar{f} e^{-2\alpha} d\sigma d\tau \\ &=& -2\pi + \frac{24\pi\beta_2}{n+2} \\ &=& -2\pi -\frac{\pi}{n+2} + \pi (n+2) a_\nu(q)\, , \label{f15} \end{eqnarray} giving again an analytical formula for the area in terms of the eigenvalues of the Mathieu equation. Although the formula is the same as for the Euclidean case, now $\beta_2>0$ and the area is ${\cal A}_f>-2\pi$. \subsection{Numerics, examples and limits} As in the Euclidean case, it is useful to plot the resulting surfaces to understand their behavior. To obtain the surface we refer to \cite{IK3} for the derivations and just summarize here the steps necessary to obtain the shapes.
As a first step we solve the generalized sinh-Gordon equation \begin{equation} \partial \bar{\partial} \alpha = e^{2\alpha} - f(z) \bar{f}(\bar{z}) e^{-2\alpha}\, , \label{f16} \end{equation} that again, for $f=f_0\, z^n$, has radial solutions obeying \begin{equation} \frac{1}{4}\left[ \partial_r^2 \alpha + \frac{1}{r} \partial_r \alpha \right]= e^{2\alpha} - f_0^2 r^{2n} e^{-2\alpha}\, . \label{f17} \end{equation} Defining $r_0$ such that $f_0=r_0^{n+2}$ and doing the same rescalings as done to arrive at \eqref{d3}, we obtain \begin{equation} \frac{1}{4}\left[ \partial_{\tilde{r}}^2 \tilde{\alpha} + \frac{1}{\tilde{r}} \partial_{\tilde{r}} \tilde{\alpha}\right] = e^{2\tilde{\alpha}} - \tilde{r}^{2n} e^{-2\tilde{\alpha}} \, . \label{f18} \end{equation} We then look for solutions with a given value of $\tilde{\alpha}(0)$ and $\left.\partial_{\tilde{r}}\tilde{\alpha}\right|_{\tilde{r}=0}=0$. After that, the linear problem \begin{eqnarray} \partial_{\tilde{r}} \psi_1 &=& \left(\lambda e^{i\theta+\tilde{\alpha}}+\tilde{r}^n e^{-\tilde{\alpha} - i (n+1) \theta}\right)\, \psi_2 \, , \\ \partial_{\tilde{r}} \psi_2 &=& \left(\frac{1}{\lambda} e^{-i\theta+\tilde{\alpha}}+\tilde{r}^n e^{-\tilde{\alpha}+i(n+1)\theta}\right)\, \psi_1\, , \label{f19} \end{eqnarray} needs to be solved for $\lambda=1$ and $\lambda=-1$. Given the initial value $\tilde{\alpha}(0)=\alpha_0$, two linearly independent solutions can be found using, for example, the initial conditions $(\psi^\lambda_1(0)=1, \psi^\lambda_2(0)=-i)$ and $(\tilde{\psi}^\lambda_1(0)=i, \tilde{\psi}^\lambda_2(0)=-1)$. Finally, we can reconstruct the shape of the solutions through \begin{equation} \mathbb{A}^\lambda = \left(\begin{array}{cc} \psi_1^\lambda & \psi_2^\lambda \\ \tilde{\psi}_1^\lambda & \tilde{\psi}_2^\lambda \end{array}\right)\, , \ \ \ \ \ \ \ \ \mathbb{X} = R_M^{-1}\mathbb{A}^{\lambda=1}\left[\mathbb{A}^{\lambda=-1}\right]^{-1}R_M\, , \label{f20} \end{equation} where $R_M$ is a $2\times2$ matrix given by \begin{equation} R_M = \frac{1}{\sqrt{2}}\left(\mathbb{I}+i \sigma_1\right), \ \ \ \sigma_1=\left(\begin{array}{cc} 0&1\\ 1&0 \end{array} \right)\, . \label{f21} \end{equation} Finally, the global coordinates follow as \begin{equation} t = \arg(-\mathbb{X}_{11}), \ \ \phi = \arg(\mathbb{X}_{12}), \ \ \tanh\rho=\left|\frac{\mathbb{X}_{12}}{\mathbb{X}_{11}}\right|\, . \label{f22} \end{equation} Sweeping the possible values of $\tilde{\alpha}(0)$ we find that, when $\tilde{\alpha}(0)\rightarrow\infty$, the parameter $q$ in the Mathieu equation vanishes and the solution becomes the circular solution $t=0, \phi=\theta$, ($0<\theta<2\pi$). As we lower $\tilde{\alpha}(0)$, the circle starts deforming and takes the shape seen in figure \ref{SurfacesM}, seemingly a regularized version of a succession of light-like cusps. For a certain value of $\tilde{\alpha}(0)$, the parameter $q\rightarrow\infty$ and the Wilson loop becomes a series of light-like segments with the shape of the so-called ``regular polygons'' considered in \cite{AM2}. It has $2(n+2)$ light-like cusps; the particular case $n=0$ has four light-like cusps and was first described in \cite{cusp}, eq.(71). For lower values of $\tilde{\alpha}(0)$, the solution no longer touches the boundary: it still ends on light-like lines, but inside AdS. This can be seen in the last panel of figure \ref{SurfacesM}. It would be interesting to understand the physical meaning of such solutions. The particular limits $q\rightarrow 0$ and $q\rightarrow\infty$ are studied in the following subsections.
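As an aside, the area formula \eqref{f15} is easy to evaluate numerically. A minimal sketch follows (our own illustration, not the code used for the figures; it relies on the standard Fourier representation of the Floquet solutions, which turns the Mathieu equation into a symmetric tridiagonal eigenproblem):
\begin{verbatim}
import numpy as np

def mathieu_a(nu, q, K=40):
    # chi = e^{i nu u} sum_k c_k e^{2iku} turns the Mathieu equation into
    # (nu+2k)^2 c_k + q (c_{k-1} + c_{k+1}) = a c_k.  For real q and
    # 0 < nu < 1, the branch connected to a = nu^2 at q = 0 is the
    # lowest eigenvalue of the truncated matrix.
    k = np.arange(-K, K + 1)
    M = np.diag((nu + 2.0*k)**2)
    M += q*(np.eye(2*K + 1, k=1) + np.eye(2*K + 1, k=-1))
    return np.linalg.eigvalsh(M)[0]

def area(n, f0, k=0):
    # eq. (f15) with nu = (2k+1)/(n+2), q = 4 f0/(n+2)^2
    nu, q = (2*k + 1.0)/(n + 2), 4.0*f0/(n + 2)**2
    return -2*np.pi - np.pi/(n + 2) + np.pi*(n + 2)*mathieu_a(nu, q)
\end{verbatim}
For small $f_0$ the output agrees with the perturbative expansion \eqref{f35} derived below, and for large $f_0$ with the tight-binding estimate of eq.\eqref{f38}.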
\begin{figure} \centerline{\includegraphics[width=\textwidth]{SurfacesM.pdf}} \caption{Surface in global coordinates for Lorentzian \ads{3} corresponding to the parameters $[n=4, \tilde{\alpha}_0=0]$, $[n=2, \tilde{\alpha}_0=-0.3915941968]$ and $[n=2, \tilde{\alpha}_0=-1]$ respectively.} \label{SurfacesM} \end{figure} \subsection{Near-circular case ($q\rightarrow 0$ approximation)} In the near-circular case, we can use a simple generalization of Dekel's procedure from Euclidean to Lorentzian signature to compute the expansion in $q$. It is interesting that it also works in this case. From the point of view of the Mathieu functions, the expansion is the same as the one given in the appendix since it does not depend on the signature of space-time. The Wilson loop is now given by two real functions $x_{\pm}(\theta)$. As mentioned, it is convenient to rewrite the contour in terms of \begin{equation} \hat{x}_{\pm}(\theta)=\mp i\frac{1\pm ix_{\pm}(\theta)}{1\mp ix_{\pm}(\theta)}= e^{i(t\pm \phi)}\, , \label{f23} \end{equation} where $(t,\phi)$ are global coordinates. Notice that $\hat{x}_{\pm}(\theta)$ have modulus one. In the case of circular Wilson loops, the contour reduces to \begin{equation} \hat{x}_{\pm}(\theta)= e^{\pm i \phi}=e^{\pm i \theta}\, . \label{f24} \end{equation} Since the change of variables \eqref{f23} is a global conformal transformation, the Schwarzian derivative with respect to $\theta$ is invariant \begin{equation} \{\hat{x}_{\pm}(\theta),\theta\}=\{x_{\pm}(\theta),\theta\}\, . \label{f25} \end{equation} According to the relations \cite{IK3} \begin{equation} \{x_{\pm}(\theta),\theta\}=\frac{1}{2} -12\beta_{2}(\theta)\pm 2f e^{2i\theta}\pm 2\bar{f}e^{-2i\theta}\, , \label{f26} \end{equation} we have \begin{equation} \begin{aligned} &\{\hat{x}_{+},\theta\}+\{\hat{x}_{-},\theta\}=1-24\beta_{2}(\theta)\, ,\\ &\{\hat{x}_{+},\theta\}-\{\hat{x}_{-},\theta\}=4\text{Re}\{e^{2i\theta}f(\theta)\}\, . \end{aligned} \label{f27} \end{equation} This is the Minkowski version of \eqref{e7a}. Using these relations, we can repeat the minimal-area calculation previously done in the Euclidean case, with the cosh-Gordon equation replaced by the sinh-Gordon equation. The perturbed contour is parametrized as \begin{equation} \hat{x}_{\pm}(\theta)=e^{\pm is(\theta)+i\epsilon j' \sin(p\,s(\theta))}\, , \label{f28} \end{equation} where the $i$ in front of the $\epsilon$ is due to the fact that $\hat{x}_{\pm}(\theta)$ have modulus one. Repeating the calculation, we have \begin{equation} \begin{split} s(\theta)&=\theta+\frac{j'^2(p+5p^3)\text{sin}(2p\theta)}{8(-1+4p^2)} \epsilon^2+O(\epsilon^4)\, ,\\ f(z)&=-\frac{1}{4} j' p \left(p^2-1\right)z^{p-2} \epsilon -\frac{3 j'^3 p^3 \left(p^2-1\right) \left(5 p^2+1\right)z^{p-2} }{64 \left(4 p^2-1\right)}\epsilon ^3+O(\epsilon^4)\, ,\\ \alpha(r,\theta)&=\text{ln}(\frac{1}{1-r^2})-\frac{j'^2 p}{16 r^2 \left(r^2-1\right)}(4 r^2+4 r^4-p (p+1)^2 r^{2 p}\\&+(p+1)^2 (3 p-4) r^{2 p+2}-(p-1)^2 (3 p+4) r^{2 p+4}+(p-1)^2 p r^{2 p+6})\epsilon ^2\\&+O(\epsilon^4)\, .\\ \end{split} \label{f29} \end{equation} The final result for the area is \begin{equation} {\cal A}_{f}=-2 \pi +\frac{1}{2} p \left(p^2-1\right)\pi j'^2 \epsilon ^2+\frac{ p \left(p^2-1\right) \left(23 p^4+p^2\right)\pi j'^4 }{32 \left(4 p^2-1\right)}\epsilon ^4+O(\epsilon^5)\, .
\label{f30} \end{equation} The particular contour given by the Mathieu solution is parametrized as \begin{equation} \hat{x}_{\pm}(\theta)=e^{\pm is(\theta)+i\xi(s(\theta))}\, , \label{f31} \end{equation} where \begin{equation} \xi(s)=j q \sin \left(\frac{s}{\nu }\right)+q^3 \left(k \sin \left(\frac{s}{\nu }\right)+l \sin \left(\frac{3 s}{\nu }\right)\right)\, . \label{f32} \end{equation} The values of the coefficients are \begin{equation} \begin{aligned} & j=-\frac{\nu }{\nu ^2-1}\, ,\\ & k=-\frac{3 \nu \left(\nu ^2+5\right)}{16 \left(\nu ^2-4\right) \left(\nu ^2-1\right)^3}\, ,\\ & l=-\frac{\nu \left(5 \nu ^4+36 \nu ^2-89\right)}{48 \left(\nu ^2-9\right) \left(\nu ^2-4\right) \left(\nu ^2-1\right)^3}\, . \end{aligned} \label{f33} \end{equation} Similarly to the Euclidean case, letting $\epsilon=q$, $p=\frac{1}{\nu}$, and making the replacement \begin{equation} j'=j+k\epsilon^2=-\frac{\nu }{\nu ^2-1}-\frac{3 \nu \left(\nu ^2+5\right) q^2}{16 \left(\nu ^2-4\right) \left(\nu ^2-1\right)^3}\, , \label{f34} \end{equation} in \eqref{f30}, we obtain the area \begin{equation} {\cal A}_{f}=-2\pi+\frac{\pi q^2}{2 \nu \left(1-\nu ^2\right)}-\frac{\left(5 \nu ^2+7\right)\pi q^4}{32 \nu \left(4-\nu ^2\right) \left(1-\nu ^2\right)^3}+O(q^5)\, . \label{f35} \end{equation} This result agrees with the Mathieu equation calculation and is identical to the result in the Euclidean case. However, now $q$ is real (see eq.\eqref{f11}) and ${\cal A}_f>-2\pi$. \subsection{Near light-like solution, tight-binding approximation, $q\rightarrow \infty$} The Mathieu equation \eqref{b3} can be written as \begin{equation} -\partial_u^2 \chi + V(u) \chi = a \chi, \ \ \ \ V(u) = 2q\cos2u\, , \label{f36} \end{equation} which is a standard Schroedinger equation with potential $V(u)$ and energy $a$. From eq.\eqref{b6}, one can see that the cases $\lambda=e^{i\varphi}=\pm1$ correspond to $\varphi=0,\pi$, representing a shift of $0$ or $\pi/2$ in $u$, which simply corresponds to taking the potential as $V(u)$ or $-V(u)$. Both cases are similar; let us first consider the case $\lambda=-1$, where the potential is $-V(u)$, which is relevant to determine $x_-(s)$ (whereas $+V(u)$ corresponds to $x_+$). In the limit $q\rightarrow \infty$ such a potential becomes a set of $n+2$ separated wells; tunneling between them is suppressed by a potential barrier of height $q$ (and fixed width). The wave function for the ground state is a superposition of localized wave-functions at the different minima. This is clearly seen in figure \ref{TBpot}, where the potential and modulus of the Mathieu function are displayed. \begin{figure} \centerline{\includegraphics[width=0.5\textwidth]{TBpot.png}} \caption{Potential (red) and modulus of the Mathieu function (blue) for $q=20.25$ and $n=2$. Clearly the wave-function localizes at the minima.} \label{TBpot} \end{figure} In this regime, the Mathieu functions can be well approximated by using the tight-binding approximation. It is given by \begin{equation} \chi_{\nu}^{TB}(u) = \sum_{j=-\infty}^{\infty} e^{ij\pi\nu}\, \chi_0(u-j\pi) \, . \label{f36b} \end{equation} This function clearly satisfies the periodicity condition $\chi_{\nu}^{TB}(u+\pi) = e^{i\pi\nu} \chi_{\nu}^{TB}(u)$ and obeys (approximately) the wave equation if $\chi_0(u)$ is chosen appropriately. Here we take $\chi_0$ to be a real polynomial times a Gaussian and choose the polynomial so as to minimize the expectation value of the energy.
The result is \begin{equation} \chi_0(u) = \left(1+\frac{1}{8} u^2 +\frac{1}{12} \sqrt{q} u^4\right)\, e^{-\sqrt{q} u^2}\, , \label{f37} \end{equation} with energy given by \begin{equation} a_\nu(q) = -2 q + 2\sqrt{q} - \frac{1}{4} - \frac{1}{32}\frac{1}{\sqrt{q}} - \frac{3}{256} \frac{1}{q} + {\cal O}\left(\frac{1}{q^{3/2}}\right)\, . \label{f38} \end{equation} Notice that in this approximation $a_\nu(q)$ is independent of $\nu$; namely, around the given value of $a_\nu(q)$ there is an exponentially small window where $\nu$ changes from 0 to 1. An improved result is obtained by replacing the wave function $\chi_0$ away from $u=0$ by a WKB approximation: \begin{equation} \chi_0^{WKB} = \frac{C}{\sqrt{\kappa(u)}}\, e^{\int^u \kappa(u')du'} , \ \ \ \ \kappa(u)=\sqrt{-a(q)-2q\cos 2u}\, , \label{f39} \end{equation} where the constant $C$ is chosen to match the previous approximation $\chi_0(u)$ at some intermediate point. Since the shape is given by \begin{equation} \hat{x}_- = \frac{\left(\chi^{TB}_\nu\right)^*}{\chi^{TB}_\nu}\, , \label{f40} \end{equation} and the function $\chi_0(u)$ is real, the phase of $\chi^{TB}_\nu$ changes abruptly when $u$ crosses a maximum of the potential. For example, at $u=\frac{\pi}{2}$ in fig.~\ref{TBpot}, the wave function localized around $u=0$ is replaced by the one localized at $u=\pi$, which is multiplied by a different phase in eq.\eqref{f36b}. These abrupt changes correspond to the straight lines along $x_-$. When we consider $\lambda=+1$ the potential is inverted, and therefore the minima of the potential correspond to straight lines along $x_+$. The cusps are associated with the intermediate regions between maxima and minima. In those regions, the phase of $\chi_\nu$ remains constant, implying that $\hat{x}_+$, $\hat{x}_-$ are also constant and equal to their value at the cusp. It seems possible to define a general procedure to expand around the light-like Wilson loop similar to what can be done around the circle. We leave that for future work. We conclude this part by computing the area in this approximation using eqs.\eqref{f15} and \eqref{f38}: \begin{equation} {\cal A}_f = -2\pi- \frac{\pi}{n+2}+\pi(n+2) \left[-2 q + 2\sqrt{q} - \frac{1}{4} - \frac{1}{32}\frac{1}{\sqrt{q}} - \frac{3}{256} \frac{1}{q} + {\cal O}\left(\frac{1}{q^{3/2}}\right) \right]\, . \label{f41} \end{equation} When $q\rightarrow\infty$ there is a divergence proportional to the number of cusps, as we expect from the cusp anomaly computations \cite{cusp, AM1,AMSV}. \section{Conclusions} One of the most important observables in gauge theories is the Wilson loop. The AdS/CFT correspondence opened up the possibility of studying such operators at strong 't Hooft coupling by using minimal area surfaces. In this paper we consider a previously proposed method to find such surfaces based on the properties of a Schroedinger equation with a potential given by the Schwarzian derivative of the shape of the Wilson loop. It allows us to find interesting new Wilson loops dual to surfaces whose area can be computed analytically in terms of eigenvalues of the Mathieu equation. Those Wilson loops have interesting properties since they can be seen as deformed or regularized versions of previously known solutions: the regular polygons with light-like cusps and the multiply wound circular Wilson loop. It also allowed us to check analytically Dekel's method to expand around the circular solutions.
Finally, in the near light-like case, we found that the potential becomes a series of separated wells, each one associated with a light-like segment. This opens up the possibility of studying systematically perturbations around such Wilson loops, a subject that we leave for future work. Summarizing, we believe that the idea of studying Wilson loops through their associated Schroedinger equation is a productive one that uses the integrability properties of the system and that presumably will lead to further insight into this important operator. \section{Acknowledgments} We are very grateful to A. Dekel, S. Komatsu, J. Toledo and P. Vieira for useful comments and discussions on this topic. This work was supported in part by DOE through grant DE-SC0007884. M.K. is very grateful to Perimeter Institute for Theoretical Physics for hospitality while this work was being completed.
\section{Introduction} \label{sec:intro} Deep learning offers a powerful framework for learning increasingly complex representations for visual recognition tasks. The work of Krizhevsky \etal \cite{KSH13} convincingly demonstrated that deep neural networks can be very effective in classifying images in the challenging Imagenet benchmark \cite{DDSL+09}, significantly outperforming computer vision systems built on top of engineered features like SIFT \cite{Lowe04}. Their success spurred a lot of interest in the machine learning and computer vision communities. Subsequent work has improved our understanding and has refined certain aspects of this class of models \cite{ZeFe13b}. A number of different studies have further shown that the features learned by deep neural networks are generic and can be successfully employed in a black-box fashion in other datasets or tasks such as image detection \cite{ZeFe13b, OuWa13, SEZM+14, GDDM14, RASC14, CSVZ14}. The deep learning models that so far have proven most successful in image recognition tasks are feed-forward convolutional neural networks trained in a supervised fashion to minimize a regularized training set classification error by back-propagation. Their recent success is partly due to the availability of large annotated datasets and fast GPU computing, and partly due to some important methodological developments such as dropout regularization and rectified linear activations \cite{KSH13}. However, the key building blocks of deep neural networks for images have been around for many years \cite{LBBH98}: (1) Convolutional multi-layer neural networks with small receptive fields that spatially share parameters within each layer; and (2) gradual abstraction and spatial resolution reduction after each convolutional layer as we ascend the network hierarchy, most effectively via max-pooling \cite{RiPo99, JKRL09}. In this work we build a deep neural network around the epitomic representation \cite{JFK03}. The image epitome is a data structure appropriate for learning translation-aware image representations, naturally disentangling appearance and position modeling of visual patterns. In the context of deep learning, an epitomic convolution layer substitutes for a pair of consecutive convolution and max-pooling layers typically used in deep convolutional neural networks. In epitomic matching, for each regularly-spaced input data patch in the lower layer we search across filters in the epitomic dictionary for the strongest response. In max-pooling, on the other hand, for each filter in the dictionary we search within a window in the lower input data layer for the strongest response. Epitomic matching is thus an input-centered dual alternative to the filter-centered standard max-pooling. We investigate two main deep epitomic network model variants. Our first variant employs a dictionary of mini-epitomes at each network layer. Each mini-epitome is only slightly larger than the corresponding input data patch, just enough to accommodate the desired extent of position invariance. For each input data patch, the mini-epitome layer outputs a single value per mini-epitome, which is the maximum response across all filters in the mini-epitome. Our second topographic variant uses just a few large epitomes at each network layer. For each input data patch, the topographic epitome layer outputs multiple values per large epitome, which are the local maximum responses at regularly spaced positions within each topography.
We quantitatively evaluate the proposed model primarily in image classification experiments on the Imagenet ILSVRC-2012 large-scale image classification task. We train the model by error back-propagation to minimize the classification log-loss, similarly to \cite{KSH13}. Our best mini-epitomic variant achieves 13.6\% top-5 error on the validation set, which is 0.6\% better than a conventional max-pooled convolutional network of comparable structure, whose error rate is 14.2\%. Note that the error rate of the original model in \cite{KSH13} is 18.2\%, obtained however with a smaller network. All these performance numbers refer to classification with a single network. We also find that the proposed epitomic model converges faster, especially when the filters in the dictionary are mean- and contrast-normalized, which is related to \cite{ZeFe13b}. We have found this normalization to also accelerate convergence of standard max-pooled networks. We further show that a deep epitomic network trained on Imagenet can be effectively used as a black-box feature extractor for tasks such as Caltech-101 image classification. Finally, we report excellent image classification results on the MNIST and CIFAR-10 benchmarks with smaller deep epitomic networks trained from scratch on these small-image datasets. \paragraph{Related work} Our model builds on the epitomic image representation \cite{JFK03}, which was initially geared towards image and video modeling tasks. Single-level dictionaries of image epitomes learned in an unsupervised fashion for image denoising have been explored in \cite{AhEl08, BMBP11}. Recently, single-level mini-epitomes learned by a variant of K-means have been proposed as an alternative to SIFT for image classification \cite{PCY14}. To our knowledge, epitomes have not been studied before in conjunction with deep models or learned to optimize a supervised objective. The proposed epitomic model is closely related to maxout networks \cite{GWMCB13}. Similarly to epitomic matching, the response of a maxout layer is the maximum across filter responses. The critical difference is that the epitomic layer is hard-wired to model position invariance, since filters extracted from an epitome share values in their area of overlap. This parameter sharing significantly reduces the number of free parameters that need to be learned. Maxout is typically used in conjunction with max-pooling \cite{GWMCB13}, while epitomes fully substitute for it. Moreover, maxout requires random input perturbations with dropout during model training; otherwise it is prone to creating inactive features. On the contrary, we have found that learning deep epitomic networks does not require dropout in the convolutional layers -- similarly to \cite{KSH13}, we only use dropout regularization in the fully connected layers of our network. Other variants of max pooling have been explored before. Stochastic pooling \cite{ZeFe13a} has been proposed in conjunction with supervised learning. Probabilistic pooling \cite{LGRN09} and deconvolutional networks \cite{ZKTF10} have been proposed before in conjunction with unsupervised learning, avoiding the theoretical and practical difficulties associated with building probabilistic models on top of max-pooling. While we do not explore it in this paper, we are also very interested in pursuing unsupervised learning methods appropriate for the deep epitomic representation. The topographic variant of the proposed epitomic model naturally learns topographic feature maps.
Adjacent filters in a single epitome share values in their area of overlap, and thus constitute a hard-wired topographic map. This relates the proposed model to topographic ICA \cite{HyHo01} and related models \cite{OWH06, KRFL09, LRMD+12}, which are typically trained to optimize unsupervised objectives. \section{Deep Epitomic Convolutional Networks} \label{sec:model} \begin{figure}[!tbp] \centering \begin{tabular}{c c} \includegraphics[width=0.45\columnwidth]{figs/illustration/conv_max}& \includegraphics[width=0.45\columnwidth]{figs/illustration/epitomic_conv}\\ (a)&(b) \end{tabular} \caption{(a) Standard max-pooled convolution: For each filter we look for its best match within a small window in the data layer. (b) Proposed epitomic convolution (mini-epitome variant): For input data patches sparsely sampled on a regular grid we look for their best match in each mini-epitome.} \label{fig:epitome_diagram} \end{figure} \subsection{Mini-Epitomic deep networks} We first describe a single layer of the mini-epitome variant of the proposed model, with reference to Fig.~\ref{fig:epitome_diagram}. In standard max-pooled convolution, we have a dictionary of $K$ filters of spatial size $\by{W}{W}$ pixels spanning $C$ channels, which we represent as real-valued vectors $\{\wv_k\}_{k=1}^K$ with $W \cdot W \cdot C$ elements. We apply each of them in a convolutional fashion to every $\by{W}{W}$ input patch $\{\xv_i\}$ densely extracted from each position in the input layer, which also has $C$ channels. A reduced-resolution output map is produced by computing the maximum response within a small $\by{D}{D}$ window of displacements $p \in \mathcal{N}_{input}$ around positions $i$ in the input map, which are $D$ pixels apart from each other. The output map $\{z_{i,k}\}$ of standard max-pooled convolution has spatial resolution reduced by a factor of $D$ across each dimension and will consist of $K$ channels, one for each of the $K$ filters. Specifically: \begin{equation} (z_{i,k},p_{i,k}) \leftarrow \max_{p \in \mathcal{N}_{input}} \xv_{i+p}^T \wv_k \, \label{eq:conv-max} \end{equation} where $p_{i,k}$ points to the input layer position where the maximum is attained. In the proposed epitomic convolution scheme we replace the filters with larger mini-epitomes $\{\vv_k\}_{k=1}^K$ of spatial size $\by{V}{V}$ pixels, where $V = W+D-1$. Each mini-epitome contains $D^2$ filters $\wv_{k,p}$ of size $\by{W}{W}$, one for each of the $\by{D}{D}$ displacements $p \in \mathcal{N}_{epit}$ in the epitome. We \emph{sparsely} extract patches $\{\xv_i\}$ from the input layer on a regular grid with stride $D$ pixels. In the proposed epitomic convolution model we reverse the role of filters and input layer patches, computing the maximum response over epitomic positions rather than input layer positions: \begin{equation} (y_{i,k},p_{i,k}) \leftarrow \max_{p \in \mathcal{N}_{epit}} \xv_i^T \wv_{k,p} \, \label{eq:epitomic-conv} \end{equation} where $p_{i,k}$ now points to the position in the epitome where the maximum is attained. Since the input position is fixed, we can think of epitomic matching as an input-centered dual alternative to the filter-centered standard max-pooling. Computing the maximum response over filters rather than image positions resembles the maxout scheme of \cite{GWMCB13}, yet in the proposed model the filters within the epitome are constrained to share values in their area of overlap.
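To make the comparison concrete, the following is a minimal, naive loop-based sketch of the epitomic matching of eq.~\eqref{eq:epitomic-conv} (our own illustration; the actual implementation described below maps this to matrix-matrix products, and all names are ours):
\begin{verbatim}
import numpy as np

def epitomic_conv(x, epitomes, w, d):
    # x: (C, H, H) input; epitomes: (K, C, V, V) with V = w + d - 1.
    # For each patch sampled with stride d, return the maximum response
    # over the d*d filters inside each mini-epitome, together with the
    # epitome position where it is attained.
    K, C, V, _ = epitomes.shape
    assert V == w + d - 1
    n = (x.shape[1] - w) // d + 1
    y = np.full((K, n, n), -np.inf)
    pos = np.zeros((K, n, n, 2), dtype=int)
    for i in range(n):
        for j in range(n):
            patch = x[:, i*d:i*d+w, j*d:j*d+w]
            for k in range(K):
                for p in range(d):
                    for q in range(d):
                        r = np.sum(patch * epitomes[k, :, p:p+w, q:q+w])
                        if r > y[k, i, j]:
                            y[k, i, j], pos[k, i, j] = r, (p, q)
    return y, pos
\end{verbatim}
Standard max-pooled convolution would instead slide a fixed $\by{W}{W}$ filter over a $\by{D}{D}$ window of input positions; here the roles are exactly reversed.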
Similarly to max-pooled convolution, the epitomic convolution output map $\{y_{i,k}\}$ has $K$ channels and is subsampled by a factor of $D$ across each spatial dimension. Epitomic convolution has the same computational cost as max-pooled convolution. For each output map value, they both require computing $D^2$ inner products followed by finding the maximum response. Epitomic convolution requires $D^2$ times more work per input patch, but this is fully offset by the fact that we extract input patches sparsely with a stride of $D$ pixels. Similarly to standard max-pooling, the main computational primitive is multi-channel convolution with the set of filters in the epitomic dictionary, which we implement as matrix-matrix multiplication and carry out on the GPU, using the cuBLAS library. To build a deep epitomic model, we stack multiple epitomic convolution layers on top of each other. The output of each layer passes through a rectified linear activation unit $y_{i,k} \leftarrow \max(y_{i,k} + \beta_k, 0)$ and is fed as input to the subsequent layer, where $\beta_k$ is the bias. Similarly to \cite{KSH13}, the final two layers of our network for Imagenet image classification are fully connected and are regularized by dropout. We learn the model parameters (epitomic weights and biases for each layer) in a supervised fashion by error back-propagation. We present full details of our model architecture and training methodology in the experimental section. \begin{figure*}[!tbp] \centering \begin{tabular}{c c c c c c} \multicolumn{3}{c}{\includegraphics[scale=1.4]{figs/visualize/epito2b30/conv1_0}}& \multicolumn{3}{c}{\includegraphics[scale=1.4]{figs/visualize/epitonb30/conv1_0}}\\ \multicolumn{3}{c}{(a) Mini-epitomes}& \multicolumn{3}{c}{(b) Mini-epitomes + normalization}\\ \\ \multicolumn{2}{c}{\includegraphics[scale=1.4]{figs/visualize/overfeat2b30/conv1_0}}& \multicolumn{2}{c}{\includegraphics[scale=1.4]{figs/visualize/overfeatnb30/conv1_0}}& \multicolumn{2}{c}{\includegraphics[scale=1.4]{figs/visualize/topon2b30/conv1_0}}\\ \multicolumn{2}{c}{(c) Max-pooling}& \multicolumn{2}{c}{(d) Max-pooling + normalization}& \multicolumn{2}{c}{(e) Topographic + normaliz.}\\ \end{tabular} \caption{Filters at the first convolutional layer for different models trained on Imagenet, shown at the same scale. For all models the input color image patch has size \by{8}{8} pixels. (a) Proposed \textsl{Epitomic} model with 96 mini-epitomes, each having size \by{12}{12} pixels. (b) Same as (a) with mean+contrast normalization. (c) Baseline \textsl{Max-Pool} model with 96 filters of size \by{8}{8} pixels each. (d) Same as (c) with mean+contrast normalization. (e) Proposed \textsl{Topographic} model with 4 epitomes of size \by{36}{36} pixels each and mean+contrast normalization.} \label{fig:visualize_conv1} \end{figure*} \subsection{Topographic deep networks} We have also experimented with a topographic variant of the proposed deep epitomic network. For this we use a dictionary with just a few large epitomes of spatial size $\by{V}{V}$ pixels, with $V \ge W+D-1$. We retain the local maximum responses over $\by{D}{D}$ neighborhoods spaced $D$ pixels apart in each of the epitomes, thus yielding $( \lfloor ((V-W+1)-D)/D \rfloor + 1)^2$ output values for each of the $K$ epitomes in the dictionary. The mini-epitomic variant can be considered as a special case of the topographic one when $V = W+D-1$.
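In the same illustrative spirit as above, the topographic layer can be sketched as follows (again a naive reference implementation of our own, with names of our choosing):
\begin{verbatim}
import numpy as np

def topographic_conv(x, epitome, w, d):
    # epitome: (C, V, V) with V >= w + d - 1. For each input patch,
    # compute responses at all V-w+1 positions in the epitome, then
    # keep local maxima over d*d blocks spaced d apart; this yields
    # (floor((V-w+1-d)/d) + 1)**2 output channels per epitome.
    C, V, _ = epitome.shape
    n = (x.shape[1] - w) // d + 1
    m = ((V - w + 1) - d) // d + 1
    y = np.empty((m*m, n, n))
    for i in range(n):
        for j in range(n):
            patch = x[:, i*d:i*d+w, j*d:j*d+w]
            resp = np.array([[np.sum(patch * epitome[:, p:p+w, q:q+w])
                              for q in range(V - w + 1)]
                             for p in range(V - w + 1)])
            for a in range(m):
                for b in range(m):
                    y[a*m + b, i, j] = resp[a*d:a*d+d, b*d:b*d+d].max()
    return y
\end{verbatim}
Setting $V = w + d - 1$ gives $m=1$ and recovers the mini-epitomic layer above.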
\subsection{Optional mean and contrast normalization} \label{sec:mean_con_norm} Motivated by \cite{ZeFe13b}, we have also explored the effect of filter mean and contrast normalization on deep epitomic network training. More specifically, we considered a variant of the model where the epitomic convolution responses are computed as: \begin{equation} (y_{i,k},p_{i,k}) \leftarrow \max_{p \in \mathcal{N}_{epit}} \frac{\xv_i^T \bar{\wv}_{k,p}}{\norm{\bar{\wv}_{k,p}}_\lambda} \, \label{eq:epitomic-conv-norm} \end{equation} where $\bar{\wv}_{k,p}$ is a mean-normalized version of the filters and $\norm{\bar{\wv}_{k,p}}_\lambda \triangleq (\bar{\wv}_{k,p}^T \bar{\wv}_{k,p} + \lambda)^{1/2}$ is their contrast, with $\lambda = 0.01$ a small positive constant. This normalization requires only a slight modification of the stochastic gradient descent update formula and incurs negligible computational overhead. Note that the contrast normalization explored here is slightly different from the one in \cite{ZeFe13b}, who only scale down the filters whenever their contrast exceeds a pre-defined threshold. We have found the mean and contrast normalization of Eq.~\eqref{eq:epitomic-conv-norm} to be crucial for learning the topographic version of the proposed model. We have also found that it significantly accelerates learning of the mini-epitome version of the proposed model, as well as the standard max-pooled convolutional model, without however significantly affecting the final performance of these two models. \section{Image Classification Experiments} \subsection{Image classification tasks} We have performed most of our experimental investigation on the Imagenet ILSVRC-2012 dataset \cite{DDSL+09}, focusing on the task of image classification. This dataset contains more than 1.2 million training images, 50,000 validation images, and 100,000 test images. Each image is assigned to one out of 1,000 possible object categories. Performance is evaluated using the top-5 classification error. Such large-scale image datasets have so far proven essential for successfully training big deep neural networks with supervised criteria. Similarly to other recent works \cite{ZeFe13b, RASC14, CSVZ14}, we also evaluate deep epitomic networks trained on Imagenet as a black-box visual feature front-end on the Caltech-101 benchmark \cite{FFP04}. This involves classifying images into one out of 102 possible image classes. We further consider two standard classification benchmarks involving thumbnail-sized images, the MNIST digits \cite{LeCo98} and CIFAR-10 \cite{Kriz09}, both involving classification into 10 possible classes. \subsection{Network architecture and training methodology} For our Imagenet experiments, we compare the proposed deep mini-epitomic and topographic deep networks with deep convolutional networks employing standard max-pooling. For fair comparison, we use architectures as similar as possible, involving in all cases six convolutional layers, followed by two fully-connected layers and a 1000-way softmax layer. We use rectified linear activation units throughout the network. Similarly to \cite{KSH13}, we apply local response normalization (LRN) to the output of the first two convolutional layers and dropout to the output of the two fully-connected layers.
\begin{table}[t] \setlength{\tabcolsep}{3pt} \begin{center} \scalebox{1.00} { \begin{tabular}{|l||c|c|c|c|c|c||c|c||c|} \hline Layer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Out \\ \hline \hline Type & conv + & conv + & conv & conv & conv & conv + & full + & full + & full \\ & lrn + max& lrn + max& & & & max & dropout& dropout& \\ \hline Output channels & 96 & 192 & 256 & 384 & 512 & 512 & 4096 & 4096 & 1000 \\ \hline Filter size & 8x8 & 6x6 & 3x3 & 3x3 & 3x3 & 3x3 & - & - & - \\ \hline Input stride & 2x2 & 1x1 & 1x1 & 1x1 & 1x1 & 1x1 & - & - & - \\ \hline Pooling size & 3x3 & 2x2 & - & - & - & 3x3 & - & - & - \\ \hline \end{tabular} } \caption{Architecture of the baseline \textsl{Max-Pool} convolutional network.} \label{tab:max_pool_net} \end{center} \end{table} The architecture of our baseline \textsl{Max-Pool} network is specified in Table~\ref{tab:max_pool_net}. It employs max-pooling in the convolutional layers 1, 2, and 6. To accelerate computation, it uses an image stride equal to 2 pixels in the first layer. It has a similar structure to the Overfeat model \cite{SEZM+14}, yet significantly fewer neurons in the convolutional layers 2 to 6. Another difference from \cite{SEZM+14} is the use of LRN, which in our experience facilitates training. The architecture of the proposed \textsl{Epitomic} network is specified in Table~\ref{tab:mini_epitome_net}. It has exactly the same number of neurons at each layer as the \textsl{Max-Pool} model but it uses mini-epitomes in place of convolution + max pooling at layers 1, 2, and 6. It uses the same filter sizes as the \textsl{Max-Pool} model, and the mini-epitome sizes have been selected so as to allow the same extent of translation invariance as the corresponding layers in the baseline model. We use input image stride equal to 4 pixels and further perform epitomic search with stride equal to 2 pixels in the first layer to also accelerate computation. \begin{table}[t] \setlength{\tabcolsep}{3pt} \begin{center} \scalebox{1.00} { \begin{tabular}{|l||c|c|c|c|c|c||c|c||c|} \hline Layer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Out \\ \hline \hline Type & epit-conv& epit-conv& conv & conv & conv & epit-conv & full + & full + & full \\ & + lrn & + lrn & & & & & dropout& dropout& \\ \hline Output channels & 96 & 192 & 256 & 384 & 512 & 512 & 4096 & 4096 & 1000 \\ \hline Epitome size & 12x12 & 8x8 & - & - & - & 5x5 & - & - & - \\ \hline Filter size & 8x8 & 6x6 & 3x3 & 3x3 & 3x3 & 3x3 & - & - & - \\ \hline Input stride & 4x4 & 3x3 & 1x1 & 1x1 & 1x1 & 3x3 & - & - & - \\ \hline Epitome stride & 2x2 & 1x1 & - & - & - & 1x1 & - & - & - \\ \hline \end{tabular} } \caption{Architecture of the proposed \textsl{Epitomic} convolutional network.} \label{tab:mini_epitome_net} \end{center} \end{table} The architecture of our second proposed \textsl{Topographic} network is specified in Table~\ref{tab:topographic_net}. It uses four epitomes at layers 1 and 2 and eight epitomes at layer 6 to learn topographic feature maps. It uses the same filter sizes as the previous two models, and the epitome sizes have been selected so that each layer produces roughly the same number of output channels when allowing the same extent of translation invariance as the corresponding layers in the other two models.
\begin{table}[t] \setlength{\tabcolsep}{3pt} \begin{center} \scalebox{1.00} { \begin{tabular}{|l||c|c|c|c|c|c||c|c||c|} \hline Layer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Out \\ \hline \hline Type & epit-conv& epit-conv& conv & conv & conv & epit-conv & full + & full + & full \\ & + lrn & + lrn & & & & & dropout& dropout& \\ \hline Output channels & 4x25 & 4x49 & 256 & 384 & 512 & 8x64 & 4096 & 4096 & 1000 \\ \hline Epitome size & 36x36 & 26x26 & - & - & - & 26x26 & - & - & - \\ \hline Filter size & 8x8 & 6x6 & 3x3 & 3x3 & 3x3 & 3x3 & - & - & - \\ \hline Input stride & 4x4 & 3x3 & 1x1 & 1x1 & 1x1 & 3x3 & - & - & - \\ \hline Epitome stride & 2x2 & 1x1 & - & - & - & 1x1 & - & - & - \\ \hline Epit. pool size & 3x3 & 3x3 & - & - & - & 3x3 & - & - & - \\ \hline \end{tabular} } \caption{Architecture of the proposed \textsl{Topographic} convolutional network.} \label{tab:topographic_net} \end{center} \end{table} We have also tried variants of the three models above where we activate the mean and contrast normalization scheme of Section~\ref{sec:mean_con_norm} in layers 1, 2, and 6 of the network. We followed the methodology of \cite{KSH13} in training our models. We used stochastic gradient descent with the learning rate initialized to 0.01 and decreased by a factor of 10 each time the validation error stopped improving. We used momentum equal to 0.9 and mini-batches of 128 images. The weight decay factor was equal to $\by{5}{10^{-4}}$. Importantly, weight decay needs to be turned off for the layers that use mean and contrast normalization. Training each of the three models takes two weeks using a single NVIDIA Titan GPU. Similarly to \cite{CSVZ14}, we resized the training images to have smallest dimension equal to 256 pixels while preserving their aspect ratio and not cropping their larger dimension. We also subtracted from each image pixel the global mean RGB color values computed over the whole Imagenet training set. During training, we presented the networks with $\by{220}{220}$ crops randomly sampled from the resized image area, flipped left-to-right with probability 0.5, also injecting global color noise exactly as in \cite{KSH13}. During evaluation, we presented the networks with 10 regularly sampled image crops (center + 4 corners, as well as their left-to-right flipped versions). \subsection{Weight visualization} We visualize in Figure~\ref{fig:visualize_conv1} the first-layer filter weights of the networks above. The networks learn receptive fields sensitive to edge, blob, texture, and color patterns. \subsection{Classification results} We report in Table~\ref{tab:imagenet_results} our results on the Imagenet ILSVRC-2012 benchmark, also including results previously reported in the literature \cite{KSH13, ZeFe13b, SEZM+14}. These all refer to the top-5 error on the validation set and are obtained with a single network. Our best result at 13.6\% with the proposed \textsl{Epitomic-Norm} network is 0.6\% better than the baseline \textsl{Max-Pool} result at 14.2\% error. Our \textsl{Topographic-Norm} network scores less well, yielding a 15.4\% error rate, which however is still better than \cite{KSH13, ZeFe13b}. Mean and contrast normalization had little effect on final performance for the \textsl{Max-Pool} and \textsl{Epitomic} models, but we found it essential for learning the \textsl{Topographic} model. The improved performance that we got with the \textsl{Max-Pool} baseline network compared to Overfeat \cite{SEZM+14} is most likely due to our use of LRN and aspect-ratio preserving image resizing.
When preparing this manuscript, we became aware of the work of \cite{CSVZ14} that reports an even lower 13.1\% error rate with a max-pooled network, using however significantly more neurons than we do in the convolutional layers 2 to 5. \begin{table}[t] \setlength{\tabcolsep}{3pt} \begin{center} \scalebox{0.9} { \begin{tabular}{|l||c|c|c||c|c|c|c|c|} \hline Model & Krizhevsky & Zeiler-Fergus & Overfeat & Max-Pool & Max-Pool & Epitomic & Epitomic & Topographic \\ & \cite{KSH13}& \cite{ZeFe13b} &\cite{SEZM+14}& & + norm & & + norm & + norm \\ \hline Top-5 Error & 18.2\% & 16.0\% & 14.7\% & 14.2\% & 14.4\% &\bf{13.7\%}&\bf{13.6\%}& 15.4\% \\ \hline \end{tabular} } \caption{Imagenet ILSVRC-2012 top-5 error on validation set. All performance figures are obtained with a single network, averaging classification probabilities over 10 image crops (center + 4 corners, as well as their left-to-right flipped versions).} \label{tab:imagenet_results} \end{center} \end{table} We next assess the quality of the proposed model trained on Imagenet as a black-box feature extractor for Caltech-101 image classification. For this purpose, we used the 4096-dimensional output of the last fully-connected layer, without doing any fine-tuning of the network weights for the new task. We trained a 102-way SVM classifier using \textsl{libsvm} \cite{ChLi11} and the default regularization parameter. For this experiment we just resized the Caltech-101 images to size \by{220}{220} without preserving their aspect ratio and computed a single feature vector per image. We normalized the feature vector to have unit length before feeding it into the SVM. We report in Table~\ref{tab:caltech101_results} the mean classification accuracy obtained with the different networks. The proposed \textsl{Epitomic} model performs at 87.8\%, 0.5\% better than the baseline \textsl{Max-Pool} model. \begin{table}[t] \setlength{\tabcolsep}{3pt} \begin{center} \scalebox{0.9} { \begin{tabular}{|l||c||c|c|c|c|c|} \hline Model & Zeiler-Fergus & Max-Pool & Max-Pool & Epitomic & Epitomic & Topographic \\ & \cite{ZeFe13b} & & + norm & & + norm & + norm \\ \hline Mean Accuracy & 86.5\% & 87.3\% & 85.3\% &\bf{87.8\%}& 87.4\% & 85.8\% \\ \hline \end{tabular} } \caption{Caltech-101 mean accuracy with deep networks pretrained on Imagenet.} \label{tab:caltech101_results} \end{center} \end{table} We have also performed experiments with the epitomic model on classifying the small images of the MNIST and CIFAR-10 datasets. For these tasks we have trained much smaller networks from scratch, using three epitomic convolutional layers, followed by one fully-connected layer and the final softmax classification layer. Because of the small training set sizes, we have found it beneficial to also employ dropout regularization in the epitomic convolution layers. In Table~\ref{tab:mnist_cifar10_results} we report the classification error rates we obtained. Our results are comparable to maxout \cite{GWMCB13}, which achieves state-of-the-art results on these tasks. \begin{table}[t] \begin{center} \scalebox{0.9} { \begin{tabular}{c c} \begin{tabular}{|l||c|c|} \hline Model & Maxout & Epitomic \\ \hline Error rate & 0.45\% & 0.44\% \\ \hline \end{tabular} & \begin{tabular}{|l||c|c|} \hline Model & Maxout & Epitomic \\ \hline Error rate & 9.38\% & 9.43\% \\ \hline \end{tabular} \\ (a) MNIST & (b) CIFAR-10 \end{tabular} } \caption{Classification error rates on small image datasets for maxout \cite{GWMCB13} and the proposed mini-epitomic deep network: (a) MNIST.
(b) CIFAR-10.} \label{tab:mnist_cifar10_results} \end{center} \end{table}

\subsection{Mean-contrast normalization and convergence speed} We comment on the learning speed and convergence properties of the different models we experimented with on Imagenet. We show in Figure~\ref{fig:imagenet_optim} how the top-5 validation error improves as learning progresses for the different models we tested, with or without mean+contrast normalization. For reference, we also include a corresponding plot we reproduced for the original model of Krizhevsky \etal \cite{KSH13}. We observe that mean+contrast normalization significantly accelerates convergence of both epitomic and max-pooled models, without, however, significantly influencing the final model quality. The epitomic models exhibit somewhat improved convergence behavior during learning compared to the max-pooled baselines, whose performance fluctuates more.

\begin{figure}[!tbp] \centering \includegraphics[width=0.8\columnwidth]{figs/optimization/val_acc} \caption{Top-5 validation set accuracy (center non-flipped crop only) for different models and normalization.} \label{fig:imagenet_optim} \end{figure}

\section{Conclusions} In this paper we have explored the potential of the epitomic representation as a building block for deep neural networks. We have shown that an epitomic layer can successfully substitute a pair of consecutive convolution and max-pooling layers. We have proposed two deep epitomic variants, one featuring mini-epitomes, which empirically performs best in image classification, and one featuring large epitomes, which learns topographically organized feature maps. We have shown that the proposed epitomic model performs around 0.5\% better than the max-pooled baseline on the challenging Imagenet benchmark and other image classification tasks. In future work, we are very interested in developing methods for unsupervised or semi-supervised training of deep epitomic models, exploiting the fact that the epitomic representation is more amenable than max-pooling to incorporating image reconstruction objectives.

\paragraph{Reproducibility} We implemented the proposed methods by extending the excellent Caffe software framework \cite{Jia13}. Upon publication, we will publicly share our source code and configuration files with the exact parameters, fully reproducing the results reported in this paper.

\paragraph{Acknowledgments} We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research. \bibliographystyle{ieee}
\section{Introduction} \label{sec:intro} The {\em Kepler} science team is reporting on its first planet discoveries (\citealp{koi7,borucki10,koi17,koi18,koi97}) in this volume -- discoveries discerned from data taken in the first weeks of science operations. The progression from pixels to planet detection passes through numerous stages, from target selection, data collection, and aperture photometry to transiting planet search, data validation, and follow-up observations. In this progression, data validation (DV) and follow-up operations (FOP) are both tasks concerned, in part, with the vetting of false positives. They are distinct, however, in that DV performs false-positive elimination from diagnostics that can be pulled out of the {\em Kepler} data itself, while the FOP relies on additional observations such as moderate and high-precision spectroscopic radial velocities \citep{fop}. Automated DV is under development in the Science Operations Center at NASA Ames Research Center and will soon provide pipeline generation of metrics, reports, and graphics that the science team will use to sift through the thousands of transit events that {\em Kepler} expects to detect. These, our first discoveries, were scrutinized in non-pipeline fashion with some metrics taken from the pipeline DV, and others developed in real time to get the job done as efficiently as possible to support the 2009 ground-based observing season -- the first since launch. Many of the tools that sprang up out of this effort have fed back into the DV development, ensuring it will be a powerful analysis pipeline for future processing. A full description of the DV pipeline currently under development can be found in \citet{pipeline}. This communication describes the pre-pipeline DV tools applied to this first analysis of science data.

An event flagged by the {\em Transiting Planet Search} (TPS) pipeline as having transit-like features with a total detection statistic greater than 7.1-$\sigma$ is termed a {\em Threshold Crossing Event} (TCE). Each TCE light curve is modeled, and those returning a companion radius $< 2 R_J$ are assigned a {\em Kepler Object of Interest} (KOI) number. Only those that pass the DV tests described herein are submitted to the follow-up observers for spectroscopic vetting, confirmation, and characterization. The objective is to eliminate as many of the false positives as possible to efficiently utilize the limited amount of telescope time available for follow-up observations. We note that elimination of a TCE (i.e. transit detection) as a viable planet candidate does not imply that the target itself is no longer observed or no longer subjected to transit searches. Moreover, a light curve may yield multiple TCEs. Each is considered independently with regard to the planet interpretation.

The large majority of false positives are caused by either grazing or diluted eclipsing binaries \citep{brown03}, the latter being the more likely. Dilution occurs when light from a nearby star falls within the photometric aperture of the foreground star, where ``nearby'' can be either a true physical companion to the star or a chance projection on the sky. We refer to the latter scenario as a Background Eclipsing Binary (BGEB). The photometric and astrometric precision of the {\em Kepler} photometer affords us unprecedented opportunities to vet out the false positives from the flux and photocenter timeseries themselves.
Section~\ref{sec:obs} gives a brief summary of the data, from acquisition to transit detection. Section~\ref{sec:modeling} describes the metrics used to identify grazing and diluted EBs derived from light curve modeling, and Section~\ref{sec:centroid} addresses the analysis of the photocenter variations. The stars used to exemplify the various techniques will be referred to by their KOI designation as well as the {\em KeplerID} archived in the {\em Kepler Input Catalog}\footnote{\url{http://archive.stsci.edu/kepler}} (KIC).

\section{Observations, Light Curves, and Transit Detection} \label{sec:obs} The analysis is based on two sets of data: 1) a 9.7-day run, May 2 through May 12, 2009, during the commissioning period to measure the initial photometric performance of the instrument, and 2) the first 33.5 days of science operations, May 13 through June 17, 2009, the end of which is marked by a $90^\circ$ quarterly roll of the spacecraft about the optical axis \citep{haas}. The former data set is referred to as Quarter 0 (Q0), while the latter is referred to as Quarter 1 (Q1). All relatively uncrowded stars brighter than $Kp=13.6$ (where $Kp$ is the apparent magnitude in the {\em Kepler} passband) make up a list of 52,496 targets observed during Q0. The Q1 target list contains 156,097 stars, $93\%$ of which were selected based on metrics that address the detectability of terrestrial-size planets \citep{targetselection} -- metrics dependent on stellar properties (e.g. effective temperature, surface gravity, and apparent magnitude). The intersection of the two lists contains 45,742 targets. The aperture photometry, data conditioning, and transit search algorithms are described in \citet{pipeline}. A discussion of the resulting flux timeseries is given in \citet{longcadence}.

The transiting planet search applied to all stars yielded several thousand TCEs, some of which were triggered by artifacts in the light curves and single strong outliers. This was mitigated by comparing the $\chi^2$ statistic of a transit model fit to the folded light curve against the $\chi^2$ statistic of a linear fit to the same. If the two are statistically indistinguishable, the TCE is disregarded. A constraint on the transit duration (1 hour) is used to isolate TCEs triggered by strong outliers in the flux timeseries. The transit model provides an estimate of the companion radius. TCEs with a companion radius larger than $2R_J$ are also disregarded. Such large radii are likely associated with late M-dwarfs and are not considered high-priority planetary candidates. This leaves approximately $10^3$ TCEs.

\section{Light Curve Modeling} \label{sec:modeling} We use the analytic expressions of \citet{man02} to model the transit of the planet. For our initial fits we use \citet{cla00} V-band nonlinear limb darkening parameters for the star as an approximation for the {\em Kepler} bandpass. Values for the stellar radius ($R_{\star}$), effective temperature ($T_{\rm eff}$) and surface gravity ($\log g$) are retrieved from the KIC. The stellar mass ($M_{\star}$) is computed from $R_{\star}$ and $\log g$. With $M_{\star}$ and $R_{\star}$ fixed to their initial values, a transit fit is computed to determine the orbital period, phase, orbital inclination, and planetary radius, $R_p$. The best fit is found using a Levenberg-Marquardt minimization algorithm \citep{pre92}.
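To illustrate the fitting procedure, the sketch below uses SciPy's Levenberg-Marquardt driver on a smooth toy dip that stands in for the full \citet{man02} model with limb darkening; all parameter values are illustrative only and do not correspond to any particular KOI.

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def toy_transit(t, period, t0, depth, width):
    # Smooth toy light curve standing in for the analytic Mandel &
    # Agol model: unit flux with one Gaussian-shaped dip per orbit.
    phase = ((t - t0 + 0.5 * period) % period) - 0.5 * period
    return 1.0 - depth * np.exp(-0.5 * (phase / width) ** 2)

def fit_transit(t, flux, p0):
    # Levenberg-Marquardt fit of (period, t0, depth, width).
    residuals = lambda p: toy_transit(t, *p) - flux
    return least_squares(residuals, p0, method="lm").x

# Synthetic 33.5-day light curve at ~30-minute cadence.
t = np.linspace(0.0, 33.5, 1608)
true_p = (1.61236, 0.80, 0.010, 0.05)   # days, days, rel. flux, days
flux = toy_transit(t, *true_p) + 1e-4 * np.random.randn(t.size)
print(fit_transit(t, flux, p0=(1.61, 0.78, 0.008, 0.06)))
\end{verbatim}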
In the case of an eclipsing binary where the radii and temperatures of the two components are slightly different, the depths of the eclipses imprinted on the light curve will differ: every other transit will have a different depth. When at least two transits are present in the light curve, the transit model is recomputed for the odd and even numbered transits where only the companion radius is allowed to vary. If the companion radii differ by more than 3-$\sigma$ then the TCE is rejected. Figure~\ref{fig:oddEven} illustrates such an example. The upper panel shows the unfolded Q0+Q1 light curve of KOI-106 (KeplerID 10489525) -- a somewhat evolved F-type star according to the KIC. Transit modeling provides an estimate of $0.48 R_J$ for the companion radius, $1.61236 \pm 0.00013$ days for the orbital period, and 74.06$^\circ$ for the inclination (shown in the lower panel). The difference between the modeled companion radii utilizing first the odd transits and then the even transits is given as depth-sig$=10.1\sigma$. The folded light curve is shown in the bottom panel where the odd transits are plotted with asterisks and the even transits are plotted with plus symbols. The difference in the depths of the odd versus even transits is clear, indicating either a grazing eclipse of a binary system or, more likely, a diluted background eclipsing binary. In this case, the true orbital period is twice that identified by the TPS pipeline.

The same type of astrophysical false-positive can be identified by searching for very shallow secondary events at phase=0.5. We must proceed with caution, however, in the interpretation of the secondary event since the occultation of a planet can also produce a secondary as is the case for HAT-P-7b \citep{hatp7b}. If the depth of the secondary eclipse has a significance greater than 2-$\sigma$, then the dayside temperature of the planet candidate can be estimated. The flux ratio of the planet and star ($F_p$/$F_*$) over the instrumental bandpass is given by the depth of the secondary eclipse. The ratio of the planet and star radii is obtained from the transit depth. By assuming the star and companion behave as blackbodies and the flux ratio is bolometric, the dayside effective temperature can be estimated. For Kepler-5b (KOI-18; KeplerID 8191672) \citep{koi18} -- a confirmed planet with a 2.6-$\sigma$ secondary transit -- we find $F_p$/$F_*=3.3 \times 10 ^{-5}$, which gives $T_{\rm eff}=1657\pm223$ K, where an error of 30\% is assumed for the input stellar luminosity and radius. This estimate is a lower limit as a significant fraction of the planetary flux is emitted at wavelengths longer than the red edge of the {\em Kepler} bandpass, but it is a useful diagnostic to determine whether the depth of the secondary eclipse is consistent with a strongly irradiated planet. To make this comparison, we estimate the equilibrium temperature, \begin{equation}\label{eq:teq} T_{eq}=T_* (R_*/2a)^{1/2} [f(1-A_B)]^{1/4}, \end{equation} for the companion, where $R_*$ and $T_*$ are the stellar radius and temperature, $a$ is the orbital distance of the planet, $A_B$ is its Bond albedo, and $f$ is a proxy for atmospheric thermal circulation. We assume $A_B = 0.1$ for highly irradiated planets \citep{row06} and $f=1$ for efficient heat distribution to the night side. These choices give a rough estimate for the dayside temperature of the planet assuming stellar irradiation is the primary energy source.
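As a concrete illustration, Equation~\ref{eq:teq} amounts to a one-line computation. The stellar parameters below are generic illustrative values for a Sun-like host with a close-in giant planet, not those of Kepler-5b.

\begin{verbatim}
def t_equilibrium(t_star, r_star, a, bond_albedo=0.1, f=1.0):
    """Equilibrium temperature:
    T_eq = T_* (R_*/2a)^(1/2) [f (1 - A_B)]^(1/4).
    R_* and a must be expressed in the same unit."""
    return (t_star * (r_star / (2.0 * a)) ** 0.5
            * (f * (1.0 - bond_albedo)) ** 0.25)

R_SUN_AU = 0.00465  # solar radius in AU
# Sun-like star, planet at 0.05 AU: T_eq ~ 1220 K
print(t_equilibrium(t_star=5800.0, r_star=R_SUN_AU, a=0.05))
\end{verbatim}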
Assuming a 30\% error in the input stellar parameters and that the star and planet act as blackbodies, we find $T_{eq}=1868\pm284$ K. The consistency of $T_{\rm eff}$ and $T_{eq}$ to within 1-$\sigma$ demonstrates that the secondary eclipse is consistent with a strongly irradiated planet. If $T_{\rm eff}$ were found to be much larger than $T_{eq}$, then the companion would likely be self-luminous and the system thus a stellar binary. Figure~\ref{fig:secEclipse} illustrates the case of a secondary eclipse that is not consistent with occulted emission from a planet. The upper panel shows the light curve of KOI-23 (KeplerID 9071386) and the lower panel the phase-folded transit. The middle panel shows a close-up of the light curve at phase=0.5 where a 16-$\sigma$ secondary transit is evident. Equation~\ref{eq:teq} yields $T_{eq}=1618\pm259$ K which is inconsistent with the effective temperature of the companion ($2554\pm206$ K) derived from the light curve (difference $>3.5$-$\sigma$). The secondary transit of KOI-23 does not support the planetary companion interpretation. This system is likely a grazing or diluted EB. We note, however, that the temperature argument is invalid if the planet is young or in an eccentric orbit.

\section{Photocenter Motion Diagnostics} \label{sec:centroid} Tracking the photocenter of the photometric aperture given {\em Kepler's} very high SNR and stable pointing is an effective means of identifying background eclipsing binaries. The dimming of any object in the aperture will shift the photocenter of the light distribution since the photocenter is determined by the combination of various diffuse and discrete sources. The apparent change in the position of the target star due to a background eclipse event depends on the separation of the stars, their relative brightnesses, and the transit/eclipse depth. A $50\%$ eclipse from a star 5 magnitudes dimmer than the target star and offset by one pixel will cause a 5 millipixel shift assuming 1) there are only two stars plus diffuse background in the photometric aperture and 2) all of the flux from the BGEB is included in the photometric aperture. Pre-launch simulations of the photometric and astrometric stability of the instrument predict a 6.5-hour flux-weighted centroid precision of 20 $\mu$pix (80 $\mu$arcsec) for a 12th magnitude star.

To help determine whether background eclipsing binaries in the aperture of a KOI are responsible for the transit-like features identified by TPS, we correlate changes in centroid location with the transit-like features in the photometry. Systematics due primarily to focus and pointing changes \citep{pipeline,longcadence} were removed from the flux and centroid timeseries. A high-pass filter was then applied to remove signatures occurring on timescales longer than 2 days. Isolated outliers are removed from both the flux and centroid timeseries using a 5-sample wide moving median filter and rejecting points beyond 10 median absolute deviations from the moving median. Figure~\ref{fig:centrTimeSer} shows the flux timeseries (upper) and the photocenter (flux-weighted centroid) timeseries of the row (middle) and column (lower) residuals for KOI-140 (KeplerID 5130369). Shifts of 1 and 3 millipixels projected onto row and column, respectively, correlated with the transit signal, are clearly seen even at the 30-minute cadence. An alternate way of displaying the same information is shown in Figure~\ref{fig:rainPlot}.
Here, the fractional change in brightness is plotted against the residual pixel position of the flux-weighted centroids, with row and column values distinguished using different symbols. The ``cloud'' of points near (0,0) represents the out-of-transit points from which we estimate the per-cadence uncertainty -- 0.3 millipixels in the case of KOI-140. The points that rain down are the in-transit points. If the target star is responsible for the transits and if the photometric aperture is free from contaminating flux spatially offset from the target star, then the rain will fall vertically under the cloud corresponding to zero centroid residuals. In the case of KOI-140, the rain falls diagonally, showing a $-0.68$ millipixel row residual and a $+3.02$ millipixel column residual. With 27 in-transit samples, this corresponds to a 12-$\sigma$ and 52-$\sigma$ significance, respectively. These centroid offsets together with knowledge of the apparent brightnesses of stars in the vicinity (from the KIC) allow us to identify the star responsible for the transit-like event.

In crowded fields, the flux distribution in a photometric aperture is rarely limited to the target star plus diffuse background. Complex spatial flux distributions imply that the out-of-transit photocenter is not necessarily centered on the target star. Consequently, the photocenter can shift in unexpected directions during transit even when it is the target star itself that is responsible for the signal. For this reason, we look toward other diagnostics in an effort to confirm the identity of the transiting (or eclipsing) object. KOI-08 (KeplerID 5903312) is an example of a BGEB. The aperture is especially crowded, with 8 stars less than 3 magnitudes fainter than the target located within a 3-pixel radius. The centroid timeseries and rain plot show a $\sim$0.75 millipixel centroid shift in both row and column. An average of the images taken at cadences occurring during transit is subtracted from an out-of-transit image. The upper panel of Figure~\ref{fig:diffim} shows the comparison of the out-of-transit image of KOI-08 (right) with the differenced image (left). The brightest pixel in the difference image shows the spatial location of the pixels contributing most strongly to the flux change during the transit. In this case, the position is offset by approximately one pixel in both row and column from the nominal target. This coincides with an object in the KIC that is 1.5 magnitudes fainter in the {\em Kepler} passband than the target star (2.45 magnitudes in 2MASS J). The lower panel shows two single-pixel flux timeseries. The upper corresponds to the brightest pixel in the direct image, and the lower corresponds to the brightest pixel in the difference image. The transit event disappears completely from the pixel timeseries drawn from the target star. On the contrary, the fractional transit depth is larger in the flux timeseries of the brightest pixel in the difference image. The centroid timeseries, the rain plot, the difference image, and the pixel flux timeseries collectively allow us to identify the majority of BGEBs directly from {\em Kepler} data without the need for more complex modeling or observations.

\section{Summary} \label{sec:summary} Modeling light curves under the assumption that the companion is a planet provides discriminators against grazing and diluted eclipsing binaries. The metrics used to flag likely false-positives are the odd/even transit depth statistics and the secondary eclipse statistic.
The latter is complemented by a comparison of the equilibrium temperature of the planet computed from the orbital and stellar characteristics versus the day-side temperature of the planet computed from the transit/occultation depths. When the two are markedly different, the planet interpretation for the secondary is discarded. The centroid timeseries is a powerful discriminator against diluted eclipsing binaries. KOI-140 is presented as an example of a BGEB identified via centroid analysis. This 13.8-magnitude star yields a per-cadence (30-min) centroid precision of 0.3 millipixels, or 83 $\mu$pix at 6.5 hours. Rain plots exemplified in Figure~\ref{fig:rainPlot} provide an efficient visual assessment of the likelihood of a BGEB. For more complex flux distributions in the photometric aperture, the difference image (out-of-transit minus in-transit) often clearly shows the location of the BGEB. Individual pixel flux timeseries confirm the source in that they isolate the flux of the transiting system, minimizing dilution, and thereby diminishing (or augmenting) the transit depth. This is effective when the BGEB is spatially well-separated from the target star and the stars are sufficiently bright. All {\em Kepler Objects of Interest} must pass the modeling and centroid inspections before being passed on for follow-up observations. In the upcoming year, these metrics will be folded into pipeline Data Validation tools forming a more efficient TCE-to-KOI filter.

\acknowledgments The authors wish to acknowledge the {\em Kepler Science Operations Center}, especially Hema Chandrasekharan for her work on the transiting planet search software. Funding for this Discovery mission is provided by NASA's Science Mission Directorate.
\section{Introduction} \label{sec:intro} The explosive advances of convolutional neural networks (CNNs) are mainly driven by continuously growing numbers of model parameters, incurring deployment difficulty on resource-constrained devices. By directly removing parameters to obtain a sparse model, network sparsity emerges as an important technique to reduce model complexity~\cite{lecun1989optimal,mozer1989skeletonization,hoefler2021sparsity}. Broadly speaking, methods in the literature can be divided into after-training sparsity, before-training sparsity and during-training sparsity~\cite{liu2021sparse}. After-training sparsity aims to remove parameters in pre-trained models~\cite{han2015learning}, while before-training sparsity attempts to refrain from the time-consuming pre-training process by constructing sparse models at random initialization~\cite{lee2018snip}. Recent advances advocate during-training sparsity, which carries out the sparsification process throughout network training~\cite{evci2020rigging, kusupati2020soft}.

Particularly, the core of network sparsity lies in selecting to-be-removed parameters such that the result satisfies: 1) the desired sparse rate; 2) an acceptable performance compromise. To this end, the most straightforward solution is to remove the parameters causing the least increase in the training loss $\mathcal{L}$. Then, by leveraging the first-order Taylor expansion of the loss function $\mathcal{L}$ to approximate the influence of removing parameter ${\mathbf{w}}_i$, the key criterion for measuring weight importance can be expressed as $\frac{\partial \mathcal{L}}{\partial {\mathbf{w}}_i}{\mathbf{w}}_i$, leading to gradient-driven sparsity. Recent advances rewrite this criterion using higher-order Taylor expansions~\cite{wang2020picking, molchanov2016pruning}, which will be detailed in the next section.

\begin{figure}[!t] \begin{center} \includegraphics[height=0.63\linewidth]{sparsity_acc.pdf} \end{center} \centering\vspace{-1.5em} \caption{\label{fig:acc}Top-1 accuracy \emph{vs.} sparsity with ResNet-50 on ImageNet. The proposed OptG significantly improves upon the performance of other approaches, especially at ultra-high sparsity levels. } \vspace{-1.3em} \end{figure}

Despite the progress, existing gradient-driven methods are built upon the premise of independence among weights. However, this assumption contradicts practical implementations, in which parameters collectively contribute to the network output. Usually, existing methods remove weights all at once~\cite{lee2018snip, wang2020picking}. Consequently, the computed loss change used to remove weights deviates considerably from the actual loss change, and this deviation is proportional to the number of weights removed at one time. Thus, it is necessary to overcome this independence paradox in order to pursue a better performance.

Beyond gradient-driven sparsity, recent developments on supermask training~\cite{zhou2019deconstructing, ramanujan2020s, zhang2021lottery} show that high-performing sparse subnetworks can be located without modifying any weight. Instead, they choose to update mask values using the straight-through estimator (STE)~\cite{bengio2013estimating}. In this paper, we prove that the essence of supermask training is to accumulate the gradient-driven sparsity criterion for both preserved and removed weights. Also, we show that this manner can partially solve the independence paradox.
Unfortunately, the fixed weights in supermask training fail to eliminate the independence paradox; thus, the performance is still sub-optimal. In this paper, we propose to optimize gradient-driven sparsity by integrating the advantage of supermask training in overcoming the independence paradox. Our method, termed OptG, conducts sparse weight training and supermask training simultaneously with a novel supermask optimizer that continuously accumulates the mask gradients of each training iteration and only updates the mask at the beginning of each training epoch. In this way, the remaining parameters can be well tuned on the training set to eliminate the error gap. We further equip the supermask optimizer with a sparsity-aware learning rate schedule that allocates the learning rate of supermasks proportional to the sparsity level of the current training iteration, such that the deviation caused by the independence paradox can be further reduced, enabling an effective optimization of gradient-driven sparsity.

Extensive experiments demonstrate that OptG dominates its counterparts, especially at extreme sparse rates, which suffer the most significant error gap from the independence paradox. For instance, OptG removes 98\% of the parameters of ResNet-50 while still achieving 67.20\% top-1 accuracy on ImageNet, surpassing by a large margin the recent strong baseline STR~\cite{kusupati2020soft}, which only reaches 62.84\%. Our main contributions are summarized below: \begin{itemize} \item We prove the existence of the independence paradox in gradient-driven sparsity, which causes an error gap in the loss change used for measuring weight importance. \item We reveal that the essence of supermask training is to accumulate the weight-gradient criterion of gradient-driven sparsity, which partly solves the independence paradox. \item We propose OptG, which further optimizes gradient-driven sparsity using a novel mask optimizer that overcomes the independence paradox. \item Extensive experiments demonstrate the advantage of the proposed OptG over many existing state-of-the-art methods in sparsifying modern CNNs. \end{itemize}

\section{Related Work} This section covers the spectrum of studies on sparsifying CNNs that are closely related to our work. A more comprehensive overview can be found in the recent survey~\cite{hoefler2021sparsity}. \subsection{Sparsity Granularity} The granularity of network sparsity varies from coarse grain to fine grain. The former refers to removing entire channels or filters to obtain a structured subnetwork~\cite{he2017channel, lin2020hrank, he2019filter}. Though well suited to a practical speedup on regular hardware devices, significant performance degradation usually occurs at a high sparse rate~\cite{ding2018auto,luo2017thinet,he2020learning, he2018soft}. The latter removes individual weights at any location of the network to pursue an unstructured subnetwork~\cite{han2015learning, mocanu2018scalable}. It has been shown to retain performance well even under an extremely high sparse rate~\cite{kusupati2020soft, gale2019state}. Moreover, many recent efforts also show great promise of unstructured sparse networks in practical acceleration~\cite{gale2020sparse,elsen2020fast,zhou2021learning,zhang2022learning}. In particular, the recent 2:4 sparse pattern has been well supported by Nvidia A100 GPUs to accomplish 2$\times$ speedups. \subsection{When to Sparsify} According to the time point at which sparsity is applied, we empirically categorize existing works into three groups~\cite{liu2021sparse}.
\textbf{After-training sparsity} was first adopted by Optimal Brain Damage~\cite{lecun1989optimal}. Since then, follow-up works adopt a three-step pipeline, including model pretraining, parameter removing and network fine-tuning~\cite{han2015learning,molchanov2016pruning,ding2019centripetal, lemaire2019structured}. Unfortunately, in cases where pre-trained models are missing and hardware resources are limited, the aforementioned approaches become impractical due to the expensive fine-tuning process. \textbf{Before-training sparsity} attempts to conduct network sparsity on randomly initialized networks for efficient model deployment. Through removing weights using gradient-driven measurement~\cite{lee2018snip, wang2020picking} or heuristic design~\cite{tanaka2020pruning}, a sparse subnetwork can be obtained in a one-shot manner. Nevertheless, the performance gap of this group still exists compared with after-training sparsity~\cite{frankle2020pruning}. \textbf{During-training sparsity} has been drawing increasing attention for its ability to retain performance~\cite{mocanu2018scalable,mostafa2019parameter,lin2020dynamic}. In each training iteration, the parameters will be removed or revived according to a predefined criterion. Consequently, the sparse subnetwork can be obtained in a single training process. For instance, RigL~\cite{evci2020rigging} re-allocates the removed weights according to their dense gradients, while Sparse Momentum~\cite{dettmers2019sparse} considers the mean momentum magnitude of each layer as a redistribution criterion. Besides, the performance of during-training sparsity can be further enhanced if the desired sparsity is gradually achieved in an incremental manner~\cite{zhu2017prune,liu2021sparse,liu2021we}.

\subsection{Layer-wise Sparsity Allocation} It has been a wide consensus in the community that layer-wise sparsity allocation, \emph{i.e.}, the sparse rate of each layer, is a core issue in network sparsity~\cite{liu2018rethinking, gale2019state, lee2020layer}. The majority of existing methods implement layer-wise sparsity using a static or dynamic design~\cite{evci2020rigging,lee2020layer}. Typically, the global sorting manner also leads to a non-uniform sparsity allocation~\cite{han2015learning}. Unfortunately, as pointed out by~\cite{tanaka2020pruning}, this may result in an extremely imbalanced sparsity budget where the parameters in some layers are mostly removed, which further disables the network training. Therefore, recent studies~\cite{kusupati2020soft,savarese2019winning} pursue the layer-wise sparsity in a trainable manner, which, however, requires complex hyper-parameter tuning and often leads to an unstable sparse rate. Different from the above approaches, our OptG can safely conduct pruning in a global manner to automatically obtain the layer-wise sparsity allocation, without complex parameter tuning.

\subsection{Lottery Ticket Hypothesis and the Supermask} The lottery ticket hypothesis~\cite{frankle2018lottery} reveals that there exist randomly-initialized sparse networks that can be trained independently to match the performance of the dense model. Following this conjecture, recent empirical studies~\cite{zhou2019deconstructing,ramanujan2020s, zhang2021lottery} have further confirmed the existence of supermasks, which simply update the mask values to obtain sparse subnetworks using the straight-through estimator (STE)~\cite{bengio2013estimating} without the necessity of modifying weight values.
For instance, \cite{ramanujan2020s} showed that a randomly initialized Wide ResNet-50 sparsified by a supermask can match the performance of a ResNet-34 trained on ImageNet. \cite{orseau2020logarithmic,pensia2020optimal} further proved that the existence of supermasks relies on a logarithmic over-parameterization. Despite the progress, an in-depth analysis of why such subnets exist without modifying weight values is still missing.

\section{Methodology} \subsection{Background}\label{background} \textbf{Notations}. Denoting the weights of a convolutional neural network as ${\mathbf{w}} \in \mathbb{R}^N$ where $N$ is the weight number, network sparsity can be viewed as multiplying a binary mask ${\mathbf{m}} \in \{0, 1 \}^N$ on ${\mathbf{w}}$ as (${\mathbf{w}} \odot {\mathbf{m}} $) where $\odot$ represents the element-wise product. Consequently, the state of the $i$-th mask ${\mathbf{m}}_i$ indicates whether ${\mathbf{w}}_i$ is removed ($0$) or not ($1$). Let $\mathcal{L}(\cdot)$ and $\mathcal{D}$ be the training loss and training dataset. Essentially, given a sparse rate $P$, network sparsity aims to obtain a sparse $\mathbf{m}$ subject to $\frac{{||{\mathbf{m}}||}_0}{N} \leq 1 - P$, meanwhile minimizing $\mathcal{L}(\cdot)$ on $\mathcal{D}$. To this end, various schemes have been proposed to derive ${\mathbf{m}}$. In what follows, we discuss two cases that are mostly related to our method.

\textbf{Gradient-driven sparsity}. Studies of network sparsity using weight gradients date back several decades~\cite{lecun1989optimal, mozer1989skeletonization}. The basic idea of these methods is to leverage weight gradients to approximate the change in the loss function $\mathcal{L}(\cdot)$ when removing some parameters. The overall optimization can be formulated as: \begin{equation} \min_{{\mathbf{w}}} \mathcal{L}({\mathbf{w}} \odot {\mathbf{m}}\; ; \;\mathcal{D}) \;\;\; \emph{s.t.} \;\;\; \frac{{||{\mathbf{m}}||}_0}{N} \leq 1 - P. \label{eq1} \end{equation} Then the loss change after removing a single weight ${\mathbf{w}}_i$ is: \begin{equation} \Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) = \mathcal{L}({\mathbf{m}}_i=0 ; \mathcal{D})-\mathcal{L}({\mathbf{m}}_i = 1 ; \mathcal{D}). \end{equation} It is intuitive that $\Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) < 0$ indicates a loss drop, which means the removal of ${\mathbf{w}}_i$ results in better performance. To obtain a sparse ${\mathbf{m}}$, one naive approach is to repeatedly compute the loss change for each weight in ${\mathbf{w}}$, and then set the masks of the parameters with smaller $\Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D})$ to $0$s, and $1$s otherwise. However, modern CNNs tend to have millions of parameters, making it expensive to perform this one-by-one loss calculation. Fortunately, $\Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D})$ can be approximated via the Taylor series expansion.
Considering the first-order case~\cite{molchanov2016pruning}, $\Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D})$ can be reformulated as: \begin{equation} \begin{split} \Delta \mathcal{L}&({\mathbf{w}}_i ; \mathcal{D}) = \mathcal{L}({\mathbf{m}}_i=0 ; \mathcal{D})-\mathcal{L}({\mathbf{m}}_i = 1 ; \mathcal{D}) \\ & = \mathcal{L}({\mathbf{m}}_i=1 ; \mathcal{D})- \frac{\partial \mathcal{L}}{\partial ({\mathbf{w}}_i \odot {\mathbf{m}}_i)} ({\mathbf{w}}_i \odot {\mathbf{m}}_i) \\ & + R_1({\mathbf{m}}_i=0) - \mathcal{L}({\mathbf{m}}_i = 1 ; \mathcal{D}) \\ & = -\frac{\partial \mathcal{L}}{\partial ({\mathbf{w}}_i \odot {\mathbf{m}}_i)} ({\mathbf{w}}_i \odot {\mathbf{m}}_i) + R_1({\mathbf{m}}_i=0). \end{split} \label{eq2} \end{equation} If we ignore the first-order remainder $R_1({\mathbf{m}}_i=0)$, then: \begin{equation} \Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) \approx -\frac{\partial \mathcal{L}}{\partial ({\mathbf{w}}_i \odot {\mathbf{m}}_i)} ({\mathbf{w}}_i \odot {\mathbf{m}}_i). \label{noremainder} \end{equation} Eq.\,(\ref{noremainder}) can be an efficient alternative for approximating $\Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D})$, since for all weights, the term ${\mathbf{w}}_i \odot {\mathbf{m}}_i$ can be made available in a single forward propagation and the term $-\frac{\partial \mathcal{L}}{\partial ({\mathbf{w}}_i \odot {\mathbf{m}}_i)}$ can be derived in a single backward propagation. Consequently, the format of Eq.\,(\ref{noremainder}) has served as a basis in modern gradient-driven network sparsity~\cite{mozer1989skeletonization}. Many recent variants have been further developed based on this format. Taylor-FO~\cite{molchanov2016pruning} considers $(\frac{\partial \mathcal{L}}{\partial {\mathbf{w}}} \odot {\mathbf{w}}) ^ 2$ as a saliency metric to sparsify a pre-trained model, which is similar to the prune-at-initialization SNIP~\cite{lee2018snip} that uses $|\frac{\partial \mathcal{L}}{\partial {\mathbf{w}}} \odot {\mathbf{w}}|$ instead. GraSP~\cite{wang2020picking} leverages the second-order Taylor series and derives the pruning criterion of $-\mathbf{H}\frac{\partial \mathcal{L}}{\partial {\mathbf{w}}} {\mathbf{w}}$, where $\mathbf{H}$ denotes the Hessian matrix. Besides, another variant $|\frac{\partial \mathcal{L}}{\partial ({\mathbf{w}}_i \odot {\mathbf{m}}_i)}|$ is used to indicate if some pruned weights should be revived during sparse training. Though great effort has been made, these existing works are developed on the premise of an independence assumption, \emph{i.e.}, that weights are irrelevant to each other, which is contrary to practice. As a result, their performance remains sub-optimal.

\textbf{Supermask-driven sparsity}. In gradient-driven sparsity, the values of the weight vector ${\mathbf{w}}$ are updated in the backward propagation and the mask ${\mathbf{m}}$ is recomputed using the above criterion in the forward propagation. Instead, many recent developments reveal that a high-performing sparse subnet can be found without the necessity of modifying any weight~\cite{zhou2019deconstructing,zhang2021lottery}. Typically, these methods can be implemented by updating the mask vector ${\mathbf{m}}$. The corresponding learning objective can be formulated as: \vspace{-0.5em} \begin{equation} \min_{{\mathbf{m}}} \mathcal{L}({\mathbf{w}} \odot {\mathbf{m}}\; ; \;\mathcal{D}) \;\;\; \emph{s.t.} \;\;\; \frac{{||{\mathbf{m}}||}_0}{N} \leq 1 - P.
\label{eq4} \end{equation} To stress, the objective of Eq.\,(\ref{eq4}) differs from that of Eq.\,(\ref{eq1}) in that the optimized variable is ${\mathbf{m}}$ rather than ${\mathbf{w}}$, which is regarded as a constant vector in Eq.\,(\ref{eq4}). Existing studies~\cite{zhou2019deconstructing,ramanujan2020s} optimize Eq.\,(\ref{eq4}) by first relaxing the discrete ${\mathbf{m}} \in \{0,1\}^N$ to a continuous version $\hat{{\mathbf{m}}} \in \mathbb{R}^N$. Then, in the forward propagation, the discrete mask ${\mathbf{m}}$ is generated by applying a binary function $h(\cdot)$ to $\hat{{\mathbf{m}}} \in \mathbb{R}^N$ as: \vspace{-0.5em} \begin{equation} h(\mathbf{\hat{m}}_i) = \left\{ \begin{array}{ll} 0, \textrm{if $\mathbf{\hat{m}}_i$ is among the $\lceil P \cdot N \rceil$ smallest values of $\mathbf{\hat{m}}$,}\\ 1, \textrm{otherwise.} \end{array} \right. \label{eq5} \end{equation} In the backward propagation, due to the non-differentiability of the above equation, the straight-through estimator (STE)~\cite{bengio2013estimating} is used as an alternative to approximate the mask gradient as: \vspace{-0.5em} \begin{equation} \begin{split} \frac{\partial \mathcal{L}}{\partial \mathbf{\hat{m}}_i} & = \frac{\mathcal{\partial L}}{\partial (h(\mathbf{\hat{m}}_i)\odot{\mathbf{w}}_i)}\frac{\partial (h(\mathbf{\hat{m}}_i)\odot{\mathbf{w}}_i)}{\partial h(\mathbf{\hat{m}}_i)}\frac{\partial h(\mathbf{\hat{m}}_i)}{\partial \mathbf{\hat{m}}_i} \\ & \approx \frac{\mathcal{\partial L}}{\partial(h(\mathbf{\hat{m}}_i)\odot{\mathbf{w}}_i)}\frac{\partial(h(\mathbf{\hat{m}}_i)\odot{\mathbf{w}}_i)}{\partial h(\hat{{\mathbf{m}}}_i)} \\& =\frac{\mathcal{\partial L}}{\partial(h(\mathbf{\hat{m}}_i)\odot{\mathbf{w}}_i)}{\mathbf{w}}_i. \end{split} \label{mask_gradient} \end{equation} By updating $\hat{{\mathbf{m}}}$, a high-performing sparse subnet can finally be located. Nevertheless, to date, there has been no in-depth exploration of how a subnet can be identified without modifying weight values. In Sec.~\ref{insights}, we give a detailed explanation and show that gradient-driven sparsity and supermask-driven sparsity are the same in essence.

\subsection{Independence Paradox\label{paradox}} Gradient-driven sparsity using Eq.\,(\ref{noremainder}) neglects the high-order terms of the Taylor series as well as the remainder. Luckily, it has been experimentally proved that the first-order gradient~\cite{lee2018snip} shows performance on par with the higher-order ones~\cite{wang2020picking}. Nevertheless, existing gradient-driven sparsity is built upon the assumption that weights are irrelevant to each other, which is contrary to the practical implementation~\cite{molchanov2016pruning, lee2018snip} where a large number of weights are usually removed simultaneously. Consider the case where two weights ${\mathbf{w}}_i$ and ${\mathbf{w}}_j$ are removed. If treated as independent, the loss change of removing ${\mathbf{w}}_i$ according to Eq.~(\ref{eq2}) is: \begin{equation} \begin{split} \Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) &= \mathcal{L}({\mathbf{m}}_i=0, {\mathbf{m}}_j=1, {\mathbf{w}}; \mathcal{D}) \\ &-\mathcal{L}({\mathbf{m}}_i = 1, {\mathbf{m}}_j = 1, {\mathbf{w}} ; \mathcal{D}).
\label{eq7} \end{split} \end{equation} However, considering that ${\mathbf{w}}_j$ has been removed as well, the actual loss change due to the removal of ${\mathbf{w}}_i$ should become: \begin{equation} \begin{split} \Delta\mathcal{L}({\mathbf{w}}_i^* ; \mathcal{D}) &= \mathcal{L}({\mathbf{m}}_i=0, {\mathbf{m}}_j=0, {\mathbf{w}}^* ; \mathcal{D})\\ &-\mathcal{L}({\mathbf{m}}_i = 1, {\mathbf{m}}_j = 0, {\mathbf{w}}^* ; \mathcal{D}), \label{eq8} \end{split} \end{equation} where ${\mathbf{w}}^*$ indicates the state of the original ${\mathbf{w}}$ after the removal of ${\mathbf{w}}_j$ and a follow-up fine-tuning on the dataset $\mathcal{D}$. It is easy to see that Eq.\,(\ref{eq7}) is actually built upon the premise of preserving ${\mathbf{w}}_j$. However, the practice removes ${\mathbf{w}}_i$ and ${\mathbf{w}}_j$ simultaneously, which indicates a loss change of Eq.\,(\ref{eq8}). As a tremendous number of weights are removed at once in real cases~\cite{lee2018snip, wang2020picking}, the deviation of the loss change between Eq.~(\ref{eq7}) and Eq.~(\ref{eq8}) can rise sharply, which we quantitatively demonstrate in the supplementary material. Thus, there exists an independence paradox in existing studies, and the error gap in gradient-driven sparsity is proportional to the total number of weights removed each time.

Note that some recent advances~\cite{evci2020rigging,zhu2017prune} advocate incremental pruning, which removes a small portion of weights each time. For instance, RigL~\cite{evci2020rigging} removes a small fraction of weights and activates new ones iteratively, while Zhu~\emph{et al.}~\cite{zhu2017prune} proposed to gradually increase the number of removed weights until the desired sparse rate is satisfied. Though not explicitly stated, these works indeed accomplish network sparsity by reducing the number of parameters removed at each step so as to relieve the error gap caused by the independence paradox. Nevertheless, the error gap still exists and thus their performance is sub-optimal. Therefore, an in-depth exploration to overcome this independence paradox remains to be done.

\subsection{Our Insights on Supermask-driven Sparsity}\label{insights} In this subsection, we show that the success of supermask-driven sparsity, to some extent, mitigates the aforementioned independence paradox. We first prove that the mechanism of supermask-driven sparsity actually matches first-order gradient-driven sparsity. Specifically, let the mask $\hat{{\mathbf{m}}}_i$ at the $t$-th training iteration be $\hat{{\mathbf{m}}}_i^t$. Combining the mask gradient in Eq.\,(\ref{mask_gradient}), $\hat{{\mathbf{m}}}_i^t$ can be derived via SGD as: \vspace{-0.5em} \begin{equation} \begin{split} \hat{{\mathbf{m}}}_i^t &= \hat{{\mathbf{m}}}_i^{t-1} - \eta \frac{\partial \mathcal{L}}{\partial \mathbf{\hat{m}}_i^t}\\ &= \hat{{\mathbf{m}}}_i^{t-1} - \eta \frac{\partial \mathcal{L}}{ \partial(h(\mathbf{\hat{m}}_i^t)\odot{\mathbf{w}}_i)} {\mathbf{w}}_i, \label{mask_tth} \end{split} \end{equation} where $\eta$ indicates the learning rate. Note that the momentum and weight decay terms are neglected here for simplicity. When $h(\mathbf{\hat{m}}^t_i) = 1$, we have $(h(\mathbf{\hat{m}}_i^t)\odot{\mathbf{w}}_i) = {\mathbf{w}}_i$, which reduces the update term of $\hat{{\mathbf{m}}}_i^t$ to the result of Eq.~(\ref{noremainder}). Nevertheless, when $h(\mathbf{\hat{m}}_i) = 0$, which indicates a removal of ${\mathbf{w}}_i$, the computing result of Eq.\,(\ref{mask_tth}) remains unclear.
Here we show that in this case, the update term of $\hat{{\mathbf{m}}}_i^t$ becomes the loss change for adding back the corresponding pruned weight ${\mathbf{w}}_i$, which can be derived as: \vspace{-0.3em} \begin{equation} \begin{split} \Delta^{+} & \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) = \mathcal{L}({\mathbf{m}}_i=1 ; \mathcal{D})-\mathcal{L}({\mathbf{m}}_i = 0 ; \mathcal{D}) \\ & = \mathcal{L}({\mathbf{m}}_i=0 ; \mathcal{D})- \frac{\partial \mathcal{L}}{\partial (h(\mathbf{\hat{m}}_i^t)\odot{\mathbf{w}}_i)} (0 - {\mathbf{w}}_i) \\ & \quad+ R_1({\mathbf{m}}_i=1) - \mathcal{L}({\mathbf{m}}_i = 0 ; \mathcal{D}) \\ & = \frac{\partial \mathcal{L}}{\partial (h(\mathbf{\hat{m}}_i^t)\odot{\mathbf{w}}_i)} {\mathbf{w}}_i + R_1({\mathbf{m}}_i=1) \\ & \approx \frac{\partial \mathcal{L}}{ \partial(h(\mathbf{\hat{m}}_i^t)\odot{\mathbf{w}}_i)} {\mathbf{w}}_i. \end{split} \label{add_back} \end{equation} This is exactly the pursued mask gradient in Eq.\,(\ref{mask_tth}). Thus, the updating rule of $\hat{{\mathbf{m}}}_i$ can be organized as: \vspace{-1em} \begin{equation} \hat{{\mathbf{m}}}_i^t = \left\{ \begin{array}{ll} \hat{{\mathbf{m}}}_i^{t-1} + \eta \Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) , \; \textrm{if $h(\mathbf{\hat{m}}^{t-1}_i) = 1$,}\\ \hat{{\mathbf{m}}}_i^{t-1} -\eta \Delta^{+}\mathcal{L}({\mathbf{w}}_i ; \mathcal{D}), \; \textrm{otherwise.} \end{array} \right. \label{essence} \end{equation} As defined previously, $\Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D})$ is the loss change after removing ${\mathbf{w}}_i$ and we expect a large value of $\Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D})$ if ${\mathbf{w}}_i$ is important to the network performance. Similarly, a small $\Delta^{+}\mathcal{L}({\mathbf{w}}_i ; \mathcal{D})$ also indicates the removed weight is vital to the network and should be revived in the case of supermask-driven sparsity. Overall, from Eq.\,(\ref{essence}), we can see that the update in supermask-driven sparsity indeed accumulates the gradient-driven criteria of both preserved and removed weights. Thus, the manner in which supermask-driven sparsity obtains a sparse mask ${\mathbf{m}}$ is essentially the same as that of gradient-driven sparsity.

Further, we show that the key to supermask training is that it partially solves the independence paradox. Similarly, given that ${\mathbf{w}}_i$ and ${\mathbf{w}}_j$ are removed at the ($t-1$)-th training iteration, we have $h(\hat{{\mathbf{m}}}_i^{t-1}) = 0$. According to Eq.\,(\ref{essence}), the loss change for adding back ${\mathbf{w}}_i$ becomes: \vspace{0em} \begin{equation} \begin{split} \Delta^{+} \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) &= \mathcal{L}({\mathbf{m}}_i=1, {\mathbf{m}}_j=0, {\mathbf{w}} ; \mathcal{D})\\ &-\mathcal{L}({\mathbf{m}}_i = 0, {\mathbf{m}}_j = 0, {\mathbf{w}} ; \mathcal{D}). \end{split} \label{closer} \end{equation} Obviously, Eq.\,(\ref{closer}) is closer to the actual loss change of Eq.\,(\ref{eq8}) than the independence-based Eq.~(\ref{eq7}). Thus, the error gap from the independence paradox can be well compensated by reviving ${\mathbf{w}}_i$ if it is important to the network performance during the following training iterations. This well explains why supermask-driven sparsity can perform well in existing studies~\cite{ramanujan2020s, zhang2021lottery}.
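For concreteness, the binarization of Eq.\,(\ref{eq5}) and the STE gradient of Eq.\,(\ref{mask_gradient}) can be sketched in a few lines of PyTorch for a single linear layer. This is a minimal illustration under our notation rather than the full implementation; updating \textsl{m\_hat} with SGD then accumulates the saliency terms as in Eq.\,(\ref{essence}).

\begin{verbatim}
import torch

class TopKMask(torch.autograd.Function):
    """The binarization h(.): zero out the P*N smallest scores;
    the backward pass is the identity (straight-through estimator)."""
    @staticmethod
    def forward(ctx, m_hat, sparsity):
        k = int(sparsity * m_hat.numel())   # number of weights to remove
        mask = torch.ones_like(m_hat)
        if k > 0:
            idx = torch.topk(m_hat.view(-1), k, largest=False).indices
            mask.view(-1)[idx] = 0.0
        return mask

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None               # STE: pass gradient through

def supermask_forward(x, w, m_hat, sparsity):
    # Fixed weights w, trainable scores m_hat: the gradient reaching
    # m_hat equals dL/d(w * m) * w, i.e., the STE mask gradient.
    mask = TopKMask.apply(m_hat, sparsity)
    return torch.nn.functional.linear(x, w * mask)
\end{verbatim}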
Nevertheless, as the weights are kept fixed during supermask training, there is still a distance between Eq.\,(\ref{eq8}) and Eq.\,(\ref{closer}), which implies that the independence paradox is not comprehensively solved. \subsection{Optimizing Gradient-driven Sparsity} Herein, we propose to interleave supermask training with during-training weight learning to further optimize gradient-driven sparsity. The learning objective of our method, termed OptG, can be formulated as: \vspace{-1.2em} \begin{equation} \min_{{\mathbf{m}}, {\mathbf{w}}} \mathcal{L}({\mathbf{w}} \odot {\mathbf{m}}\; ; \;\mathcal{D}) \;\;\; \emph{s.t.} \;\;\; \frac{{||{\mathbf{m}}||}_0}{N} \leq 1 - P . \label{object_optg} \end{equation} The motivation of our OptG is to conduct weight optimization and mask training simultaneously. Therefore, when ${\mathbf{w}}_j$ is pruned, the remaining weights are further trained on $\mathcal{D}$ and the update term of $\hat{\mathbf{m}}_i$ in the $t$-th training iteration is: \begin{figure*}[!t] \begin{center} \includegraphics[height=0.28\linewidth]{optimizer.pdf} \end{center} \vspace{-1.5em} \caption{\label{fig:mask_optimizer}(a) The progression of the overall sparsity and (b) the learning rate of the supermasks with different $\alpha$ over the course of training. } \vspace{-1em} \end{figure*} \vspace{-1.2em} \begin{equation} \begin{split} \Delta \mathcal{L}({\mathbf{w}}_i ; \mathcal{D}) &= \mathcal{L}({\mathbf{m}}_i=1, {\mathbf{m}}_j=0, {\mathbf{w}}^t ; \mathcal{D})\\ &-\mathcal{L}({\mathbf{m}}_i = 0, {\mathbf{m}}_j = 0, {\mathbf{w}}^t ; \mathcal{D}), \label{eq14} \end{split} \end{equation} where ${\mathbf{w}}^t$ denotes the weights after the $t$-th training iteration. Nevertheless, it is clear that ${\mathbf{w}}^t$ barely reflects the weight tuning on the whole training set, as ${\mathbf{w}}^t$ has been trained for only one iteration. Moreover, if the binary function $h(\cdot)$, \emph{i.e.}, Eq.~(\ref{eq5}), is applied during each forward propagation of the masks, the network topology may be changed frequently, which can lead to an unstable training process.

To solve the above-mentioned problem, we introduce a novel supermask optimizer towards comprehensively solving the independence paradox. In particular, we apply $h(\cdot)$ to revive and prune weights at the beginning of each training epoch. Then, we continuously accumulate the mask gradient during each training iteration via Eq.~(\ref{mask_tth}), but keep the binary mask fixed. Therefore, the preserved weights can be sufficiently retrained on the training set, enabling our mask updating process to approach Eq.~(\ref{eq8}). From another perspective, as discussed in Sec.~\ref{paradox}, the error caused by the independence paradox is actually proportional to the number of removed weights, and an alternative way to eliminate it is to gradually remove weights. Nevertheless, existing studies~\cite{zhu2017prune, liu2021sparse} choose to rapidly improve the sparsity level during the early stage of training, which we point out does not fit our goal of optimizing gradient-driven sparsity. To explain, previous techniques~\cite{zhu2017prune, liu2021sparse} revive weights to 0s, whereas our supermask training revives weights to their values before sparsification in order to mitigate the independence paradox. This implies that the weights require sufficient training, as reviving a randomly-initialized weight is usually meaningless from the perspective of Eq.~(\ref{add_back}).
Thus, we choose to increase the sparse rate from 0 to the target sparsity rate $P$ in a smoother manner as: \begin{equation} P_k = \frac{P}{1+e^{-\alpha (k-0.5\tau)}}, \label{eq15} \end{equation} where $k$ and $\tau$ represent the current epoch and the total number of training epochs, and $\alpha$ is a hyperparameter that controls how quickly the target sparsity is reached. Fig.~\ref{fig:mask_optimizer} (a) shows that the sparsity ascent rate at initialization can be relatively smooth, and we experimentally prove in Sec.~\ref{ablation} that such a sparsity schedule boosts the performance of OptG, which revives weights to their original values. Based on this schedule, we further embed a novel paradox-aware supermask learning rate schedule in our supermask optimizer. In detail, the mask learning rate at epoch $k$, denoted as $\eta_{\hat{{\mathbf{m}}},k}$, is calculated as: \begin{equation} \eta_{\hat{{\mathbf{m}}},k} = \frac{\eta_{{\mathbf{w}}, k}}{1+e^{-\alpha (k-0.5\tau)}}, \label{eq16} \end{equation} where $\eta_{{\mathbf{w}}, k}$ is the learning rate of the weights at the $k$-th epoch scheduled by cosine annealing~\cite{loshchilov2016sgdr}. As shown in Fig.~\ref{fig:mask_optimizer} (b), when the sparse rate of the network is low, the learning rate of the masks is also small, as the calculation of the gradient score would otherwise be seriously affected by the independence paradox. Moreover, when the sparse rate reaches $P$, the learning rate of the masks can follow that of the weights to sufficiently optimize the gradient-driven criteria while guaranteeing training convergence. We summarize the workflow of OptG in Alg.~\ref{alg:optg}. Note that our OptG sorts the weights in a global manner to automatically decide a layer-wise sparsity budget, thus avoiding the rule-of-thumb design~\cite{evci2020rigging} or complex hyper-parameter tuning for learning sparsity distributions~\cite{he2018soft}.

\section{Experiments}\label{experiment} \subsection{Settings} We conduct extensive experiments to evaluate the efficacy of our OptG in sparsifying VGGNet-19~\cite{simonyan2015very} on small-scale CIFAR-10/100~\cite{krizhevsky2009learning} and ResNet-50~\cite{he2016deep}, MobileNet-V1~\cite{howard2017mobilenets} on large-scale ImageNet~\cite{deng2009imagenet}. Besides, we compare our OptG with several state-of-the-art methods including SNIP~\cite{lee2018snip}, GraSP~\cite{wang2020picking}, SET~\cite{mocanu2018scalable}, GMP~\cite{gale2019state}, SynFlow~\cite{tanaka2020pruning}, DNW~\cite{wortsman2019discovering}, RigL~\cite{evci2020rigging}, GSM~\cite{ding2019global}, STR~\cite{kusupati2020soft} and GraNet~\cite{liu2021sparse}. We implement OptG with PyTorch~\cite{pytorch2015}. Particularly, we set $\alpha = 0.5$ in all experiments and leverage the SGD optimizer to update the weights and their masks with a gradually-increasing sparsity rate following Eq.\,(\ref{eq15}). On CIFAR-10 and CIFAR-100, we train the networks for 160 epochs with a weight decay of $1\times10^{-3}$. On ImageNet, the weight decay is set to $5\times10^{-4}$ for ResNet-50 and $4\times10^{-5}$ for MobileNet-V1. We train ResNet-50 for 100 epochs and MobileNet-V1 for 180 epochs, respectively. Besides, the initial learning rate is set to 0.1, which is then decayed by the cosine annealing scheduler~\cite{loshchilov2016sgdr}. All experiments are run with NVIDIA Tesla V100 GPUs.
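As a concrete illustration of how the sparsity and mask learning rates evolve under these settings, the sketch below evaluates Eq.~(\ref{eq15}) and Eq.~(\ref{eq16}) over a 100-epoch run with $\alpha=0.5$; the cosine weight learning rate is included only to make the example self-contained, and the printed values are illustrative.

\begin{verbatim}
import math

def sparsity_at(k, tau, target_p, alpha=0.5):
    # Gradual sparsity schedule: P_k = P / (1 + exp(-alpha*(k - 0.5*tau)))
    return target_p / (1.0 + math.exp(-alpha * (k - 0.5 * tau)))

def mask_lr_at(k, tau, weight_lr, alpha=0.5):
    # Paradox-aware mask learning rate: the weight learning rate
    # scaled by the same sigmoid factor as the sparsity schedule.
    return weight_lr / (1.0 + math.exp(-alpha * (k - 0.5 * tau)))

tau, P = 100, 0.95
for k in (0, 25, 50, 75, 100):
    w_lr = 0.05 * (1.0 + math.cos(math.pi * k / tau))  # cosine from 0.1
    print(k, round(sparsity_at(k, tau, P), 4),
          round(mask_lr_at(k, tau, w_lr), 5))
\end{verbatim}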
\begin{algorithm}[!t] \SetKwInOut{Input}{Require} \SetKwInOut{Output}{Output} \caption{Optimizing the gradient-driven sparsity.} \label{alg:optg} \Input{ Network weights ${\mathbf{w}}$ and masks $\hat{{\mathbf{m}}}$, target sparsity $P$, total number of training epochs $\tau$.} ${\mathbf{w}}$ $\gets$ random initialization, $\hat{{\mathbf{m}}}$ $\gets$ $\mathbf{0}$; \\ \For{$k$ $\gets$ 1, 2, $\dots$, $\tau$ } { Get current sparse rate via Eq.~(\ref{eq15});\\ Set the learning rate of $\hat{{\mathbf{m}}}$ via Eq.~(\ref{eq16}); \\ Get ${\mathbf{m}}$ from $\hat{{\mathbf{m}}}$ via Eq.~(\ref{eq5});\\ \For{each training step $t$ } { Forward propagation via (${\mathbf{w}} \odot {\mathbf{m}}$); \\ Compute the gradient of $\hat{{\mathbf{m}}}$ via Eq.~(\ref{mask_gradient});\\ Update ${\mathbf{w}}, \hat{{\mathbf{m}}}$ using the SGD optimizer; \\ } } \end{algorithm}

\begin{table}[!t] \caption{Performance comparison of VGGNet-19 on CIFAR.} \label{tab:cifar10} \centering \vspace{-0.5em} \begin{tabular}{@{}lcccc@{}} \toprule Dataset & \multicolumn{2}{c}{\;\;\;\;\; CIFAR-10 \;\;\;\;\;} & \multicolumn{2}{c}{\;\;\;\;\; CIFAR-100 \;\;\;\;\;}\\ \midrule Sparse Rate & 90\% & 95\% & 90\% & 95\% \\ \cmidrule{1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-5} VGGNet-19 & 93.85 & - & 73.43 & - \\ \cmidrule{1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-5} SET & 92.46 & 91.73 & 72.36 & 69.81 \\ SNIP & 93.63 & 93.43 & 72.84 & 71.83 \\ GraSP & 93.30 & 93.04 &72.19& 71.95 \\ SynFlow & 93.35 & 93.45 &72.24& 71.77 \\ STR & 93.73 & 93.27 & 71.93 & 71.14 \\ RigL & 93.47 & 93.35 & 71.82 & 71.53 \\ GMP & 93.59 & 93.58 & 73.10 & 72.30 \\ GraNet & 93.80 & 93.72 & 73.74 & 73.10 \\ \rowcolor[gray]{0.9} OptG & \textbf{93.84} & \textbf{93.79} & \textbf{73.80} & \textbf{73.24} \\ \bottomrule \end{tabular} \centering\vspace{-1mm} \end{table}

\subsection{Quantitative Results} \textbf{VGGNet-19}. Tab.~\ref{tab:cifar10} shows the performance of different methods for sparsifying the classic VGGNet with 19 layers on the CIFAR-10/100 datasets. Compared with the competitors, our OptG yields better accuracy under the same sparse rate on both datasets. For instance, compared with SNIP~\cite{lee2018snip}, which suffers a serious performance degradation of 4.26\% when pruning 95\% of the parameters on CIFAR-10 (89.59\% for SNIP and 93.85\% for the baseline), the proposed OptG loses a negligible 0.01\% in accuracy (93.84\% for OptG), even though both are built on gradient information. On CIFAR-100, our OptG provides significantly better accuracy than other gradient-driven approaches including GraSP~\cite{wang2020picking} and RigL~\cite{evci2020rigging}, which demonstrates the superiority of optimizing the gradient-driven criteria in network sparsity.

\begin{table}[!t] \caption{Performance comparison of ResNet-50 on ImageNet.} \centering\vspace{-0.5em} \label{tab:imagenet_res50} \begin{tabular}{@{}lcccc@{}} \toprule Method & Sparsity & Params & FLOPs & Top-1 Acc.
\\ \midrule ResNet-50 & 0.00 & 25.6M & 4.09G & 77.01 \\ \midrule SNIP & 90.00 & 2.56M & 409M & 67.20 \\ SET & 90.00 & 2.56M & 409M & 69.60 \\ GSM & 90.00 & 2.56M & 409M & 73.29 \\ GMP & 90.00 & 2.56M & 409M & 73.91 \\ DNW & 90.00 & 2.56M & 409M & 74.00 \\ RigL & 90.00 & 2.56M & 960M & 73.00\\ STR & 90.55 & 2.41M & 341M & 74.01 \\ GraNet & 90.00 & 2.56M & 650M & 74.20 \\ \rowcolor[gray]{0.9} OptG & 90.00 & 2.56M & 342M & \textbf{74.55}\\ \midrule GMP & 95.00 & 1.28M & 204M & 70.59 \\ DNW & 95.00 & 1.28M & 204M & 68.30 \\ RigL & 95.00 & 1.28M & 490M & 70.00 \\ STR & 95.03 & 1.27M & 159M & 70.40 \\ GraNet & 95.00 & 1.28M & 490M & 72.30 \\ \rowcolor[gray]{0.9} OptG & 95.00 & 1.28M & 221M & \textbf{72.45}\\ \midrule RigL & 96.50 & 0.90M & 450M & 67.20 \\ STR & 96.11 & 0.99M & 127M & 67.78 \\ STR & 96.53 & 0.88M & 117M & 67.22 \\ GraNet & 96.50 & 0.90M & 368M & 70.50 \\ \rowcolor[gray]{0.9} OptG & 96.50 & 0.90M & 179M & \textbf{70.85} \\ \midrule GMP & 98.00 & 0.51M & 82M & 57.90 \\ DNW & 98.00 & 0.51M & 82M & 58.20 \\ STR & 97.78 & 0.57M & 80M & 62.84 \\ STR & 98.05 & 0.50M & 73M & 61.46 \\ GraNet & 98.00 & 0.51M & 199M & 64.14 \\ \rowcolor[gray]{0.9} OptG & 98.00 & 0.51M & 126M & \textbf{67.20}\\ \midrule GMP & 99.00 & 0.26M & 41M & 44.78 \\ STR & 98.98 & 0.26M & 47M & 51.82 \\ STR & 98.79 & 0.31M & 54M & 54.79 \\ GraNet & 99.00 & 0.26M & 123M & 58.08 \\ \rowcolor[gray]{0.9} OptG & 99.00 & 0.26M & 83M & \textbf{62.10}\\ \bottomrule \end{tabular} \centering\vspace{-1mm} \end{table} \textbf{ResNet-50}. Tab.~\ref{tab:imagenet_res50} compares the proposed OptG with its counterparts for compressing ResNet-50~\cite{he2016deep} on the large-scale ImageNet dataset. As can be seen, OptG clearly surpasses its competitors across different sparse rates. For example, in comparison with the gradient-driven approach RigL at a sparse rate of 90\%, OptG greatly reduces the FLOPs to 342M with an accuracy of 74.55\%, while RigL only reaches 73.00\% with much higher FLOPs of 960M. Although GraNet shows comparable accuracy at 95\% sparsity (72.30\% for GraNet and 72.45\% for OptG), its performance relies on preserving more than 2$\times$ the FLOPs of OptG (490M FLOPs for GraNet and 221M for OptG). Further, the advantage of OptG over other methods grows with the sparsity level. When the sparse rate reaches 98.00\%, all existing studies suffer severe performance degradation; in contrast, OptG attains a remarkable top-1 accuracy of 67.20\%, surpassing the recent STR by 4.36\% and DNW by 9.00\%. Furthermore, at an extreme sparse rate of around 99.00\%, OptG still retains a high accuracy of 62.10\%, surpassing the second-best GraNet by a large margin of 4.02\%. These comparisons clearly demonstrate the efficacy of OptG in solving the independence paradox when compressing the large-scale ResNet. \textbf{MobileNet-V1}. MobileNet-V1~\cite{howard2017mobilenets} is a lightweight network built on depth-wise convolutions; compared with ResNet-50, it is therefore more difficult to sparsify without compromising performance. Nevertheless, the results in Tab.\,\ref{tab:imagenet_mobv1} show that OptG still offers reliable performance on this challenging task. Specifically, OptG achieves a top-1 accuracy of 70.27\% at a sparse rate of 80.00\%, which is 3.75\% higher than that of STR, which moreover retains more parameters. A similar observation can be made when the sparse rate is around 90.00\%.
Our OptG reduces the parameters to 0.41M while still preserving an accuracy of 66.80\%, surpassing the other methods by a significant margin. OptG thus demonstrates its ability to sparsify lightweight networks as well. \subsection{Ablation Studies}\label{ablation} \textbf{The gradual sparsity schedule.} We first perform ablation studies for our proposed sparsity schedule in Eq.~(\ref{eq15}). For comparison, we consider the classic sparsity schedule proposed by Zhu~\emph{et al.}~\cite{zhu2017prune}, which increases the sparsity level rapidly in the early training epochs. Tab.~\ref{tab:ablation1} lists the performance of different sparsity techniques under the two sparsity schedules. As can be observed, OptG takes the lead under both schedules, and its advantage is more pronounced under our proposed schedule. Note that applying our schedule to the other methods even leads to worse performance than the schedule of Zhu~\emph{et al.}~\cite{zhu2017prune}, except for DPF, which also revives weights to their original values before they are sparsified. This is reasonable, since the other methods generally revive weights to zeros, which, if done late in the training process, cannot ensure sufficient training. Therefore, our motivation calls for a schedule unique to OptG. \begin{table}[!t] \caption{Performance comparison of MobileNet-V1 on ImageNet.} \centering\vspace{-0.5em} \label{tab:imagenet_mobv1} \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}lcccc@{}} \toprule Method & Sparsity & Params & FLOPs & Top-1 Acc. \\ \midrule MobileNet-V1 & 0.00 & 4.12M & 569M & 71.95 \\ \midrule GMP & 74.11 & 1.09M & 163M & 67.70 \\ STR & 75.28 & 1.04M & 101M & 68.35 \\ STR & 79.07 & 0.88M & 81M & 66.52 \\ \rowcolor[gray]{0.9} OptG & 80.00 & 0.82M & 124M & \textbf{70.27} \\ \midrule GMP & 89.03 & 0.46M & 82M & 61.80 \\ STR & 85.80 & 0.60M & 55M & 64.83 \\ STR & 89.01 & 0.46M & 42M & 62.10 \\ STR & 89.62 & 0.44M & 40M & 61.51 \\ \rowcolor[gray]{0.9} OptG & 90.00 & 0.41M & 80M & \textbf{66.80} \\ \bottomrule \end{tabular}} \end{table} \begin{table}[!t] \caption{Performance comparison of ResNet-50 at 95\% sparsity on ImageNet under our proposed sparsity schedule and that of Zhu~\emph{et al.}~\cite{zhu2017prune}.} \centering\vspace{-0.5em} \label{tab:ablation1} \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}lcccc} \toprule Method & Schedule & Top-1 Acc. & Schedule & Top-1 Acc.\\ \midrule SET& Zhu~\emph{et al.} & 68.40 & Ours & 66.10 \\ RigL& Zhu~\emph{et al.} & 71.39 & Ours & 70.01 \\ DPF & Zhu~\emph{et al.} & 71.03 & Ours & 71.66 \\ \rowcolor[gray]{0.9} OptG& Zhu~\emph{et al.} & \textbf{71.82} & Ours & \textbf{72.38} \\ \bottomrule \end{tabular}} \end{table} \textbf{Supermask optimizer.} Next, we investigate the components of our proposed supermask optimizer, including the update frequency of the binary mask ${\mathbf{m}}$ and the proposed paradox-aware mask learning rate schedule. In detail, we compare our schedule with two alternatives: the same learning rate as the network weights (Weight LR) and a constant learning rate of $0.1$ (Constant LR). Meanwhile, we also investigate how the update frequency of the binary mask influences the performance of OptG.
Results in Fig.~\ref{fig:ablation} suggest that (1) frequently updating the binary mask, \emph{i.e.}, pruning and reviving weights, leads to significant performance degradation due to an unstable sparse training process, and (2) our proposed paradox-aware mask learning rate schedule clearly surpasses the other two variants, which demonstrates the efficacy of adjusting the mask learning rate according to the ascending rate of sparsity so as to maximally alleviate the error gap caused by the independence paradox. \begin{figure}[!t] \begin{center} \includegraphics[height=0.7\linewidth]{ablation.pdf} \end{center} \centering\vspace{0em} \caption{\label{fig:ablation}Ablation studies for the supermask optimizer in OptG. } \vspace{-2em} \end{figure} \section{Limitation} We further discuss the limitations of OptG, which will be the focus of our future work. Firstly, OptG requires more training FLOPs than other sparse training methods due to its specially designed sparsity schedule for solving the independence paradox. Nevertheless, we note that in sparse training, lower training cost rarely translates into shorter training time, since the speedup attainable for irregular sparse weight tensors on common hardware is negligible. Besides, despite the dominance of OptG at extreme sparsity levels, its performance at relatively low sparsity rates remains to be improved. Lastly, our limited hardware resources prevent us from verifying the efficacy of OptG beyond convolutional neural networks. Results for sparsifying the popular Vision Transformer (ViT) models are left for future work. \section{Conclusion} In this paper, we have proposed to optimize the gradient-driven criteria in network sparsity, termed OptG. In particular, we first point out the independence paradox in previous approaches and show an effective path toward solving it, based on the empirical success of supermask training. Following this path, we further propose to solve the independence paradox by interleaving the supermask training process with during-training sparsification, using a revival-friendly sparsity schedule and a paradox-aware supermask optimizer. Extensive experiments on various tasks demonstrate that OptG automatically obtains a layer-wise sparsity budget while achieving state-of-the-art performance across all sparsity regimes. Our work re-emphasizes the great potential of gradient-driven pruning, and we expect future advances in optimizing gradient-driven criteria. \section*{Acknowledgement} This work is supported by the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No.U1705262, No.62072386, No.62072387, No.62072389, No.62002305, No.61772443, No.61802324 and No.61702136) and Guangdong Basic and Applied Basic Research Foundation (No.2019B1515120049). {\small \bibliographystyle{ieee_fullname}
\section{Introduction \label{sec:Introduction}} Online social networking platforms are being increasingly used by campaigners, activists and marketing managers for promoting ideas, brands and products. In particular, the ability to recommend news articles \cite{Leskovec2009}, videos, and even products \cite{Leskovec2006} by friends and acquaintances through online social networking platforms is being increasingly recognized by marketing gurus as well as political campaigners and activists. Influencing the spread of content through social media enables campaigners to mold the opinions of a large group of individuals. In most cases, campaigners and advertisers aim to spread their message to as many individuals as possible while respecting budget constraints. This calls for a judicious allocation of limited resources, like money and manpower, for ensuring the highest possible outreach, i.e., the proportion of individuals who receive the message. \par Individuals share information with other individuals in their social network using Twitter tweets, Facebook posts or simply face-to-face meetings. These individuals may in turn pass the same to their friends and so on, leading to an information epidemic. However, individuals may also become bored or disillusioned with the message over time and decide to stop spreading it. { Past research suggests that such social effects may lead to opinion polarization in social systems \cite{Sinha2006}}. This can be exploited by a campaigner who desires to influence such spreading or opinion formation by incentivizing individuals to evangelize more vigorously, providing them with referral rewards in the form of discounts, cash back or other attractive offers. Due to budget constraints, it may not be feasible to incentivize all, or even a majority, of the population. Individuals have varying amounts of influence over others, e.g., ordinary individuals may have social connections extending only to close family and friends, while others may have a large number of social connections which can enable them to influence large groups \cite{Goldenberg2009}. Thus, it would seem that incentivizing highly influential individuals would be the obvious strategy. However, recruiting influential people can be very costly, and the campaigner may run out of funds after recruiting just a handful of celebrities, which in turn may result in a suboptimal outreach size. \par A resource-constrained campaigner, for a given cost budget, may want to maximize the proportion of informed individuals, while other campaigners, who care more about campaign outreach than resource costs, may desire to minimize costs for achieving a given number of informed individuals. We address both resource allocation challenges by formulating and solving two optimization problems with the help of \emph{bond percolation theory}. \par A similar problem of preventing epidemics through vaccinations has received a lot of attention \cite{Cohen2003,Shaw2010,Ruan2012,Starnini2013,Peng2013}. However, in these problems the cost of vaccination is uniform for all individuals, and hence it is sufficient to calculate the minimum number of vaccinations. Information diffusion can also be maximized by selecting an optimal set of seeds, i.e., individuals best suited to \emph{start} an epidemic \cite{Kempe2003,Chen2009a,Chen2010a}. This is different from our strategy, which involves incentivizing individuals to \emph{spread} the message.
It is possible to address the problem posed here using optimal control theory, which involves computing the optimal resource allocation in real time for ensuring the maximum possible outreach size by a given deadline \cite{Karnik2012,Dayama2012, Kandhway2014a,Kandhway2014,Kandhway2014b}. However, the optimal control solution is not only difficult to compute, but also very hard to implement, as it requires a centralized real-time controller. Furthermore, recent work, \cite{Karnik2012,Dayama2012, Kandhway2014a,Kandhway2014,Kandhway2014b}, on optimal campaigning in social networks does not address the problem of minimizing the cost while guaranteeing an outreach size. Our formulation allows us to solve both problems. \par Our model assumes two types of individuals, viz., the \emph{`ordinary'} and the \emph{`selected'}, who are connected to one another through a social network. Before the campaign starts, the selected individuals are incentivized to spread the message more vigorously than the ordinary. We use the \emph{Susceptible Infected Recovered} (SIR) model for modeling the information epidemic. For a given set of selected individuals, we first calculate the size of the information outbreak using network percolation theory, and then find the set of selected nodes which: 1. minimizes the cost of achieving a given proportion of informed individuals, and 2. maximizes the fraction of informed individuals for a given cost budget. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind. \par The detailed model description can be found in Sec. \ref{sec:Model}, percolation analysis in Sec. \ref{sec:Analysis}, the problem formulation in Sec. \ref{sec:Problem Formulation}, numerical results in Sec. \ref{sec:Numerical Results}, and finally conclusions are discussed in Sec. \ref{sec:Conclusion}. \section{Model \label{sec:Model}} We divide the total population of $N$ individuals into two types: the ordinary (type $1$) and the selected (type $2$). Before the campaign starts, selected individuals are provided incentives to spread the information more vigorously. These individuals are connected with one another through a social network, which is represented by an undirected graph (network). Nodes represent individuals, while links embody the communication pathways between them. Let $P(k)$ be the degree distribution of the social network. For analytical tractability, we assume that the network is uncorrelated \cite{Barrat2008}. We generate an uncorrelated network using the configuration model \cite{molloy1995}. A sequence of $N$ integers, called the degree sequence, is obtained by sampling the degree distribution. Each node is thus associated with an integer, which is taken to be the number of half-edges, or stubs, attached to the node. Assuming that the total number of stubs is even, each stub is chosen at random and joined with another randomly selected stub. The process continues until all stubs are exhausted. Self-loops and multiple edges are possible, but with high probability their number goes to zero as $N \to \infty$. We assume that $N$ is large but finite. Let $\phi (k)$ be the proportion of individuals with degree $k$ that are provided incentives for vigorously spreading the message, i.e., the proportion of nodes of degree $k$ that are type 2 nodes. The goal is to find the optimum $\phi(k)$ for maximizing the epidemic size (or minimizing the cost).
The actual individuals can be identified by sampling each individual of degree $k$ with probability $\phi(k)$. \par We assume that the information campaign starts with a randomly chosen individual, who may pass the information to her neighbors, who in turn may pass the same to their neighbors and so on. However, as the initial enthusiasm wanes, individuals may start losing interest in spreading the information message. This is similar to the diffusion of infectious diseases in a population of susceptible individuals. Since we account for individuals losing interest in spreading the message, we use a continuous-time SIR process to model the information diffusion. The entire population can be divided into three classes: those who haven't heard the message (susceptible class), those who have heard it and are actively spreading it (infected class), and those who have heard the message but have stopped spreading it (recovered class). \par Let $\beta_1$ be the rate of information spread for an ordinary node (type 1), and $\beta_2$ that for a selected node (type 2). In other words, the probability that a type $i$ individual `infects' a susceptible neighbor in a small time $dt$ is $\beta_i dt +o(dt)$. Note that this is independent of the type of the susceptible node. Let $\mu_i$ be the rate at which type $i$ infected individuals move to the recovered state. The larger the $\mu_i$, the less time an individual spends spreading the message. Since type $2$ individuals are incentivized to spread information more vigorously, $\beta_2 > \beta_1$ and $\mu_2 < \mu_1$. Let $T_i$ be the probability that a type $i$ infected node infects a given susceptible neighbor (of any type) before it recovers ($i \in \{1,2\}$). It can easily be shown that $T_i = \frac{\beta_i}{\beta_i + \mu_i}$, see \cite{Newman2002}. Therefore, $T_2 > T_1$. $T_i$ can be interpreted as the probability that a link connecting a type $i$ infected node to any susceptible node is occupied. We refer to such links as type $i$ links, and to $T_i$ as the occupation probability for links of type $i$. This mapping allows us to apply bond percolation theory for obtaining the size of the information epidemic \cite{Newman2010}. \section{Analysis \label{sec:Analysis}} We first aim to calculate the proportion of individuals who have received the message, or in other words, the proportion of recovered individuals as $t \to \infty$. Let $P(k' \mid k)$ be the probability of encountering a node of degree $k'$ by traversing a randomly chosen link from a node of degree $k$. In other words, $P(k' \mid k)$ is the probability that a node with degree $k$ has a neighbor with degree $k'$. For a network generated by the configuration model, $P(k' \mid k)= \frac{k'P(k')}{\langle k \rangle}$ \cite{Newman2010}, where $\langle k^i \rangle$ is the $i^{th}$ moment of $P(k)$. \par Let $q$ be the probability of encountering a type 2 node by traversing a randomly chosen link from a node of degree $k$. Therefore, $q = \sum\limits_{k'=1}^{\infty}Pr($Neighboring node is type 2 $\mid$ neighboring node has degree $k')\cdot Pr($Neighboring node has degree $k'\mid$ original node has degree $k)$, which gives \begin{align*} q= \frac{1}{\langle k \rangle}\sum_{k=1}^{\infty}k\phi(k)P(k). \end{align*} The probability that a randomly chosen node has $k_1$ type 1 and $k_2$ type 2 neighbors is $P(k_1,k_2) = \sum\limits_{k:k=k_1+k_2}^{\infty} Pr(k_1,k_2\mid$node has degree $k)P(k) $.
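As a quick numerical sanity check on the expression for $q$ above, one can generate a configuration-model network, mark type 2 nodes independently with probability $\phi(k)$, and compare the empirical probability of reaching a type 2 node along a randomly chosen link with the formula. The following Python sketch does exactly this; the \texttt{networkx} dependency and the particular choice of $\phi(k)$ are illustrative assumptions only.
\begin{verbatim}
import random
import networkx as nx

N = 10000
phi = lambda k: min(1.0, k / 20.0)       # an example incentive rule phi(k)

# Configuration model: sample a degree sequence, then pair stubs at random.
degrees = [min(int(random.paretovariate(1.5)) + 1, 100) for _ in range(N)]
if sum(degrees) % 2:                     # total number of stubs must be even
    degrees[0] += 1
G = nx.configuration_model(degrees)      # nodes 0..N-1 match the sequence

# Each node is type 2 independently, with probability phi(degree).
is_type2 = [random.random() < phi(k) for k in degrees]

# Empirical q: the end of a randomly chosen link is degree-biased, so list
# every edge endpoint once and count how often it is a type 2 node.
ends = [w for u, v in G.edges() for w in (u, v)]
q_emp = sum(is_type2[w] for w in ends) / len(ends)

# Theoretical q = (1/<k>) * sum_k k*phi(k)*P(k), estimated from the sample.
mean_k = sum(degrees) / N
q_th = sum(k * phi(k) for k in degrees) / (N * mean_k)
print(q_emp, q_th)                       # should agree closely for large N
\end{verbatim}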
\par For a large $N$, the event that a given node has degree $k$ can be approximated as independent of the event that another node, having a common neighbor with the given node, has degree $k'$. This holds because the degree sequence is generated by independent samples from the distribution, and for a large $N$ the effect of sampling without replacement is negligible. The probability that a node is selected (type 2) is a function of its degree; hence the event that a node is type 1 (or 2) is independent of the event that any other node is type 1 (or 2). This allows us to write: \begin{align*} P(k_1,k_2)= {k_1+k_2 \choose k_2}q^{k_2}(1-q)^{k_1}P(k_1+k_2) \end{align*} Let $Q(k)$ be the excess degree distribution, i.e., the degree distribution of a node arrived at by following a randomly chosen link, without counting that link. For the configuration model, $Q(k) = (k+1)P(k+1)/\langle k \rangle$. Let $Q(k_1,k_2)$ be the excess degree distribution for connections to type 1 and type 2 nodes. \begin{align*} Q(k_1,k_2) = {k_1+k_2 \choose k_2}q^{k_2}(1-q)^{k_1}Q(k_1+k_2) \end{align*} Let $\tilde{P}(\tilde{k}_1,\tilde{k}_2)$ and $\tilde{Q}(\tilde{k}_1,\tilde{k}_2)$ be the distribution and the excess distribution of the number of type 1 and type 2 neighbors that have received the information message; in other words, the distribution and the excess distribution of occupied links of each type. {\small \begin{align*} \tilde{P}(\tilde{k}_1,\tilde{k}_2) &= \sum_{k_1=\tilde{k}_1}^{\infty}\sum_{k_2=\tilde{k}_2}^{\infty}P(k_1,k_2)\prod_{i=1}^{2}{k_i \choose \tilde{k}_i}T_i^{\tilde{k}_i}(1-T_i)^{k_i-\tilde{k}_i} \\ \tilde{Q}(\tilde{k}_1,\tilde{k}_2) &= \sum_{k_1=\tilde{k}_1}^{\infty}\sum_{k_2=\tilde{k}_2}^{\infty}Q(k_1,k_2)\prod_{i=1}^{2}{k_i \choose \tilde{k}_i}T_i^{\tilde{k}_i}(1-T_i)^{k_i-\tilde{k}_i} \end{align*} } \begin{table}[!t] \centering \begin{tabular}{l l} \hline Generating function & Distribution \\ \hline $G(u_1,u_2)$ & $P(k_1,k_2)$ \\ $F(u_1,u_2)$ & $Q(k_1,k_2)$ \\ $\tilde{G}(u_1,u_2)$ & $\tilde{P}(\tilde{k}_1,\tilde{k}_2)$ \\ $\tilde{F}(u_1,u_2)$ & $\tilde{Q}(\tilde{k}_1,\tilde{k}_2)$ \\ $\tilde{H}_i(u_1,u_2)$ & Proportion of type 1 and type 2 nodes, \\ & who have received the message, in a component \\ & reached from a type $i$ link.\\ $\tilde{J}_i(u_1,u_2)$ & No. of type 1 and type 2 nodes \\ & who have received the message, in a component \\ & reached from a type $i$ node.\\ $\tilde{J}(u_1,u_2)$ & No. of type 1 and type 2 nodes \\ & who have received the message, in a component \\ & reached from a randomly chosen node.\\ \hline \end{tabular} \caption{List of probability generating functions.} \label{table:pgf} \end{table} The probability generating functions for the distributions used in the analysis above are listed in Table \ref{table:pgf}. For example, $G(u_1,u_2)$ is given by: \begin{align*} G(u_1,u_2) = \sum\limits_{k_1,k_2=0}^{\infty} u_1^{k_1}u_2^{k_2}P(k_1,k_2) \end{align*} Now, $\tilde{G}(u_1,u_2)$ is given by \begin{align*} & \sum_{\tilde{k}_1,\tilde{k}_2}^{\infty}u_1^{\tilde{k}_1}u_2^{\tilde{k}_2}\sum_{k_1 = \tilde{k}_1}\sum_{k_2=\tilde{k}_2}P(k_1,k_2)\prod_{i=1}^{2}{k_i \choose \tilde{k}_i}T_i^{\tilde{k}_i}(1-T_i)^{k_i-\tilde{k}_i} \\ &= \sum_{k_1,k_2}^{\infty}(1+(u_1-1)T_1)^{k_1}(1+(u_2-1)T_2)^{k_2}P(k_1,k_2) \\ &= G\left(1+(u_1-1)T_1,1+(u_2-1)T_2\right) \end{align*} Similarly, $\tilde{F}(u_1,u_2) = F(1+(u_1-1)T_1,1+(u_2-1)T_2)$. \par A component is a \emph{small} cluster of nodes that have received the information message.
By small we mean that the cluster is finite and does not scale with the network size. However, at the phase transition, the average size of the cluster diverges (as $N \to \infty$). An information epidemic outbreak is possible only when the average size of the cluster diverges. In this regime the component is termed a giant connected component (GCC), and it grows with the network size. Let $\tilde{H}_i(u_1,u_2)$ be the generating function of the distribution of the number of type 1 and type 2 nodes in a component arrived at from a type $i$ link. Let $\tilde{J}_i(u_1,u_2)$ and $\tilde{J}(u_1,u_2)$ be the generating functions of the distribution of the number of type 1 and type 2 nodes in a component arrived at from a type $i$ node and from a randomly chosen node, respectively. \par \begin{figure} \centering \includegraphics[width = 0.6\textwidth]{Percolation.pdf} \caption{(Color Online) Illustration of components. The red boxes represent the components reached by a type $1$ link, while the green boxes represent components reached by a type $2$ link. A type 2 node is represented by a green circle, while a red circle represents a type 1 node. } \label{fig:percolation} \end{figure} Let the random variable $Y_i$ be the number of type 1 and type 2 nodes that have received the message in a component arrived at from a type $i$ link. The probability of encountering closed loops in a finite cluster is $O(N^{-1})$ \cite{Newman2002}, which can be neglected for large $N$. The tree-like structure of the cluster allows us to write the size of the component encountered by traversing a link as the sum of the sizes of the components encountered after traversing the links emanating from the node at the end of the initial link. This is illustrated in Fig. \ref{fig:percolation}. Hence, $Y_i$ can be written as: \begin{align*} Y_i = 1 + \tilde{K}_1Y_1 + \tilde{K}_2Y_2 \end{align*} where the random variable $\tilde{K}_i$ is the number of type $i$ neighbors of the node at the end of the initial link that have received the message; the arrival link is not counted (excess degree). Since the sizes of the components along different links are mutually independent (absence of loops), we can write the above equation in terms of probability generating functions: \begin{align*} \tilde{H}_i(u_1,u_2) &= u_i\, E\!\left[\tilde{H}_1(u_1,u_2)^{\tilde{K}_1}\tilde{H}_2(u_1,u_2)^{\tilde{K}_2}\right] \\ &= u_i \sum_{\tilde{k}_1,\tilde{k}_2}^{\infty} \tilde{H}_1^{\tilde{k}_1}(u_1,u_2)\tilde{H}_2^{\tilde{k}_2}(u_1,u_2)\tilde{Q}(\tilde{k}_1,\tilde{k}_2) \\ &= u_i\tilde{F}(\tilde{H}_1(u_1,u_2) ,\tilde{H}_2(u_1,u_2) ) \end{align*} This can also be written as \begin{align} \tilde{H}_i(u_1,u_2) = u_iF\left(1+(\tilde{H}_1(u_1,u_2)-1)T_1,1+(\tilde{H}_2(u_1,u_2)-1)T_2 \right) \label{eqn:pgfCluster} \end{align} Similarly, $\tilde{J}(u_1,u_2)$ can be expressed as: \begin{align*} \tilde{J}_i(u_1,u_2) &= u_i\sum_{\tilde{k}_1,\tilde{k}_2}^{\infty} \tilde{H}_1^{\tilde{k}_1}(u_1,u_2)\tilde{H}_2^{\tilde{k}_2}(u_1,u_2)\tilde{P}(\tilde{k}_1,\tilde{k}_2) \\ &= u_i\tilde{G}(\tilde{H}_1(u_1,u_2) ,\tilde{H}_2(u_1,u_2) ) \\ \tilde{J}(u_1,u_2) &= (1-p)\tilde{J}_1(u_1,u_2) +p \tilde{J}_2(u_1,u_2) \end{align*} where $p$ is the probability of choosing a type 2 node, $p = \sum \limits_{k=1}^{\infty}P(k)\phi(k)$. The following theorem describes the phase transition condition required for an outbreak and the size of such an outbreak. The proof can be found in Appendix \ref{appendix:theorem1}.
\begin{thm} The condition required for a small cluster to become a giant connected component is given by $\tilde{\nu}\geq 1 $, where \begin{align*} \tilde{\nu} = T_1\sum_{k_1,k_2}^{\infty}k_1Q(k_1,k_2) + T_2\sum\limits_{k_1,k_2}^{\infty}k_2Q(k_1,k_2) \end{align*} and the proportion of nodes in the giant connected component (size of GCC) is given by $1-\psi$, where \begin{align*} \psi = \sum_{k_1,k_2}^{\infty}(1+(u^*-1)T_1)^{k_1}(1+(u^*-1)T_2)^{k_2}P(k_1,k_2) \end{align*} and $u^*$ is the solution of the fixed point equation \begin{align*} u = \sum_{k_1,k_2}^{\infty}(1+(u-1)T_1)^{k_1}(1+(u-1)T_2)^{k_2}Q(k_1,k_2) \end{align*} \end{thm} \par The size of the information epidemic outbreak can now be used for formulating the optimization problems. \section{Problem Formulation \label{sec:Problem Formulation}} Providing incentives in the form of referral rewards for low-degree nodes, or sponsorship offers for celebrities (high-degree nodes), is costly. Since the cost is a function of the degree, let $c(k)$ be the cost of incentivizing a node with degree $k$. The average cost, $\bar{c}(\boldsymbol{\phi})$, is given by $\sum\limits_{k=1}^{\infty}c(k)Pr($node is selected $\mid$ node has degree $k)P(k) = \sum\limits_{k=1}^{\infty} c(k)\phi(k)P(k)$. The proportion of type 2 individuals is given by $\sum\limits_{k=1}^{\infty}\phi(k)P(k)$. \par We formulate two optimization problems, viz., one which minimizes the cost while enforcing a lower bound on the epidemic size, and another which maximizes the epidemic size for a given cost budget. For both problems, evaluating the size of the epidemic requires numerically solving a fixed point equation, so there is no straightforward analytical method, such as the Karush-Kuhn-Tucker (KKT) conditions, for solving the optimization problems. We show that these problems can be reduced to linear programs, which can then be solved easily using any off-the-shelf LP solver. \subsection{Cost minimization problem} Providing guarantees on the minimum number of individuals who will be informed about the campaign is appropriate for campaigns with large funding, such as election campaigns, where message penetration is more important than the cost. The guarantee on the epidemic size is written as a constraint of the optimization problem: the cost $\bar{c}(\boldsymbol{\phi})$ is minimized subject to $1 - \psi \geq \gamma$, where $\gamma \ \in \ [0,1]$ and $\boldsymbol{\phi}$ is the control variable. If $\gamma = 0$, the constraint becomes $\tilde{\nu} \leq 1$, as $\gamma = 0$ implies $\psi = 1$, which is the same as $\tilde{\nu} \leq 1$. A finite amount of money may put a constraint on the number of type 2 individuals. The proportion of type 2 individuals is given by $\sum_{k=1}^{\infty} \phi(k)P(k)$. This translates into the constraint $\sum_{k=1}^{\infty} \phi(k)P(k) \leq B$, where the budget $B \ \in \ [0,1]$. \par The following theorem, which is our principal contribution, allows us to solve a possibly non-convex problem by solving a linear program. The key insight is that $\psi$, the probability of not belonging to the outbreak, is monotonically decreasing in $q$, which then allows one to write the optimization problem as a linear program. The intuition behind this claim is that, since $q$ is the probability of finding a type 2 node on a randomly chosen link, an increase in $q$ is equivalent to an increase in the number of type 2 individuals, resulting in a higher epidemic size.
\begin{thm} \label{theorem2} If $T_2 > T_1$, then $\psi \ \in \ (0,1)$ is strictly decreasing with respect to $q$, i.e., $\frac{d\psi}{dq} <0$ for all $q \ \in \ [0,1] $. For the $\psi =0$ case $(\tilde{\nu}\geq 1)$, $\tilde{\nu}$ is strictly increasing with respect to $q$, i.e., $\frac{d\tilde{\nu}}{dq} > 0, \ \forall \ \ q \ \in \ [0,1]$, where $q = \frac{1}{\langle k \rangle}\sum\limits_{k=1}^{\infty}k\phi(k)P(k)$. \end{thm} \begin{proof} The proof follows from Lemmas \ref{lemma2} and \ref{lemma3}, detailed in Appendix \ref{appendix:theorem2}. \end{proof} Since $\frac{d\psi}{dq} < 0$, the epidemic size constraint can be written as $\frac{1}{\langle k \rangle}\sum\limits_{k=1}^{\infty}k\phi(k)P(k) \geq q^*$, where $ \psi(q) \mid_{q=q^*} \ = 1-\gamma$. The optimization problem can now be written as follows: \begin{align} \begin{aligned} & \underset{\boldsymbol{\phi}}{\text{minimize}} \ \ \ \ \sum_{k=1}^{\infty} c(k)\phi(k)P(k) \\ & \text{subject to:} \ \ \\ & \frac{1}{\langle k\rangle}\sum_{k=1}^{\infty}k\phi(k)P(k) \geq q^* \\ & \sum_{k=1}^{\infty} \phi(k)P(k) \leq B \\ & \boldsymbol{0} \leq \boldsymbol{\phi} \leq \boldsymbol{1} \label{eqn:LinOpt1} \end{aligned} \end{align} The above problem is a linear program, which can be solved by any off-the-shelf LP solver. \par The optimization problem described above may not be feasible for all values of $T_1$ and for all possible degree distributions. Assume $B=1$; the problem becomes infeasible if $1 - \psi \leq \gamma$ even when every link is occupied with the maximum possible probability $T_2$, i.e., all individuals are incentivized and yet $1 - \psi \leq \gamma$. \subsection{Epidemic Size Maximization Problem} We now look at the problem of maximizing the information epidemic size (outreach) in a resource-constrained scenario. More specifically, we study a scenario where the cost budget is finite. Thus the outbreak size $1-\psi$ must be maximized subject to a cost constraint. Since $\frac{d\psi}{dq} < 0$, maximizing $q$ is equivalent to maximizing $1-\psi$. Thus the problem is equivalent to the following linear program. \begin{align} \begin{aligned} & \underset{\boldsymbol{\phi}}{\text{maximize}} \ \ \ \ \sum_{k=1}^{\infty}k\phi(k)P(k) \\ & \text{subject to:} \ \ \\ & \sum_{k=1}^{\infty} c(k)\phi(k)P(k) \leq C \\ & \sum_{k=1}^{\infty} \phi(k)P(k) \leq B \\ & \boldsymbol{0} \leq \boldsymbol{\phi} \leq \boldsymbol{1} \label{eqn:linOpt2} \end{aligned} \end{align} This linear program can again be solved using any standard linear programming solver. Note that the constants $T_1, T_2$ do not play any role in problem (\ref{eqn:linOpt2}), while they do play a role in problem (\ref{eqn:LinOpt1}), because $q^*$ is a function of $T_1$ and $T_2$. \section{Numerical Results \label{sec:Numerical Results}} As an illustration, we study the solution of the optimization problems for a linear cost, i.e., $c(k)=k$: the higher the degree, the higher the cost. Note that even if the cost is non-linear in $k$, the optimization problems remain linear programs. In the real world the cost may be different, but whatever the cost function, the solution can be obtained by simply solving a linear program. \par We used an uncorrelated random graph generated using the configuration model technique with a power-law degree distribution ($P(k)\propto k^{- \alpha}$), with $\alpha = 2.5$.
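For readers who wish to reproduce the computations below, the following Python sketch mirrors our numerical pipeline under simplifying assumptions: a truncated power-law degree distribution, \texttt{scipy} in place of MATLAB's \emph{linprog}, and fixed-point iteration for Theorem 1. Since $Q(k_1,k_2)$ is binomial given $k_1+k_2$, the identity $(1-q)(1+(u-1)T_1)+q(1+(u-1)T_2) = 1+(u-1)\big((1-q)T_1+qT_2\big)$ collapses the two-type sums into a one-dimensional iteration; all parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

kmax = 200
ks = np.arange(1, kmax + 1)
Pk = ks ** -2.5
Pk /= Pk.sum()                       # truncated power law, alpha = 2.5
mean_k = (ks * Pk).sum()
Qk = ks * Pk / mean_k                # Q(k) = (k+1) P(k+1) / <k>, k = 0,1,...
T1, T2 = 0.4, 0.6

def psi(q):
    # Probability of NOT belonging to the GCC (Theorem 1), computed by
    # iterating the scalar fixed-point equation with Teff = (1-q)T1 + qT2.
    Teff = (1 - q) * T1 + q * T2
    u = 0.0
    for _ in range(2000):
        u = (Qk * (1 + (u - 1) * Teff) ** np.arange(kmax)).sum()
    return (Pk * (1 + (u - 1) * Teff) ** ks).sum()

# Bisection for q*: the smallest q whose outreach 1 - psi(q) reaches gamma.
gamma, lo, hi = 0.2, 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if 1 - psi(mid) < gamma else (lo, mid)
qstar = hi

# Cost minimization LP: minimize sum_k c(k) phi(k) P(k) with c(k) = k,
# subject to q(phi) >= q* and sum_k phi(k) P(k) <= B.
B = 0.7
res = linprog(c=ks * Pk,
              A_ub=[-(ks * Pk) / mean_k, Pk],
              b_ub=[-qstar, B],
              bounds=[(0.0, 1.0)] * kmax)
phi_opt = res.x                      # the optimal phi(k), k = 1..kmax
\end{verbatim}
The same machinery solves the size maximization problem (\ref{eqn:linOpt2}): since \texttt{linprog} minimizes, one passes the negated objective \texttt{c=-ks*Pk} with the cost and budget rows as the inequality constraints.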
\subsection{Cost Minimization Problem} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{CostMinimizationSoln.pdf} \caption{} \label{fig:CostMinSoln} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{NumSelectedvsT1.pdf} \caption{} \label{fig:Type2ForT1} \end{subfigure} \caption{(Color Online) (a) Solution $\boldsymbol{\phi}$, for different values of $T_1$; (b) Optimal proportion of type 2 nodes required to meet the outreach constraint. Parameters: $T_2 = 0.6,\ B = 0.7, \ \gamma = 0.2$.} \end{figure} We solved the cost minimization linear program using the `\emph{linprog}' MATLAB solver; $q^*$ was computed numerically using the bisection method. In Fig. \ref{fig:CostMinSoln}, we plot the solution $\boldsymbol{\phi}$ for different values of $T_1$. The solution shows that only about $50 \%$ of the high-degree nodes need to be incentivized for $T_1$ values ranging from $0.3$ to $0.43$. As $T_1$ decreases from $0.47$ to $0.3$, the proportion of high-degree nodes that are incentivized remains fairly constant (50\%), while the proportion of incentivized low-degree nodes increases. In Fig. \ref{fig:Type2ForT1}, we plot the optimal proportion of individuals that need to be incentivized for achieving the given outreach size. \subsection{Epidemic Size Maximization Problem} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{SizeVsCost.pdf} \caption{} \label{fig:SizeVsCost} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width = \textwidth]{NumSelectedVsCost.pdf} \caption{} \label{fig:Type2ForCost} \end{subfigure} \caption{(Color Online) (a) Size of the information epidemic for a varying cost budget (C). (b) Optimal proportion of type 2 nodes required to meet the outreach constraint as a function of the cost budget (C). Parameters: $T_2 = 0.6,\ B = 0.7$.} \end{figure} The solution, $\boldsymbol{\phi}$, is very similar to the one in problem (\ref{eqn:LinOpt1}), and hence we do not show it here. In Fig. \ref{fig:SizeVsCost}, we plot the size of the epidemic for a varying cost budget $C$. As expected, the epidemic size increases with $C$, because the higher the budget, the higher the proportion of incentivized individuals. However, at some point the epidemic size saturates; this is because all nodes have been incentivized, and therefore nothing more can be done to increase the outreach size. This is verified by Fig. \ref{fig:Type2ForCost}: the fraction of type 2 nodes hits $1$ when $C=3$. \section{Conclusion and Future Work\label{sec:Conclusion}} To summarize, we studied the problem of maximizing information spreading in social networks. More specifically, we considered a scenario where individuals are incentivized to vigorously spread the campaign message to their neighbors, and we proposed a mechanism to identify the individuals who should be incentivized. Using bond percolation theory, we calculated the size of the information epidemic outbreak and the conditions for the occurrence of such outbreaks. We then formulated an optimization problem for minimizing the expected cost of incentivizing individuals while providing guarantees on the information epidemic size. Although the optimization problem could not be addressed using standard analytical tools, Theorem \ref{theorem2} enabled us to compute the global optimum by solving a linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
\par { For the sake of analytical tractability we assumed an uncorrelated network; in reality, however, real-world social networks have positive degree-degree correlations \cite{Newman2003b}. Networks with positive degree assortativity percolate more easily than uncorrelated networks \cite{Newman2002b, Noh2007}. Therefore, for the cost minimization problem, the given campaign size could be achieved at a slightly lower cost, while for the second problem, the theoretical optimal size would be a lower bound and the actual campaign size would be slightly larger than the theoretical one. Apart from positive degree assortativity, social networks are also found to contain community structures \cite{Newman2003b}. The presence of communities may slow down information spreading, leading to a reduction in the campaign size. This may happen because most links point inside a community rather than outside it, thus localizing the information spread \cite{Wu2008}. However, if the network contains high-degree nodes that bridge different communities, then incentivizing such nodes may substantially increase the campaign size. A similar finding was reported in \cite{Salathe2010}, where the authors investigated the usefulness of targeted vaccination of nodes that bridge communities.} \par { Although SIR models are widely used to model epidemics, they have some limitations. They fail to capture the fact that individuals may stop spreading when they perceive that most of their neighbors already know the information. This is captured by the Maki-Thompson model \cite{Nekovee2007}, which forces the recovery rate to be an increasing function of the number of informed individuals a spreader contacts. Thus the recovery rate for an infected node is a function of her degree. An SIR process has a fixed recovery rate, and hence the current results would approximately hold for a Maki-Thompson process on Erd\H{o}s-R\'enyi networks, where every node on average has the same degree. However, our results for SIR may not generalize to the Maki-Thompson spreading model on scale-free networks. High-degree nodes may have a higher chance of being connected to informed individuals, which may lead them to stop spreading to other uninformed nodes. } \par { An interesting extension to this problem, which was suggested by the anonymous referee, is to compute a targeted incentivization strategy for two interacting campaigns. For example, the campaigner may want to maximize campaign $A$ given that campaign $B$, which has either run its course or is running simultaneously along with $A$, either reinforces or hinders campaign $A$. This is an important problem, since such interacting campaigns are often observed during parliamentary or presidential elections. Although the current results may not shed much light on such questions, we believe that they lay the foundation for investigating such problems, which we hope to address in the future. }
\subsection{Results} Throughout the paper, we will understand the {\em minimal polynomial} of an algebraic number to be its minimal polynomial over $\mathbb{Z}$; we obtain this by multiplying the traditional minimal polynomial over $\mathbb{Q}$ by the smallest positive integer such that all its coefficients become integers. Counting algebraic integers, as in (\refeq{barthm}), is equivalent to counting only those algebraic numbers whose minimal polynomial has leading coefficient 1. Our primary goal in this paper is to count algebraic numbers of fixed degree and bounded height subject to specifying {\em any} number of the leftmost and rightmost coefficients of their minimal polynomials. Besides specializing to the cases of algebraic numbers and algebraic integers above, this will allow us to count units, algebraic integers with given norm, algebraic integers with given trace, and algebraic integers with given norm and trace. To state our theorem, we need a little notation. Our asymptotic counts will involve the Chern-Vaaler constants \begin{equation}\label{vddef} V_d = 2^{d+1}(d+1)^s \prod_{j=1}^s \frac{(2j)^{d-2j}}{(2j+1)^{d+1-2j}}, \end{equation} where $s = \lfloor(d-1)/2\rfloor.$ These constants are volumes of certain star bodies discussed later. For integers $m$, $n$, and $d$ with $0 < m$, $0 \leq n$, and $m+n \leq d$, and integer vectors $\vec\ell \in \mathbb{Z}^m$ and $\vec r \in \mathbb{Z}^n$, we write $\mc{N}(d,\vec\ell,\vec r,\mc{H})$ for the number of algebraic numbers of degree $d$ and height at most $\mc{H}$, whose minimal polynomial is of the form \begin{equation} f(z) = \ell_0 z^d + \cdots + \ell_{m-1}z^{d-(m-1)}+x_mz^{d-m} + \cdots + x_{d-n}z^n + r_{d-n+1}z^{n-1} + \cdots + r_d. \end{equation} Lastly, we set $g = d-m-n$. In the statements below, the implied constants depend on all parameters stated other than $\mc{H}$. \begin{theorem}\label{maincor} Fix $d$, $\vec\ell \in \mathbb{Z}^m$, and $\vec r \in \mathbb{Z}^n$ as above. Assume that $\ell_0 >0$, that \begin{equation} \gcd(\ell_0,\dots,\ell_{m-1},r_{d-n+1},\dots,r_d) = 1, \end{equation} and that $r_d \neq 0$ if $n>0$. Then as $\mc{H} \to \infty$ we have \begin{equation} \mc{N}(d,\vec\ell,\vec r,\mc{H}) = d\cdot V_g\cdot \mc{H}^{d(g+1)} + O\left(\mc{H}^{d(g+\frac{1}{2})}\log \mc{H}\right). \end{equation} \end{theorem} This generalizes the situation one faces when counting algebraic integers, whose minimal polynomials are monic ($m=1$, $n=0$, $\vec\ell = (1)$). Certain special cases are of particular interest, and we prove stronger power savings terms for them. \begin{corollary}\label{unitcor} Let $d \geq 2$, and let $N(\mc{O}^*_d,\mc{H})$ denote the number of units in the algebraic integers of height at most $\mc{H}$ and degree $d$ over $\mathbb{Q}$. Then as $\mc{H} \to \infty$ we have \begin{equation} N(\mc{O}^*_d,\mc{H}) = 2d\cdot V_{d-2}\cdot \mc{H}^{d(d-1)} + O\left(\mc{H}^{d(d-2)}\right). \end{equation} \end{corollary} \begin{corollary}\label{normcor} Let $\nu \neq 0$ be an integer, $d \geq 2$, and let $\mc{N}_{\Nm=\nu}(d,\mc{H})$ denote the number of algebraic integers with norm $\nu$, of height at most $\mc{H}$ and degree $d$ over $\mathbb{Q}$. Then as $\mc{H} \to \infty$ we have \begin{equation} \mc{N}_{\Nm=\nu}(d,\mc{H}) = d\cdot V_{d-2}\cdot \mc{H}^{d(d-1)} + O\left(\mc{H}^{d(d-2)}\right). 
\end{equation} \end{corollary} \begin{corollary}\label{tracecor} Let $\tau$ be an integer, $d \geq 2$, and let $\mc{N}_{\Tr=\tau}(d,\mc{H})$ denote the number of algebraic integers with trace $\tau$, of height at most $\mc{H}$ and degree $d$ over $\mathbb{Q}$. Then as $\mc{H} \to \infty$ we have \begin{equation} \mc{N}_{\Tr=\tau}(d,\mc{H}) = d\cdot V_{d-2}\cdot \mc{H}^{d(d-1)} + \left\{ \begin{array}{ll} O\left(\mc{H} \right), &\textup{if}~ d =2\vspace{7pt}\\ O\left(\mc{H}^{3} \log \mc{H} \right), &\textup{if}~ d =3\vspace{7pt}\\%~\textup{and}\\ O\left(\mc{H}^{d(d-2)}\right), &\textup{if}~ d \geq 4. \end{array} \right. \end{equation} \end{corollary} \begin{corollary}\label{normtracecor} Let $\nu \neq 0$ and $\tau$ be integers, $d \geq 3$, and let $\mc{N}_{\Nm=\nu,\Tr = \tau}(d,\mc{H})$ denote the number of algebraic integers with norm $\nu$, trace $\tau$, of height at most $\mc{H}$ and degree $d$ over $\mathbb{Q}$. Then as $\mc{H} \to \infty$ we have \begin{equation} \mc{N}_{\Nm=\nu,\Tr=\tau}(d,\mc{H}) = d\cdot V_{d-3}\cdot \mc{H}^{d(d-2)} + O(\mc{H}^{d(d-3)}). \end{equation} \end{corollary} \begin{remark}\normalfont In Corollaries \ref{normcor} through \ref{normtracecor}, the main term of the asymptotic doesn't depend on the specific coefficients being enforced. Thus these may be interpreted as results on the equidistribution of norms and traces.\end{remark} \begin{remark}\label{maninremark}\normalfont The type of counts found in this paper are related to Manin's conjecture, which addresses the asymptotic number of rational points of bounded height on Fano varieties. Counting points of degree $d$ and bounded height in $\QQbar$, or equivalently, on $\mathbb{P}^1$, can be transferred to a question of counting rational points of bounded height on the $d$-th symmetric product of $\mathbb{P}^1$, which is $\mathbb{P}^d$. This is what Masser and Vaaler implicitly do when they count algebraic numbers by counting their minimal polynomials (as does this paper; see the Methods subsection below). However, one needs to use a non-standard height on $\mathbb{P}^d$; Le Rudulier takes this approach explicitly \cite[Th\'eor\`eme 1.1]{lerudulier}, thereby re-proving and generalizing (the main term of) the result of Masser and Vaaler. It should be noted, though, that while the shape of the main term -- a constant times the appropriate power of the height -- follows from known results on Manin's conjecture, {\em explicitly} determining the constant in front relies ultimately on an archimedean volume calculation of Chern and Vaaler. Barroero's count of algebraic integers of degree $d$ corresponds to counting rational points on $\mathbb{P}^d$ that are integral with respect to the hyperplane at infinity. As noted in \cite[Remarque 5.3]{lerudulier}, the shape of the count's main term then follows from general results of Chambert-Loir and Tschinkel on counting integral points of bounded height on equivariant compactifications of affine spaces \cite[Theorem 3.5.6]{clt}. Our own units count corresponds to counting points on $\mathbb{P}^d$ integral with respect to {\em two} hyperplanes. Again, the shape of the main term -- a constant times the correct power of the height -- follows from general integral point counts for toric varieties \cite[Theorem 3.11.5]{thereisnoorderinwhichyoucanwritetoricandcltsuchthatthereisntaconfusingdoubleletter}. However, that constant is expressed as a product of local integrals and Galois-cohomological invariants. 
It is unclear to the authors of this paper whether the constant can be calculated explicitly without knowledge of the volumes of slices we compute. Regardless, the error terms obtained by using the general toric results are significantly weaker than those in this paper, and their dependence on $d$ cannot be made explicit. \end{remark} The second goal of this paper is to give explicit error terms, which we feel is especially justified in this context, beyond general principles of error-term morality. Namely, it's natural to ask questions about properties of ``random algebraic numbers'' (or random algebraic integers, random units, etc.). For example: ``What's the probability that a random element of $\QQbar$ generates a Galois extension of $\mathbb{Q}$?'' How to make sense of a question like this? There are models from other arithmetic contexts; for example, if we're asked ``What's the probability that a random positive integer is square-free?'' we know what to do: count the number of square-free integers from $1$ to $N$, divide that by $N$, and ask if that proportion has a limit as $N$ grows (Answer: Yes, $\frac{6}{\pi^2}$). Note that the easiest part is dividing by $N$, the number of elements in your finite box. In order to make sense of probabilistic statements in the context of $\QQbar$, one would like to first take a box of bounded height and degree (which will have only finitely many algebraic numbers by Northcott), determine the relevant proportion within that finite box, and then let the box size grow. But now the denominator in question is far from trivial; unlike counting the number of integers from $1$ to $N$, estimating how many algebraic numbers are in a height-degree box is a more delicate matter. In the context of $\QQbar$, where there are {\em two} natural parameters to increase (the height and the degree), the gold standard for a ``probabilistic'' result would be that it holds for any increasing set of height-degree boxes such that the minimum of the height and degree goes to infinity. To prove results that even approach this standard (e.g. one might require that the height of the boxes grows at least as fast as some function of the degree), one likely needs good estimates for how many numbers are in a height-degree box to begin with. Without an estimate that holds uniformly in both $\mc{H}$ and $d$, one would be justified in making statements about random elements in $\QQbar$ of fixed degree $d$, but not random elements of $\QQbar$ overall. Thus controlling the error terms in the theorems above is crucial. \begin{figure}[H] \begin{center}\includegraphics[width=\textwidth, keepaspectratio=true]{hdplot2.png}\end{center} \caption{Algebraic numbers of degree $d \leq 4$ and height $H \leq 1.5$. Each dot represents $d$ conjugate algebraic numbers.} \end{figure} To this end, in this paper we give explicit error bounds for the algebraic number counts of Masser and Vaaler, the algebraic integer counts of Barroero, and our own unit counts. Below, $p_d(T)$ is a polynomial defined in Section \ref{starsec} whose leading term is $V_{d-1} T^d$, so our result is consistent with (\refeq{barthm}). \begin{theorem}\label{exsum} Let $\QQbar_d$ denote the set of algebraic numbers of degree $d$ over $\mathbb{Q}$, let $\mc{O}_d$ denote the set of algebraic integers of degree $d$ over $\mathbb{Q}$, and let $\mc{O}^*_d$ denote the set of units of degree $d$ over $\mathbb{Q}$ in the ring of all algebraic integers.
For all $d \geq 3$ we have \begin{equation} \begin{array}{lll} \textup{(I\phantom{ii})}~ \left| N(\QQbar_d,\mc{H}) - \frac{d\cdot V_d}{2\zeta(d+1)}\mc{H}^{d(d+1)} \right| &\leq 3.37 \cdot (15.01)^{d^2}\cdot \mc{H}^{d^2}, &\textup{for}~\mc{H} \geq 1;\vspace{7pt}\\ \textup{(ii\phantom{i})}~ \left|N(\mc{O}_d,\mc{H}) - d p_d(\mc{H}^{d}) \right| &\leq 1.13 \cdot 4^d d^{d} 2^{d^2}\cdot \mc{H}^{d(d-1)}, &\textup{for}~\mc{H} \geq 1;~\textup{and}\vspace{7pt}\\ \textup{(iii)}~ \left| N(\mc{O}^*_d,\mc{H}) - 2d V_{d-2}\cdot \mc{H}^{d(d-1)}\right| &\leq 0.0000126\cdot d^3 4^d (15.01)^{d^2} &\hspace{-10pt} \cdot ~\mc{H}^{d(d-1)-1}, \vspace{7pt}\\ &&\textup{for}~ \mc{H} \geq d2^{d+1/d} . \end{array} \end{equation} \end{theorem} \subsection{Methods} The starting point of all our proofs is the relationship between the height of an algebraic number and the Mahler measure of its minimal polynomial. Recall that the Mahler measure $\mu(f)$ of a polynomial with complex coefficients \begin{equation} f(z) = w_0z^d + w_1z^{d-1} +\cdots + w_d = w_0(z-\alpha_1)\cdots(z-\alpha_d) \in \mathbb{C}[z], \end{equation} with $w_0 \not = 0$, is defined by \begin{equation} \mu(f) = |w_0| \prod_{i=1}^d \max\{1,|\alpha_i|\}, \end{equation} and $\mu(0)$ is defined to be zero. It's immediate that the Mahler measure is multiplicative: $\mu(f_1f_2)=\mu(f_1)\mu(f_2)$. Crucially for our purposes, if $f(z)$ is the minimal polynomial of an algebraic number $\alpha$, then we have (see for example \cite[Proposition 1.6.6]{bombierigubler}) \begin{equation} \mu(f) = H(\alpha)^d. \end{equation} Thus, in order to count degree $d$ algebraic numbers of height at most $\mc{H}$, we can instead count integer polynomials of Mahler measure at most $\mc{H}^d$. We identify a polynomial with its vector of coefficients, so that counting integer polynomials amounts to counting lattice points. To do this we employ techniques from the geometry of numbers, which make rigorous the idea that, for a reasonable subset of Euclidean space, the number of integer lattice points in the set should be approximated by its volume. So for example, the number of integer polynomials with degree at most $d$ and Mahler measure at most $T$ should be roughly the volume of the set of such \emph{real} polynomials $$\{f \in \mathbb{R}[z]_{\operatorname{deg }\leq d} ~\big|~ \mu(f) \leq T\} \subset \mathbb{R}^{d+1}.$$ Note that by multiplicativity of the Mahler measure, this set is the same as $T\mc{U}_d$, where \begin{equation} \mc{U}_d := \{f \in \mathbb{R}[z]_{\operatorname{deg }\leq d} ~\big|~ \mu(f) \leq 1\}. \end{equation} The set $\mc{U}_d$ will be our primary object of study. It is a closed, compact ``star body,'' i.e. a subset of Euclidean space closed under scaling by numbers in $[0,1]$. Chern and Vaaler \cite[Corollary 2]{chernvaaler} explicitly determined the volume of $\mc{U}_d$. In a rather heroic calculation, they showed that $V_d := \vol_{d+1}(\mc{U}_d)$ is given by the positive rational number in (\refeq{vddef})\footnote{Our $\mc{U}_d$ is the same as what would be denoted by $\mathscr{S}_{d+1}$ in the notation of \cite{chernvaaler}, and our $V_d$ matches their $V_{d+1}$. Our subscripts correspond to the degree of the polynomials being counted rather than the dimension of the space.}. Thus by geometry of numbers, and noting that $\vol(T\mc{U}_d) = T^{d+1} \cdot \vol(\mc{U}_d),$ one expects the number of integer polynomials of degree at most $d$ and Mahler measure at most $T$ to be approximately $T^{d+1} \cdot V_d$. Chern and Vaaler proved this is indeed the case.
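Both of these objects are easy to experiment with numerically. The following Python sketch, purely illustrative and assuming the \texttt{numpy} library, computes the Mahler measure of an integer polynomial from its (numerically computed) roots and evaluates the constant $V_d$ of (\refeq{vddef}) as an exact rational number.
\begin{verbatim}
import numpy as np
from fractions import Fraction

def mahler_measure(coeffs):
    # Mahler measure of f = coeffs[0]*z^d + ... + coeffs[d], coeffs[0] != 0:
    # |leading coefficient| times the product of max(1, |root|).
    roots = np.roots(coeffs)
    return abs(coeffs[0]) * float(np.prod(np.maximum(1.0, np.abs(roots))))

def chern_vaaler_V(d):
    # V_d = 2^(d+1) (d+1)^s prod_{j=1}^{s} (2j)^(d-2j) / (2j+1)^(d+1-2j),
    # with s = floor((d-1)/2), returned as an exact rational number.
    s = (d - 1) // 2
    V = Fraction(2 ** (d + 1)) * Fraction(d + 1) ** s
    for j in range(1, s + 1):
        V *= Fraction((2 * j) ** (d - 2 * j), (2 * j + 1) ** (d + 1 - 2 * j))
    return V

# x^2 - x - 1 has Mahler measure the golden ratio (about 1.618), which by
# the identity above equals H(alpha)^2 for either of its roots alpha.
print(mahler_measure([1, -1, -1]))
print(chern_vaaler_V(3))             # an exact positive rational number
\end{verbatim}
A floating-point roots computation of course only approximates $\mu(f)$, but it is ample for experiments of this kind.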
Masser and Vaaler then showed how to refine this count of all such polynomials to just minimal polynomials, which let them prove the algebraic number count in (\refeq{mvthm}). What if you only want to count algebraic integers? Again, the above approach suggests you should do that by counting their minimal polynomials. Algebraic integers are characterized by having \emph{monic} minimal polynomials. Thus one is naturally led to seek the volume of the ``monic slice'' of $T \mc{U}_d$ consisting of those real polynomials with leading coefficient 1. However, these slices are no longer dilations of each other, so their volumes aren't determined by knowing the volume of one such slice. Still, Chern and Vaaler were able to compute the volumes of monic slices of $T\mc{U}_d$; rather than a constant times a power of $T$, they are given by a polynomial in $T$, whose leading term is $V_{d-1} T^d$. Geometry of numbers can then be applied again to obtain the algebraic integer count in (\refeq{barthm}). In order to count units of degree $d$, or algebraic integers with given norm and/or trace, one needs to take higher-codimension slices. For example, the minimal polynomial of a unit will have leading coefficient 1 and constant coefficient $\pm1$. But one quickly discovers that these higher-dimensional slices have volumes that are, in general, no longer polynomial in $T$. Rather than trying to explicitly calculate these volumes, we depart from the methods of earlier works, and instead approximate the volumes of such slices. When we cut a dilate $T\mc{U}_d$ by a certain kind of linear space, then as $T$ grows the slices look more and more like a lower-dimensional unit star body; this will be explained in Section \ref{volsec}. This explains the appearance of the volume $V_d$ in all of our asymptotic counts. We also use a careful analysis of the boundary of $\mc{U}_d$ to show that the above convergence happens relatively fast; this makes our approximations precise enough to obtain algebraic number counts with good power-saving error terms. We state here our main result on counting polynomials. For non-negative integers $m$, $n$, and $d$ with $0 < m+n \leq d$, and integer vectors $\vec\ell \in \mathbb{Z}^m$ and $\vec r \in \mathbb{Z}^n$, let $\mc{M}(d,\vec\ell,\vec r,T)$ denote the number of polynomials $f$ of the form \begin{equation}\label{specpoly} f(z) = \ell_0 z^d + \cdots + \ell_{m-1}z^{d-(m-1)}+x_mz^{d-m} + \cdots + x_{d-n}z^n + r_{d-n+1}z^{n-1} + \cdots + r_d \end{equation} with Mahler measure at most $T$, where $x_m,\dots,x_{d-n}$ are integers. Let $g = d-m-n$. Combining our volume estimates with a counting principle of Davenport, we obtain the following. \begin{theorem}\label{mainthm} For all $0 <m+n \leq d$, $\vec\ell \in \mathbb{Z}^m$, and $\vec r \in \mathbb{Z}^n$, as $T \to \infty$ we have \begin{equation} \mc{M}(d,\vec\ell,\vec r,T) = V_g\cdot T^{g+1} + O(T^{g}). \end{equation} \end{theorem} \noindent Here the implied constant depends on $d, \vec\ell,$ and $\vec r$. Now we briefly discuss the methods used in the second half of the paper to prove our explicit results, and how these results fit in with the literature. Chern and Vaaler's \cite[Theorem 3]{chernvaaler}, which is the main ingredient in (\refeq{mvthm}), gives an asymptotic count of the number of integer polynomials of given degree $d$ and Mahler measure at most $T$.
Now we briefly discuss the methods used in the second half of the paper to prove our explicit results, and how these results fit in with the literature. Chern and Vaaler's \cite[Theorem 3]{chernvaaler}, which is the main ingredient in (\refeq{mvthm}), gives an asymptotic count of the number of integer polynomials of given degree $d$ and Mahler measure at most $T$. The error term in this result contains a full power savings -- order $T^d$ against a main term of order $T^{d+1}$ -- but the implied constant in the error term is not made explicit. They do produce an explicit error term of order $T^{d+1-1/d}$ in \cite[Theorem 5]{chernvaaler} using \cite[Theorem 4]{chernvaaler}, which is a quantitative statement on the continuity of the Mahler measure. Our Theorem \ref{genpolycount} below makes the constant in the error term of \cite[Theorem 3]{chernvaaler} explicit, using a careful study of the boundary of $\mc{U}_d$. We apply the classical Lipschitz counting principle in place of the Davenport principle; the latter is not very amenable to producing explicit bounds. Theorem \ref{moniccount} is the analogous result to Theorem \ref{genpolycount} for monic polynomials, and is obtained in a similar manner. However, the application of the Lipschitz principle is more delicate in this case. We also prove an explicit version of our Theorem \ref{mainthm} counting polynomials with specified coefficients (Theorem \ref{slicecount}). For this result we also apply \cite[Theorem 4]{chernvaaler}, and, in a manner reminiscent of Chern and Vaaler's application, this method yields an inferior power savings.
We now describe the organization of the paper. In Section \ref{starsec} we collect key facts about the unit star body $\mc{U}_d$, including a detailed discussion of its boundary. In Section \ref{countingsec} we describe the counting principles we use to estimate the difference between the number of lattice points in a set and the set's volume. In Section \ref{volsec} we estimate the volume of the sets in which we must count lattice points to prove Theorem \ref{mainthm}; this theorem is then proved in Section \ref{latticesec}. In Section \ref{finalcountsec} we transfer our counts for polynomials to counts for various kinds of algebraic numbers, thereby proving Theorem \ref{maincor} and Corollaries \ref{unitcor}-\ref{normtracecor}. This involves using a version of Hilbert's irreducibility theorem to account for reducible polynomials. The rest of the paper is devoted to obtaining explicit versions of these counts. In Section \ref{cpebsec} we prove the aforementioned explicit version of \cite[Theorem 3]{chernvaaler} on counting polynomials of given degree and bounded Mahler measure, and in Section \ref{monicsec} we do the same for the count of monic polynomials. Section \ref{slicessec} contains a version of the general Theorem \ref{mainthm} with an explicit error term, at the cost of weaker power savings. In Section \ref{sievingsec} we begin to convert our explicit counts of polynomials to explicit counts of minimal polynomials. The main piece of this is showing that the reducible polynomials are negligible. We follow the techniques used by Masser and Vaaler for this (which are sharper than the more general Hilbert irreducibility method described above), obtaining explicit bounds. In Section \ref{exthmssec} we prove our final explicit results on counting algebraic numbers, including explicit versions of Masser and Vaaler's result (\refeq{mvthm}), Barroero's result (\refeq{barthm}), and Corollaries \ref{unitcor} and \ref{normcor}. Finally, we include an appendix with some estimates for various expressions involving binomial coefficients which occur in our explicit error terms throughout the paper.
\subsection*{Acknowledgments} The authors would like to thank Antoine Chambert-Loir for useful correspondence related to Remark \ref{maninremark}, and Melanie Matchett Wood for useful comments on an early draft of this paper. \subsection{All polynomials}\label{allpolysec} Let $\mc{M}(d,T)$ denote the number of integer polynomials of degree \emph{exactly} $d$ and Mahler measure at most $T$, and let $\mc{M}^{red}(d,T)$ denote the number of such polynomials that are reducible. Recall that $\mc{M}(\ld d, T)$ denotes the number of integer polynomials of degree \emph{at most} $d$ and Mahler measure at most $T$. By (\refeq{coeffbound}), for all $d \geq 0$ and $T >0$ we have \begin{equation}\label{upper} \mc{M}(d,T) \leq \mc{M}(\ld d,T) \leq C_{0,0}(d)T^{d+1} \leq c_0 2^{d+1} P(d)T^{d+1}, \end{equation} where $c_0 = 3159/1024$, using Lemma \ref{Mest} from the appendix. \begin{proposition}\label{allred} We have \begin{align} \mc{M}^{red}(d,T) \leq \left\{ \begin{array}{ll} 1758\cdot T^2\log T, &\textup{if} ~d = 2,~T \geq 2,~\textup{and} \vspace{7pt}\\ 16c_0^2 4^d P(d-1)\cdot T^d, &\textup{if}~d\geq 3,~T \geq 1. \end{array} \right. \end{align} \end{proposition} \begin{proof} For a reducible polynomial $f$ of degree $d$ and Mahler measure at most $T$, there exist $1 \leq d_2 \leq d_1 \leq d-1$ such that $f=f_1 f_2$, where each $f_i$ is an integer polynomial with deg$(f_i) = d_i$. Of course we have $d=d_1 + d_2$. Let $k$ be the unique integer such that $2^{k-1}\leq \mu(f_1) < 2^k$. We have $1 \leq k \leq K$, where $K=\lfloor\frac{\text{log }T}{\text{log }2}\rfloor+1$, and $\mu(f_2) \leq 2^{1-k}T$. Given such a pair $(d_1, d_2)$, by (\refeq{upper}) there are at most $c_0 2^{d_1+1} P(d_1)2^{k(d_1+1)}$ choices of such an $f_1$, and at most $c_0 2^{d_2+1}P(d_2)(2^{1-k}T)^{d_2+1}$ choices for $f_2$. Assume first that $d_1>d_2$. We'll use below that $P(d_1)P(d_2)$ is always less than or equal to $P(d-1),$ by Lemma \ref{Pesus} in the appendix. Summing over all possible $k$ and applying (\refeq{thingy}), the number of pairs of polynomials is at most \begin{align} &\sum_{k=1}^K c_02^{d_1+1}P(d_1)c_0 2^{d_2+1}P(d_2)2^{k(d_1+1)} (2^{1-k}T)^{d_2+1}=4c_0^2 2^{d}P(d_1)P(d_2)(2T)^{d_2+1}\sum_{k=1}^K 2^{k(d_1-d_2)}\\ &\leq 4c_0^2 2^{d}P(d-1)(2T)^{d_2+1}\left[2 \cdot2^{K(d_1-d_2)}\right] \leq 8c_0^2 2^{d} P(d-1)(2T)^{d_1+1} \leq 16c_0^2 2^{d}2^{d_1}P(d-1)T^d. \end{align} If instead $d_1=d_2=\frac{d}{2}$, (so in particular $d$ is even), then the first line above is at most \begin{equation} 4c_0^2 2^d P(d-1)(2T)^{d_1+1}K. \end{equation} In the case $d=2$, note that for $T \geq 2$ we have $K \leq \frac{2}{\log(2)} \log T$, and so \begin{align} \mc{M}^{red}(2,T) &\leq 4c_0^2 2^2 P(1)(2T)^{1+1}K \leq 64c_0^2T^2 \frac{2}{\log(2)}\log T \\&= \frac{128c_0^2}{\log(2)} \cdot T^2 \log T \leq 1758 \cdot T^2 \log T. \end{align} Whenever $T \geq 1$ we have $K \leq 2T$, and thus for even $d \geq 4$, \begin{align} 4c_0^2 2^d P(d-1)(2T)^{d_1+1}K \leq 8c_0^2 2^d2^{d_1} P(d-1)T^{\frac{d}{2}+1}\cdot 2T\leq 16c_0^22^d2^{d_1}P(d-1)T^d, \end{align} so we have the same bound we had when we assumed $d_2 < d_1.$ Finally, for any $d \geq 3$, summing over the possible values of $d_1$ gives that \begin{align} \mc{M}^{red}(d,T) &\leq \sum_{d_1=\lceil \frac{d}{2} \rceil}^{d-1}16c_0^22^d2^{d_1}P(d-1)T^d \leq 16c_0^22^dP(d-1)T^d\sum_{d_1=1}^{d-1}2^{d_1} \\ &= 16c_0^22^dP(d-1)T^d (2^d-2)\leq 16c_0^2 4^d P(d-1)\cdot T^d. 
\end{align} \end{proof}
We follow the proof of \cite[Lemma 2]{masservaaler1} in counting primitive polynomials, but we'll keep track of implied constants. For $n=1, 2, \dots$, let $\mc{M}^{n}(\ld d,T)$ denote the number of \emph{nonzero} integer polynomials of degree at most $d$ and Mahler measure at most $T$, such that the greatest common divisor of the coefficients is $n$. We let $\mc{M}^{n}(d,T)$ denote the corresponding number of polynomials with degree \emph{exactly} $d$, so $\mc{M}^1(d,T)$ is the number of primitive polynomials of degree $d$ and Mahler measure at most $T$. Recall that $\kappa_0(d)$ is a function of $d$ appearing in Theorem \ref{genpolycount}.
\begin{theorem}\label{allsieve} For all $d \geq 2$ and $T \geq 1$ we have \begin{align} \left|\mc{M}^{1}(d,T) - \frac{V_d}{\zeta(d+1)}T^{d+1}\right| \leq \left(\frac{V_d}{d}+1\right)T + \big(C_{0,0}(d-1) + \zeta(d)\kappa_0(d)\big)T^d, \end{align} where $\zeta$ is the Riemann zeta-function. \end{theorem}
\begin{proof} Being careful to account for the zero polynomial, we have \begin{equation} \mc{M}(\ld d,T) -1= \sum_{1 \leq n \leq T}\mc{M}^{n}(\ld d,T) = \sum_{1 \leq n \leq T} \mc{M}^{1}\left(\ld d,T/n\right). \end{equation} By M\"{o}bius inversion (below we commit a sin of notation overloading and let $\mu$ denote the M\"{o}bius function), this tells us that \begin{equation} \mc{M}^{1}(\ld d,T)=\sum_{1 \leq n \leq T}\mu(n)\left[\mc{M}\left(\ld d,T/n\right) - 1\right]. \end{equation} Combining this with Theorem \ref{genpolycount} and (\refeq{upper}), we have \begin{align} &\left|\mc{M}^{1}(d,T) - V_dT^{d+1}\sum_{1 \leq n \leq T}\frac{\mu(n)}{n^{d+1}}\right| \\ &=\left|\mc{M}^{1}(d,T) - \mc{M}^{1}(\ld d,T) + \sum_{n=1}^T \mu(n)\big[\mc{M}(\ld d,T/n)-1\big] - V_dT^{d+1}\sum_{n=1}^T\frac{\mu(n)}{n^{d+1}}\right|\\ &\leq \mc{M}^1(\ld d-1,T) + \sum_{n=1}^T |\mu(n)| + \sum_{n=1}^T \left|\mc{M}(\ld d,T/n) - V_d (T/n)^{d+1}\right| \\ &\leq \mc{M}(\leq d-1,T) + T + \sum_{n=1}^T \kappa_0(d)(T/n)^d \leq C_{0,0}(d-1)T^d + T + \kappa_0(d)T^d \sum_{n=1}^T \frac{1}{n^d} \\ &\leq T + \big(C_{0,0}(d-1) + \zeta(d)\kappa_0(d)\big)T^d. \end{align} This in turn gives \begin{align} \left|\mc{M}^{1}(d,T) - \frac{V_d}{\zeta(d+1)}T^{d+1}\right| &\leq V_dT^{d+1}\sum_{n=T+1}^\infty n^{-(d+1)} + T + \big(C_{0,0}(d-1) + \zeta(d)\kappa_0(d)\big)T^d\\ &\leq \left(\frac{V_d}{d}+1\right)T + \big(C_{0,0}(d-1) + \zeta(d)\kappa_0(d)\big)T^d, \end{align} by applying the integral estimate \begin{equation} \sum_{n=T+1}^\infty n^{-(d+1)} \leq d^{-1}T^{-d}. \end{equation} This establishes the theorem. \end{proof}
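The sieve in this proof can be exercised numerically. In the sketch below (our own illustration; names are ad hoc) we enumerate the primitive coefficient vectors once, assign each multiple $n\vec w$ the Mahler measure $n\mu(\vec w)$, and check that the direct primitive count agrees exactly with the M\"{o}bius-inverted one. The enumeration box comes from the standard coefficient bound $|w_i| \leq \binom{d}{i}T$ for every integer polynomial of degree at most $d$ with $\mu \leq T$.
\begin{verbatim}
import numpy as np
from itertools import product
from math import comb, gcd, floor

def mahler(w):
    w = np.trim_zeros(np.asarray(w, dtype=float), trim='f')
    if w.size == 1:
        return abs(w[0])
    return abs(w[0]) * float(np.prod(np.maximum(1.0, np.abs(np.roots(w)))))

def moebius(n):
    out, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n is not squarefree
            out = -out
        p += 1
    return -out if n > 1 else out

d, T = 2, 5
boxes = [range(-comb(d, i) * T, comb(d, i) * T + 1) for i in range(d + 1)]

# Mahler measures of the primitive vectors with mu <= T; every nonzero
# integer polynomial is uniquely n * (primitive), with mu(n w) = n mu(w).
prim_mu = []
for v in product(*boxes):
    if any(v) and gcd(*(abs(c) for c in v)) == 1:
        mu = mahler(v)
        if mu <= T:
            prim_mu.append(mu)

def M(t):   # all integer polys of degree <= d, mu <= t (zero included)
    return 1 + sum(floor(t / mu) for mu in prim_mu if mu <= t)

direct = len(prim_mu)
sieved = sum(moebius(n) * (M(T / n) - 1) for n in range(1, T + 1))
assert direct == sieved
\end{verbatim}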
\subsection{Monic polynomials}
Next, let $\mc{M}_1(d,T)$ denote the number of monic integer polynomials of degree $d$ and Mahler measure at most $T$, and let $\mc{M}_1^{red}(d,T)$ denote the number of such polynomials that are reducible. Using (\ref{coeffbound}), we have for all $d \geq 0$ and $T>0$ that \begin{equation}\label{zz} \mc{M}_1(d,T) \leq C_{1,0}(d)T^d \leq c_1 2^d P(d) T^d, \end{equation} where $c_1 = \frac{1053}{512}$, from Lemma \ref{Mest} in the appendix. We'll assume $d \geq 2$. In estimating the number of reducible monic polynomials, we follow the pattern of the proof of Proposition \ref{allred}, noting that if a \emph{monic} polynomial is reducible, its factors can be chosen to be monic.
Using the same notation as in that proof, we have that the number of pairs of monic polynomials of degree $d_1$ and $d_2$, with $d_1 > d_2$, is at most \begin{align} \sum_{k=1}^K c_12^{d_1}P(d_1)c_1 2^{d_2}P(d_2)2^{kd_1} (2^{1-k}T)^{d_2} &=c_1^2 2^{d}P(d_1)P(d_2)(2T)^{d_2}\sum_{k=1}^K 2^{k(d_1-d_2)}\\ &\leq 2c_1^2 2^d 2^{d_1} P(d-1)T^{d-1}. \end{align} Noting that \begin{equation} \frac{16c_1^2}{\log 2} < 98, \end{equation} we continue almost exactly as in Proposition \ref{allred} and obtain the following.
\begin{proposition}\label{monicred} We have \begin{align} \mc{M}_1^{red}(d,T) \leq \left\{ \begin{array}{ll} 98\cdot T\log T, &\textup{if} ~d = 2,~T \geq 2,~\textup{and} \vspace{7pt}\\ 2c_1^2 4^d P(d-1)\cdot T^{d-1}, &\textup{if}~d\geq 3,~T \geq 1. \end{array} \right. \end{align} \end{proposition}
\subsection{Monic polynomials with given final coefficient}
Next we want to bound the number of reducible, monic, integer polynomials with fixed constant coefficient. For $r$ a nonzero integer, let $\mc{M}^{red}(d,(1),(r),T)$ denote the number of reducible monic polynomials with constant coefficient $r$, degree $d$, and Mahler measure at most $T$. Using (\ref{coeffbound}), we have for all $d \geq 0$ and $T>0$ that \begin{equation}\label{zz2} \mc{M}(d,(1),(r),T) \leq C_{1,1}(d)T^{d-1} \leq c_2 2^{d-1} P(d) T^{d-1}, \end{equation} where $c_2 = \frac{351}{256}$, from Lemma \ref{Mest} in the appendix. Let $\omega(r)$ denote the number of positive divisors of $r$. We'll assume $d > 2$; if $d=2$, we easily have the constant bound $\mc{M}^{red}(d,(1),(r),T) \leq \omega(r)+1$. For a polynomial $f$ counted by $\mc{M}^{red}(d,(1),(r),T)$, there exist $1 \leq d_2 \leq d_1 \leq d-1$ such that $f=f_1 f_2$, where each $f_i$ is an integer polynomial with deg$(f_i) = d_i$, and of course the constant coefficient of $f$ is the product of those of $f_1$ and $f_2$. Define $k$ as in the previous two cases. Given such a pair $(d_1,d_2)$, summing over the $2 \omega(r)$ possibilities for the final coefficient of $f_1$, there are at most $2\omega(r)c_22^{d_1-1}P(d_1)2^{k(d_1-1)}$ choices of such an $f_1$, and then at most $c_22^{d_2-1}P(d_2)(2^{1-k}T)^{d_2-1}$ choices for $f_2$. The rest proceeds essentially as before, and we find that:
\begin{proposition}\label{normsieveprop} For $T \geq 1$, we have \begin{align} \mc{M}^{red}(d,(1),(r),T) \leq \left\{ \begin{array}{ll} \omega(r)+1, &\textup{if} ~d = 2\vspace{7pt}\\ \frac{1}{2} \omega(r) c_2^2 4^d P(d-1)\cdot T^{d-2}, &\textup{if}~d\geq 3. \end{array} \right. \end{align} \end{proposition}
\subsection{Monic polynomials with a given second coefficient}
For our next case, we want to bound the number of reducible, monic, integer polynomials with a given second leading coefficient. Let $\mc{M}^{red}(d,(1,t),(),T)$ denote the number of reducible monic polynomials of degree $d\geq 3$ (we'll treat $d=2$ separately at the end) with integer coefficients, second leading coefficient equal to $t$, and Mahler measure at most $T$.
\begin{proposition}\label{tracesieveprop} For all $t \in \mathbb{Z}$ we have \begin{align} \mc{M}^{red}(d,(1,t),(),T) \leq \left\{ \begin{array}{ll} \frac{1}{2}\displaystyle{\sqrt{t^2+4T}+1}, &\textup{if}~d=2,~T \geq 1;\vspace{7pt}\\ \displaystyle{\frac{96}{\log 2}\cdot T\log T}, &\textup{if}~d=3,~T \geq 2;~\textup{and}\vspace{7pt}\\ \displaystyle{d 2^{2d-1} P(d-1) \cdot T^{d-2}}, &\textup{if}~d\geq 4,~T \geq 1. \end{array} \right.
\end{align} \end{proposition}
\begin{proof} As before, we write such a polynomial as $f=f_1f_2$, with \begin{equation} f_1(z) = z^{d_1} + x_1z^{d_1-1} + \cdots + x_{d_1},~\textup{and}~ f_2(z) = z^{d_2} + y_1z^{d_2-1} + \cdots + y_{d_2}. \end{equation} Also as before, we enforce $1 \leq d_2 \leq d_1 \leq d-1$ to avoid double-counting, and we define $k$ as in the previous three cases. For $1 \leq i \leq d_1$ and $1 \leq j \leq d_2$, we have \begin{equation}\label{cobds} |x_i| \leq {d_1 \choose i} 2^k,~\textup{and}~|y_j| \leq {d_2 \choose j} 2^{1-k}T. \end{equation} We also, of course, have \begin{equation}\label{tracesum} x_1 + y_1 = t. \end{equation} First assume $d_1 > d_2 + 1.$ Observe that the number of integer lattice points $(x_1,y_1)$ in $[-M_1,M_1] \times [-M_2,M_2]$ such that $x_1+y_1=t$ is at most $2 \min\{M_1,M_2\}+1$. So the number of $(x_1,\dots,x_{d_1},y_1,\dots,y_{d_2})$ satisfying (\refeq{cobds}) and (\refeq{tracesum}) is at most \begin{align}\label{tracebound} &\left(2\min\{d_12^k, d_2 2^{1-k}T\}+1\right) \prod_{j=2}^{d_1}\left[2{d_1\choose j}2^k + 1\right] \cdot \prod_{j=2}^{d_2}\left[2{d_2 \choose j}2^{1-k}T+1\right]\\ &\leq\left(2\min\{d_12^k, d_2 2^{1-k}T\}+1\right)\cdot C_{2,0}(d_1) 2^{k(d_1-1)} \cdot C_{2,0}(d_2) (2^{1-k}T)^{d_2-1}\\ &\leq \left(2d\cdot 2^{1-k}T\right) (2T)^{d_2-1} 2^{k(d_1-d_2)} \cdot 2^{d_1-1} P(d_1) \cdot 2^{d_2-1} P(d_2)\\ &\leq d 2^{d-1} P(d-1)(2T)^{d_2} 2^{k(d_1-d_2-1)}, \end{align} using Lemma \ref{Mest}. Summing over all the possibilities $1 \leq k \leq K$, the number of possible pairs $f_1$ and $f_2$ of degrees $d_1$ and $d_2$, respectively, is at most \begin{align} d2^{d-1}P(d-1)(2T)^{d_2} \sum_{k=1}^K 2^{(d_1-d_2-1)k} &\leq d 2^{d-1} 2^{d_2} P(d-1)T^{d_2} \left[2 \cdot 2^{K(d_1-d_2-1)}\right]\\ &\leq d2^{d-1} 2^{d_1} P(d-1)T^{d-2}.\label{botz} \end{align} Now, if $d_1 = d_2 = \frac{d}{2}$ (in this case $d$ must be even), then the geometric sum above becomes $\sum_{k=1}^K{2^{-k}} \leq 1$. So for $d \geq 4$ again we obtain the estimate (\refeq{botz}) we achieved assuming $d_1 > d_2+1$. If $d_1 = d_2 + 1$ (so $d$ is odd), then the number of possible pairs is at most $d 2^{d-1} P(d-1) (2T)^{d_2} K,$ which does not exceed (\refeq{botz}) for $d \geq 5$, and for $d = 3$, $T \geq 2$ is at most \begin{equation} 3 \cdot 2^{3-1} P(2) (2T)^1 \frac{2 \log T}{\log 2} = \frac{96}{\log 2} \cdot T \log T, \end{equation} which gives us the $d=3$ case of the proposition. Finally, for $d \geq 4$ we sum over the at most $d/2$ possibilities for $(d_1,d_2)$, yielding \begin{align} \mc{M}^{red}(d,(1,t),(),T) &\leq d 2^{2d-1} P(d-1) T^{d-2}. \end{align}
For the case $d=2$, we'll see that the error term is on the order of $\sqrt{T}$. Note that we are simply counting integers $c$ such that the polynomial \begin{equation} f(z) = (z^2+tz+c) = (z+x_1)(z+y_1) \end{equation} has Mahler measure at most $T$. Since we know $|c| \leq T$, it suffices to control the size of $\{x_1 \in \mathbb{Z} \ | \ |x_1(t-x_1)| \leq T\}$, which is itself bounded by the size of $\{x_1 \in \mathbb{Z} \ | \ x_1^2 - tx_1 \leq T\}$. By the quadratic formula, that last set is simply $\{x_1 \in \mathbb{Z} \ | \ \frac{t-\sqrt{t^2+4T}}{2} \leq x_1 \leq \frac{t+\sqrt{t^2+4T}}{2}\}$, which has size at most $\sqrt{t^2+4T} + 1$. To better bound the number of $c$ of the form $x_1(t-x_1)$, note that such a $c$ can be written in this form for exactly two values of $x_1$, except for at most one value of $c$ for which $x_1$ is unique (this occurs when $t$ is even).
So overall, the number of such $c$ with $|c| \leq T$ is at most $\frac{1}{2}\sqrt{t^2+4T}+1$. \end{proof}
\subsection{Monic polynomials with given second and final coefficient}
For our final case, we want to bound the number of monic, reducible polynomials with a given second leading coefficient $t \in \mathbb{Z}$ and given constant coefficient $0 \neq r \in \mathbb{Z}$. We can clearly assume that $d\geq 3$ since we're imposing three coefficient conditions. We write $\mc{M}^{red}(d,(1,t),(r),T)$ for the number of reducible monic polynomials of degree $d$ with integer coefficients, second leading coefficient equal to $t$, and constant coefficient equal to $r$. We'll show this is $O(T^{d-3})$ in all cases. While we don't write an explicit bound for the error term, it should be clear from our proof that this is possible.
\begin{proposition}\label{ntsieveprop} For all $d \geq 3$, $t \in \mathbb{Z}$, and $r \in \mathbb{Z} \setminus \{0\}$, we have \begin{equation} \mc{M}^{red}(d,(1,t),(r),T) = O\left(T^{d-3}\right). \end{equation} \end{proposition}
\begin{proof} As before, we write such a polynomial as $f=f_1f_2$, with \begin{equation} f_1(z) = z^{d_1} + x_1z^{d_1-1} + \cdots + x_{d_1},~\textup{and}~ f_2(z) = z^{d_2} + y_1z^{d_2-1} + \cdots + y_{d_2}. \end{equation} We always enforce $1 \leq d_2 \leq d_1 \leq d-1$ to avoid double-counting. We'll consider the count in several different cases. First, if $d_2=1$, then $f_2 = z + y_{d_2}$, so we must have $y_{d_2} | r$ and $y_{d_2}+x_1 = t$. Thus there are only $2\omega(r)$ possible choices of $f_2$; each choice will in turn determine $x_{d_1}$ and $x_1$, so we have $O(T^{d_1-2})=O(T^{d-3})$ choices of $f_1$ altogether, by Theorem \ref{mainthm}. Note that this completely covers the case $d=3$.
Now assume $d_2 \geq 2$, so $d\geq 4$. There are again only $2\omega(r)$ possible choices of $y_{d_2}$, and each one will determine what $x_{d_1}$ is (they must multiply to give $r$). Fix a choice of $y_{d_2}$ for now. Assume first that $d_1 > d_2+1.$ Again we take $k$ between 1 and $K = \left\lfloor \frac{\log T}{\log 2}\right\rfloor+1,$ and assume that $2^{k-1} \leq \mu(f_1) < 2^k$, so $\mu(f_2) \leq 2^{1-k}T$. Almost exactly as in (\refeq{tracebound}), we get that the number of $(x_1,\dots,x_{d_1-1},y_1,\dots,y_{d_2-1})$ contributing to $\mc{M}^{red}(d,(1,t),(r),T)$ is at most \begin{align} &\left(2\min\{d_12^k, d_2 2^{1-k}T\}+1\right)\cdot\prod_{i=2}^{d_1-1}\left[2{d_1\choose i}2^k+1\right]\cdot \prod_{j=2}^{d_2-1}\left[2{d_2 \choose j}(2^{1-k}T)+1\right]\\ &\leq \left(2d\cdot 2^{1-k}T\right)\cdot 2^{k(d_1-2)} C_{2,1}(d_1) \cdot (2^{1-k}T)^{d_2-2} C_{2,1}(d_2)\\ &= d 2^{d_2}C_{2,1}(d_1)C_{2,1}(d_2) T^{d_2-1} 2^{(d_1-d_2-1)k}\\ &\leq \frac{1}{64}d 2^d2^{d_2}P(d-1) T^{d_2-1} 2^{(d_1-d_2-1)k}, \end{align} using Lemmas \ref{Mest} and \ref{Pesus}. Summing over all the possibilities $1 \leq k \leq K$, the number of possible pairs $f_1$ and $f_2$ of degrees $d_1$ and $d_2$, respectively, is at most \begin{align} \frac{1}{64}d 2^d2^{d_2}P(d-1) T^{d_2-1} \sum_{k=1}^K2^{(d_1-d_2-1)k}\label{topsy2} &\leq \frac{1}{32}d 2^d2^{d_1}P(d-1)T^{d_1-2} \leq \frac{1}{32}d 2^d2^{d_1}P(d-1)T^{d-3}, \end{align} which is certainly $O(T^{d-3})$. Next, if $d_1 = d_2 = \frac{d}{2}$ (in this case $d$ must be even), then the expression in (\refeq{topsy2}), which contains a partial geometric sum that's bounded by 1, is at most \begin{equation} \frac{1}{64}d 2^d2^{d_2}P(d-1) T^{\frac{d}{2}-1}, \end{equation} which is certainly $O(T^{d-3})$ since $d\geq 4$.
Lastly, if $d_1=d_2+1$, (so $d \geq 5$), then $d_2 \leq d-3$, and (using $K \leq 2T$) the expression in (\refeq{topsy2}) is at most \begin{align} \frac{1}{64}d 2^d2^{d_2}P(d-1)T^{d_2-1} K \leq \frac{1}{32}d 2^d2^{d_2}P(d-1)T^{d_2} \leq \frac{1}{32} d 2^d2^{d_2}P(d-1)T^{d-3}, \end{align} which is $O(T^{d-3})$. Finally, we sum over the $2\omega(r)$ possibilities for $y_{d_2}$ and the at most $d/2$ possibilities for $(d_1,d_2)$ and obtain overall that $\mc{M}^{red}(d,(1,t),(r),T) = O(T^{d-3}).$ \end{proof} \subsection{Volumes} As mentioned in the introduction, the exact volume of $\mc{U}_d$ was determined by Chern and Vaaler \cite[Corollary 2]{chernvaaler}: \begin{equation} V_d := \vol_{d+1}(\mc{U}_d) = 2^{d+1}(d+1)^s \prod_{j=1}^s \frac{(2j)^{d-2j}}{(2j+1)^{d+1-2j}}, \end{equation} where $s = \lfloor(d-1)/2\rfloor.$ We record some numerical information about the volume of $\mc{U}_d$. We note that a result like the one below would follow quite easily from the asymptotic formula for $V_d$ given in \cite[(1.31)]{chernvaaler}. However, this formula was given without proof and appears to contain an error. We settle for a simpler result. \begin{lemma}\label{volmax} We have \begin{align} V_d \leq V_{15} &= \frac{2658455991569831745807614120560689152}{13904872587870848957579157123046875}\\ &= \frac{2^{121}}{3^{20}\cdot 5^9 \cdot 7^9 \cdot 11^6 \cdot 13^4}\approx 191.1888 \end{align} for all $d \geq 0$, and \begin{equation} \lim_{d\to \infty}V_d = 0. \end{equation} \end{lemma} \begin{proof} Note using Stirling's estimates (see (\refeq{stir}) in the appendix) that for any positive integer $s$, we have \begin{align} \prod_{j=1}^s \left\{ \frac{2j}{2j+1}\right\} &= \frac{2^s s!}{(2s+1)!/(2^s s!)} = \frac{4^s s!^2}{(2s+1)!}\\ &\leq \frac{4^s(e^{1-s}s^{s+1/2})^2}{\sqrt{2\pi}e^{-2s-1}(2s+1)^{2s+3/2}} \leq \frac{4^s(e^{2-2s}s^{2s+1})}{\sqrt{2\pi}e^{-2s-1}(2s)^{2s+3/2}}\\ &\leq \frac{e^3 4^s s^{2s+1}}{\sqrt{2\pi} 4^s 2^{3/2} s^{2s+1} \sqrt{s}} \leq \frac{e^3}{4\sqrt{\pi s}}. \end{align} Suppose that $d$ is odd, so we may take $s = \left \lfloor \frac{d-1}{2}\right \rfloor = \left \lfloor \frac{(d+1)-1}{2}\right \rfloor.$ Then we have \begin{align} \frac{V_{d+1}}{V_d} &= \frac{2^{d+2}(d+2)^s}{2^{d+1}(d+1)^s}\prod_{j=1}^s\left\{\frac{(2j)^{d+1-2j}}{(2j)^{d-2j}}\right\} \prod_{j=1}^s\left\{\frac{(2j+1)^{d+1-2j}}{(2j+1)^{d+2-2j}}\right\}\\ &= 2\left(\frac{d+2}{d+1}\right)^s \prod_{j=1}^s \left\{ \frac{2j}{2j+1}\right\} \leq \left(\frac{d+2}{d+1}\right)^s \cdot \frac{e^3}{2\sqrt{\pi s}}.\\ \end{align} If $d$ is even and $s = \left \lfloor \frac{d-1}{2}\right\rfloor = \frac{d}{2}-1$, then $\left \lfloor \frac{(d+1)-1}{2}\right\rfloor = s+1$, and then we have \begin{align} \frac{V_{d+1}}{V_d}&=\frac{2^{d+2}(d+2)^{s+1}}{2^{d+1}(d+1)^s}\cdot \frac{d}{(d+1)^2}\prod_{j=1}^s\left\{\frac{(2j)^{d+1-2j}}{(2j)^{d-2j}}\right\} \prod_{j=1}^s\left\{\frac{(2j+1)^{d+1-2j}}{(2j+1)^{d+2-2j}}\right\}\\ &= 2\frac{(d+2)^{s}}{(d+1)^{s}}\cdot \frac{d^2+2d}{d^2+2d+1}\cdot \prod_{j=1}^{s}\left\{ \frac{2j}{2j+1}\right\} \leq \left(\frac{d+2}{d+1}\right)^{s} \cdot \frac{e^3}{2\sqrt{\pi s}}.\\ \end{align} In either case, the ratio of successive terms tends to zero, so in fact $V_d$ decays to zero faster than exponentially, proving the second claim of our lemma. For the first claim, it suffices to compute enough values of $V_d$. We see the maximum is attained at $d=15$, as advertised. 
\end{proof}
For any $T \geq 0$, by (\refeq{dil}) we have that \begin{equation} \vol_{d+1}\left(\{\vec w \in \mathbb{R}^{d+1} ~\big|~ \mu(\vec w) \leq T\}\right)=\vol_{d+1}(T\mc{U}_d) = V_d \cdot T^{d+1}. \end{equation} Chern and Vaaler (see \cite[equation (1.16)]{chernvaaler}, corrected as in \cite[footnote on p. 38]{barroero14}) also computed the volume of the ``monic slice'' \begin{align} \label{monicdef} \mc{W}_{d,T} &:= \{(w_0,\dots,w_d) \in T\mc{U}_d ~\big|~ w_0 = 1\}. \end{align} They showed: \begin{align} \vol_d\left(\mc{W}_{d,T}\right) = p_d(T) &:= \mc{C}_d2^{-s}\{s!\}^{-1} \sum_{m=0}^s (-1)^m(d-2m)^s{s \choose m}T^{d-2m}, \label{fd} \end{align} where again \begin{equation} s = \left\lfloor\frac{d-1}{2}\right\rfloor,~\textup{and}~\mc{C}_d = 2^d \prod_{j=1}^s \left(\frac{2j}{2j+1}\right)^{d-2j}. \end{equation} Note that, since $p_d(T)$ is a polynomial in $T$, we automatically have (carefully inspecting the leading term): \begin{equation} \vol_d\left(\mc{W}_{d,T}\right) = V_{d-1} \cdot T^d + O(T^{d-1}). \end{equation} For other slices besides the monic one, we will have to work harder (in Section \ref{volsec}) to obtain such power savings. Along the way, it will become clear why the leading coefficient takes the form it does.
\subsection{Semialgebraicity}
Next we establish a qualitative result we will need in proving Theorem \ref{mainthm}. A (real) \emph{semialgebraic set} is a subset of Euclidean space which is cut out by finitely many polynomial equations and/or inequalities, or a finite union of such subsets. Recall that semialgebraic sets are closed under finite unions and intersections, and they are closed under projection by the Tarski-Seidenberg theorem \cite[Theorem 1.5]{bierstonemilman}.
\begin{lemma}\label{semialglemma} The set $\mc{U}_d\subset \mathbb{R}^{d+1}$ is semialgebraic. \end{lemma}
\begin{proof} Our proof is similar to that of \cite[Lemma 4.1]{barroero14}. For $j = 0, \dots, d$, we wish to define a semialgebraic set $S_j \subset \mathbb{R}^{d+1}$ corresponding to degree $j$ polynomials in $\mc{U}_d$. We start by constructing auxiliary subsets of $\mathbb{R}^{d+1} \times \mathbb{C}^j$ corresponding to the polynomials' coefficients and roots, where $\mathbb{C}$ is identified with $\mathbb{R}^2$ in the obvious way. We define \begin{align} S_j^0 = \{(0, \dots,0,w_{d-j},\dots, w_d,\alpha_1,\dots,\alpha_j) \in \mathbb{R}^{d+1}\times \mathbb{C}^j ~\big|~ w_{d-j} \not = 0,~\textup{and}&\\ w_{d-j}z^j +w_{d-j+1}z^{j-1} + \cdots + w_d = w_{d-j}(z-\alpha_1) \cdots(z-\alpha_j)& \}, \end{align} where the equalities defining the set are given by equating the real part of each elementary symmetric function in the roots $\alpha_1,\dots,\alpha_j $ with the corresponding coefficient $w_i$, and setting the imaginary part to zero. To enforce $\mu((0, \dots,0,w_{d-j},\dots, w_d)) \leq 1$, we define $S_j^1$ to comprise those elements of $S_j^0$ such that all products of subsets of $\{\alpha_1,\dots,\alpha_j\}$ are less than or equal to $1/|w_{d-j}|$ in absolute value. Finally, we let $S_j$ be the projection of $S_j^1$ onto $\mathbb{R}^{d+1}$. Now simply note that \begin{equation} \mc{U}_d = \{0\} \cup \bigcup_{j=0}^d S_j. \end{equation} \end{proof}
\begin{remark}\normalfont Note that for any $T>0$ the dilation $T\mc{U}_d$ is also semialgebraic, and is defined by the same number of polynomials (and of the same degrees) as is $\mc{U}_d$. \end{remark}
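Returning to the volume formulas: both $V_d$ and $p_d(T)$ are easy to evaluate in exact rational arithmetic, which gives an independent check on the numerical claims above. The sketch below is ours and purely illustrative; it computes $V_d$, confirms that the maximum over $1 \leq d \leq 40$ is attained at $V_{15} \approx 191.1888$, and verifies that the leading ($m=0$) term of $p_d(T)$ is $V_{d-1}T^d$.
\begin{verbatim}
from fractions import Fraction
from math import factorial

def V(d):
    """Exact Chern-Vaaler volume V_d of the unit star body U_d."""
    s = (d - 1) // 2
    out = Fraction(2) ** (d + 1) * Fraction(d + 1) ** s
    for j in range(1, s + 1):
        out *= Fraction(2 * j) ** (d - 2 * j)
        out /= Fraction(2 * j + 1) ** (d + 1 - 2 * j)
    return out

def p_leading(d):
    """Leading coefficient of p_d(T), the volume of the monic slice."""
    s = (d - 1) // 2
    C = Fraction(2) ** d
    for j in range(1, s + 1):
        C *= Fraction(2 * j, 2 * j + 1) ** (d - 2 * j)
    return C * Fraction(d ** s, 2 ** s * factorial(s))   # the m = 0 term

vols = [V(d) for d in range(1, 41)]
assert max(vols) == V(15)          # maximum attained at d = 15
print(float(V(15)))                # 191.1888..., as in Lemma (volmax)
assert all(p_leading(d) == V(d - 1) for d in range(2, 25))
\end{verbatim}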
\subsection{Boundary parametrizations}\label{paramsec}
Next we describe the parametrization of the boundary of $\mc{U}_d$, which consists of vectors corresponding to polynomials with Mahler measure exactly 1. The simple idea behind the parametrization is that such a polynomial is the product of a \emph{monic} polynomial with all its roots inside (or on) the unit circle, and a polynomial with constant coefficient $\pm 1$ and all its roots outside (or on) the unit circle. Recall that $\mc{U}_d$ is a compact, symmetric star body in $\mathbb{R}^{d+1}$. The parametrization is described in \cite[Section 10]{chernvaaler}. We briefly summarize the key points here.
The boundary $\partial \mc{U}_d$ is the union of $2d+2$ ``patches'' $\mc{P}_{k,d}^{\varepsilon}$, $k = 0,\dots,d$, $\varepsilon = \pm 1$. The patch $\mc{P}_{k,d}^{\varepsilon}$ is the image of a certain compact set $\mc{J}_{k,d}^{\varepsilon}$ under the map \begin{equation} b_{k,d}^{\varepsilon}:\mathbb{R}^k \times \mathbb{R}^{d-k} \to \mathbb{R}^{d+1}, \end{equation} defined by \begin{equation}\label{bdef} b_{k,d}^{\varepsilon}\big((x_1,\dots,x_k),(y_0,\dots,y_{d-k-1})\big) = B_{k,d}\big((1,x_1,\dots,x_k),(y_0,\dots,y_{d-k-1},\varepsilon)\big), \end{equation} \begin{equation} B_{k,d}\big((x_0,x_1,\dots,x_k),(y_0,\dots,y_{d-k})\big) = (w_0,\dots,w_d), \end{equation} with \begin{equation}\label{ws} w_i = \sum_{\substack{0 \leq l \leq k,~ 0 \leq m \leq d-k\\ l+m=i}} x_ly_m, \quad\quad i=0,\dots,d. \end{equation} Note that this simply corresponds to the polynomial factorization \begin{align} w_0z^d + \cdots + w_d = (x_0z^k + \cdots+x_k)\cdot (y_0z^{d-k} + \cdots +y_{d-k}). \end{align} The sets $\mc{J}_{k,d}^{\varepsilon}$ are given by \begin{equation} \mc{J}_{k,d}^{\varepsilon} = J_k \times K_{d-k}^{\varepsilon} \subseteq \mathbb{R}^k \times \mathbb{R}^{d-k}, \end{equation} where \begin{align} J_k = \{\vec x \in \mathbb{R}^k ~\big|&~ \mu(1,\vec x) = 1\},~\textup{and}~\label{Jdef}\\K_{d-k}^{\varepsilon} = \{\vec y \in \mathbb{R}^{d-k} ~\big|&~ \mu(\vec y,\varepsilon) =1\}. \end{align}
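This construction is easy to sanity-check numerically: convolving a monic real polynomial with all roots in the closed unit disk ($\mu=1$) against a real polynomial with constant coefficient $\pm 1$ and all roots outside the disk (also $\mu=1$) must land on $\partial\mc{U}_d$. A small sketch of ours, using real roots for simplicity:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def mahler(w):
    return abs(w[0]) * np.prod(np.maximum(1.0, np.abs(np.roots(w))))

k, d = 2, 5
# Monic factor (1, x_1, ..., x_k) with all roots inside the unit disk.
g1 = np.poly(rng.uniform(-1, 1, size=k))
# Factor (y_0, ..., y_{d-k-1}, eps) with all roots outside the unit disk,
# rescaled so its constant coefficient eps is +-1; then mu = 1 as well.
p = np.poly(rng.uniform(1.5, 3.0, size=d - k) * rng.choice([-1, 1], d - k))
g2 = p / abs(p[-1])
w = np.convolve(g1, g2)   # the coefficient formula for (w_0, ..., w_d)
print(mahler(w))          # ~1.0: w lies on the boundary of U_d
\end{verbatim}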
It will also be useful in Section \ref{monicsec} to have a parametrization of $\partial \mc{W}_{d,T}$, the boundary of a monic slice (see (\refeq{monicdef})), along the lines of that given for $\partial\mc{U}_d$ above. Consider a monic polynomial \begin{equation} f(z) = z^d + w_1z^{d-1} + \cdots + w_d \in \mathbb{R}[z], \end{equation} having Mahler measure equal to $T > 0$ and roots $\alpha_1,\dots,\alpha_d \in \mathbb{C}$. We note that such a polynomial can be factored as $f(z) = g_1(z)g_2(z)$, where $g_1$ and $g_2 \in \mathbb{R}[z]$ are monic, $\mu(g_1) = 1$ (forcing $\mu(g_2) = T$), the constant coefficient of $g_2$ is $\pm T$, and where $\deg(g_1) = k \in \{0,\dots,d-1\}$. To do this, we simply let \begin{equation} g_1(z) = \prod_{|\alpha_i| \leq 1}(z-\alpha_i), ~\textup{and}~ g_2(z) = \prod_{|\alpha_i| > 1}(z-\alpha_i). \end{equation} It is easy to check that $g_1$ and $g_2$ have the desired properties. For $k=0,\dots,d-1$, we let $J_k$ be as in (\refeq{Jdef}), and let \begin{align} Y_{d-k}^{\varepsilon T} &= \{\vec y \in \mathbb{R}^{d-k-1}~\big|~\mu(1,\vec y,\varepsilon T) = T\},~\textup{and}\\ \mc{L}_{k,d}^{\varepsilon T} &= J_k \times Y_{d-k}^{\varepsilon T} \subseteq \mathbb{R}^k \times \mathbb{R}^{d-k-1}, \end{align} for each $k = 0,\dots,d-1$, $\varepsilon = \pm 1$.
We also define \begin{equation}\label{mpar1} \beta_{k,d}^{\varepsilon T}\big((x_1,\dots,x_k),(y_1,\dots,y_{d-k-1})\big) = B_{k,d}\big((1,x_1,\dots,x_k),(1,y_1,\dots,y_{d-k-1},\varepsilon T)\big), \end{equation} similarly to (\refeq{bdef}). We have that $\partial \mc{W}_{d,T}$ is covered by the $2d$ ``patches'' \begin{equation}\label{mpar2} \beta_{k,d}^{\varepsilon T}\left(\mc{L}_{k,d}^{\varepsilon T}\right). \end{equation}

\section{Introduction}\label{introductionsec}
\input{introductionnew.tex}
\section{The unit star body}\label{starsec}
\input{star.tex}
\section{Counting principles}\label{countingsec}
\input{counting.tex}
\section{Volumes of slices of star bodies}\label{volsec}
\input{volume.tex}
\section{Lattice points in slices: proof of Theorem \ref{mainthm}}\label{latticesec}
\input{lattice.tex}
\section{Proofs of Theorem \ref{maincor} and corollaries}\label{finalcountsec}
\input{finalcountnew.tex}
\section{Counting polynomials: explicit bounds}\label{cpebsec}
\input{cpeb.tex}
\section{Counting monic polynomials: explicit bounds}\label{monicsec}
\input{monic.tex}
\section{Lattice points in slices: explicit bounds}\label{slicessec}
\input{slices.tex}
\section{Reducible and imprimitive polynomials}\label{sievingsec}
\input{sieving.tex}
\section{Explicit results}\label{exthmssec}\label{exsec}
\input{ex.tex}
\section*{Appendix: combinatorial estimates}
\setcounter{equation}{0}
\input{comb.tex}
\section{Introduction}
Although rare, massive stars ($M_*>8-10M_\odot$) play a crucial role in multiple astrophysical domains in the Universe. Throughout their lives they continuously lose mass via strong stellar winds that transfer energy and momentum to the interstellar medium. As the main engines of nucleosynthesis, they produce a series of elements and shed chemically processed material as they evolve through various phases of intense mass loss. And they do not simply die: they explode as spectacular supernovae, significantly enriching the environment of their host galaxies. Their end products, neutron stars and black holes, offer the opportunity to study extreme physics (in terms of gravity and temperature) as well as gamma-ray bursts and gravitational wave sources. As they are very luminous, they can be observed in more distant galaxies, which makes them the ideal tool for understanding stellar evolution across cosmological time, especially for interpreting observations from the first galaxies (such as those to be obtained from the \textit{James Webb Space Telescope}).
While the role of different stellar populations in galaxy evolution has been thoroughly investigated in the literature \citep{Bruzual2003, Maraston2005}, a key ingredient of models, the evolution of massive stars beyond the main sequence, is still uncertain \citep{Martins2013, Peters2013}. Apart from the initial mass, the main factors that determine the evolution and final stages of a single massive star are metallicity, stellar rotation, and mass loss \citep{Ekstrom2012, Georgy2013, Smith2014}. Additionally, the presence of a companion, which is common among massive stars with binary fractions of $\sim50-70\%$ (\citealt{Sana2012, Sana2013, Dunstall2015}), can significantly alter the evolution of a star through strong interactions \citep{deMink2014,Eldridge2017}. Although all these factors critically determine the future evolution and the final outcome of the star, they are, in many cases, not well constrained.
In particular, mass loss is of paramount importance as it determines not only the stellar evolution but also the enrichment and the formation of the immediate circumstellar environment (for a review, see \citealt{Smith2014} and references therein). Especially in the case of single stars, their strong radiation-driven winds during the main-sequence phase remove material continuously but not necessarily in a homogeneous way, due to clumping \citep{Owocki1999}. On top of that, there are various transition phases in the stellar evolution of massive stars during which they experience episodic activity and outbursts, appearing as Wolf-Rayet stars (WRs), Luminous Blue Variables (LBVs), Blue supergiants (BSGs), B[e] supergiants (B[e]SGs), Red supergiants (RSGs), and Yellow supergiants (YSGs). This contributes to the formation of complex structures, such as shells and bipolar nebulae in WRs and LBVs (\citealt{Gvaramadze2010, Wachter2010}) and disks in B[e]SGs (\citealt{Maravelias2018}). But how important the episodic mass loss is, how it depends on the metallicity (in different galaxies), and what links exist between the different evolutionary phases are still open questions.
To address these questions, the European Research Council-funded project ASSESS\footnote{\url{https://assess.astro.noa.gr/}} (\textit{"Episodic Mass Loss in Evolved Massive stars: Key to Understanding the Explosive Early Universe"}) aims to determine the role of episodic mass loss by: (i) assembling a large sample of evolved massive stars in a selected number of nearby galaxies at a range of metallicities through multiwavelength photometry, (ii) performing follow-up spectroscopic observations on candidates to validate their nature and extract stellar parameters, and (iii) testing the observations against the assumptions and predictions of the stellar evolution models.
In this paper we present our approach for the first step, which is to develop an automated classifier based on multiwavelength photometry. One major complication for this work is the lack of a sufficiently large number of massive stars with known spectral types. Some of these types are rare, which makes the identification of new sources in nearby galaxies even more difficult. Moreover, spectroscopic observations at these distances are challenging due to the time and large telescopes required. On the other hand, photometric observations can provide information for thousands of stars, but at the cost of a much lower (spectral) resolution, leading to a coarser spectral-type classification (e.g., \citealt{Massey2006, Bonanos2009, Bonanos2010, Yang2019}). Using the Hertzsprung–Russell diagram (HRD) and color-color diagrams, one needs a detailed and careful approach to properly determine the boundaries between the different populations and identify new objects, a process that is not free from contaminants (e.g., \citealt{Yang2019}).
To circumvent this problem, we can use a data-driven approach, feeding the data to more sophisticated algorithms that are capable of "learning" from the data and finding the mathematical relations that best separate the different classes. These machine-learning methods have been extremely successful in various problems in astronomy (see Sect. \ref{s:algorithms}). Still, applications of these techniques tailored to the classification of massive stars with photometry are, to the best of our knowledge, scarce. \cite{Morello2018} applied the \textit{k}-nearest neighbors algorithm to IR colors to select WRs from other classes, while \cite{Dorn-Wallenstein2021} explored other techniques to obtain a wider classification based on \textit{Gaia} and IR colors for a large number of Galactic objects.
This work provides an additional tool, focusing on massive stars in nearby galaxies. It presents the development of a photometric classifier, which will be used in a future work to classify thousands of previously unclassified sources\footnote{Code and other notes available at: \url{https://github.com/gmaravel/pc4masstars}}. In Sect. \ref{s:data} we present the construction of our training sample (spectral types, foreground removal, and photometric data). In Sect. \ref{s:ml} we provide a quick summary of the methods used and describe the class and feature selection, as well as the implementation and the optimization of the algorithms. In Sect. \ref{s:results} we show the performance of our classifier for the M31 and M33 galaxies (on which it was trained) and its application to an independent set of galaxies (IC 1613, WLM, and Sextans A). In Sect. \ref{s:discussion} we discuss the necessity of a good training sample and labels, as well as the sensitivity of the classifier to the features used.
Finally, in Sect. \ref{s:summary} we summarize and conclude our work.
\section{Building the training sample} \label{s:data}
In the following section we describe the steps we followed to create our training sample, starting from the available IR photometric catalogs, removing foreground sources using \textit{Gaia} astrometric information, and collecting spectral types from the literature.
\begin{figure*}[hbt!] \centering \includegraphics[width=\textwidth]{M31-plots.png}\\ \includegraphics[width=\textwidth]{M31-Gaia-HRD.png} \caption{ Using \textit{Gaia} to identify and remove foreground sources. (A) Field-of-view of \textit{Gaia} sources (black dots) for M31. The big green ellipse marks the boundary we defined for the M31 galaxy, and the smaller green ellipses define M110 and M32 (inside the M31 ellipse) and are excluded. The blue dots highlight the sources in M31 with known spectral classification. (B) Foreground region, excluding the sources inside M110. (C) Distribution of the proper motion over its error for Dec, for all \textit{Gaia} sources in the foreground region, fitted with a spline function. (D) Distribution of the proper motion over its error for Dec (solid line), for all sources along the line-of-sight of M31, which includes both foreground and galactic (M31) sources. We fitted this with a scaled spline, to account for the number of foreground sources expected inside M31 (dashed line), and a Gaussian function (dotted line). The vertical dashed lines correspond to the $3\sigma$ threshold of the Gaussian. Any source with values outside this region is flagged as a potential foreground source. (E) \textit{Gaia} CMD of all sources identified as galactic (red points) and foreground (gray). The majority of the foreground sources lie on the yellow branch of the CMD, which is exactly the position at which we expect the largest fraction of the contamination.} \label{f:gaia_process} \end{figure*}
\subsection{Surveys used} \label{s:surveys}
Infrared bands are ideal probes for distinguishing massive stars, and in particular those with dusty environments \citep{Bonanos2009, Bonanos2010}. The use of IR colors is a successful method for target selection, as demonstrated by \citealt{Britavskiy2014, Britavskiy2015}. We based our catalog composition on mid-IR photometry ($3.6\, \mu m$, $4.5\, \mu m$, $5.8\, \mu m$, $8.0\, \mu m$, $24\, \mu m$), using pre-compiled point-source catalogs from the \textit{Spitzer} Space Telescope \citep{Khan2015, Khan2017, Williams2016}, which have only recently become publicly available. This allows us to use positions derived from a single instrument, a necessity for cross-matching since spectral typing comes from various works and instruments. The cross-match radius applied in all cases was 1", which already corresponds to a significant physical separation at the distances of our target galaxies and grows with distance. Additionally, we only kept sources with single matches, as it is impossible to choose the correct match to the \textit{Spitzer} source when two or more candidates exist within the search radius (accounting for about 2-3\% of all sources in M31 and M33).
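As an illustration of this cross-matching step (a sketch of ours, not the authors' pipeline; the input coordinates here are random stand-ins for the real catalogs), astropy's \texttt{match\_to\_catalog\_sky} returns nearest neighbors, after which the 1" cut and the single-match requirement can be applied:
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

rng = np.random.default_rng(1)
# Stand-ins for the Spitzer point-source catalog and an optical catalog
# (e.g., Pan-STARRS); in practice these are the real RA/Dec columns.
spitzer = SkyCoord(ra=rng.uniform(10.0, 11.0, 800) * u.deg,
                   dec=rng.uniform(41.0, 42.0, 800) * u.deg)
optical = SkyCoord(ra=rng.uniform(10.0, 11.0, 3000) * u.deg,
                   dec=rng.uniform(41.0, 42.0, 3000) * u.deg)

# Nearest Spitzer neighbor for every optical source, then the 1" cut.
idx, sep2d, _ = optical.match_to_catalog_sky(spitzer)
good = sep2d < 1.0 * u.arcsec

# Keep single matches only: a Spitzer source claimed by two or more
# optical sources within the radius is ambiguous and dropped.
uniq, counts = np.unique(idx[good], return_counts=True)
single = np.isin(idx, uniq[counts == 1]) & good
print(f"{single.sum()} clean matches out of {good.sum()}")
\end{verbatim}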
Although the inclusion of near-IR data would allow a better sampling of the spectral energy distributions of our sources, this is currently not possible given the shallowness of the Two Micron All-Sky Survey (2MASS) for our target galaxies and the unfortunate lack of any other public all-sky near-IR survey. Some data (for a particular band only; $J_{\rm{UK}}$) were collected from the United Kingdom Infra-Red Telescope (UKIRT) Hemisphere Survey\footnote{\url{http://wsa.roe.ac.uk/uhsDR1.html}} \citep{Dye2018}.
The data set was supplemented with optical photometry ($g,r,i,z,y$) obtained from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; \citealt{Chambers2016}), using sources with ${\rm \texttt{nDetections}}\geq2$ to exclude spurious detections\footnote{Although DR2 became available after the compilation of our catalogs, it contains information from the individual epochs. DR1 provides the object detections and their corresponding photometry from the stacked images, which we opted to use.}. We also collected photometry from \textit{Gaia} Data Release 2 (DR2) (G, G$_{\rm{BP}}$, G$_{\rm{RP}}$; \citealt{Gaia2016, Gaia2018b}). We investigated all other available surveys in the optical and IR, but the aforementioned catalogs provided the most numerous and consistent sample with good astrometry for our target galaxies (M31, M33, IC 1613, WLM, and Sextans A). Significant populations of massive stars are well known for the Magellanic Clouds and the Milky Way, but there are issues that prohibited us from using them. The Clouds are not covered by the Pan-STARRS survey, which means that photometry from other surveys would have to be used, which would make the whole sample inhomogeneous (with all possible systematics introduced by the different instrumentation, data reductions, etc.). Although the Milky Way is covered by both Pan-STARRS and \textit{Spitzer} surveys, there are hardly any data available for the most interesting sources, such as B[e]SGs, WRs, and LBVs, through the \textit{Spitzer} Enhanced Imaging Products (which focus on the quality of the products and not completeness). Therefore, we limited ourselves to the M31 and M33 galaxies when building our training sample.
\subsection{Removing foreground stars}
The source lists compiled from the photometric surveys described in the previous section contain mostly genuine members of the corresponding galaxies. It is possible, though, that foreground sources may still contaminate these lists. To optimize our selection, we queried the \textit{Gaia} DR2 catalog \citep{Gaia2016, Gaia2018b}. With the statistical handling of the astrometric data we were able to identify and remove the most probable foreground sources along the line-of-sight of our galaxies.
We first defined a sufficiently large box around each galaxy: $3.5\,{\rm deg} \times 3.5\,{\rm deg} $ for M31 and $ 1.5\, {\rm deg} \times 1.5\, {\rm deg} $ for M33, which yielded 145837 and 34662 sources, respectively. From these we first excluded all sources with nonexistent or poorly defined proper motions (${\rm pmra\_error} \geq 3.0\, {\rm mas}$, $ {\rm pmdec\_error} \geq 3.0\, {\rm mas}$) or parallax (${\rm parallax\_error} \geq 1.5\,{\rm mas}$), sources with large astrometric excess noise ($ {\rm astrometric\_excess\_noise} \geq 1.0$; following the cleaning suggestions by \citealt{Lindegren2018}), or sources that were fainter than our limit set in the optical (${\rm phot\_g\_mean\_mag} \geq 20.5$). These quality cuts left us with 78375 and 26553 sources in M31 and M33, respectively.
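For concreteness, here is a condensed sketch of these selection steps (our own illustration, not the authors' code) on a mock table with \textit{Gaia} DR2 column names; the Gaussian parameters mimic the M31 fits quoted in the next paragraph, and the parallax zero-point correction is omitted for brevity:
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
# Mock Gaia DR2 columns; a real run would query the archive instead.
gaia = pd.DataFrame({
    "pmra": rng.normal(0, 1, n), "pmra_error": rng.uniform(0.1, 4, n),
    "pmdec": rng.normal(0, 1, n), "pmdec_error": rng.uniform(0.1, 4, n),
    "parallax": rng.normal(0, 1, n), "parallax_error": rng.uniform(0.1, 2, n),
    "astrometric_excess_noise": rng.uniform(0, 2, n),
    "phot_g_mean_mag": rng.uniform(14, 22, n),
})

# Quality cuts described in the text.
ok = ((gaia.pmra_error < 3.0) & (gaia.pmdec_error < 3.0)
      & (gaia.parallax_error < 1.5)
      & (gaia.astrometric_excess_noise < 1.0)
      & (gaia.phot_g_mean_mag < 20.5))
gaia = gaia[ok]

# 3-sigma foreground flagging; (mean, sigma) of each Gaussian would be
# fitted to the in-ellipse distributions (values here mimic M31).
fits = {"pmra": (0.04, 1.28), "pmdec": (-0.03, 1.48), "parallax": (0.21, 1.39)}
fg = np.zeros(len(gaia), dtype=bool)
for col, (m, s) in fits.items():
    flag = np.abs(gaia[col] / gaia[col + "_error"] - m) > 3 * s
    if col == "parallax":            # only positive parallaxes are decisive
        flag &= gaia[col] > 0
    fg |= flag.to_numpy()
print(f"flagged {fg.sum()} of {len(gaia)} as probable foreground")
\end{verbatim}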
The boundary for each galaxy was determined as the ellipse at which the star density dropped significantly, to approximately the density of the background. The boundary was also visually inspected to ensure that it masks the main body (disk) of each galaxy where our targets are expected to be located, and to exclude contaminating regions inside and outside the galaxy, namely the M32 and M110 galaxies for M31 (see Fig. \ref{f:gaia_process}, Panel A; for M33 see Fig. \ref{f:gaia_process-M33}). Therefore, we could securely assign the remaining stars, outside these boundaries, as foreground objects (see Fig. \ref{f:gaia_process}, Panel B). From these we obtained the distributions on the proper motions in RA and Dec (over their corresponding errors) and the parallax (over its error). We fitted these distributions with a spline to allow more flexibility (see Dec for example in Fig. \ref{f:gaia_process}, Panel C). Similarly, we plotted the distributions for all sources within the ellipse, which contained both galactic and foreground sources. To fit these we used a combination of a Gaussian and a spline function (see Fig. \ref{f:gaia_process}, Panel D). The spline was derived from the sources outside the galaxy (Fig. \ref{f:gaia_process}, Panel C), but when used for the sources within the ellipse it was scaled down according to the ratio of the areas inside and outside the galaxy (assuming that the foreground distribution does not change). From the estimated widths of the Gaussian distributions (M31: $ {\rm pmRA/error} = 0.04 \pm 1.28$, $ {\rm pmDEC/error} = -0.03 \pm 1.48$, $ {\rm parallax/error} = 0.21 \pm 1.39$; M33: $ {\rm pmRA/error} = 0.12 \pm 1.18$, $ {\rm pmDEC/error} = 0.05 \pm 1.31$, $ {\rm parallax/error} = -0.03 \pm 1.16$) we defined as foreground sources those with values larger than $3\sigma$ in any of the above quantities. For the parallax we took into account the systematic 0.03 mas offset induced by the global zero point found by \cite{Lindegren2018}. This particular cut was applied only to sources with positive parallaxes, as zero or negative values are not decisive for exclusion. In the \textit{Gaia} color-magnitude diagram (CMD) of Fig. \ref{f:gaia_process}, Panel E, we show all sources identified as members of the host galaxy (red points) and foreground (gray). The majority of the foreground sources lie on the yellow branch of the CMD, which is exactly the position at which we expect the largest fraction of the contamination.
This process was successful in the cases of M31 and M33 due to the numerous sources that allow their statistical handling. In the other galaxies, where the field-of-view is substantially smaller, the low numbers of sources led to a poorer (if any) estimation of these criteria. Consequently, for those galaxies we considered as foreground sources those with any of their \textit{Gaia} properties (${\rm pmRA/error}$, $ {\rm pmDEC/error}$, or $ {\rm parallax/error} $) larger than $3\sigma$ of the largest measured errors, following the most conservative approach. In practice, this means that we used the same criteria to characterize foreground sources as with M31.
\subsection{Collecting spectral types} \label{s:data-spec_types}
The use of any supervised machine-learning application requires a training sample. It is of paramount importance that the sample be well defined such that it covers the parameter space spanned by the objects under consideration. For this reason, we performed a meticulous search of the literature to obtain a sample of sources with known spectral types (used as labels) that is, to the best of our knowledge, as complete as possible. The vast majority of collected data are found in M31 and M33.
The source catalogs were retrieved primarily from \cite{Massey2016} as part of their Local Group Galaxy Survey (LGGS), complemented by other works (see Table \ref{t:spectypes_refs} for the full list of numbers and references used). In all cases we carefully checked for and removed duplicates, while in a few cases we updated the classification of some sources based on newer works (e.g., candidate LBVs to B[e]SGs based on \citealt{Kraus2019a}). The initial catalogs for M31 and M33 contain 1142 and 1388 sources with spectral classification, respectively (see Fig. \ref{f:gaia_process}, Panel A, blue dots). Within these sources we purposely included some outliers (such as background galaxies and quasi-stellar objects; e.g., \citealt{Massey2019}). A significant fraction of these sources ($\sim64\%$) have \textit{Gaia} astrometric information. Applying the criteria of the previous section, we obtained 58 (M31) and 76 (M33) sources marked as foreground\footnote{ There are 696 (M31) and 926 (M33) sources with \textit{Gaia} information. The identification of 58 (M31) and 76 (M33) sources as foreground corresponds to a $\sim8\%$ contamination. Given that there are 446 (M31) and 462 (M33) additional sources without \textit{Gaia} values, we expect another $\sim72$ foreground sources (according to our criteria) to have remained in our catalog.}. After removing those, we were left with 1084 (M31) and 1312 (M33) sources, which we cross-matched with the photometric catalogs, keeping only single matches at 1" (see Sect. \ref{s:surveys}). After this screening process our final sample consists of 527 (M31) and 562 (M33) sources.
\begin{table} \centering \caption{List of references with their corresponding number of sources that contribute to our collected sample.} \label{t:spectypes_refs}
\begin{tabular}{llr}
Galaxy (total) & Reference & \# sources \\ \hline \hline
\multirow{3}{*}{WLM (36)} & \cite{Bresolin2006} & 20 \\ & \cite{Britavskiy2015} & 9 \\ & \cite{Levesque2012} & 7 \\ \hline
\multirow{9}{*}{M31 (1142)} & \cite{Massey2016} & 966 \\ & \cite{Gordon2016} & 82 \\ & \cite{Neugent2019} & 37 \\ & \cite{Drout2009} & 18 \\ & \cite{Massey2019} & 17 \\ & \cite{Kraus2019a} & 11 \\ & \cite{Humphreys2017} & 6 \\ & \cite{Neugent2012} & 3 \\ & \cite{Massey2009} & 2 \\ \hline
\multirow{4}{*}{IC 1613 (20)} & \cite{Garcia2013} & 9 \\ & \cite{Bresolin2007} & 9 \\ & \cite{Herrero2010} & 1 \\ & \cite{Britavskiy2014} & 1 \\ \hline
\multirow{16}{*}{M33 (1388)} & \cite{Massey2016} & 1193 \\ & \cite{Massey1998} & 49 \\ & \cite{Neugent2019} & 46 \\ & \cite{Humphreys2017} & 24 \\ & \cite{Massey2007} & 13 \\ & \cite{Gordon2016} & 12 \\ & \cite{Drout2012} & 11 \\ & \cite{Massey2019} & 10 \\ & \cite{Kraus2019a} & 7 \\ & \cite{Massey1998a} & 6 \\ & \cite{Kourniotis2018} & 4 \\ & \cite{Humphreys2014} & 4 \\ & \cite{Massey1996} & 3 \\ & \cite{Martin2017} & 2 \\ & \cite{Neugent2011} & 2 \\ & \cite{Bruhweiler2003} & 2 \\ \hline
\multirow{4}{*}{Sextans A (16)} & \cite{Camacho2016} & 9 \\ & \cite{Britavskiy2015} & 5 \\ & \cite{Britavskiy2014} & 1 \\ & \cite{Kaufer2004} & 1 \\ \hline \hline
\end{tabular} \end{table}
We compiled spectral types for three more galaxies, WLM, IC 1613, and Sextans A (see Table \ref{t:spectypes_refs}), to use as test cases. Among a larger collection of galaxies, these three offered the most numerous (albeit small) populations of classified massive stars: 36 sources in WLM, 20 in IC 1613, and 16 in Sextans A.
Although a handful more sources could potentially be retrieved for other galaxies, the effort to collect the data (individually from different works) would not match the very small increase in the sample. In Table \ref{t:catalog_sptypes} we present the first few lines of the compiled list of objects, for guidance regarding its form and content.
\section{Application of machine learning} \label{s:ml}
In this section we provide a short description of the algorithms chosen for this work (for more details, see, e.g., \citealt{Baron2019, Ball2010}). The development of a classifier for massive stars requires the inclusion of "difficult" cases, such as those that are short-lived (e.g., YSGs with a duration of a few thousand years; \citealt{Neugent2010, Drout2009}) or very rare (e.g., LBVs, \citealt{Weis2020}; and B[e]SGs, \citealt{Kraus2019a}). To ensure that the algorithms are trained on these specific targets, we opted for supervised algorithms. However, any algorithm needs the proper input, which is determined by the class and feature selection. Finally, we show the implementation and the optimization of the methods.
\subsection{Selected algorithms} \label{s:algorithms}
The Support Vector Machine \citep{Cortes1995} is one of the most well-established methods, used in a wide range of topics. Some indicative examples include classification problems for variable stars \citep{Pashchenko2018}, black hole spin \citep{Gonzalez2019}, molecular outflows \citep{Zhang2020}, and supernova remnants \citep{Kopsacheili2020}. The method searches for the line or the hyperplane (in two or multiple dimensions, respectively) that separates the input data (features) into distinct classes. The optimal line (hyperplane) is the one that maximizes the margin, i.e., the distance between the decision boundary and the nearest points of each class (the support vectors), which leads to the optimal distinction between the classes. One manifestation of the method, better suited to classification problems such as ours, is Support Vector Classification (SVC; \citealt{Ben-Hur2002}), which uses a kernel to better map the decision boundaries between the different classes.
Astronomers are great machines when it comes to classification processes. A well-trained individual can easily identify the most important features for a particular problem (e.g., spectroscopic lines) and, according to specific (tree-like) criteria, can make fast and accurate decisions to classify sources. However, their strongest drawback is low efficiency, as they can only process one object at a time. Although automated decision trees can be much more efficient than humans, they tend to overfit, that is to say, they learn the data they are trained on too well and can fail when applied to unseen data. A solution to overfitting is the Random Forest (RF; \citealt{Breiman2001}), an ensemble of decision trees, each one trained on a random subset of the initial features and sample of sources. Some example works include \cite{Jayasinghe2018} and \cite{Pashchenko2018} for variable stars, \cite{Arnason2020} to identify new X-ray sources in M31, \cite{Moller2016} on supernovae Type Ia classification, and \cite{Plewa2018} and \cite{Kyritsis2022} for stellar classification. When RF is called to action, the input features of an unlabeled object propagate through each decision tree and provide a predicted label. The final classification is the result of a majority vote among all labels predicted by the independent trees.
Therefore, RF overcomes the problems of single decision trees, as it generalizes very well and can handle large numbers of features and data efficiently.
Neural networks originate from the idea of simulating the biological neural networks in animal brains \citep{McCulloch1943}. The nodes (arranged in layers) are connected and process an input signal according to their weights, which are assigned during the training process. The first applications in astronomy appeared in the 1990s (e.g., \citealt{Odewahn1992} on star and galaxy discrimination, \citealt{Storrie-Lombardi1992} on galactic morphology classification), but recent advances in computational power, as well as in software development allowing easy implementation, have revolutionized the field. Deeper and more complex neural network architectures have been developed, such as using deep convolutional networks to classify stellar spectra \citep{Sharma2020} and supernovae along with their host galaxies \citep{Muthukrishna2019}, generative adversarial networks to separate stars from quasars \citep{Makhija2019}, and recurrent neural networks for variable star classification \citep{Naul2018}. For the current project, a relatively simple shallow network with a few fully connected layers -- a Multilayer Perceptron (MLP) -- proved sufficient.
In summary, the aforementioned techniques are based on different concepts; for example, SVC tries to find the best hyperplane that separates the classes, RF decides the classification result based on the thresholds set at each node (for multiple trees), while neural networks attempt to highlight the differences in the features that best separate the classes. We implemented an ensemble meta-algorithm that combines the results from all three, different, approaches. Initially, each method provides a classification result with a probability distribution across all selected classes (see Sect. \ref{s:class_selection}). Then these are further combined to obtain the final classification (described in detail in Sect. \ref{s:combining_models}).
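A minimal scikit-learn sketch of such a three-way soft combination follows (our own illustration; the paper's actual combination scheme is the one described in Sect. \ref{s:combining_models}, and the data here are synthetic stand-ins for the photometric features):
\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: rows = sources, columns = photometric features (colors).
X, y = make_classification(n_samples=900, n_features=8, n_informative=6,
                           n_classes=5, n_clusters_per_class=1,
                           random_state=0)

clf = VotingClassifier(
    estimators=[
        ("svc", make_pipeline(StandardScaler(),
                              SVC(kernel="rbf", probability=True))),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(64, 32),
                                            max_iter=2000,
                                            random_state=0))),
    ],
    voting="soft",   # average the per-class probability distributions
)
clf.fit(X, y)
proba = clf.predict_proba(X[:3])   # combined class probabilities
print(proba.round(2))
\end{verbatim}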
5) after removing 44 objects without full photometry in all bands.} \label{t:spectral_classes}
\begin{tabular}{lc|ccc}
\hline \hline
Group & initial \# & Class & class \# & final \# w/phot\\
$[1]$ & $[2]$ & $[3]$ & $[4]$ & $[5]$\\
\hline
O & 17 & BSG & \multirow{12}{*}{261} & \multirow{12}{*}{250} \\
Oc & 1 & - & & \\
Oe & 2 & BSG & & \\
On & 6 & BSG & & \\
B & 156 & BSG & & \\
Bc & 11 & - & & \\
Be & 7 & BSG & & \\
Bn & 18 & BSG & & \\
A & 51 & BSG & & \\
Ac & 3 & - & & \\
Ae & 2 & BSG & & \\
An & 2 & BSG & & \\
\hline
WR & 50 & WR & \multirow{3}{*}{53} & \multirow{3}{*}{42} \\
WRc & 3 & - & & \\
WRn & 3 & WR & & \\
\hline
LBV & 6 & LBV & \multirow{2}{*}{6} & \multirow{2}{*}{6} \\
LBVc & 18 & - & & \\
\hline
BeBR & 6 & BeBR & \multirow{2}{*}{17} & \multirow{2}{*}{16} \\
BeBRc & 11 & BeBR & & \\
\hline
F & 21 & YSG & \multirow{5}{*}{103} & \multirow{5}{*}{99} \\
Fc & 4 & - & & \\
G & 15 & YSG & & \\
YSG & 67 & YSG & & \\
YSGc & 16 & - & & \\
\hline
K & 67 & RSG & \multirow{7}{*}{512} & \multirow{7}{*}{496} \\
Kc & 3 & - & & \\
M & 142 & RSG & & \\
Mc & 5 & - & & \\
RSG & 250 & RSG & & \\
RSGb & 53 & RSG & & \\
RSGc & 36 & - & & \\
\hline
AGN & 2 & GAL & \multirow{4}{*}{24} & \multirow{4}{*}{23} \\
QSO & 17 & GAL & & \\
QSOc & 1 & - & & \\
GAL & 5 & GAL & & \\
\hline
Total & 1077 & & 976 & 932 \\
\hline \hline
\end{tabular} \end{table} When using supervised machine-learning algorithms it is necessary to properly select the output classes. In our case we are particularly interested in evolved massive stars, because the magnitude-limited observations of our target galaxies mainly probe the upper part of the HRD. In our compiled catalog we had a large range of spectral types, from detailed ones (such as M2.5I, F5Ia, and B1.5I) up to more generic terms (such as RSGs and YSGs). Given the small numbers per individual spectral type, as well as the continuous nature of spectral classification, which makes the separation of neighboring types difficult, we lack the ability to build a classifier sensitive to each individual type. To address this, we combined spectral types into broader classes, without taking into account luminosity classes (i.e., main-sequence stars and supergiants of the same spectral type were assigned to the same group). This is a two-step process: we first assigned all types to certain groups, and then, during the application of the classifier, we experimented with which classes are best detectable with our approach (given the lack of strict boundaries between these massive stars, which is a physical limitation and not a selection bias). For the first step, we grouped the 1089 sources (both in M31 and M33) as follows. First, sources of detailed subtypes were grouped by their parent type (e.g., B2 I and B1.5 Ia to the B group; A5 I and A7 I to the A group; M2.5 I and M2-2.5 I to the M group, etc.). Some individual cases with uncertain spectral types were assigned as follows: three K5-M0 I sources to the K group, one mid-late O to the O group, one F8-G0 I to the F group, and one A9I/F0I to the A group. Second, all sources with emission or nebular lines were assigned to the parent type group with an "e" or "n" indicator (e.g., B8 Ie to the Be group, G4 Ie to the Ge group, B1 I+Neb to the Bn group, and O3-6.5 V+Neb to the On group). Third, sources with an initial classification as RSGs or YSGs were assigned directly to their corresponding group. Fourth, RSG binaries with a B companion \citep{Neugent2019} were assigned to the RSGb group.
Fifth, secure LBVs and B[e]s were kept as separate groups (the LBV and BeBR groups, respectively). A source classified as HotLBV was assigned to the LBV group. Sixth, all sources classified as WRs (of all subtypes), including some individual cases (WC6+B0 I, WN4.5+O6-9, WN3+abs, WNE+B3 I, WN4.5+O, and five Ofpe/WN9), were placed in one group (WR), except for three sources that are characterized by nebular lines and were assigned to the WRn group. Seventh, galaxies (GALs), active galactic nuclei (AGNs), and quasi-stellar objects (QSOs) were assigned to their corresponding groups. Eighth, all sources with an uncertainty flag (":" or "c") were assigned to their broader group followed by a "c" flag to indicate that these are candidate (i.e., not secure) classifications, such as Ac, Bc, YSGc, WRc, and QSOc. One source classified as B8Ipec/cLBV was assigned to the LBVc group. Finally, complex or very vague cases were disregarded. This entailed eight "HotSupergiant" sources and one source from each of the following types: "WarmSG," "LBV/Ofpe/WN9," "Non-WR(AI)," and "FeIIEm.Line(sgB[e])." Thus, after removing the 12 sources from this last step, we are left with 1077 sources, split into 35 groups (see Table \ref{t:spectral_classes}, Col. 1, and their corresponding numbers in Col. 2). However, these groups may contain similar objects, or in many cases a limited number of sources that may not be securely classified. To optimize our approach we experimented extensively with combining (similar) groups into broader classes to obtain the best results. All hot stars (i.e., the O, B, and A groups, including sources with emission "e" and nebular "n" lines) were combined under the BSG class after removing the uncertain sources (indicated as candidates). For the YSG class we considered all sources from the F, G, and YSG groups, again excluding only the candidates (i.e., members of the Fc and YSGc groups, especially as many of the YSGc are highly uncertain; \citealt{Massey2016}). For the RSG class we combined the K, M, RSG, and RSGb groups, excluding the candidates (i.e., Kc, Mc, and RSGc). The BeBR class includes both the secure and the candidate sources, because they show the same behavior (see Sect. \ref{s:feature_selection}) and the criteria for characterizing a source as B[e] are stricter (see \citealt{Kraus2019a}). More specifically, the BeBRc sources were actually the result of further constraining the classification of candidate LBVs \citep{Kraus2019a}. Therefore, we kept only the secure LBVs (the LBV group) to form their corresponding class. For the WR class we used all available sources, although they are of different types, as a further division would not be efficient. The last class, GAL, includes all nonstellar background objects (galaxies, AGNs, and QSOs, except for the one candidate QSO), which were used as potential outliers. We do not expect any other type of outlier (apart from an $\sim8\%$ foreground contamination), since at the distances of our target galaxies we are actually probing the brighter parts of the HRD, where the supergiant stars are located. The number of sources finally selected for each class is shown in Table \ref{t:spectral_classes} (Col. 4), where we used the class name to indicate which groups contribute to the class (Col. 3), while a "-" shows that a particular group is ignored. The total number of selected sources is 976.
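For concreteness, this grouping can also be expressed as a simple mapping between groups and classes (a hypothetical Python sketch mirroring Cols. 1 and 3 of Table \ref{t:spectral_classes}; the candidate groups, which we discard, are omitted and therefore map to \texttt{None}):
\begin{verbatim}
# Sketch of the group-to-class mapping (Table, Cols. 1 and 3).
# Candidate ("c") groups are not listed and map to None, i.e.,
# they are excluded from the training classes; BeBRc is the one
# exception, as explained in the text.
GROUP_TO_CLASS = {
    'O': 'BSG', 'Oe': 'BSG', 'On': 'BSG',
    'B': 'BSG', 'Be': 'BSG', 'Bn': 'BSG',
    'A': 'BSG', 'Ae': 'BSG', 'An': 'BSG',
    'F': 'YSG', 'G': 'YSG', 'YSG': 'YSG',
    'K': 'RSG', 'M': 'RSG', 'RSG': 'RSG', 'RSGb': 'RSG',
    'WR': 'WR', 'WRn': 'WR',
    'LBV': 'LBV',
    'BeBR': 'BeBR', 'BeBRc': 'BeBR',
    'AGN': 'GAL', 'QSO': 'GAL', 'GAL': 'GAL',
}

def to_class(group):
    """Return the training class of a group (None if it is dropped)."""
    return GROUP_TO_CLASS.get(group)
\end{verbatim}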
\subsection{Imbalance treatment} \label{s:imbalance_treatment} What is evident from Table \ref{t:spectral_classes} is that we have an imbalanced sample of classes, which is very typical in astronomical applications (see also \citealt{Dorn-Wallenstein2021} for similar problems). In particular, the RSG class is the most populated one (with $\sim500$ sources), followed by the BSG class (with $\sim250$ sources), together accounting for almost 80\% of the total sample. The YSG class includes about a hundred sources, but the WR, GAL, BeBR, and, most importantly, LBV classes include a few tens at most. To tackle this we can either use penalizing metrics of performance (i.e., evaluations in which the algorithm provides different weights to specific classes) or train the model using adjusted sample numbers (by over-sampling the least populated classes and simultaneously under-sampling the most populated one). We experimented with both approaches and found a small gain when using the resampling approach. A typical approach to oversampling is duplicating objects. Although this may be a solution in many cases, it does not help with sampling the feature space better (i.e., it does not provide more information). An alternative approach is to create synthetic data. To this purpose, in this work we used a commonly adopted algorithm, the Synthetic Minority Oversampling TEchnique (SMOTE; \citealt{SMOTE}), which generates more data objects by following these steps: (i) it randomly selects a point (A) that corresponds to a minority class, (ii) it finds the k-nearest neighbors (of the same class), (iii) it randomly chooses one of them (B), and (iv) it creates a synthetic point randomly along the line that connects A and B in the feature space. The benefits of this approach are that the feature space is better sampled and that all features are taken into account to synthesize the new data points. On the other hand, it is limited by how representative the initial sample of each class is of that class's feature space. In any case, the number of points to be added is arbitrary and can very well match the majority class. At the same time this procedure can create noise, especially when trying to oversample classes with very few sources (e.g., LBVs, with only six sources available in total). Better results are obtained when the oversampling of the minority classes is combined with undersampling of the majority class. For the latter we experimented with two similar approaches: Tomek links \citep{Tomek} and Edited Nearest Neighbors (ENN; \citealt{ENN}). In the first one, the method identifies the pairs of points that are closest to each other (in the feature space) and belong to different classes (the Tomek links). These are noisy instances or pairs located on the boundary between the two classes (in a multi-class problem the one-versus-rest scheme is used, i.e., the minority class is compared to all other classes, collectively referred to as the majority class). By removing the point corresponding to the majority class, the class separation increases and the number of majority-class points is reduced. In the ENN approach, the three nearest neighbors of a minority-class point are found, and those belonging to the majority class are removed. Thus, the ENN approach is a bit more aggressive than Tomek links, as it removes more points.
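In code, this combined over- and undersampling step can be sketched as follows (using the \texttt{imbalanced-learn} implementations detailed in the next paragraph; \texttt{X\_train} and \texttt{y\_train} are illustrative names for the training features and labels):
\begin{verbatim}
# Sketch: oversample the minority classes with SMOTE, then clean the
# class boundaries with ENN (SMOTEENN) or Tomek links (SMOTETomek).
# Applied to the training sample only.
from imblearn.combine import SMOTEENN, SMOTETomek
from imblearn.over_sampling import SMOTE

resampler = SMOTEENN(smote=SMOTE(k_neighbors=3))  # or SMOTETomek(...)
X_res, y_res = resampler.fit_resample(X_train, y_train)
\end{verbatim}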
In conclusion, the combination of SMOTE, which creates synthetic points of the minority classes to balance the majority class, with an undersampling technique (either Tomek links or ENN), which cleans irrelevant points at the boundaries of the classes, helps to increase the separation. For the implementation we used the \texttt{imbalanced-learn} package\footnote{\url{https://github.com/scikit-learn-contrib/imbalanced-learn}} \citep{Lematre2017}, and more specifically the ENN approach, \texttt{imblearn.combine.SMOTEENN()}, which provided slightly better results than Tomek links. We used \texttt{k\_neighbors=3} for SMOTE (due to the small number of LBVs). We opted to use the default values for \texttt{sampling\_strategy}, which corresponds to ``not majority'' for SMOTE (which means that all classes are resampled except for the RSGs) and ``all'' for the ENN function, which cleans the majority points (considering one-versus-rest classes). In Table \ref{t:resampling} we provide an example of the numbers and fractions of sources per class available before and after resampling (for the whole sample). \begin{table} \caption{Number and fraction of sources per class before and after resampling to treat the imbalance (using the SMOTEENN approach). The fractions correspond to the total number of sources used in the original and resampled sets, respectively.} \label{t:resampling}
\begin{tabular}{c|cccc}
\hline \hline
Class & \multicolumn{2}{c}{Original sources} & \multicolumn{2}{c}{Resampled sources} \\
 & (\#) & (\%) & (\#) & (\%) \\
\hline
BSG & 250 & 26.8 & 496 & 14.9 \\
YSG & 99 & 10.6 & 488 & 14.6 \\
RSG & 496 & 53.2 & 493 & 14.8 \\
BeBR & 16 & 1.7 & 495 & 14.9 \\
LBV & 6 & 0.6 & 444 & 13.3 \\
WR & 42 & 4.5 & 453 & 13.6 \\
GAL & 23 & 2.4 & 452 & 13.6 \\
\hline \hline
\end{tabular} \end{table} \subsection{Feature selection} \label{s:feature_selection} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{SEDs-all.pdf} \caption{Color indices (features) vs. wavelength per class. The dashed black lines correspond to the individual sources, and the solid colored lines correspond to their averages. The last panel contains only the averaged lines to highlight the differences between the classes, with the most pronounced differences in the $y-[3.6]$ index (as BeBRs are the brightest IR sources, on average, followed by the GAL, RSG, and WR classes; see text for more). The number of sources in each panel corresponds to the total number of selected sources (see Table \ref{t:spectral_classes}, Col. 5). The vertical dashed lines correspond to the average wavelength per color index, as shown at the top of the figure.} \label{f:SEDs-all} \end{figure*} \begin{table*} \caption{Data availability per class and photometric band. The first column lists the classes used and the second one the corresponding number of sources in the sample.
For each class, the subsequent columns provide the fractions of sources with secure measurements in the corresponding photometric bands and their errors (excluding objects with problematic measurements and upper limits).} \label{t:sample_phot_completness}
\begin{tabular}{lccccccccccc}
\hline
Class & Sources & [3.6] & $\sigma_\textrm{[3.6]}$ & [4.5] & $\sigma_\textrm{[4.5]}$ & [5.8] & $\sigma_\textrm{[5.8]}$ & [8.0] & $\sigma_\textrm{[8.0]}$ & [24] & $\sigma_\textrm{[24]}$ \\
 & (\#) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) \\
\hline
BSG & 261 & 100 & 100 & 100 & 100 & 100 & 80 & 100 & 70 & 100 & 41 \\
YSG & 103 & 100 & 100 & 100 & 100 & 100 & 92 & 100 & 78 & 99 & 30 \\
RSG & 512 & 100 & 100 & 100 & 100 & 99 & 99 & 100 & 93 & 99 & 41 \\
BeBR & 17 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 94 \\
LBV & 6 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 66 \\
WR & 53 & 100 & 100 & 100 & 100 & 100 & 94 & 100 & 86 & 100 & 43 \\
GAL & 24 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
\hline
\end{tabular}
\vspace{2mm}
\begin{tabular}{lccccccccccc}
\hline
Class & Sources & $g$ & $\sigma_g$ & $r$ & $\sigma_r$ & $i$ & $\sigma_i$ & $z$ & $\sigma_z$ & $y$ & $\sigma_y$ \\
 & (\#) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) \\
\hline
BSG & 261 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 \\
YSG & 103 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 \\
RSG & 512 & 96 & 93 & 97 & 97 & 97 & 97 & 97 & 97 & 97 & 97 \\
BeBR & 17 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 94 & 94 \\
LBV & 6 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
WR & 53 & 83 & 83 & 84 & 83 & 88 & 88 & 90 & 88 & 86 & 84 \\
GAL & 24 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 95 & 100 & 100 \\
\hline
\end{tabular}
\vspace{2mm}
\begin{tabular}{lccccc}
\hline
Class & Sources & $J_{\rm{UK}}$ & G & G$_{\rm{BP}}$ & G$_{\rm{RP}}$ \\
 & (\#) & (\%) & (\%) & (\%) & (\%) \\
\hline
BSG & 261 & 81 & 90 & 87 & 87 \\
YSG & 103 & 82 & 96 & 95 & 95 \\
RSG & 512 & 84 & 96 & 94 & 94 \\
BeBR & 17 & 70 & 100 & 100 & 100 \\
LBV & 6 & 66 & 83 & 66 & 66 \\
WR & 53 & 75 & 71 & 50 & 50 \\
GAL & 24 & 83 & 95 & 83 & 83 \\
\hline \hline
\end{tabular} \end{table*} Feature selection is a key step in any machine-learning problem. To properly select the optimal features in our case, we first examined the data availability. In Table \ref{t:sample_phot_completness} we list the different classes (Col. 1) and the number of available sources per class (Col. 2). In the following columns we provide the fractions of objects with photometry in the corresponding bands and with proper errors (i.e., excluding problematic sources and upper limits), per survey queried (\textit{Spitzer}, Pan-STARRS, UKIRT Hemisphere Survey, and \textit{Gaia}). To build our training sample, we required the sources to have well-measured values across all bands. To avoid significantly decreasing the size of the training sample (by almost half in the case of the LBV and BeBR classes), we chose not to include the $J_{\rm{UK}}$ band. Although we used \textit{Gaia} data to derive the criteria to identify foreground stars, the number of stars with \textit{Gaia} photometry in the majority of other nearby galaxies is limited. Thus, to ensure the applicability of our approach, we also discarded the \textit{Gaia} bands (which are partly covered by the Pan-STARRS bands).
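To make this screening concrete, a hypothetical \texttt{pandas} sketch (all column names are illustrative; the final band selection is motivated in the following paragraphs):
\begin{verbatim}
# Sketch: keep sources with finite magnitudes and secure (positive)
# errors in all retained bands; "catalog" is an assumed DataFrame.
import pandas as pd

bands = ['r', 'i', 'z', 'y', '[3.6]', '[4.5]']   # illustrative names
cols = bands + ['e_' + b for b in bands]
clean = catalog.dropna(subset=cols)              # drop missing photometry
clean = clean[(clean[['e_' + b for b in bands]] > 0).all(axis=1)]
\end{verbatim}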
The IR catalogs were built upon source detection in both the [3.6] and [4.5] images (e.g., \citealt{Khan2015}), while the measurements in the longer-wavelength bands were obtained by simply performing photometry at those coordinates (regardless of whether a source is present). However, in most cases there is a growing (with wavelength) number of sources with only upper limits in the photometry. As these do not provide secure measurements for the training of the algorithm, we could not use them. If we were to require valid measurements up to [24], the number of sources would drop by more than 50\% for some classes (see in Table \ref{t:sample_phot_completness}, e.g., the corresponding fractions of WR and YSG sources with secure error measurements). As this is not really an option, we decided to remove all bands that contained a significant fraction of upper limits (i.e., [5.8] and redder). This rather radical selection is also justified by the fact that the majority of the unclassified sources (in the catalogs to which we are going to apply our method) do not have measurements in those bands. It is also interesting to point out that the majority of the disregarded sources belong to the RSG class (the most populated one), which means that we do not lose any important information (for the training process). From the optical set of bands, we excluded $g$ for two reasons. First, about 130 sources fainter than 20.5 mag tend to have systematic issues with their photometry, especially red stars, for which $g-r$ turns toward bluer values. Second, due to the lack of known extinction laws for most galaxies, and the lack of data for many sources, we opted not to correct for extinction; as $g$ is the band most affected by extinction, we opted to use only the redder bands to minimize its impact \citep{Schlafly2011, Davenport2014}. Therefore, we kept the $r$, $i$, $z$, and $y$ bands and performed the same strict screening to remove sources with upper limits. In total, we excluded 44 sources, reflecting a small fraction of the sample ($\sim4.5\%$, treating both M31 and M33 sources as a single catalog). We show the final number of sources per class in Col. 5 of Table \ref{t:spectral_classes}, summing to 932 objects in total. To remove any distance dependence in the photometry, we opted to work with colors and obtained the consecutive magnitude differences: $r-i$, $i-z$, $z-y$, $y - [3.6]$, and $[3.6] - [4.5]$. We examined different combinations of these color indices, but the differences in accuracy with respect to the best-performing set were negligible. Those combinations contained color indices with wider wavelength ranges, which are more affected by extinction than the consecutive colors. Moreover, they tend to be systematically more correlated, resulting in poorer generalization (i.e., when applied to the test galaxies; Sect. \ref{s:other_galaxies}). Some (less pronounced) correlation still exists in the consecutive color set as well, because each band is used in two color combinations (except for $r$ and [4.5]), and due to the stellar continuum, since the flux in each band is not totally independent of the flux measured in other bands. We also noticed that more optical colors help to better sample the optical part of the spectral energy distribution and to separate some classes more efficiently (BSG and YSG in particular). The consecutive color set seems to be the most intuitive selection, including well-studied colors.
Moreover, it represents how the slope of the spectral energy distribution changes with wavelength. We also experimented with other transformations of the data, such as fluxes, normalized fluxes, and standardized data (magnitudes scaled by subtracting their mean and dividing by their standard deviation), but we did not see any significant improvement in the final classification results. Therefore, we opted for the simplest representation of the data, which is the aforementioned color set. In Fig. \ref{f:SEDs-all} we plot the color indices with respect to their corresponding wavelengths, both for the individual sources of each class and for their averages. In the last panel, we overplot all averaged lines to display the differences among the various classes. As this representation is equivalent to the consecutive slopes of the spectral energy distributions for each class, we notice that the redder sources tend to have a more pronounced $y-[3.6]$ feature, a color index that characterizes the transition from the optical to the mid-IR photometry. The BeBR class presents the highest values due to the significant amount of dust (and therefore brighter IR magnitudes), followed by the GALs, due to their polycyclic aromatic hydrocarbon (PAH) emission, the (intrinsically redder) RSGs, and the WRs (due to their complex environments). \subsection{Implementation and optimization} An important step of every classification algorithm is to tune its hyperparameters, that is, the parameters that control the training process. Once these are defined, the algorithm determines the values of the parameters used for each model (e.g., weights) based on the training sample. The implementation of all three methods (SVC, RF, and MLP) was done through \texttt{scikit-learn} v.0.23.1\footnote{\url{https://scikit-learn.org/}} \citep{sklearn}\footnote{For the MLP/neural networks we experimented extensively with TensorFlow v1.12.0 \citep{tensorflow2015} and the Keras v2.2.4 API \citep{keras2015}. This allowed us to easily build and test various architectures for our networks. We used both dense (fully connected) and convolutional (CNN) layers, in which case the input data are 1D vectors of the features we are using. Given our tests, we opted to use a simple dense network, which can also be easily implemented within \texttt{scikit-learn}, helping with the overall simplification of the pipeline.}. For the optimal selection of the hyperparameters (and their corresponding errors), we performed a stratified K-fold cross-validation (CV; \texttt{sklearn.model\_selection.StratifiedKFold()}). With this, the whole sample is split into K subsamples or folds (five in our case), preserving the fractional representation of all classes of the initial sample in each of the folds. At each iteration, one fold is used as the validation sample and the rest as training. By permuting the validation fold, the classifier is trained over the whole sample. Since we performed a resampling approach to correct for the imbalance in our initial sample (see Sect. \ref{s:imbalance_treatment}), we note that this process was applied only to the training folds, while the evaluation of the model's accuracy was done on the validation fold. We stress that the validation fold remained unmodified (i.e., it was not resampled) in order to avoid data leakage and hence overfitting. The final accuracy score is the average value, and its uncertainty corresponds to the standard deviation across all folds.
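A minimal sketch of this evaluation scheme (the helper function and variable names are illustrative; \texttt{model} stands for any of the three classifiers):
\begin{verbatim}
# Sketch: stratified K-fold CV where the resampling is applied to
# the training folds only; the validation fold stays unmodified.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import recall_score
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import SMOTE

def cv_accuracy(model, X, y, n_splits=5, seed=0):
    """Mean and std of the weighted recall over the folds."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True,
                          random_state=seed)
    scores = []
    for train_idx, val_idx in skf.split(X, y):
        resampler = SMOTEENN(smote=SMOTE(k_neighbors=3))
        X_res, y_res = resampler.fit_resample(X[train_idx],
                                              y[train_idx])
        model.fit(X_res, y_res)
        y_pred = model.predict(X[val_idx])
        scores.append(recall_score(y[val_idx], y_pred,
                                   average='weighted'))
    return np.mean(scores), np.std(scores)
\end{verbatim}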
For the SVC process we used the \texttt{sklearn.svm.SVC()} function. We opted to train this model with the following selection of hyperparameters: \texttt{probability=True}, to get probabilities (instead of a single classification result), \texttt{decision\_function\_shape='ovo'}, which is the default option for multi-class problems, \texttt{kernel='linear'}, which is faster than the alternative nonlinear kernels and proved to be more efficient\footnote{The ``linear'' kernel was systematically more efficient in recovering the LBV class, in contrast to the default ``rbf'' option.}, and \texttt{class\_weight='balanced'}, which gives more weight to rarer classes (even after the resampling approach, as described in Sect. \ref{s:imbalance_treatment}). We also optimized the regularization parameter \textit{C}, which represents a penalty for misclassifications (i.e., for the objects falling on the "wrong" side of the separating hyperplane). For larger values, a smaller margin for the hyperplane is selected, so that the number of misclassified sources decreases and the classifier performs optimally for the training objects. This may result in poorer performance when applied to unseen data. Conversely, smaller values of \textit{C} lead to a larger margin (i.e., a looser separation of the classes) at the cost of more misclassified objects. To optimize \textit{C} we examined its effect on the accuracy by varying its value from 0.01 to 200 (with a step of 0.1 in log space). We present these results in Fig. \ref{f:optimizing-SVC-C}, where the red line corresponds to the averaged values and the gray area to the $1\sigma$ error. As the accuracy quickly reaches a plateau, the particular choice of this parameter does not significantly affect the accuracy above $\sim25$, which is the adopted value. For the RF classifier we used \texttt{sklearn.ensemble.RandomForestClassifier()}. To optimize it, we explored the following hyperparameters over a range of values: \texttt{n\_estimators}, which is the number of trees in the forest (10-1000, step 50), \texttt{max\_leaf\_nodes}, which limits the number of nodes in each tree (i.e., how large it can grow; 2-100, step 2), and \texttt{max\_depth}, which is the maximum depth of the tree (1-100, step 2), while the rest of the hyperparameters were left at their default values. We present their corresponding validation curves as obtained from five-fold CV tests (with mean values as red lines and their $1\sigma$ uncertainties as gray areas) in Fig. \ref{f:optimizing-RF-curves}. Again, we see that above certain values the accuracy reaches a plateau. Given the relatively large uncertainties and the statistical nature of this test, the selection of the best values is not absolutely strict (they provide almost identical results). We opted to use the following values: \texttt{n\_estimators=400}, \texttt{max\_leaf\_nodes=50}, and \texttt{max\_depth=30}. We also set \texttt{class\_weight='balanced'}, similar to SVC, in addition to the resampling approach. For the neural networks we used \texttt{sklearn.neural\_network.MLPClassifier()}. In this case we performed a grid search approach (\texttt{sklearn.model\_selection.GridSearchCV()}). This method allows for an exhaustive and simultaneous search over the requested parameters (at a cost in computation time).
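For reference, the adopted SVC and RF configurations, together with an illustrative grid-search call for the MLP, can be sketched as follows (the MLP grid shown is a placeholder; the grids actually explored are described below):
\begin{verbatim}
# Sketch: the adopted SVC and RF setups, and an illustrative grid
# search for the MLP (trained on the resampled sample X_res, y_res).
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

svc = SVC(kernel='linear', C=25, probability=True,
          decision_function_shape='ovo', class_weight='balanced')
rf = RandomForestClassifier(n_estimators=400, max_leaf_nodes=50,
                            max_depth=30, class_weight='balanced')

param_grid = {'hidden_layer_sizes': [(64,), (128,), (128, 128)],
              'solver': ['lbfgs', 'sgd', 'adam']}
search = GridSearchCV(MLPClassifier(activation='relu', max_iter=500),
                      param_grid, cv=5)
# search.fit(X_res, y_res); mlp = search.best_estimator_
\end{verbatim}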
We started by investigating the architecture of the network (e.g., number of hidden layers, number of nodes per layer) along with the three available methods for weight optimization (\texttt{'lbfgs'}, \texttt{'sgd'}, and \texttt{'adam'}). We tried up to five hidden layers with up to 128 nodes per layer, using \texttt{'relu'} as the activation function (a standard selection). We present the results of this grid search in Fig. \ref{f:optimizing-NN-structures}, from which we obtained systematically better results for the \texttt{'adam'} solver \citep{Kingma2014}, with the (relatively) best configuration being a shallow network with two hidden layers of 128 nodes each. Given this combination, we further optimized the regularization parameter (\texttt{alpha}), the number of samples used to estimate the gradient at each update (\texttt{batch\_size}), and the maximum number of epochs for training (\texttt{max\_iter}), with the rest of the parameters left at their default values (with \texttt{learning\_rate\_init}=0.001). Similarly to the previous hyperparameter selections, from their validation curves (Fig. \ref{f:optimizing-NN-curves}) we selected as best values: \texttt{alpha}=0.13, \texttt{batch\_size}=128, and \texttt{max\_iter}=560. The classifier uses the cross-entropy loss, which allows probability estimates. \section{Results} \label{s:results} We first present the results from the individual application of the different machine-learning algorithms to the M31 and M33 galaxies. Then we describe how we combine the three algorithms to obtain a final result. Finally, we apply the combined classifier to the test galaxies. \subsection{Individual application to M31 and M33} \label{s:m31m33runs} \subsubsection{Overall performance} \begin{figure*} \centering SVC \hspace{5cm} RF \hspace{5cm} MLP\\ \includegraphics[width=0.3\linewidth]{SVC-cm-clrs.pdf} \includegraphics[width=0.3\linewidth]{RF-cm-clrs.pdf} \includegraphics[width=0.3\linewidth]{MLP-cm-clrs.pdf} \\ \includegraphics[width=0.3\textwidth]{SVC-metrics-clrs.pdf} \includegraphics[width=0.3\textwidth]{RF-metrics-clrs.pdf} \includegraphics[width=0.3\textwidth]{MLP-metrics-clrs.pdf} \caption{Confusion matrices (upper panels) for the SVC, RF, and MLP methods, respectively, along with the characteristic metrics (precision, recall, and F1 score; lower panels). These results originate from single runs, i.e., by using 70\% of the initial sample as the training sample, which is then resampled to produce a balanced sample before training each model and applying the model to the remaining 30\% of the sample (the validation set). In general, the algorithms perform well except for the cases of LBVs and WRs (see Sect. \ref{s:m31m33runs} for more details).} \label{f:results_metrics} \end{figure*} \begin{figure*} \centering SVC \hspace{5cm} RF \hspace{5cm} MLP\\ \includegraphics[width=0.3\linewidth]{SVC-pr-clrs.pdf} \includegraphics[width=0.3\linewidth]{RF-pr-clrs.pdf} \includegraphics[width=0.3\linewidth]{MLP-pr-clrs.pdf} \caption{Precision-recall curves for the three methods, along with the values of the area under the curve for each class (in parentheses). In all cases, the results of the comparison of each class against all others provide very good and consistent results, well above the random classifier (indicated for each class by the horizontal dashed lines; see Sect.
\ref{s:m31m33runs}).} \label{f:results_prs} \end{figure*} \begin{table*} \centering \caption{Performance for each method and per class, after repeated K-fold cross-validation (see Sect. \ref{s:m31m33runs} for details).} \label{t:recalls}
\begin{tabular}{lcccc}
\hline \hline
Class & SVC & RF & MLP & combined \\
\hline
overall & $0.78\pm0.03$ & $0.82\pm0.02$ & $0.82\pm0.02$ & $0.83\pm0.02$ \\
\hline
BSG & $0.58\pm0.08$ & $0.71\pm0.06$ & $0.71\pm0.07$ & $0.71\pm0.06$\\
BeBR & $0.80\pm0.23$ & $0.79\pm0.24$ & $0.73\pm0.25$ & $0.81\pm0.17$\\
GAL & $0.58\pm0.22$ & $0.63\pm0.21$ & $0.73\pm0.24$ & $0.71\pm0.17$ \\
LBV & $0.28\pm0.43$ & $0\pm0$ & $0\pm0$ & $0\pm0$ \\
RSG & $0.93\pm0.03$ & $0.95\pm0.02$ & $0.94\pm0.02$ & $0.94\pm0.02$\\
WR & $0.43\pm0.15$ & $0.40\pm0.16$ & $0.46\pm0.19$ & $0.48\pm0.24$ \\
YSG & $0.78\pm0.08$ & $0.75\pm0.10$ & $0.77\pm0.12$ & $0.80\pm0.08$ \\
\hline \hline
\end{tabular} \end{table*} Having selected the optimal hyperparameters for our three algorithms, we investigated the individual results obtained by directly applying them to our data set. For this we split the sample into a training set (70\%) and a validation set (30\%) on which the results are evaluated, which is a standard option in the literature. The split was performed individually per class to ensure the same fractional representation of all classes in the validation sample. The resampling approach to balance our sample (as described in Sect. \ref{s:imbalance_treatment}) was applied only to the training set. The model was then trained on this balanced set and the predictions were made on the original validation set. Given a specific class, we refer to the objects that are correctly predicted to belong to this class as true positives (TPs), while true negatives (TNs) are those that are correctly predicted to not belong to the class. False positives (FPs) are the ones that are incorrectly predicted to belong, while false negatives (FNs) are the ones that are incorrectly predicted to not belong to the class. In Fig. \ref{f:results_metrics} we show example runs for the SVC, RF, and MLP methods. The first row corresponds to the confusion matrix, a table that displays the correct and incorrect classification results per class. Ideally this should be a diagonal table. The presence of sources in other elements provides information about the contamination of classes (or how much the method misclassifies the particular class). Another representation is given in the second row, where we plot the scores of the typically used metrics for each class. Precision (defined as $\rm TP/(TP+FP)$) refers to the number of objects that are correctly predicted to belong to a particular class over the total number of identified objects for this class (easily derived if we look at the numbers along the columns of the confusion matrix). Recall (defined as $\rm TP/(TP+FN)$) is the number of correctly recovered class objects over the total real population of this class (derived from the rows of the confusion matrix). Therefore, precision indicates the ability of the method to detect an object of the particular class, while recall indicates its ability to recover the real population. The F1 score is the harmonic mean of the two previous metrics, defined as ${\rm F1} = 2 \times ({\rm precision} \times {\rm recall}) / ({\rm precision} + {\rm recall})$. In our case, we mainly use the recall metric, as we are primarily interested in recovering as many of the true members of each class as possible.
This is especially required for the classes with the smallest numbers, which reflect the rarity of their objects, such as the BeBR and LBV classes. We report our results using the weighted balanced accuracy (henceforth, "accuracy"), which corresponds to the average of the recall values across all classes, weighted by the number of objects per class. This is a reliable metric of the overall performance when training over a wide number of classes \citep{Grandini2020}. From Fig. \ref{f:results_metrics} we see that the accuracy achieved for SVC, RF, and MLP is $\sim78\%$, $\sim82\%$, and $\sim83\%$, respectively. These values are based on a single application of the algorithms, that is, the evaluation of the models on the validation set (30\% of the whole sample). However, this leaves out a fraction of the information, which, given our small sample, is important. Even though we up-sampled to account for the scarcely populated classes, this happened (at each iteration) solely for the sources of the training sample, which implies that, again, only a part of the whole data set's feature space was actually explored. To compensate for that, the final model was actually obtained by training over the whole sample (after resampling). In this case there was no validation set on which to perform the evaluation directly. To address that, we used a repeated K-fold CV to obtain the mean accuracy and the recall per class, which in turn provided the overall expected accuracy. Using five iterations (and five folds per iteration) we obtained $78\pm3\%$, $82\pm2\%$, and $82\pm2\%$ for SVC, RF, and MLP, respectively (the error is the standard deviation of the average values over all K-folds performed). In Table \ref{t:recalls} we show the accuracy ("overall") and the recall obtained per class. \cite{Dorn-Wallenstein2021}, using the SVC method and a larger set of features (12), including variability indices, achieved a slightly better accuracy ($\sim90.5\%$) but for a coarser classification of their sources (i.e., for only four classes: ``hot,'' ``emission,'' ``cool,'' and ``contamination'' stars). When they used their finer class grid with 12 classes, their result was $\sim54\%$\footnote{The balanced accuracy reported by \cite{Dorn-Wallenstein2021} is the average recall across all classes, i.e., without weighting by the frequency of each class. This metric is insensitive to the class distribution \citep{Grandini2020}. We converted the reported values to the weighted balanced accuracy to directly compare with our results.}. \subsubsection{Class recovery rates} The results per class are similar for all three methods. They can recover the majority of the classes efficiently, the most prominent being the RSGs, with $\sim95\%$ success (similar to \citealt{Dorn-Wallenstein2021}). Decent results are returned for the BSG, YSG, and GAL classes, within a range of $\sim60-80\%$. The class for which we obtained the poorest results is the LBVs. The SVC is the most effective in recovering a fraction of the LBVs ($\sim30\%$, albeit with a large error of 43\%), while the other two methods failed. The LBV class is an evolutionary phase of massive main-sequence O-type stars before they lose their outer layers (due to strong winds and intense mass-loss episodes) and end up as WRs. They tend to be variable both photometrically and spectroscopically, displaying spectral types from B to G.
Hence, physical confusion between WRs, LBVs, and BSGs is expected, as indicated by the lower recall values and the confusion matrices (see Fig. \ref{f:results_metrics}). Moreover, the rarity of these objects leads to a sparsely populated class whose features are not well determined, and consequently the classifier has significant issues distinguishing them from other classes. On the other hand, SVC examines the entire feature space, which is the reason for the (slightly) improved recall for LBVs in this case (\citealt{Dorn-Wallenstein2021} report full recovery, but probably because of overfitting). Due to the small number and the rather inhomogeneous sample of WRs, all the classifiers have difficulties in correctly recovering the majority of these sources. The best result is provided by MLP, at $\sim46\%$, less than the $\sim75\%$ reported by \cite{Dorn-Wallenstein2021}. Despite the small sample size of the BeBR class, it is actually recovered successfully ($>79\%$). As BeBRs (including candidate sources) form a more homogeneous sample than LBVs and WRs, their features are well characterized, which helps the algorithms separate them. To better visualize the performance of these methods we constructed the precision-recall curves, which are better suited to imbalanced data \citep{Davis2006, Saito2015}. During this process, the classifier works in a one-versus-rest mode; that is to say, it only checks whether the objects belong to the examined class or not. In Fig. \ref{f:results_prs} we show the curves for each algorithm. The dashed (horizontal) lines correspond to the ratio of positive objects (per class) over the total number of objects in the training data. Any model at this line performs no better than random (and any model below it, worse). Therefore, the optimal curve tends toward the upper-right corner of the plot (with precision=recall=1). In all cases the classifiers perform better than random. RF systematically displays the best curves. In SVC, RSGs and BeBRs are almost excellent, and the rest of the classes display similar behavior. For MLP, all classes except BSGs and WRs are very close to the optimal position. Another metric is obtained if we measure the area under the curve. This returns a single value (within the 0-1 range) depicting the ability of the classifier to distinguish the corresponding class from all the rest. In Fig. \ref{f:results_prs} we show these values within the legends. In general, we achieve high values, which means that our classifiers can efficiently distinguish the members of a class against all others. These consistent results add further support that the careful selection of our sample has worked and that the methods work efficiently (given the class limitations). \subsection{Ensemble models} \label{s:combining_models} \subsubsection{Approaches} \begin{figure*} \centering \includegraphics[width=0.4\linewidth]{pdf-10648.pdf} \includegraphics[width=0.4\linewidth]{pdf-4641.pdf}\\ \includegraphics[width=0.4\linewidth]{pdf-40429.pdf} \includegraphics[width=0.4\linewidth]{pdf-82077.pdf}\\ \caption{Examples of probability distributions for a number of objects with correct (left) and incorrect (right) final classifications.} \label{f:object_pdfs} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{M31+M33_spt_final-mod-correct_prob_plot.pdf} \caption{Probability distributions for sources classified correctly (blue) and incorrectly (orange), for the validation sample.
We successfully recover the majority of the objects in the validation sample ($\sim83\%$). The dashed blue and orange lines correspond to the mean probability values for the correct (at 0.86) and incorrect (at 0.60) classifications, based on repeated five-fold CV tests (five iterations).} \label{f:pdf_threshold} \end{figure} A common issue with machine-learning applications is choosing the best algorithm for the specific problem, which is actually impossible to determine a priori. Even in the case of different algorithms that provide similar results, it can be challenging to select one of them. However, there is no reason to exclude any. Ensemble methods refer to approaches that combine the predictions from multiple algorithms, similar to combining the opinions of various "experts" in order to reach a decision \citep[e.g.,][]{Re2012}. The motivation of ensemble methods is to reduce the variance and the bias of the models \citep{Mehta2019}. A general grouping consists of bagging, stacking, and boosting. Bagging (bootstrap aggregation) is based on training on different random subsamples, whose predictions are combined either by majority vote (e.g., the most common class) or by averaging the probabilities (RF is the most characteristic example). Stacking (stacked generalization) refers to a model that trains on the predictions of other models. These base models are trained on the training data, and their predictions (sometimes along with the original features) are provided to train the meta-model. In this case, it is better to use methods that rely on different assumptions, so as to minimize the bias inherent to each method. Boosting refers to methods that focus on improving the misclassifications of previous applications. After each iteration, the method will actually bias the training toward the points that are harder to predict. Given the similar results among the algorithms that we used, as well as the fact that they are trained differently and are plausibly sensitive to different characteristic features per class, we were motivated to combine their results to maximize the predictive power and to avoid potential biases. We chose to use a simple approach, with a classifier that averages the output probabilities from all three classifiers. There are two ways to combine the outputs of the models, either through "hard" or "soft" voting. In the former case the prediction is based on the largest sum of votes across all models, while in the latter the class corresponding to the largest summed probability is returned. To set an example with hard voting, if the results from the three models we used were BSG, BSG, and YSG, then the final class would be BSG. However, this voting scheme does not capture all the available information, and soft voting can be more efficient. Given the final summed probability distribution across all classes, it is possible that the final classification may differ from the one that hard voting would return. Additionally, soft voting can resolve cases in which a single class cannot be determined, that is, when each of the three classifiers predicts a different class. With the final probability distribution we can also provide error estimates on the class predictions (and define confidence thresholds).
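A minimal sketch of such a soft-voting combination (assuming the three fitted classifiers expose the same class ordering through their \texttt{classes\_} attribute, as \texttt{scikit-learn} estimators trained on the same labels do):
\begin{verbatim}
# Sketch: equal-weight soft voting over the three fitted classifiers.
import numpy as np

def soft_vote(models, X):
    """Average the per-class probabilities; return the winning labels
    and their associated (maximum) probabilities."""
    proba = np.mean([m.predict_proba(X) for m in models], axis=0)
    idx = np.argmax(proba, axis=1)
    return models[0].classes_[idx], proba[np.arange(len(proba)), idx]

# labels, probs = soft_vote([svc, rf, mlp], X_new)
\end{verbatim}
The same behavior is also provided by \texttt{sklearn.ensemble.VotingClassifier} with \texttt{voting='soft'}; computing the average explicitly, however, keeps the full per-class probability distribution available for the error estimates and confidence thresholds mentioned above.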
\subsubsection{Combined classifier} The simplest approach is to combine the individual probabilities per class using equal weights per algorithm (since the accuracies of the three are similar): \begin{equation} P_{\rm final} = (P_{\rm SVC} \times 1/3) + (P_{\rm RF} \times 1/3) + (P_{\rm MLP} \times 1/3). \end{equation}
In Fig. \ref{f:object_pdfs} we show some example distributions for a few sources with correct and incorrect final classifications. We performed a repeated (five iterations) five-fold CV test to estimate the overall accuracy of the combined approach at $0.83\pm0.02$ (see Table \ref{t:recalls}). The recall values are consistent with the results from the individual classifiers, with the highest success obtained for RSGs ($\sim94\%$) and for BeBRs and YSGs ($\sim80\%$), while LBVs are not recovered. Despite this result, it is still possible to obtain LBV classifications, albeit at lower significance (i.e., probability). However, even a small number of candidates for this class is important for follow-up observations, due to their rarity and their critical role in outburst activity \citep[e.g.,][]{Smith2014}. In Fig. \ref{f:pdf_threshold} we show the distributions of the probabilities of the sources in a validation sample identified correctly (blue) and incorrectly (orange). The blue and orange dashed lines correspond to the mean probability values for the correct (at $0.86\pm0.01$) and incorrect (at $0.60\pm0.03$) classifications. Although the distributions are based on a single evaluation of the classifier on the validation set, the values corresponding to these lines originate from a five-iteration repeated five-fold CV application. \subsection{Testing in other galaxies} \label{s:other_galaxies} \begin{figure} \includegraphics[width=\columnwidth]{other_galaxies-cm.pdf} \caption{Confusion matrix for the 54 sources without missing values in the three galaxies (IC 1613, WLM, and Sextans A). We achieve an overall accuracy of $\sim70\%$, and we notice that the largest confusion occurs between BSGs and YSGs. The overall difference in the accuracy compared to that obtained with the M31 and M33 sample is attributed to the photometric errors and to the effects of metallicity and extinction in these galaxies.} \label{f:other_galaxies-cm} \end{figure} \begin{figure}[hbt!] \includegraphics[width=\columnwidth]{other_galaxies_distributions.pdf} \caption{Probability and band completeness distributions for the sources of the three galaxies (IC 1613, WLM, and Sextans A) with and without missing data. (Top) Probability distributions of the correct (blue) and incorrect (orange) final classifications for the total sample of stars with known spectral types and with measurements in all bands. We achieved a recovery rate of $\sim70\%$. The vertical dashed lines are the same as those in Fig. \ref{f:pdf_threshold}; the solid lines correspond to the peaks of the probability distributions for the current sample. (Middle) Distribution of the band completeness, i.e., the fraction of features without missing values. (Bottom) Probability distributions for all sources, including those without measurements in multiple bands (vertical lines have the same meaning as in the top panel). The success rate of $\sim68\%$ is practically the same as in the top panel, indicating the effectiveness of the iterative imputer for missing-data imputation.} \label{f:other_galaxies} \end{figure} As an independent test we used the collection of sources with known spectral types in the IC 1613, WLM, and Sextans A galaxies (see Sect. \ref{s:data-spec_types}).
In order to take into account all the available information, we resampled the whole M31 and M33 sample and trained all three models on it. The application follows the exact same protocol as the training, except for the resampling approach, which is used only for the training: (i) load the photometric data for the new sources, (ii) perform the necessary data processing to derive the features (color indices), (iii) load the fully trained models for the three classifiers\footnote{Using Python's built-in persistence model, \texttt{pickle}, for saving and loading.}, (iv) apply each of them to obtain the individual (per classifier) results, (v) calculate the total probability distribution, from which we get the final classification result, and (vi) compare the predictions with the original classes. For the last step, we converted the original spectral types to the classes we formed during training. Out of the 72 sources we excluded nine with uncertain classifications: four carbon stars, two identified simply as "emission" stars, one with a "composite" spectrum, one classified as a GK star, and one M foreground star. In Fig. \ref{f:other_galaxies-cm} we show the confusion matrix for the sample of the test galaxies, where we have additionally (for this plot) excluded another nine sources with missing values (see the next section). By doing this we can directly compare the results with those obtained from the training galaxies, M31 and M33. We successfully recovered $\sim70\%$, which is less than what we achieved for the training galaxies ($\sim83\%$). We note that, due to the very small sample size (54 sources), even a couple of misclassifications can change the overall accuracy by a few percent. Nevertheless, a difference is still present. Evidently, the largest disagreement arises from the prediction of most BSGs as YSGs. These two classes do not have a strict boundary in the HRD, making their classification at larger distances even more challenging. Moreover, the sources in these galaxies are at the faint end of the magnitude distribution for the \textit{Spitzer} bands, which may influence the accuracy of their photometry. While M31 has a metallicity above solar and M33 a gradient from solar to subsolar \citep{Pena2019}, the three test galaxies are of lower metallicity \citep{Boyer2015}. However, it is not certain how this influences the classification performance. Lower metallicity affects both the extinction and the evolution of the stars, which could lead to shifts in the intrinsic color distributions. Currently, given the lack of photometric data and source numbers for lower-metallicity galaxies, it is impossible to examine the effect of metallicity thoroughly. In the upper panel of Fig. \ref{f:other_galaxies} we show the distribution of the probabilities of the correct (blue) and incorrect (orange) classifications. The dashed lines represent the same limits as defined in Sect. \ref{s:combining_models} for the training sample (at 0.86 and 0.60, respectively), while the solid ones correspond to the mean values defined by the current sample, at 0.67 and 0.51 for the correct and incorrect classifications, respectively. These shifts of the peak probabilities, especially for the correct classifications, show the increased difficulty of the classifier in achieving a confident prediction. \subsection{Missing data imputation} \label{s:dat_imputation} \begin{figure}[hbt!] \includegraphics[width=\columnwidth]{missing_data_test.pdf}\\ \includegraphics[width=\columnwidth]{missing_data_comp.pdf} \caption{Accuracy changes with missing features.
(Top) Comparison of the drop in accuracy from a typical ($30\%$ split) validation set without missing data to one where missing data have been generated by randomly selecting two features per object and replacing them with the corresponding mean values (purple circles) or with the values imputed by the iterative imputer (green pentagons). The mean drop obtained with the imputer is less than 0.1, almost three times smaller than the mean drop obtained with mean values. (Bottom) The iterative imputer is more capable of handling an increased number of missing features, with a limit at three (out of five available in total), where the loss in accuracy is less than 20\%.} \label{f:missing_data} \end{figure} In the previous section we excluded nine sources that contain missing values, meaning they did not have measurements in one or more bands. This is important for two reasons: first, in order for the methods to work, they need to be fed a value for each feature; second, the majority of the sources in the catalogs of unclassified sources (to which this classifier will be applied) do not possess measurements in all bands. To solve this, we performed a data imputation process in two ways. One typical approach is to replace missing values with a median/mean value. For this, we first derived the median value (to remove extremes) of each feature distribution per class, from all available sources in the training sample of M31 and M33. Then we took the mean of the feature's values over all classes. Another approach is to use iterative imputation, in which each feature is modeled as a function of the others, an approach originating from multivariate imputation by chained equations (MICE; \citealt{Buuren2011}). This is a plausible assumption in our case, since we are dealing with spectral features that are indeed covariant to some degree (spectra do not fluctuate suddenly across neighboring bands unless a peculiar emission or absorption feature is present). It is hence plausible to impute a missing band value given the others. The imputation of each feature is done sequentially, which allows previous values to be considered as part of the model when predicting the subsequent features. The process is repeated (typically ten times), which allows the estimates of the missing values to be improved even further. We implemented this by using \texttt{sklearn.impute.IterativeImputer()} (with default parameters). To further investigate the influence of data with missing values, we ran a test by simulating sets with missing values from the original M31 and M33 sample. As usual, we split the sample into training (70\%) and validation (30\%) samples. After resampling the training set, it was used to train the three classifiers, and an initial estimate of the accuracy was obtained with the validation sample. Then, according to how many features we can afford to "miss," we randomly selected features of each object in the validation sample to be treated as missing. We either replaced these features with the corresponding mean values or we applied the iterative imputer. Then the accuracy was obtained on this modified validation set. In the upper panel of Fig. \ref{f:missing_data} we show an example of the difference between the initial (unmodified) validation set and the ones with missing values, obtained by randomly replacing two features (per object) and imputing data with the iterative imputer (green pentagons) or with mean values (purple circles). The mean drop in accuracy (over ten iterations) is less than 0.1 for the imputer (green dashed line) but almost 0.3 for the mean values.
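Both imputation variants are readily available in \texttt{scikit-learn}; a minimal sketch (note that the \texttt{SimpleImputer} shown here uses a plain global mean, whereas in practice we first derived per-class medians, as described above):
\begin{verbatim}
# Sketch: mean-value replacement vs. iterative (MICE-style)
# imputation; missing entries in the feature matrix are NaN.
from sklearn.experimental import enable_iterative_imputer  # noqa
from sklearn.impute import IterativeImputer, SimpleImputer

mean_imp = SimpleImputer(strategy='mean').fit(X_train)
iter_imp = IterativeImputer().fit(X_train)  # defaults; 10 rounds

X_new_filled = iter_imp.transform(X_new)  # impute before classifying
\end{verbatim}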
In the bottom panel of Fig. \ref{f:missing_data} we show the drop in accuracy with an increasing number of missing features. Obviously, the imputer performs more efficiently and does more than simply replacing missing features with mean values, and it can work with up to three missing features (out of five available in total). We also quantified the fraction of missing values by defining a "band completeness" term, simply as $1 - N_{\rm bands\_without\_measurement} / N_{\rm total\_bands}$. In the middle panel of Fig. \ref{f:other_galaxies} we show the distribution of this completeness for correctly and incorrectly classified sources. Given that about half of the nine sources with missing values have a band completeness of $0.2$ (meaning only one feature is present) and the others are missing two to three features, the success rate of five out of nine of these sources classified correctly ($\sim55\%$) matches what we would approximately expect from the bottom panel of Fig. \ref{f:missing_data}. In the bottom panel of Fig. \ref{f:other_galaxies} we now show the probability distribution for all sources. The score is $68\%$, which is practically the same as the accuracy obtained for the sample without any missing values (at 70\%). The dashed and solid lines have the same meaning as previously, and there is no significant change (at 0.65 and 0.59 for the correct and incorrect classifications, respectively). In this particular data set, the presence of a small number of sources (9 out of 63; $\sim14\%$) with missing values does not affect the performance of the classifier. \section{Discussion} \label{s:discussion} In the following sections we discuss our results with respect to the sample sizes, the label availability, and the feature sensitivity per class of our classifier. \subsection{Exploring sample volumes and class mixing} \begin{figure*} \includegraphics[width=\textwidth]{sample_volume-recall.pdf} \caption{Recall vs. the fraction of the training sample used per class. We notice a significant improvement for BeBRs and YSGs with increased training samples. When the sample sizes are already adequate, the maximum possible value is achieved faster (e.g., for RSGs and BSGs). The GAL and WR classes show an increase, while the LBV sample is too small to produce meaningful results.} \label{f:sample_volume-recall} \end{figure*} One of the major concerns when building machine-learning applications is the representativeness of the samples used. To explore this we performed iterative runs for each algorithm, adjusting the size of the training sample used. At each iteration, after the initial split into training (70\%) and validation (30\%) sets, we kept a fraction of the training set. After randomly selecting the sources per class, we performed the resampling in order to create the synthetic data. However, we needed at least two sources per class for SMOTE to run (for this process we adjusted \texttt{k\_neighbors=1}). Therefore, we started from $10\%$ and went up to the complete training sample. Especially for the LBVs, we added an additional source by hand for the first two fractions (after 0.3, enough sources were selected automatically). In Fig. \ref{f:sample_volume-recall} we plot the recall per class for each method (for completeness, in Fig. \ref{f:sample_volume} we also present the precision and the F1 score). We see an expected overall improvement with increasing sample size. This means that the larger the initial sample, the more representative it is of the parent population.
The resampling method can interpolate but does not extrapolate, which means that even though we are creating new sources, they originate solely from the information available in the feature space. For example, \cite{Kyritsis2022} experimented with three different variants of RF and found that the results were dominated by the internal scatter of the features. Therefore, any limitations of the original sample are transferred to the synthetic data. More information results in a better representation of the features by the classifier (leading to more accurate predictions). \subsubsection{BSGs and RSGs} The BSGs and RSGs are the most populous classes, and they achieve a high accuracy much faster (except for BSGs in the SVC). The RSG class also performs well in the work of \cite{Dorn-Wallenstein2021}, at $\sim96\%$. In their refined label scheme they split (the equivalent of our) BSG sources into more classes, which results in a poorer performance. \subsubsection{BeBRs} The careful reader will notice that the BeBR sample size is similar to that of the LBVs and smaller than the WR one. Despite that, we are able to obtain remarkably good results due to the specific nature of these objects. The B[e] phenomenon (the presence of forbidden lines in spectra) actually includes a wide range of evolutionary stages and masses, from pre-main-sequence stars to evolved ones, symbiotics, and planetary nebulae \citep{Lamers1998}. The subgroup of evolved stars is perhaps the most homogeneous group: they are very luminous $(\log(L/L_\odot) > 6.0)$, characterized by strong Balmer lines in emission (usually with P-Cygni profiles) and narrow low-excitation lines (such as FeII, [FeII], and [OI]), and they display chemically processed material (such as TiO bands and $^{13}\rm CO$ enrichment) indicative of their evolved nature. Moreover, these spectral characteristics (arising from dense, dusty disks of unknown origin) are generally long-lived \citep{Maravelias2018}, and these sources tend to be photometrically stable \citep{Lamers1998}. These characteristics, along with the strong IR excess due to their circumstellar dust (see \citealt{Kraus2019a}, but also \citealt{Bonanos2009, Bonanos2010}), make them a small but relatively robust class. Interestingly, \cite{Dorn-Wallenstein2021} recover BeBRs at the same accuracy as our approach. \subsubsection{LBVs} The LBV sample only shows a clear gain with an increased training sample for SVC, which is the most efficient method for recovering this class. When in quiescence, LBVs share similar observables (e.g., colors and spectral features) with BSGs, WRs, and BeBRs \citep[e.g.,][]{Weis2020, Smith2014}. Therefore, it is quite challenging to separate them, and the only way to certify the nature of these candidate LBVs is when they actually enter an active outburst phase. During this phase, the released material obstructs the central part of the star, changing its spectral appearance from an O/B type to an A/F type (which in turn would mix them with the YSG sources), while they can significantly brighten in the optical ($>2\,\rm{mag}$, but at constant bolometric luminosity; \citealt{Clark2005}). In order to form the most secure LBV sample, we excluded all candidate LBVs (some of which more closely resemble BeBRs; \citealt{Kraus2019a}), and we were left with a very small sample of six stars. The LBVs display an additional photometric variability of at least a few tenths of a magnitude \citep{Clark2005}.
This information could be included as a supplementary feature through a variability index (such as $\chi^2$, the median absolute deviation, etc.; \citealt{Sokolovsky2017}). However, this is not currently possible, as the data sets we are using are very limited in epoch coverage (for example, at the very best only a few points are available per band in the Pan-STARRS survey). Furthermore, the optical (Pan-STARRS) and IR (\textit{Spitzer}) data for the same source were obtained at different epochs, which may result in the source's flux being sampled in different modes. This effect, along with their small sample size (far from complete; \citealt{Weis2020}), may well explain the limited prediction capability of our method. On the other hand, \cite{Dorn-Wallenstein2021} took variability into account (using \textit{WISE} light curves) and report a full recovery of LBVs, which might be due to overfitting; because of the small size of their sample (two sources), they did not discuss it any further. \subsubsection{WRs} In the single-star scenario, LBVs are a transitional phase through which O-type stars pass before their outer layers are stripped by the intense mass loss and/or massive eruptions \citep{Smith2014}. Binaries are another channel through which efficient stripping can lead to WRs \citep{Shenar2020}. Depending on the metallicity and their rotation, WRs may also form directly from main-sequence stars \citep{Meynet2005}. As their evolution is highly uncertain, they can originate from either LBV or BSG stars. Stellar evolution is a continuous process that does not display strict boundaries between those groups in the HRD. Therefore, their features (color indices) can be mixed. They are bright sources, which has enabled the detection of almost their complete population (see \citealt{Neugent2019a} for a review), but the actual numbers are limited due to their rarity. Their small sample size -- which actually includes a number of different subtypes of WRs, such as the nitrogen- or carbon-rich ones, as well as some known binaries with O-type companions -- has an impact on our prediction capability, but it is better than for LBVs. We also note that their recall benefits from the increase in the training sample for SVC and RF, but not much for MLP. \cite{Rosslowe2018} have shown that WRs and LBVs can be better distinguished using the near-IR (JHK bands), a region that is unfortunately excluded from our feature list because of the lack of extensive and consistent surveys for our galaxies (although 2MASS exists, it is not deep enough for our more distant galaxies). In contrast, \cite{Dorn-Wallenstein2021} include these bands, which may explain their improved accuracy for WRs and (possibly) for LBVs. \subsubsection{YSGs} The YSG class contains all sources that are found in between the BSG and the RSG classes. In general, this is a relatively short-lived phase, as the star evolves off the main sequence or evolves back to hotter phases after the RSG phase (e.g., \citealt{Kourniotis2018,Gordon2019}; excluding the contamination by foreground sources, which we minimized by preprocessing with the \textit{Gaia} properties but definitely did not eliminate). However, it is hard (if not impossible) to set strict boundaries in the CMDs between the BSG and the YSG populations, as well as between the YSG and the RSG ones. \cite{Yang2019} present a ranking scheme that is based on the presence of each source in a number of CMDs (cf. their fig. 16).
With our current work we are able to remove this complexity, as we take into account the information from multiple CMDs (through the color indices) at once. We are able to correctly predict the majority of this sample at $\sim73\%$, in contrast to the $\sim27\%$ of \cite{Dorn-Wallenstein2021}. The major factor in this case is the use of more optical colors, which helps in distinguishing YSGs from BSGs more effectively, while \cite{Dorn-Wallenstein2021} work mainly with IR colors. \subsection{Label uncertainties} Uncertainty in the labels (or classes) can come in two flavors, either because of classification errors (e.g., human bias, instrument limitations) or due to the natural mixing of these sources. After all, there are uncertainties in the evolution of massive stars after the main sequence, as we still lack robust knowledge with respect to the transition of these sources through the various phases. However, supervised machine-learning applications typically assume that the labels represent the absolute truth; when they do not, this can lead to inaccurate predictions. \cite{Dorn-Wallenstein2021} comment specifically on this, as with their refined classes (containing 12 classes) they achieve an accuracy of $\sim53\%$ for the SVC, because their labels for Galactic sources are "derived inhomogeneously, and many are from spectroscopy that is now more than 50 years old." In our case, we have obtained a more homogeneous sample, since we are working with specific galaxies (distance uncertainties are minimized) and the results originate from consistent surveys and modern instruments and facilities. In other words, our labels are more secure, which helps us achieve a better result. A way to tackle this further would be to properly handle label uncertainties during the training process itself, which, however, is not a trivial task. \subsection{Feature sensitivity} \begin{figure} \includegraphics[width=\columnwidth]{features-permutation.pdf} \caption{Permutation feature importance (i.e., the difference in accuracy between the original data set and the shuffled one) per feature for each classifier independently and for the combined one. The features $r-i$, $y-[3.6]$, and $[3.6]-[4.5]$ consistently appear to be the most important.} \label{f:permutative_features} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{features-remove_all.pdf}\\ \includegraphics[width=\columnwidth]{features-remove_classes.pdf} \caption{Feature importance per removed feature. Each feature is removed in turn and the whole combined model is retrained. The first point corresponds to the full feature set. (Top) Considering the overall accuracy for the combined classifier, the most significant features are $r-i$ and $y-[3.6]$, consistent with the permutation importance test. (Bottom) Recall per class. Different features become more significant for BeBRs, GALs, WRs, YSGs, and LBVs, while RSGs and BSGs do not show important changes (see text for more).} \label{f:featimp_remove} \end{figure} During the feature selection (Sect. \ref{s:feature_selection}) we disregarded bands that would significantly decrease our sample and/or introduce noise ($J_{\rm{UK}}$, \textit{Gaia}, \textit{Spitzer} [5.8], [8.0], and [24], Pan-STARRS $g$). The \textit{Spitzer} [3.6] and [4.5] bands are present for all of our sources (by construction of our catalogs), while the availability of the optical ones (Pan-STARRS $r, i, z, y$) varies depending on the source.
In order not to lose any more information, we included all optical bands (except for $g$) and performed missing data imputation whenever necessary. Naturally, the questions then follow of how sensitive the classifier is to these features and which features are more important per class. We first investigated how the overall performance of the classifier depends on the features. For this we performed a permutation feature importance test (\texttt{sklearn.inspection.permutation\_importance()}). By shuffling the values of a specific feature we see how much it influences the final result. In this case the metric used is the difference between the accuracy of the original data set and that of the shuffled one\footnote{The process is performed on the training sample, so we included all M31 and M33 sources in it and resampled accordingly.}. In the case of a nonsignificant feature this change will be small, while the opposite holds for an important feature. In Fig. \ref{f:permutative_features} we show the results per classifier as well as for their combined model (``all''). We notice that the most significant features are $r-i$, $y-[3.6]$, and $[3.6]-[4.5]$, while the least important are $i-z$ and $z-y$. This is actually not a surprise, since the former features are the ones for which we have the largest separation among the averaged lines of the classes (see Fig. \ref{f:SEDs-all}). There are small differences between the individual algorithms, but they are relatively consistent: they show similar sensitivity to the optical colors. The only exception is RF for $y-[3.6]$, which seems less sensitive than the others. One key issue with this approach is that it is more accurate in the case of uncorrelated data. In our case there is some correlation, both because the consecutive color indices share their neighboring bands and because the fluxes at each band are not totally independent of the others. An alternative, more robust way is to test the accuracy of our model by dropping one feature at a time. The general drawback of this approach is the computational time, as the model needs to be retrained from the beginning (including resampling) at each iteration, in contrast to the previous test, where only the values of a feature change and the fitted model is reapplied. Fortunately, our training sample and modeling are neither prohibitively large nor complicated. Thus, using the combined classifier, we iteratively removed one feature at a time. Then we calculated the metric of each iteration with respect to the initial feature set. In Fig. \ref{f:featimp_remove} (upper panel) we plot this accuracy, where $r-i$ and $y-[3.6]$ show (relatively) large deviations and seem to be the most important features. This is in agreement with what we found with the feature permutation approach. Interestingly, $r-i$ is the "bluest" feature and seems to be important for the overall classification (the optical part is excluded from the work of \citealt{Dorn-Wallenstein2021}).
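As a rough illustration of these two tests (not our exact pipeline), consider the following sketch with synthetic stand-in data; the real runs use the resampled training sample and the combined classifier:
\begin{verbatim}
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)
features = ["r-i", "i-z", "z-y", "y-[3.6]", "[3.6]-[4.5]"]

# (1) Permutation importance: shuffle one feature at a time and record
# the mean drop in accuracy of an already-fitted model.
clf = RandomForestClassifier(random_state=0).fit(X, y)
perm = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, perm.importances_mean):
    print(f"permutation importance of {name}: {imp:.3f}")

# (2) Drop-one-feature importance: retrain from scratch without each
# feature and compare with the accuracy of the full feature set
# (scored here on the same set for brevity).
full_acc = clf.score(X, y)
for k, name in enumerate(features):
    X_k = np.delete(X, k, axis=1)
    acc = RandomForestClassifier(random_state=0).fit(X_k, y).score(X_k, y)
    print(f"accuracy change without {name}: {acc - full_acc:+.3f}")
\end{verbatim}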
When examining the results for the recall per class (Fig. \ref{f:featimp_remove}, lower panel), we see that different classes are sensitive to different features. For BeBRs, $i-z$ and $y-[3.6]$ seem to be the most important, although smaller offsets are visible for the rest of the features as well (the mean BeBR curve peaks at $y-[3.6]$; see Fig. \ref{f:SEDs-all}). This can be attributed to their overall redder colors, caused by the dusty environment around these objects. The GALs are sensitive to both $r-i$, the feature closest to the blue optical part, and $y-[3.6]$, partly due to the PAH component (GALs display the second strongest peak in Fig. \ref{f:SEDs-all}). Although not as significant, $i-z$ seems to favor the WR classification. The WR class is a collection of different flavors of classical (evolved) WRs, including binary systems. The YSGs are more sensitive to $y-[3.6]$ and a bit less to $i-z$, similar to BeBRs, as they also tend to have dusty environments. The BSGs and RSGs are the most populated classes, and they do not show any significant dependence. This might be because, although distinct from the other classes, they contain a wider range of objects that possibly masks significant differences between the bands (see \citealt{Bonanos2009, Bonanos2010}). For example, we included in the BSG class sources with emission lines, such as Be stars, which display redder colors. For LBVs, $i-z$ seems important, but due to their small population the error is quite significant. Also, the redder features lie at zero, which may reflect the inability of our model to predict these sources with higher confidence. If we were to exclude any of these features, we would get poorer results for some of the classes. The inclusion of more colors would benefit the performance of our classifier, as it would help with the sampling of the spectral energy distributions of the sources (going to the blue optical part will not help the redder sources, but it would be valuable for the hotter classes). \section{Summary and conclusions} \label{s:summary} In this work we present the application of machine-learning algorithms to build an ensemble photometric classifier for the classification of massive stars in nearby galaxies. We compiled a \textit{Gaia}-cleaned sample of 932 M31 and M33 sources, and we grouped their spectral types into seven classes: BSGs, YSGs, RSGs, B[e]SGs, LBVs, WRs, and background sources (outliers). To address the imbalance of the sample, we employed a synthetic data approach with which we managed to augment the underrepresented classes, although this is always limited by the feature space that the initial sources sample. We used as features the consecutive color indices from the \textit{Spitzer} [3.6] and [4.5] and Pan-STARRS $r, i, z,$ and $y$ bands (not corrected for extinction). We implemented three well-known supervised machine-learning algorithms, SVC, RF, and MLP, to develop our classifier. The application of each of the algorithms yields fairly good overall results (recovery rates): BSGs, GALs, and YSGs from $\sim60\%$ to $\sim80\%$, BeBRs at $\sim73-80\%$, and WRs at $\sim45\%$, with the best results obtained for the RSGs ($\sim94\%$) and the worst for LBVs ($\sim28\%$, for SVC only). These results are on par with or better than the results of \cite{Dorn-Wallenstein2021}, who worked with a much less homogeneous (with respect to the labels) but more populated Galactic sample. Given the similar performance of the three methods, and to maximize our prediction capability, we combined all outputs into a single probability distribution. This final meta-classifier achieved a similar overall (weighted balanced) accuracy ($\sim83\%$) and similarly good results per class. Examining the impact of the training volume size, we noticed that, as expected, the sample size plays a critical role in the accurate prediction of a class. When many sources of a class are available (e.g., RSGs or BSGs), the classifier works efficiently.
In less populated classes (such as BeBRs and WRs), the inclusion of more objects increases the information provided to the classifier and improves the prediction ability. However, we are hampered by low-number statistics, as these classes correspond to rare and/or short-lived phases. Additional information can be retrieved by using more features. We investigated the feature importance and found that, for the current data set, $r-i$ and $y-[3.6]$ are the most important features, although different classes are sensitive to different features. Thus, the inclusion of more color indices (i.e., observations in different bands) could improve the separation of the classes. To test our classifier on an independent sample, we used data collected for IC 1613, WLM, and Sextans A sources, some of which ($\sim14\%$) had missing values. We performed data imputation by replacing the features' values using means and an iterative imputer. Although the missing values do not significantly affect the results for this particular data set, further tests showed that the iterative imputer can efficiently handle data sets with up to three missing features (out of the total five available). The final obtained accuracy is $\sim70\%$, lower than what we achieved for M31 and M33. The discrepancy can partly be attributed to photometric issues and to the total effect of metallicity, which can modify both the intrinsic colors of the sources and the extinction in the different galactic environments. Despite this, the result of this application is promising. In a follow-up paper we will present in detail the application of our classifier to previously unclassified sources in a large number of nearby galaxies. Currently, the metallicity dependence is impossible to address. For this we need larger samples of well-characterized sources in different metallicity environments. Although this is challenging because of the observing time required at large facilities, the ASSESS team is actively working toward this goal. A number of spectroscopic observing campaigns are completed or ongoing, which will provide the ultimate testbed of our classifier's actual performance, along with opportunities for improvement.\\ \small \noindent\textit{Acknowledgements} We thank the anonymous referee for their constructive comments and suggestions that helped us improve this work. GM, AZB, FT, SdW, and MY acknowledge funding support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 772086). GM would like to thank Alkis Simitsis, Thodoris Bitsakis, Elias Kyritsis, Andreas Zezas, Jeff Andrews, and Konstantinos Paliouras for many fruitful discussions on machine learning and beyond. \textit{Facilities:} This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. The UHS is a partnership between the UK STFC, The University of Hawaii, The University of Arizona, Lockheed Martin and NASA. \textit{Software:} This research made use of Numpy \citep{numpy2020}, matplotlib \citep{matplotlib}, sklearn \citep{sklearn}, Jupyter Notebooks \citep{jupyter}, and Mlxtend \citep{mlxtend}. This research made use of TOPCAT, an interactive graphical viewer and editor for tabular data \citep{topcat}. We wish to thank the "2019 Summer School for Astrostatistics in Crete"\footnote{\url{http://astro.physics.uoc.gr/Conferences/Astrostatistics_School_Crete_2019/}} for providing training on the statistical methods adopted in this work. We also thank Jeff Andrews for organizing the SMAC (Statistical methods for Astrophysics in Crete) seminar\footnote{\url{https://githubhelp.com/astroJeff/SMAC}}. We also acknowledge useful information provided by Jason Brownlee on his site Machine Learning Mastery\footnote{\url{https://machinelearningmastery.com}}. This research has made use of NASA's Astrophysics Data System. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the SVO Filter Profile Service (\url{http://svo2.cab.inta-csic.es/theory/fps/}) supported by the Spanish MINECO through grant AYA2017-84089. \bibliographystyle{aa} \section{Introduction} Although rare, massive stars ($M_*>8-10M_\odot$) play a crucial role in multiple astrophysical domains in the Universe. Throughout their lives they continuously lose mass via strong stellar winds that transfer energy and momentum to the interstellar medium. As the main engines of nucleosynthesis, they produce a series of elements and shed chemically processed material as they evolve through various phases of intense mass loss. And they do not simply die: they explode as spectacular supernovae, significantly enriching the galactic environment of their host galaxies. Their end products, neutron stars and black holes, offer the opportunity to study extreme physics (in terms of gravity and temperature) as well as gamma-ray bursts and gravitational wave sources. As they are very luminous, they can be observed in more distant galaxies, which makes them the ideal tool for understanding stellar evolution across cosmological time, especially for interpreting observations of the first galaxies (such as those to be obtained with the \textit{James Webb Space Telescope}).
While the role of different stellar populations in galaxy evolution has been thoroughly investigated in the literature \citep{Bruzual2003, Maraston2005}, a key ingredient of the models, the evolution of massive stars beyond the main sequence, is still uncertain \citep{Martins2013, Peters2013}. Apart from the initial mass, the main factors that determine the evolution and final stages of a single massive star are metallicity, stellar rotation, and mass loss \citep{Ekstrom2012, Georgy2013, Smith2014}. Additionally, the presence of a companion, which is common among massive stars, with binary fractions of $\sim50-70\%$ (\citealt{Sana2012, Sana2013, Dunstall2015}), can significantly alter the evolution of a star through strong interactions \citep{deMink2014,Eldridge2017}. Although all these factors critically determine the future evolution and the final outcome of the star, they are, in many cases, not well constrained. In particular, mass loss is of paramount importance, as it determines not only the stellar evolution but also the enrichment and the formation of the immediate circumstellar environment (for a review, see \citealt{Smith2014} and references therein). Especially in the case of single stars, their strong radiation-driven winds during the main-sequence phase remove material continuously, but not necessarily in a homogeneous way, due to clumping \citep{Owocki1999}. On top of that, there are various transition phases in the stellar evolution of massive stars during which they experience episodic activity and outbursts, such as the phases of Wolf-Rayet stars (WRs), Luminous Blue Variables (LBVs), Blue supergiants (BSGs), B[e] supergiants (B[e]SGs), Red supergiants (RSGs), and Yellow supergiants (YSGs). This contributes to the formation of complex structures, such as shells and bipolar nebulae in WRs and LBVs (\citealt{Gvaramadze2010, Wachter2010}) and disks in B[e]SGs (\citealt{Maravelias2018}). But how important the episodic mass loss is, how it depends on the metallicity (in different galaxies), and what links exist between the different evolutionary phases are still open questions. To address these questions, the European Research Council-funded project ASSESS\footnote{\url{https://assess.astro.noa.gr/}} (\textit{"Episodic Mass Loss in Evolved Massive stars: Key to Understanding the Explosive Early Universe"}) aims to determine the role of episodic mass loss by: (i) assembling a large sample of evolved massive stars in a selected number of nearby galaxies at a range of metallicities through multiwavelength photometry, (ii) performing follow-up spectroscopic observations on candidates to validate their nature and extract stellar parameters, and (iii) testing the observations against the assumptions and predictions of the stellar evolution models. In this paper we present our approach for the first step, which is to develop an automated classifier based on multiwavelength photometry. One major complication for this work is the lack of a sufficiently large number of massive stars with known spectral types. Some of these types are rare, which makes the identification of new sources in nearby galaxies even more difficult. Moreover, spectroscopic observations at these distances are challenging due to the time and large telescopes required. On the other hand, photometric observations can provide information for thousands of stars, but at the cost of a much lower (spectral) resolution, leading to a coarser spectral-type classification (e.g., \citealt{Massey2006, Bonanos2009, Bonanos2010, Yang2019}).
Using the Hertzsprung–Russell diagram (HRD) and color-color diagrams, one needs a detailed and careful approach to properly determine the boundaries between the different populations and identify new objects, a process that is not free from contaminants (e.g., \citealt{Yang2019}). To circumvent this problem, we can use a data-driven approach. In this case, data can be fed to more sophisticated algorithms that are capable of "learning" from the data and finding the mathematical relations that best separate the different classes. These machine-learning methods have been extremely successful in various problems in astronomy (see Sect. \ref{s:algorithms}). Still, applications of these techniques tailored to the photometric classification of massive stars are, to the best of our knowledge, scarce if not almost nonexistent. \cite{Morello2018} applied the \textit{k}-nearest neighbors method to IR colors to select WRs from other classes, while \cite{Dorn-Wallenstein2021} explored other techniques to obtain a wider classification based on \textit{Gaia} and IR colors for a large number of Galactic objects. This work provides an additional tool, focusing on massive stars in nearby galaxies. It presents the development of a photometric classifier, which will be used in a future work to provide classifications for thousands of previously unclassified sources\footnote{Code and other notes available at: \url{https://github.com/gmaravel/pc4masstars}}. In Sect. \ref{s:data} we present the construction of our training sample (spectral types, foreground removal, and photometric data). In Sect. \ref{s:ml} we provide a quick summary of the methods used and describe the class and feature selection, as well as the implementation and the optimization of the algorithms. In Sect. \ref{s:results} we show the performance of our classifier for the M31 and M33 galaxies (on which it was trained) and its application to an independent set of galaxies (IC 1613, WLM, and Sextans A). In Sect. \ref{s:discussion} we discuss the necessity of a good training sample and labels, as well as the feature sensitivity. Finally, in Sect. \ref{s:summary} we summarize and conclude our work. \section{Building the training sample} \label{s:data} In the following section we describe the steps we followed to create our training sample, starting from the available IR photometric catalogs, removing foreground sources using \textit{Gaia} astrometric information, and collecting spectral types from the literature. \begin{figure*}[hbt!] \centering \includegraphics[width=\textwidth]{M31-plots.png}\\ \includegraphics[width=\textwidth]{M31-Gaia-HRD.png} \caption{ Using \textit{Gaia} to identify and remove foreground sources. (A) Field of view of \textit{Gaia} sources (black dots) for M31. The big green ellipse marks the boundary we defined for the M31 galaxy, while the smaller green ellipses define M110 and M32 (inside the M31 ellipse), which are excluded. The blue dots highlight the sources in M31 with known spectral classification. (B) Foreground region, excluding the sources inside M110. (C) Distribution of the proper motion over its error in Dec, for all \textit{Gaia} sources in the foreground region, fitted with a spline function. (D) Distribution of the proper motion over its error in Dec (solid line), for all sources along the line of sight of M31, which includes both foreground and galactic (M31) sources.
We fitted this with a scaled spline, to account for the number of foreground sources expected inside M31 (dashed line), and a Gaussian function (dotted line). The vertical dashed lines correspond to the $3\sigma$ threshold of the Gaussian. Any source with values outside this region is flagged as a potential foreground source. (E) \textit{Gaia} CMD of all sources identified as galactic (red points) and foreground (gray). The majority of the foreground sources lie on the yellow branch of the CMD, which is exactly the position at which we expect the largest fraction of the contamination.} \label{f:gaia_process} \end{figure*} \subsection{Surveys used} \label{s:surveys} Infrared bands are ideal probes for distinguishing massive stars, in particular those with dusty environments \citep{Bonanos2009, Bonanos2010}. The use of IR colors is a successful method for target selection, as demonstrated by \citet{Britavskiy2014} and \citet{Britavskiy2015}. We based our catalog composition on mid-IR photometry ($3.6\, \mu m$, $4.5\, \mu m$, $5.8\, \mu m$, $8.0\, \mu m$, $24\, \mu m$), using pre-compiled point-source catalogs from the \textit{Spitzer} Space Telescope \citep{Khan2015, Khan2017, Williams2016}, which have only recently become publicly available. This allows us to use positions derived from a single instrument, a necessity for cross-matching, since the spectral typing comes from various works and instruments. The cross-match radius applied in all cases was 1", since even this radius corresponds to a significant physical separation, which grows with the distance of the galaxy. Additionally, we only kept sources with single matches, as it is impossible to choose the correct match to the \textit{Spitzer} source when two or more candidates exist within the search radius (this accounts for about 2-3\% of all sources in M31 and M33). Although the inclusion of near-IR data would help to better sample the spectral energy distributions of our sources, this is currently impossible given the shallowness of the Two Micron All-Sky Survey (2MASS) for our target galaxies and the -- unfortunate -- lack of any other public all-sky near-IR survey. Some data (for a particular band only; $J_{\rm{UK}}$) were collected from the United Kingdom Infra-Red Telescope (UKIRT) Hemisphere Survey\footnote{\url{http://wsa.roe.ac.uk/uhsDR1.html}} \citep{Dye2018}. The data set was supplemented with optical photometry ($g,r,i,z,y$) obtained from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; \citealt{Chambers2016}), using sources with ${\rm \texttt{nDetections}}\geq2$ to exclude spurious detections\footnote{Although DR2 became available after the compilation of our catalogs, it contains information from the individual epochs. DR1 provides the object detections and their corresponding photometry from the stacked images, which we opted to use.}. We also collected photometry from \textit{Gaia} Data Release 2 (DR2) (G, G$_{\rm{BP}}$, G$_{\rm{RP}}$; \citealt{Gaia2016, Gaia2018b}). We investigated all other available surveys in the optical and IR, but the aforementioned catalogs provided the most numerous and consistent sample with good astrometry for our target galaxies (M31, M33, IC 1613, WLM, and Sextans A). Significant populations of massive stars are well known in the Magellanic Clouds and the Milky Way, but there are issues that prohibited us from using them.
The Clouds are not covered by the Pan-STARRS survey, which means that photometry from other surveys would have to be used, making the whole sample inhomogeneous (with all the possible systematics introduced by the different instrumentation, data reductions, etc.). Although the Milky Way is covered by both the Pan-STARRS and \textit{Spitzer} surveys, there are hardly any data available for the most interesting sources, such as B[e]SGs, WRs, and LBVs, through the \textit{Spitzer} Enhanced Imaging Products (which focus on the quality of the products and not on completeness). Therefore, we limited ourselves to the M31 and M33 galaxies when building our training sample. \subsection{Removing foreground stars} The source lists compiled from the photometric surveys described in the previous section contain mostly genuine members of the corresponding galaxies. It is possible, though, that foreground sources may still contaminate these lists. To optimize our selection, we queried the \textit{Gaia} DR2 catalog \citep{Gaia2016, Gaia2018b}. Through the statistical handling of the astrometric data we were able to identify and remove the most probable foreground sources along the line of sight of our galaxies. We first defined a sufficiently large box around each galaxy: $3.5\,{\rm deg} \times 3.5\,{\rm deg} $ for M31 and $ 1.5\, {\rm deg} \times 1.5\, {\rm deg} $ for M33, which yielded 145837 and 34662 sources, respectively. From these we first excluded all sources with nonexistent or poorly defined proper motions (${\rm pmra\_error} \geq 3.0\, {\rm mas\,yr^{-1}}$, $ {\rm pmdec\_error} \geq 3.0\, {\rm mas\,yr^{-1}}$) or parallax (${\rm parallax\_error} \geq 1.5\,{\rm mas}$), sources with large astrometric excess noise ($ {\rm astrometric\_excess\_noise} \geq 1.0$; following the cleaning suggestions by \citealt{Lindegren2018}), or sources fainter than our limit set in the optical (${\rm phot\_g\_mean\_mag} \geq 20.5$). These quality cuts left us with 78375 and 26553 sources in M31 and M33, respectively. The boundary for each galaxy was determined as the ellipse at which the stellar density dropped to approximately the density of the background. This boundary was also visually inspected to ensure that it covers the main body (disk) of each galaxy, where our targets are expected to be located (and to exclude contaminating regions inside and outside the galaxy, namely the M32 and M110 galaxies for M31; see Fig. \ref{f:gaia_process}, Panel A; for M33 see Fig. \ref{f:gaia_process-M33}). Therefore, we could securely assign the remaining stars as foreground objects (see Fig. \ref{f:gaia_process}, Panel B). From these we obtained the distributions of the proper motions in RA and Dec (over their corresponding errors) and of the parallax (over its error). We fitted these distributions with a spline to allow more flexibility (see Dec, for example, in Fig. \ref{f:gaia_process}, Panel C). Similarly, we plotted the distributions for all sources within the ellipse, which contain both galactic and foreground sources. To fit these we used a combination of a Gaussian and a spline function (see Fig. \ref{f:gaia_process}, Panel D). The spline was derived from the sources outside the galaxy (Fig. \ref{f:gaia_process}, Panel C), but when used for the sources within the ellipse it was scaled down according to the ratio of the areas outside and inside the galaxy (assuming that the foreground distribution does not change).
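For reference, the initial astrometric quality cuts described above can be expressed as a simple filter (a sketch assuming a pandas DataFrame with the \textit{Gaia} DR2 archive column names; the input file name is hypothetical):
\begin{verbatim}
import pandas as pd

gaia = pd.read_csv("gaia_dr2_m31_box.csv")  # hypothetical file name

good = gaia[
    (gaia["pmra_error"] < 3.0)                  # mas/yr
    & (gaia["pmdec_error"] < 3.0)               # mas/yr
    & (gaia["parallax_error"] < 1.5)            # mas
    & (gaia["astrometric_excess_noise"] < 1.0)
    & (gaia["phot_g_mean_mag"] < 20.5)
].dropna(subset=["pmra", "pmdec", "parallax"])  # drop missing astrometry
\end{verbatim}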
From the estimated widths of the Gaussian distributions (M31: $ {\rm pmRA/error} = 0.04 \pm 1.28$, $ {\rm pmDEC/error} = -0.03 \pm 1.48$, $ {\rm parallax/error} = 0.21 \pm 1.39$; M33: $ {\rm pmRA/error} = 0.12 \pm 1.18$, $ {\rm pmDEC/error} = 0.05 \pm 1.31$, $ {\rm parallax/error} = -0.03 \pm 1.16$) we defined as foreground sources those with values larger than $3\sigma$ in any of the above quantities. For the parallax we took into account the systematic 0.03 mas offset induced by the global zero point found by \cite{Lindegren2018}. This particular cut was applied only to sources with actual positive values, as zero or negative parallaxes are not decisive for exclusion. In the \textit{Gaia} color-magnitude diagram (CMD) of Fig. \ref{f:gaia_process}, Panel E, we show all sources identified as members of the host galaxy (red points) and as foreground (gray). The majority of the foreground sources lie on the yellow branch of the CMD, which is exactly the position at which we expect the largest fraction of the contamination. This process was successful in the cases of M31 and M33 due to the numerous sources that allow their statistical handling. In the other galaxies, where the field of view is substantially smaller, the low numbers of sources led to a poorer (if any) estimation of these criteria. Consequently, for those galaxies we considered as foreground sources those with any of their \textit{Gaia} properties (${\rm pmRA/error}$, $ {\rm pmDEC/error}$, or $ {\rm parallax/error} $) larger than $3\sigma$ of the largest measured errors, following the most conservative approach. In practice, this means that we used the same criteria to characterize foreground sources as for M31. \subsection{Collecting spectral types} \label{s:data-spec_types} The use of any supervised machine-learning application requires a training sample. It is of paramount importance that the sample be well defined such that it covers the parameter space spanned by the objects under consideration. For this reason, we performed a meticulous search of the literature to obtain a sample with known spectral types (used as labels) that is, to our knowledge, as complete as possible. The vast majority of the collected data are found in M31 and M33. The source catalogs were retrieved primarily from \cite{Massey2016}, as part of their Local Group Galaxy Survey (LGGS), complemented by other works (see Table \ref{t:spectypes_refs} for the full list of numbers and references used). In all cases we carefully checked for and removed duplicates, while in a few cases we updated the classification of some sources based on newer works (e.g., candidate LBVs to B[e]SGs based on \citealt{Kraus2019a}). The initial catalogs for M31 and M33 contain 1142 and 1388 sources with spectral classification, respectively (see Fig. \ref{f:gaia_process}, Panel A, blue dots). Within these sources we purposely included some outliers (such as background galaxies and quasi-stellar objects; e.g., \citealt{Massey2019}). A significant fraction of these sources ($\sim64\%$) have \textit{Gaia} astrometric information. Applying the criteria of the previous section, we obtained 58 (M31) and 76 (M33) sources marked as foreground\footnote{ There are 696 (M31) and 926 (M33) sources with \textit{Gaia} information. The identification of 58 (M31) and 76 (M33) sources as foreground corresponds to a $\sim8\%$ contamination.
Given that there are 446 (M31) and 462 (M33) additional sources without \textit{Gaia} values, we expect another $\sim72$ foreground sources (according to our criteria) to have remained in our catalog.}. After removing those, we were left with 1084 M31 and 1312 M33 sources, which we cross-matched with the photometric catalogs, considering only single matches within 1" (see Sect. \ref{s:surveys}). After this screening process our final sample consists of 527 (M31) and 562 (M33) sources. \begin{table} \centering \caption{List of references with their corresponding number of sources that contribute to our collected sample.} \label{t:spectypes_refs} \begin{tabular}{llr} Galaxy (total) & Reference & \# sources \\ \hline \hline \multirow{3}{*}{WLM (36)} & \cite{Bresolin2006} & 20 \\ & \cite{Britavskiy2015} & 9 \\ & \cite{Levesque2012} & 7 \\ \hline \multirow{9}{*}{M31 (1142)} & \cite{Massey2016} & 966 \\ & \cite{Gordon2016} & 82 \\ & \cite{Neugent2019} & 37 \\ & \cite{Drout2009} & 18 \\ & \cite{Massey2019} & 17 \\ & \cite{Kraus2019a} & 11 \\ & \cite{Humphreys2017} & 6 \\ & \cite{Neugent2012} & 3 \\ & \cite{Massey2009} & 2 \\ \hline \multirow{4}{*}{IC 1613 (20)} & \cite{Garcia2013} & 9 \\ & \cite{Bresolin2007} & 9 \\ & \cite{Herrero2010} & 1 \\ & \cite{Britavskiy2014} & 1 \\ \hline \multirow{16}{*}{M33 (1388)} & \cite{Massey2016} & 1193 \\ & \cite{Massey1998} & 49 \\ & \cite{Neugent2019} & 46 \\ & \cite{Humphreys2017} & 24 \\ & \cite{Massey2007} & 13 \\ & \cite{Gordon2016} & 12 \\ & \cite{Drout2012} & 11 \\ & \cite{Massey2019} & 10 \\ & \cite{Kraus2019a} & 7 \\ & \cite{Massey1998a} & 6 \\ & \cite{Kourniotis2018} & 4 \\ & \cite{Humphreys2014} & 4 \\ & \cite{Massey1996} & 3 \\ & \cite{Martin2017} & 2 \\ & \cite{Neugent2011} & 2 \\ & \cite{Bruhweiler2003} & 2 \\ \hline \multirow{4}{*}{Sextans A (16)} & \cite{Camacho2016} & 9 \\ & \cite{Britavskiy2015} & 5 \\ & \cite{Britavskiy2014} & 1 \\ & \cite{Kaufer2004} & 1 \\ \hline \hline \end{tabular} \end{table} We compiled spectral types for three more galaxies, WLM, IC 1613, and Sextans A (see Table \ref{t:spectypes_refs}), to use as test cases. Among a larger collection of galaxies, these three offered the most numerous (albeit small) populations of classified massive stars: 36 sources in WLM, 20 in IC 1613, and 16 in Sextans A. Although a handful more sources could potentially be retrieved for other galaxies, the effort to collect the data (individually from different works) would not match the very small increase in the sample. For guidance regarding its form and content, we present the first few lines of the compiled list of objects in Table \ref{t:catalog_sptypes}. \section{Application of machine learning} \label{s:ml} In this section we provide a short description of the algorithms chosen for this work (for more details, see, e.g., \citealt{Baron2019, Ball2010}). The development of a classifier for massive stars requires the inclusion of "difficult" cases, such as those that are short-lived (e.g., YSGs, with a duration of a few thousand years; \citealt{Neugent2010, Drout2009}) or very rare (e.g., LBVs, \citealt{Weis2020}; and B[e]SGs, \citealt{Kraus2019a}). To ensure the training of the algorithms on these specific targets, we opted for supervised algorithms. However, any algorithm needs the proper input, which is determined by the class and feature selection. Finally, we present the implementation and the optimization of the methods.
\subsection{Selected algorithms} \label{s:algorithms} Support Vector Machines \citep{Cortes1995} are among the most well-established methods, used in a wide range of topics. Some indicative examples include classification problems for variable stars \citep{Pashchenko2018}, black hole spin \citep{Gonzalez2019}, molecular outflows \citep{Zhang2020}, and supernova remnants \citep{Kopsacheili2020}. The method searches for the line or the hyperplane (in two or multiple dimensions, respectively) that separates the input data (features) into distinct classes. The optimal line (hyperplane) is defined as the one that maximizes the margin (i.e., the distance between the boundary and the closest points, the support vectors), which leads to the optimal distinction between the classes. One manifestation of the method, better designed for classification purposes such as our problem, is Support Vector Classification (SVC; \citealt{Ben-Hur2002}), which uses a kernel to better map the decision boundaries between the different classes. Astronomers are great machines when it comes to classification processes. A well-trained individual can easily identify the most important features for a particular problem (e.g., spectroscopic lines) and, according to specific (tree-like) criteria, can make fast and accurate decisions to classify sources. However, their strongest drawback is low efficiency, as they can only process one object at a time. Although automated decision trees can be much more efficient than humans, they tend to overfit, that is to say, they learn the data they are trained on too well and can fail when applied to unseen data. A solution to overfitting is Random Forest (RF; \citealt{Breiman2001}), an ensemble of decision trees, each one trained on a random subset of the initial features and sample of sources. Some example works include \cite{Jayasinghe2018} and \cite{Pashchenko2018} for variable stars, \cite{Arnason2020} for identifying new X-ray sources in M31, \cite{Moller2016} for Type Ia supernova classification, and \cite{Plewa2018} and \cite{Kyritsis2022} for stellar classification. When RF is called to action, the input features of an unlabeled object propagate through each decision tree, which provides a predicted label. The final classification is the result of a majority vote among all labels predicted by the independent trees. Therefore, RF overcomes the problems of single decision trees, as it generalizes very well and can handle large numbers of features and data efficiently. Neural networks originate from the idea of simulating the biological neural networks in animal brains \citep{McCulloch1943}. The nodes (arranged in layers) are connected and process an input signal according to their weights, which are assigned during the training process. The first applications in astronomy were performed in the 1990s (e.g., \citealt{Odewahn1992} on star and galaxy discrimination, and \citealt{Storrie-Lombardi1992} on galactic morphology classification), but recent advances in computational power, as well as in software development allowing easy implementation, have revolutionized the field. Deeper and more complex neural network architectures have been developed, such as deep convolutional networks used to classify stellar spectra \citep{Sharma2020} and supernovae along with their host galaxies \citep{Muthukrishna2019}, generative adversarial networks to separate stars from quasars \citep{Makhija2019}, and recurrent neural networks for variable star classification \citep{Naul2018}.
For the current project, a relatively simple shallow network with a few fully connected layers -- a Multilayer Perceptron (MLP) -- proved sufficient. In summary, the aforementioned techniques are based on different concepts; for example, SVC tries to find the best hyperplane that separates the classes, RF decides the classification result based on the thresholds set at each node (for multiple trees), while neural networks attempt to highlight the differences in the features that best separate the classes. We implemented an ensemble meta-algorithm that combines the results from all three different approaches. Each method first provides a classification result with a probability distribution across all selected classes (see Sect. \ref{s:class_selection}). These are then further combined to obtain the final classification (described in detail in Sect. \ref{s:combining_models}). \subsection{Class selection} \label{s:class_selection} \begin{table} \centering \caption{Groups of spectral types of our initial sample (Col. 1) and their corresponding number of sources (Col. 2). Combining them into classes (Col. 3) leads to the total number of sources combined per class (Col. 4; see Sect. \ref{s:class_selection}) and the final numbers (Col. 5) after removing 44 objects without full photometry in all bands.} \label{t:spectral_classes} \begin{tabular}{lc|ccc } \hline \hline Group & initial \# & Class & class \# & final \# w/phot\\ $[1]$ & $[2]$ & $[3]$ & $[4]$ & $[5]$\\ \hline O & 17 & BSG & \multirow{12}{*}{261} & \multirow{12}{*}{250} \\ Oc & 1 & - & & \\ Oe & 2 & BSG & & \\ On & 6 & BSG & & \\ B & 156 & BSG & & \\ Bc & 11 & - & & \\ Be & 7 & BSG & & \\ Bn & 18 & BSG & & \\ A & 51 & BSG & & \\ Ac & 3 & - & & \\ Ae & 2 & BSG & & \\ An & 2 & BSG & & \\ \hline WR & 50 & WR & \multirow{3}{*}{53} & \multirow{3}{*}{42} \\ WRc & 3 & - & & \\ WRn & 3 & WR & & \\ \hline LBV & 6 & LBV & \multirow{2}{*}{6} & \multirow{2}{*}{6} \\ LBVc & 18 & - & & \\ \hline BeBR & 6 & BeBR & \multirow{2}{*}{17} & \multirow{2}{*}{16} \\ BeBRc & 11 & BeBR & & \\ \hline F & 21 & YSG & \multirow{5}{*}{103} & \multirow{5}{*}{99} \\ Fc & 4 & - & & \\ G & 15 & YSG & & \\ YSG & 67 & YSG & & \\ YSGc & 16 & - & & \\ \hline K & 67 & RSG & \multirow{7}{*}{512} & \multirow{7}{*}{496} \\ Kc & 3 & - & & \\ M & 142 & RSG & & \\ Mc & 5 & - & & \\ RSG & 250 & RSG & & \\ RSGb & 53 & RSG & & \\ RSGc & 36 & - & & \\ \hline AGN & 2 & GAL & \multirow{4}{*}{24} & \multirow{4}{*}{23} \\ QSO & 17 & GAL & & \\ QSOc & 1 & - & & \\ GAL & 5 & GAL & & \\ \hline Total & 1077 & & 976 & 932 \\ \hline \hline \end{tabular} \end{table} When using supervised machine-learning algorithms it is necessary to properly select the output classes. In our case we are particularly interested in evolved massive stars, because the magnitude-limited observations of our target galaxies mainly probe the upper part of the HRD. In our compiled catalog we had a large range of spectral types, from detailed ones (such as M2.5I, F5Ia, and B1.5I) up to more generic terms (such as RSG and YSG). Given the small numbers per individual spectral type, as well as the continuous nature of spectral classification, which makes the separation of neighboring types difficult, we lack the ability to build a classifier sensitive to each individual spectral type. To address that, we combined spectral types into broader classes, without taking into account luminosity classes (i.e., main-sequence stars and supergiants of the same spectral type were assigned to the same group).
This is a two-step process: we first assigned all types to certain groups, and then, during the application of the classifier, we experimented with which classes are best detectable with our approach (given the lack of strict boundaries between these massive stars, which is a physical limitation and not a selection bias). For the first step, we grouped the 1089 sources (in both M31 and M33) as follows. First, sources of detailed subtypes were grouped by their parent type (e.g., B2 I and B1.5 Ia to the B group; A5 I and A7 I to the A group; M2.5 I and M2-2.5 I to the M group, etc.). Some individual cases with uncertain spectral types were assigned as follows: three K5-M0 I sources to the K group, one mid-late O to the O group, one F8-G0 I to the F group, and one A9I/F0I to the A group. Second, all sources with emission or nebular lines were assigned to the parent type group with an "e" or "n" indicator (e.g., B8 Ie to the Be group, G4 Ie to the Ge group, B1 I+Neb to the Bn group, and O3-6.5 V+Neb to the On group). Third, sources with an initial classification as RSGs or YSGs were assigned directly to their corresponding group. Fourth, RSG binaries with a B companion \citep{Neugent2019} were assigned to the RSGb group. Fifth, secure LBVs and B[e]s were kept as separate groups (as LBVs and BeBRs, respectively). A source classified as HotLBV was assigned to the LBV group. Sixth, all sources classified as WRs (of all subtypes), including some individual cases (WC6+B0 I, WN4.5+O6-9, WN3+abs, WNE+B3 I, WN4.5+O, and five Ofpe/WN9), were grouped under one group (WR), except for three sources that are characterized by nebular lines and were assigned to the WRn group. Seventh, galaxies (GALs), active galactic nuclei (AGNs), and quasi-stellar objects (QSOs) were grouped under their corresponding groups. Eighth, all sources with an uncertainty flag (":" or "c") were assigned to their broader group followed by a "c" flag to indicate that these are candidate (i.e., not secure) classifications, such as Ac, Bc, YSGc, WRc, and QSOc. One source classified as B8Ipec/cLBV was assigned to the LBVc group. Finally, complex or very vague cases were disregarded. This entailed eight "HotSupergiant" sources and one source from each of the following types: "WarmSG," "LBV/Ofpe/WN9," "Non-WR(AI)," and "FeIIEm.Line(sgB[e])." Thus, after removing the 12 sources of the last step, we are left with 1077 sources, split into 35 groups (see Table \ref{t:spectral_classes}, Col. 1, and their corresponding numbers in Col. 2). However, these groups may contain similar objects or, in many cases, a limited number of sources that may not be securely classified. To optimize our approach we experimented extensively by combining (similar) groups into broader classes to obtain the best results. All hot stars (i.e., the O, B, and A groups, including sources with emission "e" and nebular "n" lines) were combined under the BSG class after removing the uncertain sources (indicated as candidates). For the YSG class we considered all sources from the F, G, and YSG groups, again excluding only the candidates (i.e., members of the Fc and YSGc groups, especially as many of the YSGc are highly uncertain; \citealt{Massey2016}). For the RSG class we combined the K, M, RSG, and RSGb groups, excluding the candidates (i.e., Kc, Mc, and RSGc). The BeBR class includes both the secure and the candidate sources, because they show the same behavior (see Sect. \ref{s:feature_selection}) and there are more constraints to characterize a source as B[e] (see \citealt{Kraus2019a}).
More specifically, the BeBRc sources were actually the result of further constraining the classification of candidate LBVs \citep{Kraus2019a}. Therefore, we kept only the secure LBVs (the LBV group) to form their corresponding class. For the WR class we used all available sources, although they are of different types, as a further division would not be efficient. The last class, GAL, includes all nonstellar background objects (galaxies, AGNs, and QSOs, except for the one candidate QSO), which were used as potential outliers. We do not expect any other type of outlier (apart from a $\sim8\%$ foreground contamination), since at the distances of our target galaxies we are actually probing the brighter parts of the HRD, where the supergiant stars are located. The number of sources finally selected for each class is shown in Table \ref{t:spectral_classes} (Col. 4), where we used the class name to indicate which groups contribute to the class (Col. 3), while a "-" shows that a particular group is ignored. The total number of selected sources is 976. \subsection{Imbalance treatment} \label{s:imbalance_treatment} What is evident from Table \ref{t:spectral_classes} is that we have an imbalanced sample of classes, which is very typical in astronomical applications (see also \citealt{Dorn-Wallenstein2021} for similar problems). In particular, the RSG class is the most populated one (with $\sim500$ sources), followed by the BSG class (with $\sim250$ sources), together accounting for almost 80\% of the total sample. The YSG class includes about a hundred sources, but the WR, GAL, BeBR, and, most importantly, LBV classes include a few tens of sources at most. To tackle this we can either use penalizing metrics of performance (i.e., evaluations in which the algorithm assigns different weights to specific classes) or train the model using adjusted sample numbers (by oversampling the least populated classes and simultaneously undersampling the most populated one). We experimented with both approaches and found a small gain when using the resampling approach. A typical approach to oversampling is duplicating objects. Although this may be a solution in many cases, it does not help with sampling the feature space better (i.e., it does not provide more information). An alternative approach is to create synthetic data. For this purpose, we used a commonly adopted algorithm, the Synthetic Minority Oversampling TEchnique (SMOTE; \citealt{SMOTE}), which generates more data objects by following these steps: (i) it randomly selects a point (A) that corresponds to a minority class, (ii) it finds the k-nearest neighbors (of the same class), (iii) it randomly chooses one of them (B), and (iv) it creates a synthetic point randomly along the line that connects A and B in the feature space. The benefits of this approach are that the feature space is better sampled and that all features are taken into account to synthesize the new data points. On the other hand, it is limited by how representative the initial sample per class is of that class's feature space. In any case, the number of points to be added is arbitrary and can very well match the majority class. At the same time, this procedure can create noise, especially when trying to oversample classes with very few sources (e.g., LBVs, with only six sources available in total). Better results are obtained when the oversampling of the minority classes is combined with undersampling of the majority class.
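Anticipating the implementation described at the end of this subsection, a minimal sketch of this combined over- and undersampling (with synthetic stand-in data; the actual run uses our color features and class labels) is:
\begin{verbatim}
from collections import Counter
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=900, n_features=5, n_informative=4,
                           n_redundant=0, n_classes=3,
                           weights=[0.7, 0.2, 0.1], random_state=0)
print("before:", Counter(y))

# SMOTE oversamples every class except the majority one; ENN then
# removes points whose nearest neighbors disagree with them.
sampler = SMOTEENN(smote=SMOTE(k_neighbors=3, random_state=0),
                   random_state=0)
X_res, y_res = sampler.fit_resample(X, y)
print("after:", Counter(y_res))
\end{verbatim}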
For the undersampling step we experimented with two similar approaches: Tomek links \citep{Tomek} and Edited Nearest Neighbors (ENN; \citealt{ENN}). In the first one, the method identifies the pairs of points that are closest to each other (in the feature space) and belong to different classes (the Tomek links). These are noisy instances or pairs located on the boundary between the two classes (in a multi-class problem the one-versus-rest scheme is used, i.e., the minority class is compared against all other classes, collectively referred to as the majority class). By removing the point corresponding to the majority class, the class separation increases and the number of majority class points is reduced. In the ENN approach the three nearest neighbors of a minority point are found and removed when they belong to the majority class. Thus, the ENN approach is a bit more aggressive than Tomek links, as it removes more points. In conclusion, the combination of SMOTE, which creates synthetic points from the minority classes to balance the majority class, and an undersampling technique (either Tomek links or ENN), which cleans irrelevant points on the boundary of the classes, helps to increase the separation. For the implementation we used the \texttt{imbalanced-learn} package\footnote{\url{https://github.com/scikit-learn-contrib/imbalanced-learn}} \citep{Lematre2017} and more specifically the ENN approach, \texttt{imblearn.combine.SMOTEENN()}, which provided slightly better results than Tomek links. We used \texttt{k\_neighbors=3} for SMOTE (due to the small number of LBVs). We opted to use the default values for \texttt{sampling\_strategy}, which correspond to ``not majority'' for SMOTE (which means that all classes are resampled except for RSGs) and ``all'' for the ENN function, which cleans the majority points (considering one-versus-rest classes). In Table \ref{t:resampling} we provide an example of the numbers and fractions of sources per class available before and after resampling (the whole sample). \begin{table} \caption{Number and fraction of sources per class before and after resampling to treat for imbalance (using the SMOTE ENN approach). The fractions correspond to the total number of sources used in the original and resampled sets, respectively.} \label{t:resampling} \begin{tabular}{c|cccc} \hline \hline Class & \multicolumn{2}{c}{Original sources} & \multicolumn{2}{c}{Resampled sources} \\ & (\#) & (\%) & (\#) & (\%) \\ \hline BSG & 250 & 26.8 & 496 & 14.9 \\ YSG & 99 & 10.6 & 488 & 14.6 \\ RSG & 496 & 53.2 & 493 & 14.8 \\ BeBR & 16 & 1.7 & 495 & 14.9 \\ LBV & 6 & 0.6 & 444 & 13.3 \\ WR & 42 & 4.5 & 453 & 13.6 \\ GAL & 23 & 2.4 & 452 & 13.6 \\ \hline \hline \end{tabular} \end{table} \subsection{Feature selection} \label{s:feature_selection} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{SEDs-all.pdf} \caption{Color indices (features) vs. wavelength per class. The dashed black lines correspond to the individual sources, and the solid colored lines correspond to their averages. The last panel contains only the averaged lines to highlight the differences between the classes, with the most pronounced differences in the $y-[3.6]$ index (as BeBRs are the brightest IR sources, on average, followed by the GAL, RSG, and WR classes; see text for more). The number of sources in each panel corresponds to the total number of selected sources (see Table \ref{t:spectral_classes}, Col. 5).
The vertical dashed lines correspond to the average wavelength per color index, as shown at the top of the figure.} \label{f:SEDs-all} \end{figure*} \begin{table*} \caption{Data availability per class and photometric band. The first column lists the classes used and the second one the corresponding number of sources in the sample. For each class, the subsequent columns provide the fractions of sources with secure measurements in the corresponding photometric bands and their errors (which do not include objects with problematic measurements and upper limits).} \label{t:sample_phot_completness} \begin{tabular}{lccccccccccc} \hline Class & Sources & [3.6] & $\sigma_\textrm{[3.6]}$ & [4.5] & $\sigma_\textrm{[4.5]}$ & [5.8] & $\sigma_\textrm{[5.8]}$ & [8.0] & $\sigma_\textrm{[8.0]}$ & [24] & $\sigma_\textrm{[24]}$ \\ & (\#) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) \\ \hline BSG & 261 & 100 & 100 & 100 & 100 & 100 & 80 & 100 & 70 & 100 & 41 \\ YSG & 103 & 100 & 100 & 100 & 100 & 100 & 92 & 100 & 78 & 99 & 30 \\ RSG & 512 & 100 & 100 & 100 & 100 & 99 & 99 & 100 & 93 & 99 & 41 \\ BeBR & 17 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 94 \\ LBV & 6 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 66 \\ WR & 53 & 100 & 100 & 100 & 100 & 100 & 94 & 100 & 86 & 100 & 43 \\ GAL & 24 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\ \hline \end{tabular} \vspace{2mm} \begin{tabular}{lccccccccccc} \hline Class & Sources & $g$ & $\sigma_g$ & $r$ & $\sigma_r$ & $i$ & $\sigma_i$ & $z$ & $\sigma_z$ & $y$ & $\sigma_y$ \\ & (\#) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) \\ \hline BSG & 261 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 \\ YSG & 103 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 & 96 \\ RSG & 512 & 96 & 93 & 97 & 97 & 97 & 97 & 97 & 97 & 97 & 97 \\ BeBR & 17 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 94 & 94 \\ LBV & 6 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\ WR & 53 & 83 & 83 & 84 & 83 & 88 & 88 & 90 & 88 & 86 & 84 \\ GAL & 24 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 95 & 100 & 100 \\ \hline \end{tabular} \vspace{2mm} \begin{tabular}{lccccc} \hline Class & Sources & $J_{\rm{UK}}$ & G & G$_{\rm{BP}}$ & G$_{\rm{RP}}$ \\ & (\#) & (\%) & (\%) & (\%) & (\%) \\ \hline BSG & 261 & 81 & 90 & 87 & 87 \\ YSG & 103 & 82 & 96 & 95 & 95 \\ RSG & 512 & 84 & 96 & 94 & 94 \\ BeBR & 17 & 70 & 100 & 100 & 100 \\ LBV & 6 & 66 & 83 & 66 & 66 \\ WR & 53 & 75 & 71 & 50 & 50 \\ GAL & 24 & 83 & 95 & 83 & 83 \\ \hline \hline \end{tabular} \end{table*} Feature selection is a key step in any machine-learning problem. To properly select the optimal features in our case we first examined data availability. In Table \ref{t:sample_phot_completness} we list the different classes (Col. 1) and the number of available sources per class (Col. 2). In the following columns we provide the fractions of objects with photometry in the corresponding bands and with proper errors (i.e., excluding problematic sources and upper limits), per survey queried (\textit{Spitzer}, Pan-STARRS, UKIRT Hemisphere Survey, and \textit{Gaia}). To build our training sample, we required the sources to have well-measured values across all bands. To avoid significantly decreasing the size of the training sample (by almost half in the case of the LBV and BeBR classes), we chose not to include the $J_{\rm{UK}}$. 
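For reference, completeness fractions like those in Table \ref{t:sample_phot_completness} can be tabulated directly from the photometric catalog. A minimal sketch, assuming a hypothetical \texttt{pandas} DataFrame \texttt{phot} with one row per source, a \texttt{class} column, and NaN entries marking missing or insecure measurements (the column names are placeholders):
\begin{verbatim}
import pandas as pd

# Hypothetical catalog: one row per source, NaN for missing/insecure values.
bands = ["m3.6", "m4.5", "m5.8", "m8.0", "m24"]
# Percentage of sources per class with a secure measurement in each band.
completeness = phot.groupby("class")[bands].apply(
    lambda g: (100 * g.notna().mean()).round())
print(completeness)
\end{verbatim}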
Although we used \textit{Gaia} data to derive the criteria to identify foreground stars, the number of stars with \textit{Gaia} photometry in the majority of other nearby galaxies is limited. Thus, to ensure the applicability of our approach we also discarded these bands (their range is partly covered by the Pan-STARRS bands). The IR catalogs were built upon source detection in both the [3.6] and [4.5] images (e.g., \citealt{Khan2015}), while the measurements in the longer bands were obtained by simply performing photometry at those coordinates (regardless of whether a source is actually present). However, in most cases there is a growing (with wavelength) number of sources with only upper limits in the photometry. As these do not provide secure measurements for the training of the algorithm, we could not use them. If we were to require valid measurements up to [24], the number of sources would drop by more than 50\% for some classes (see Table \ref{t:sample_phot_completness}, e.g., the corresponding fractions of WR and YSG with secure error measurements). As this is not really an option, we decided to remove all bands that contained a significant fraction of upper limits (i.e., [5.8] and redder). This rather radical selection is also justified by the fact that the majority of the unclassified sources (in the catalogs to which we are going to apply our method) do not have measurements in those bands. It is also interesting to point out that the majority of the disregarded sources belong to the RSG class (the most populated), which means that we do not lose any important information (for the training process). From the optical set of bands, we excluded $g$ for two reasons. First, about 130 sources fainter than 20.5 mag tend to have systematic issues with their photometry, especially red stars for which $g-r$ turns bluer. Second, due to the lack of known extinction laws for most galaxies and the lack of data for many sources, we opted not to correct for extinction; as $g$ is the band most affected by extinction, we opted to use only the redder bands to minimize its impact \citep{Schlafly2011, Davenport2014}. Therefore, we kept the $r$, $i$, $z$, and $y$ bands and we performed the same strict screening to remove sources with upper limits. In total, we excluded 44 sources, reflecting a small fraction of the sample ($\sim4.5\%$, treating both M31 and M33 sources as a single catalog). We show the final number of sources per class in Col. 5 of Table \ref{t:spectral_classes}, summing to 932 objects in total. To remove any distance dependence in the photometry we opted to work with color terms and obtained the consecutive magnitude differences: $r-i$, $i-z$, $z-y$, $y - [3.6]$, $[3.6] - [4.5]$. We examined different combinations of color indices, but the difference in accuracy with respect to the best set found is negligible. Those combinations contained color indices spanning a wider wavelength range, which are more affected by extinction than the consecutive colors. Moreover, they tend to be systematically more correlated, resulting in poorer generalization (i.e., when applied to the test galaxies; Sect. \ref{s:other_galaxies}). Some (less pronounced) correlation still exists in the consecutive color set as well, because each band enters two color combinations (except for $r$ and [4.5]) and because of the stellar continuum, since the flux in each band is not totally independent of the flux measured in other bands.
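The construction of these features is then straightforward. A minimal sketch, assuming hypothetical magnitude columns in a \texttt{pandas} DataFrame \texttt{df}:
\begin{verbatim}
# Consecutive color indices used as features (no extinction correction).
# Column names ("r", ..., "m3.6", "m4.5") are hypothetical placeholders.
pairs = [("r", "i"), ("i", "z"), ("z", "y"),
         ("y", "m3.6"), ("m3.6", "m4.5")]
for blue, red in pairs:
    df[f"{blue}-{red}"] = df[blue] - df[red]
features = df[[f"{b}-{r}" for b, r in pairs]].to_numpy()
\end{verbatim}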
We also noticed that more optical colors help to better sample the optical part of the spectral energy distribution and to separate some classes more efficiently (BSG and YSG in particular). The consecutive color set seems the most intuitive selection, as it includes well-studied colors. Moreover, it represents how the slope of the spectral energy distribution changes with wavelength. We also experimented with other transformations of these data, such as fluxes, normalized fluxes, and standardized data (magnitudes scaled around their mean and divided by their standard deviation), but we did not see any significant improvement in the final classification results. Therefore, we opted for the simplest representation of the data, which is the aforementioned color set. In Fig. \ref{f:SEDs-all} we plot the color indices with respect to their corresponding wavelengths for the individual sources of each class together with their averages. In the last panel, we overplot all averaged lines to display the differences among the various classes. As this representation is equivalent to the consecutive slopes of the spectral energy distributions for each class, we notice that the redder sources tend to have a more pronounced $y-[3.6]$ feature, a color index that characterizes the transition from the optical to the mid-IR photometry. The BeBR class presents the highest values due to the significant amount of dust (and therefore brighter IR magnitudes), followed by the GALs due to their Polycyclic Aromatic Hydrocarbons (PAH) emission, the (intrinsically redder) RSGs, and the WRs (due to their complex environments). \subsection{Implementation and optimization} An important step of every classification algorithm is to tune its hyperparameters, that is, the parameters that control the training process. Once these are defined, the algorithm determines the values of the model parameters (e.g., weights) from the training sample. The implementation of all three methods (SVC, RF, and MLP) was done through \texttt{scikit-learn} v.0.23.1\footnote{\url{https://scikit-learn.org/}} \citep{sklearn}\footnote{For the MLP/neural networks we experimented extensively with TensorFlow v1.12.0 \citep{tensorflow2015} and Keras v2.2.4 API \citep{keras2015}. This allowed us to easily build and test various architectures for our networks. We used both dense (fully connected) and convolutional (CNN) layers, in which case the input data are 1D vectors of the features we are using. Given our tests we opted to use a simple dense network, which can also be easily implemented within \texttt{scikit-learn}, simplifying the overall pipeline.}. For the optimal selection of the hyperparameters (and their corresponding errors), we performed a stratified K-fold cross-validation (CV; \texttt{sklearn.model\_selection.StratifiedKFold()}). With this, the whole sample is split into K subsamples or folds (five in our case), preserving the fractional representation of all classes of the initial sample in each of the folds. At each iteration one fold is used as the validation sample and the rest as training. By permuting the validation fold, the classifier is trained over the whole sample. Since we performed a resampling approach to correct for the imbalance in our initial sample (see Sect. \ref{s:imbalance_treatment}), we note that this process was performed only on the training folds, while the evaluation of the model's accuracy was done on the (unmodified) validation fold, as sketched below.
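A minimal sketch of this scheme, assuming \texttt{numpy} arrays \texttt{X} and \texttt{y} and any one of the configured classifiers as \texttt{model} (both placeholders):
\begin{verbatim}
from sklearn.model_selection import StratifiedKFold
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import SMOTE

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, val_idx in skf.split(X, y):
    # Resample only the training folds; leave the validation fold untouched.
    resampler = SMOTEENN(smote=SMOTE(k_neighbors=3), random_state=0)
    X_res, y_res = resampler.fit_resample(X[train_idx], y[train_idx])
    model.fit(X_res, y_res)
    # Evaluate on the original (unmodified) validation fold.
    scores.append(model.score(X[val_idx], y[val_idx]))
# The final score is the mean over the folds; its uncertainty is the
# standard deviation of the per-fold values.
\end{verbatim}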
We stress that the validation fold remained "unmodified" (i.e., it was not resampled) in order to avoid data leakage and hence overfitting. The final accuracy score is the average value, and its uncertainty corresponds to the standard deviation across all folds. For the SVC process we used the \texttt{sklearn.svm.SVC()} function. We opted to train this model with the following selection of hyperparameters: \texttt{probability=True} to get probabilities (instead of a single classification result), \texttt{decision\_function\_shape = 'ovo'}, which is the default option for multi-class problems, \texttt{kernel = 'linear'}, which is faster than the alternative nonlinear kernels and proved to be more efficient\footnote{The ``linear'' kernel was systematically more efficient in recovering the LBV class, in contrast to the default ``rbf'' option.}, and \texttt{class\_weight='balanced'}, which gives more weight to rarer classes (even after the resampling approach, as described in Sect. \ref{s:imbalance_treatment}). We also optimized the regularization parameter \textit{C}, which represents a penalty for misclassifications (i.e., the objects falling on the "wrong" side of the separating hyperplane). For larger values a smaller margin for the hyperplane is selected, so that the misclassified sources decrease and the classifier performs optimally for the training objects; this may result in poorer performance when applied to unseen data. In contrast, smaller values of \textit{C} lead to a larger margin (i.e., a looser separation of the classes) at the cost of more misclassified objects. To optimize \textit{C} we examined the effect on the accuracy by varying \textit{C} from 0.01 to 200 (with a step of 0.1 in log space). We present these results in Fig. \ref{f:optimizing-SVC-C}, where the red line corresponds to the averaged values and the gray area to the $1\sigma$ error. As the accuracy quickly reaches a plateau, the particular choice does not significantly affect the result above $C\sim25$, which is the adopted value. For the RF classifier we used \texttt{sklearn.ensemble.RandomForestClassifier()}. To optimize it, we explored the following hyperparameters over a range of values: \texttt{n\_estimators}, which is the number of trees in the forest (10--1000, step 50), \texttt{max\_leaf\_nodes}, which limits the number of nodes in each tree (i.e., how large it can grow; 2--100, step 2), and \texttt{max\_depth}, which is the maximum depth of the tree (1--100, step 2), while the rest of the hyperparameters were left at their default values. We present their corresponding validation curves as obtained from five-fold CV tests (with mean values as red lines and their $1\sigma$ uncertainty as gray areas) in Fig. \ref{f:optimizing-RF-curves}. Again, we see that above certain values the accuracy reaches a plateau. Given the relatively large uncertainties and the statistical nature of this test, the selection of the best values is not absolutely strict (they provide almost identical results). We opted to use the following values: \texttt{n\_estimators=400}, \texttt{max\_leaf\_nodes=50}, \texttt{max\_depth=30}. We also set \texttt{class\_weight='balanced'}, as for the SVC, in addition to the resampling approach. For the neural networks we used \texttt{sklearn.neural\_network.MLPClassifier()}. In this case we performed a grid search approach (\texttt{sklearn.model\_selection.GridSearchCV()}).
This method allows for an exhaustive and simultaneous search over the requested parameters (at a cost in computation time). We started by investigating the architecture of the network (e.g., number of hidden layers, number of nodes per layer) along with the three available methods for weight optimization (\texttt{'lbfgs'}, \texttt{'sgd'}, and \texttt{'adam'}). We tried up to five hidden layers with up to 128 nodes per layer, using \texttt{'relu'} as the activation function (a standard selection). We present the results of this grid search in Fig. \ref{f:optimizing-NN-structures}, from which we obtained systematically better results for the \texttt{'adam'} solver \citep{Kingma2014}, with the (relatively) best configuration being a shallow network of two hidden layers with 128 nodes each. Given this combination we further optimized the regularization parameter (\texttt{alpha}), the number of samples used to estimate the gradient at each epoch (\texttt{batch\_size}), and the maximum number of epochs for training (\texttt{max\_iter}), with the rest of the parameters left at their default values (with \texttt{learning\_rate\_init}=0.001). Similarly to the previous hyperparameter selections, from their validation curves (Fig. \ref{f:optimizing-NN-curves}) we selected as best values: \texttt{alpha}=0.13, \texttt{batch\_size}=128, and \texttt{max\_iter}=560. The classifier uses the cross-entropy loss, which allows probability estimates. \section{Results} \label{s:results} We first present our results from the individual applications of the different machine-learning algorithms to the M31 and M33 galaxies. Then, we describe how we combine the three algorithms to obtain a single, combined result. Finally, we apply the combined classifier to the test galaxies. \subsection{Individual application to M31 and M33} \label{s:m31m33runs} \subsubsection{Overall performance} \begin{figure*} \centering SVC \hspace{5cm} RF \hspace{5cm} MLP\\ \includegraphics[width=0.3\linewidth]{SVC-cm-clrs.pdf} \includegraphics[width=0.3\linewidth]{RF-cm-clrs.pdf} \includegraphics[width=0.3\linewidth]{MLP-cm-clrs.pdf} \\ \includegraphics[width=0.3\textwidth]{SVC-metrics-clrs.pdf} \includegraphics[width=0.3\textwidth]{RF-metrics-clrs.pdf} \includegraphics[width=0.3\textwidth]{MLP-metrics-clrs.pdf} \caption{Confusion matrices (upper panels) for the SVC, RF, and MLP methods, respectively, along with the characteristic metrics (precision, recall, and F1 score; lower panels). These results originate from single runs, i.e., using 70\% of the initial sample for training, which is then resampled to produce a balanced sample before training each model and applying it to the remaining 30\% of the sample (the validation set). In general, the algorithms perform well except for the cases of LBVs and WRs (see Sect. \ref{s:m31m33runs} for more details).} \label{f:results_metrics} \end{figure*} \begin{figure*} \centering SVC \hspace{5cm} RF \hspace{5cm} MLP\\ \includegraphics[width=0.3\linewidth]{SVC-pr-clrs.pdf} \includegraphics[width=0.3\linewidth]{RF-pr-clrs.pdf} \includegraphics[width=0.3\linewidth]{MLP-pr-clrs.pdf} \caption{ Precision-recall curves for the three methods, along with the values of the area under the curve for each class (in parentheses). In all cases, the comparison of each class against all others yields very good and consistent results, well above the random classifier (indicated for each class by the horizontal dashed lines; see Sect.
\ref{s:m31m33runs}).} \label{f:results_prs} \end{figure*} \begin{table*} \centering \caption{Performance for each method and per class, after a repeated K-fold cross validation (see Sect. \ref{s:m31m33runs} for details).} \label{t:recalls} \begin{tabular}{lcccc} \hline \hline Class & SVC & RF & MLP & combined \\ \hline overall & $0.78\pm0.03$ & $0.82\pm0.02$ & $0.82\pm0.02$ & $0.83\pm0.02$ \\ \hline BSG & $0.58\pm0.08$ & $0.71\pm0.06$ & $0.71\pm0.07$ & $0.71\pm0.06$\\ BeBR & $0.80\pm0.23$ & $0.79\pm0.24$ & $0.73\pm0.25$ & $0.81\pm0.17$\\ GAL & $0.58\pm0.22$ & $0.63\pm0.21$ & $0.73\pm0.24$ & $0.71\pm0.17$ \\ LBV & $0.28\pm0.43$ & $0\pm0$ & $0\pm0$ & $0\pm0$ \\ RSG & $0.93\pm0.03$ & $0.95\pm0.02$ & $0.94\pm0.02$ & $0.94\pm0.02$\\ WR & $0.43\pm0.15$ & $0.40\pm0.16$ & $0.46\pm0.19$ & $0.48\pm0.24$ \\ YSG & $0.78\pm0.08$ & $0.75\pm0.10$ & $0.77\pm0.12$ & $0.80\pm0.08$ \\ \hline \hline \end{tabular} \end{table*} Having selected the optimal hyperparameters for our three algorithms, we investigated the individual results obtained by directly applying them to our data set. For this we split the sample into a training set (70\%) and evaluated the results on a validation set (30\%), a standard choice in the literature. The split was performed individually per class to ensure the same fractional representation of all classes in the validation sample. The resampling approach to balance our sample (as described in Sect. \ref{s:imbalance_treatment}) was applied only on the training set. The model was then trained on this balanced set and the predictions were made on the original validation set. Given a specific class, we refer to the objects that are correctly predicted to belong to this class as true positives (TPs), while true negatives (TNs) are those that are correctly predicted to not belong to the class. False positives (FPs) are the ones that are incorrectly predicted to belong, while false negatives (FNs) are the ones that are incorrectly predicted to not belong to the class. In Fig. \ref{f:results_metrics} we show example runs for the SVC, RF, and MLP methods. The first row corresponds to the confusion matrix, a table that displays the correct and incorrect classification results per class. Ideally this should be a diagonal table. The presence of sources in other elements provides information about the contamination of classes (i.e., how much the method is misclassifying the particular class). Another representation is given in the second row, where we plot the scores of the typically used metrics for each class. Precision (defined as $\rm TP/(TP+FP)$) refers to the number of objects that are correctly predicted to belong to a particular class over the total number of identified objects for this class (easily derived by reading the columns of the confusion matrix). Recall (defined as $\rm TP/(TP+FN)$) is the number of correctly recovered class objects over the total real population of this class (derived from the rows of the confusion matrix). Therefore, precision indicates the ability of the method to detect an object of the particular class, while recall indicates its ability to recover the real population. The F1 score is the harmonic mean of the two previous metrics (defined as $\mathrm{F1} = 2 \times (\mathrm{precision} \times \mathrm{recall}) / (\mathrm{precision} + \mathrm{recall})$). In our case, we mainly use the recall metric, as we are interested in minimizing the contamination and therefore in recovering as many correct objects as possible.
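These quantities can be computed directly from the predictions on the validation set. A minimal sketch, assuming label arrays \texttt{y\_true} and \texttt{y\_pred}:
\begin{verbatim}
from sklearn.metrics import classification_report, confusion_matrix

# Rows of the confusion matrix correspond to the true classes and
# columns to the predictions, so recall is read along rows and
# precision along columns.
print(confusion_matrix(y_true, y_pred))
# Per-class precision, recall, and F1 score in a single table.
print(classification_report(y_true, y_pred))
\end{verbatim}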
High recall is especially important for the classes with the smallest numbers, which reflect the rarity of their objects, such as the BeBR and LBV classes. We report our results using the weighted balanced accuracy (henceforth, "accuracy"), which corresponds to the average of the recall values across all classes, weighted by the number of objects per class. This is a reliable metric of the overall performance when training over a wide number of classes \citep{Grandini2020}. From Fig. \ref{f:results_metrics} we see that the accuracy achieved for SVC, RF, and MLP is $\sim78\%$, $\sim82\%$, and $\sim83\%$, respectively. These values are based on a single application of the algorithms, that is, the evaluation of the models on the validation set (30\% of the whole sample). However, this leaves out a fraction of the information, which, given our small sample, is important. Even though we up-sampled to account for the scarcely populated classes, this happened (at each iteration) solely for the sources of the training sample, which implies that, again, only a part of the whole data set's feature space was actually explored. To compensate for that, the final model was actually obtained by training over the whole sample (after resampling). In this case there was no validation set on which to perform the evaluation directly. To address that we used a repeated K-fold CV to obtain the mean accuracy and the recall per class, which in turn provided the overall expected accuracy. Using five iterations (and five folds per iteration) we obtained $78\pm3\%$, $82\pm2\%$, and $82\pm2\%$ for SVC, RF, and MLP, respectively (the error is the standard deviation of the average values over all K-folds performed). In Table \ref{t:recalls} we show the accuracy (``overall''), and the recall as obtained per class. \cite{Dorn-Wallenstein2021}, using the SVC method and a larger set of features (12) including variability indices, achieved slightly better accuracy ($\sim90.5\%$) but for a coarser classification of their sources (i.e., for only four classes: ``hot,'' ``emission,'' ``cool,'' and ``contamination'' stars). When they used their finer class grid with 12 classes, their result was $\sim54\%$\footnote{The balanced accuracy reported by \cite{Dorn-Wallenstein2021} is the average recall across all classes, i.e., without weighting by the frequency of each class. This metric is insensitive to the class distribution \citep{Grandini2020}. We converted the reported values to the weighted balanced accuracy to directly compare with our results.}. \subsubsection{Class recovery rates} The results per class are similar for all three methods. They can recover the majority of the classes efficiently, with the most prominent class being the RSGs with $\sim95\%$ success (similar to \citealt{Dorn-Wallenstein2021}). Decent results are returned for the BSG, YSG, and GAL classes, within a range of $\sim60-80\%$. The class for which we obtained the poorest results is the LBVs. The SVC is the most effective in recovering a fraction of the LBVs ($\sim30\%$, albeit with a large error of 43\%), while the other two methods failed. The LBV class corresponds to an evolutionary phase of main-sequence massive O-type stars before they lose their upper atmosphere (due to strong winds and intense mass-loss episodes) and end up as WRs. They tend to be variable both photometrically and spectroscopically, displaying spectral types from B to G.
Hence, physical confusion between WRs, LBVs, and BSGs is expected, as indicated by the lower recall values and the confusion matrices (see Fig. \ref{f:results_metrics}). Moreover, the rarity of these objects leads to a sparsely populated class whose features are not well determined; consequently, the classifier has significant issues distinguishing its members from other classes. On the other hand, SVC examines the entire feature space, which is the reason for the (slightly) improved recall for LBVs in this case (\citealt{Dorn-Wallenstein2021} report full recovery, but probably because of overfitting). Due to the small number and the rather inhomogeneous sample of WRs, all the classifiers have difficulties in correctly recovering the majority of these sources. The best result is provided by MLP at $\sim46\%$, less than the $\sim75\%$ reported by \cite{Dorn-Wallenstein2021}. Despite the small sample size of the BeBR class, it is actually recovered successfully ($>79\%$). As BeBRs (including candidate sources) form a more homogeneous sample than LBVs and WRs, their features are well characterized, which helps the algorithms separate them. To better visualize the performance of these methods we constructed the precision-recall curves, which are better suited to imbalanced data \citep{Davis2006, Saito2015}. During this process, the classifier works in a one-versus-rest mode; that is to say, it only checks whether the objects belong to the examined class or not. In Fig. \ref{f:results_prs} we show the curves for each algorithm. The dashed (horizontal) lines correspond to the ratio of positive objects (per class) over the total number of objects in the training data. Any model at this line performs no better than random (and below it, worse). Therefore, the optimal curve tends toward the upper right corner of the plot (with precision=recall=1). In all cases the classifiers are better than random. RF systematically displays the best curves. For SVC, the RSG and BeBR curves are nearly optimal, and the rest of the classes display similar behavior. For MLP, all classes except BSGs and WRs are very close to the optimal position. Another metric is obtained if we measure the fraction of the area under the curve. This returns a single value (within the 0--1 range) depicting the ability of the classifier to distinguish the corresponding class from all the rest. In Fig. \ref{f:results_prs} we show these values in the legends. In general, we achieve high values, which means that our classifiers can efficiently distinguish the members of a class against all others. These consistent results add further support to the careful selection of our sample and show that the methods work efficiently (given the class limitations). \subsection{Ensemble models} \label{s:combining_models} \subsubsection{Approaches} \begin{figure*} \centering \includegraphics[width=0.4\linewidth]{pdf-10648.pdf} \includegraphics[width=0.4\linewidth]{pdf-4641.pdf}\\ \includegraphics[width=0.4\linewidth]{pdf-40429.pdf} \includegraphics[width=0.4\linewidth]{pdf-82077.pdf}\\ \caption{Examples of probability distributions for a number of objects with correct (left) and incorrect (right) final classifications.} \label{f:object_pdfs} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{M31+M33_spt_final-mod-correct_prob_plot.pdf} \caption{Probability distributions for sources classified correctly (blue) and incorrectly (orange), for the validation sample.
We successfully recover the majority of the objects in the validation sample ($\sim83\%$). The dashed blue and orange lines correspond to the mean probability values for the correct (at 0.86) and incorrect (at 0.60) classifications, based on repeated five-fold CV tests (five iterations).} \label{f:pdf_threshold} \end{figure} A common issue with machine-learning applications is choosing the best algorithm for the specific problem. This is actually impossible to achieve a priori. Even in the case of different algorithms that provide similar results, it can be challenging to select one of them. However, there is no reason to exclude any. Ensemble methods refer to approaches that combine predictions from multiple algorithms, similar to combining the opinions of various "experts" in order to reach a decision \citep[e.g.,][]{Re2012}. The motivation of ensemble methods is to reduce the variance and the bias of the models \citep{Mehta2019}. A general grouping consists of bagging, stacking, and boosting. Bagging (bootstrap aggregation) is based on training on different random subsamples whose predictions are combined either by majority vote (i.e., the most common class) or by averaging the probabilities (RF is the most characteristic example). Stacking (stacked generalization) refers to a model that trains on the predictions of other models. These base models are trained on the training data and their predictions (sometimes along with the original features) are provided to train the meta-model. In this case, it is better to use different methods that rely on different assumptions, so as to minimize the bias inherent in each method. Boosting refers to methods that focus on improving the misclassifications from previous applications. After each iteration, the method will actually bias training toward the points that are harder to predict. Given the similar results among the algorithms that we used, as well as the fact that they are trained differently and are plausibly sensitive to different characteristic features per class, we were motivated to combine their results to maximize the predictive power and to avoid potential biases. We chose to use a simple approach, with a classifier that averages the output probabilities from all three classifiers. There are two ways to combine the outputs of the models, either through "hard" or "soft" voting. In the former case the prediction is based on the largest sum of votes across all models, while in the latter the class corresponding to the largest summed probability is returned. As an example of hard voting, if the results from the three models we used were BSG, BSG, and YSG, then the final class would be BSG. However, this voting scheme does not capture all the information. Soft voting can be more efficient. Given the final summed probability distribution across all classes, it is possible that the final classification may differ from the one that hard voting would return. Additionally, it can solve cases when a single class cannot be determined, that is, when each of the three classifiers predicts a different class. With the final probability distribution we can also provide error estimates on the class predictions (and define confidence thresholds).
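A minimal sketch of this soft-voting scheme, with the three classifiers configured with the hyperparameters selected above (the resampled training data \texttt{X\_res}, \texttt{y\_res} and the validation features \texttt{X\_val} are assumed inputs):
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

models = [
    SVC(kernel="linear", C=25, probability=True,
        decision_function_shape="ovo", class_weight="balanced"),
    RandomForestClassifier(n_estimators=400, max_leaf_nodes=50,
                           max_depth=30, class_weight="balanced"),
    MLPClassifier(hidden_layer_sizes=(128, 128), solver="adam",
                  alpha=0.13, batch_size=128, max_iter=560),
]
for m in models:
    m.fit(X_res, y_res)

# Soft voting: average the class probabilities with equal weights.
proba = np.mean([m.predict_proba(X_val) for m in models], axis=0)
y_pred = models[0].classes_[proba.argmax(axis=1)]
\end{verbatim}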
\subsubsection{Combined classifier} The simplest approach is to combine the individual probabilities per class using equal weights per algorithm (since the accuracy of each is similar): \begin{equation} P_{\rm final} = \tfrac{1}{3}\left(P_{\rm SVC} + P_{\rm RF} + P_{\rm MLP}\right). \end{equation} In Fig. \ref{f:object_pdfs} we show some example distributions for a few sources with correct and incorrect final classifications. We performed a repeated (five iterations) five-fold CV test to estimate the overall accuracy of the combined approach at $0.83\pm0.02$ (see Table \ref{t:recalls}). The recall values are consistent with the results from the individual classifiers, with the highest success obtained for RSGs ($\sim94\%$) and BeBRs and YSGs ($\sim80\%$), while LBVs are not recovered. Despite this result, it is still possible to obtain LBV classifications, albeit at lower significance (i.e., probability). However, even a small number of candidates for this class are important for follow-up observations, due to their rarity and their critical role in outburst activity \citep[e.g.,][]{Smith2014}. In Fig. \ref{f:pdf_threshold} we show the probability distributions of the sources in a validation sample classified correctly (blue) and incorrectly (orange). The blue and orange dashed lines correspond to the mean probability values for the correct (at $0.86\pm0.01$) and incorrect (at $0.60\pm0.03$) classifications. Although the distributions are based on a single evaluation of the classifier on the validation set, the values corresponding to these lines originate from a five-iteration repeated five-fold CV application. \subsection{Testing in other galaxies} \label{s:other_galaxies} \begin{figure} \includegraphics[width=\columnwidth]{other_galaxies-cm.pdf} \caption{Confusion matrix for 54 sources without missing values in the three galaxies (IC 1613, WLM, and Sextans A). We achieve an overall accuracy of $\sim70\%$, and we notice that the largest confusion occurs between BSGs and YSGs. The overall difference in the accuracy compared to that obtained with the M31 and M33 sample is attributed to the photometric errors and the effects of metallicity and extinction in these galaxies.} \label{f:other_galaxies-cm} \end{figure} \begin{figure}[hbt!] \includegraphics[width=\columnwidth]{other_galaxies_distributions.pdf} \caption{Probability and band completeness distributions for the sources of the three galaxies (IC 1613, WLM, and Sextans A) with and without missing data. (Top) Probability distributions of the correct (blue) and incorrect (orange) final classifications for the total sample of stars with known spectral types and with measurements in all bands. We achieved a recovery rate of $\sim70\%$. The vertical dashed lines are the same as those in Fig. \ref{f:pdf_threshold}; the solid lines correspond to the peak of the probability distributions for the current sample. (Middle) Distribution of the band completeness, i.e., the fraction of features without missing values. (Bottom) Probability distributions for all sources, including those without measurements in multiple bands (vertical lines have the same meaning as in the top panel). The success rate of $\sim68\%$ is essentially the same as in the top panel, indicating the effectiveness of the iterative imputer for missing data imputation.} \label{f:other_galaxies} \end{figure} As an independent test we used the collection of sources with known spectral types in the IC 1613, WLM, and Sextans A galaxies (see Sect. \ref{s:data-spec_types}).
In order to take into account all available information, we resampled the whole M31 and M33 sample and trained all three models on it. The application follows the exact same protocol as the training, except for the resampling step, which is applied only during training: (i) load the photometric data for the new sources, (ii) perform the necessary data processing to derive the features (color indices), (iii) load the fully trained models for the three classifiers\footnote{Using Python's built-in persistence model \texttt{pickle} for saving and loading.}, (iv) apply each of them to obtain the individual (per classifier) results, (v) calculate the total probability distribution from which we get the final classification result, and (vi) compare the predictions with the original classes. For the last step, we converted the original spectral types to the classes we formed while training. Out of the 72 sources we excluded nine with uncertain classifications: four carbon stars, two identified simply as "emission" stars, one with a "composite" spectrum, one classified as a GK star, and one M foreground star. In Fig. \ref{f:other_galaxies-cm} we show the confusion matrix for the sample of the test galaxies, where we have additionally (for this plot) excluded another nine sources with missing values (see next section). By doing this we can directly compare the results with what we obtained from the training galaxies M31 and M33. We successfully recovered $\sim70\%$, which is less than what we achieved for the training (M31 and M33) galaxies ($\sim83\%$). We note that, due to the very small sample size (54 sources), even a couple of misclassifications can change the overall accuracy by a few percent. Nevertheless, a difference is still present. Evidently, the largest disagreement arises from the prediction of most BSGs as YSGs. These two classes do not have a strict boundary in the HRD, making their classification at larger distances even more challenging. Moreover, the sources in these galaxies are at the faint end of the magnitude distribution for the \textit{Spitzer} bands, which may influence the accuracy of their photometry. While M31 has a metallicity above solar and M33 a gradient from solar to subsolar \citep{Pena2019}, the three test galaxies are of lower metallicity \citep{Boyer2015}. However, it is not certain how this influences the classification performance. Lower metallicity affects both extinction and stellar evolution, which could lead to shifts in the intrinsic color distributions. Currently, given the lack of photometric data and source numbers for lower metallicity galaxies, it is impossible to examine the effect of metallicity thoroughly. In the upper panel of Fig. \ref{f:other_galaxies} we show the distribution of the probabilities of correct (blue) and incorrect (orange) classifications. The dashed lines represent the same limits as defined in Sect. \ref{s:combining_models} for the training sample (at 0.86 and 0.60, respectively), while the solid ones correspond to the mean values defined by the current sample, at 0.67 and 0.51 for correct and incorrect, respectively. These shifts of the peak probabilities, especially for the correct classifications, show the increased difficulty of the classifier in achieving confident predictions. \subsection{Missing data imputation} \label{s:dat_imputation} \begin{figure}[hbt!] \includegraphics[width=\columnwidth]{missing_data_test.pdf}\\ \includegraphics[width=\columnwidth]{missing_data_comp.pdf} \caption{ Accuracy changes with missing features.
(Top) Comparison of the drop in accuracy from a typical ($30\%$ split) validation set without missing data to one where missing data have been generated by randomly selecting two features per object and replacing them with the corresponding mean values (purple circles) or the values imputed by the iterative imputer (green pentagons). The mean drop obtained with the imputer is less than 0.1, almost three times smaller than that for mean-value replacement. (Bottom) The iterative imputer is more capable of handling an increased number of missing features, with a limit at three (out of five available in total); the loss in accuracy is less than 20\%.} \label{f:missing_data} \end{figure} In the previous section we excluded nine sources that contain missing values, meaning they did not have measurements in one or more bands. This is important for two reasons. First, for the methods to work they need to be fed with a value for each feature. Second, the majority of the sources in the catalogs of unclassified sources (to which this classifier will be applied) do not possess measurements in all bands. To solve this, we performed a data imputation process in two ways. One typical approach is to replace missing values with a median/mean value. For this we first derived the median value (to remove extremes) of each feature distribution per class, from all available sources in the training sample of M31 and M33. Then we took the mean of the feature's values over all classes. Another approach is to use iterative imputation, in which each feature is modeled as a function of the others, an approach originating from multivariate imputation by chained equations (MICE; \citealt{Buuren2011}). This is a plausible assumption in our case, since we are dealing with spectral features that are indeed covariant to some degree (spectra do not fluctuate suddenly across neighboring bands unless a peculiar emission or absorption feature is present). It is hence plausible to impute a missing band value given the others. The imputation of each feature is done sequentially, which allows previous values to be considered as part of the model in predicting the subsequent features. The process is repeated (typically ten times), which allows the estimates of the missing values to be improved further. We implemented this by using \texttt{impute.IterativeImputer()} (with default parameters). To further investigate the influence of data with missing values, we ran a test by simulating sets with missing values from the original M31 and M33 sample. As usual, we split the sample into training (70\%) and validation (30\%) samples. After resampling the training set, it was used to train the three classifiers, and an initial estimate of the accuracy was obtained with the validation sample. Then, according to how many features we can afford to "miss," we randomly selected the features of each object in the validation sample. We either replaced these features with the corresponding mean values or we applied the iterative imputer. Then the accuracy was obtained on this modified validation set. In the upper panel of Fig. \ref{f:missing_data} we show an example of the difference between the initial (unmodified) validation set and the ones with missing values, by randomly replacing two features (per object) and imputing data with the iterative imputer (green pentagons) and mean values (purple circles). The mean drop in accuracy (over ten iterations) is less than 0.1 for the imputer (green dashed line) but almost 0.3 for the means.
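For reference, a minimal sketch of this imputation step, assuming feature matrices with NaN entries for the missing colors (note that the imputer is still flagged as experimental in \texttt{scikit-learn} and requires an explicit enabling import):
\begin{verbatim}
from sklearn.experimental import enable_iterative_imputer  # noqa
from sklearn.impute import IterativeImputer

# Each feature with missing entries is modeled from the others, and the
# estimates are refined iteratively (ten rounds by default).
imputer = IterativeImputer()                  # default parameters
X_train_imp = imputer.fit_transform(X_train)  # fit on the training sample
X_new_imp = imputer.transform(X_new)          # apply to unseen sources
\end{verbatim}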
In the bottom panel of Fig. \ref{f:missing_data} we show the drop in accuracy with increasing number of missing features. Clearly, the imputer performs more efficiently and does more than simply replacing missing features with mean values, and it can work with up to three missing features (out of five available in total). We also quantified the fraction of missing values by defining a "band completeness" term, simply as $1 - N_{\rm bands\_without\_measurement}/N_{\rm total\_bands}$. In the middle panel of Fig. \ref{f:other_galaxies} we show the distribution of this completeness for correct and incorrect sources. Given that about half of the nine sources with missing values have band completeness $0.2$ (meaning only one feature is present) and the others are missing two to three features, the success rate of five out of nine sources classified correctly ($\sim55\%$) approximately matches what we would expect from the bottom panel of Fig. \ref{f:missing_data}. In the bottom panel of Fig. \ref{f:other_galaxies} we now show the probability distribution for all sources. The score is $68\%$, practically the same as the accuracy obtained for the sample without any missing values (at 70\%). The dashed and solid lines have the same meaning as previously, and there is no significant change (at 0.65 and 0.59 for correct and incorrect classifications, respectively). In this particular data set the presence of a small number of sources (9 out of 63; $\sim14\%$) with missing values does not affect the performance of the classifier. \section{Discussion} \label{s:discussion} In the following sections we discuss our results with respect to the sample sizes, the label availability, and the feature sensitivity per class of our classifier. \subsection{Exploring sample volumes and class mixing} \begin{figure*} \includegraphics[width=\textwidth]{sample_volume-recall.pdf} \caption{Recall vs. the fraction of the training sample used per class. We notice a significant improvement for BeBRs and YSGs with increased training samples. When the sample sizes are already adequate, the maximum possible value is achieved faster (e.g., for RSGs and BSGs). The GAL and WR classes show an increase, while the LBV sample is too small to produce meaningful results.} \label{f:sample_volume-recall} \end{figure*} One of the major concerns when building machine-learning applications is the representativeness of the samples used. To explore this we performed iterative runs for each algorithm, adjusting the size of the training sample used. At each iteration, after the initial split into train (70\%) and validation (30\%) sets, we kept only a fraction of the training set. After randomly selecting the sources per class, we performed the resampling in order to create the synthetic data. However, we needed at least two sources per class for SMOTE to run (for this process we adjusted \texttt{k\_neighbors=1}). Therefore, we started from $10\%$ and went up to the complete training sample. Especially for LBVs, we added an additional source by hand for the first two fractions (after 0.3 enough sources were selected automatically). In Fig. \ref{f:sample_volume-recall} we plot the recall per class for each method (for completeness, in Fig. \ref{f:sample_volume} we also present the precision and F1 score). We see an expected overall improvement with increasing sample size. This means that the larger the initial sample, the more representative it is of the parent population.
The resampling method can interpolate, but it does not extrapolate, which means that even though we are creating new sources, they originate from the available information in the feature space. For example, \cite{Kyritsis2022} experimented with three different variants of RF and found that the results were dominated by the internal scatter of the features. Therefore, any limitations are actually transferred to the synthetic data. More information results in a better representation of the features by the classifier (leading to more accurate predictions). \subsubsection{BSGs and RSGs} The BSGs and RSGs are the most populous classes, and they achieve a high accuracy much faster (except for BSGs in the SVC). The RSG class also performs well in the work of \cite{Dorn-Wallenstein2021}, at $\sim96\%$. In their refined label scheme they split (the equivalent of our) BSG sources into more classes, which results in a poorer performance. \subsubsection{BeBRs} The careful reader will notice that the BeBR sample size is similar to that of the LBVs and smaller than the WR one. Despite that, we are able to obtain very good results due to the specific nature of these objects. The B[e] phenomenon (the presence of forbidden lines in spectra) actually includes a wide range of evolutionary stages and masses, from pre-main-sequence stars to evolved ones, symbiotics, and planetary nebulae \citep{Lamers1998}. The subgroup of evolved stars is perhaps the most homogeneous group, as they are very luminous $(\log(L/L_\odot) > 6.0)$, characterized by strong Balmer lines in emission (usually with P-Cygni profiles) and narrow low-excitation lines (such as FeII, [FeII], and [OI]), and they display chemically processed material (such as TiO bands and $^{13}\rm CO$ enrichment) indicative of their evolved nature. Moreover, these spectral characteristics (arising from dense, dusty disks of unknown origin) are generally long-lived \citep{Maravelias2018}, and these sources tend to be photometrically stable \citep{Lamers1998}. Those characteristics, along with a strong IR excess due to their circumstellar dust (see \citealt{Kraus2019a}, but also \citealt{Bonanos2009, Bonanos2010}), make them a small but relatively robust class. Interestingly, \cite{Dorn-Wallenstein2021} recover BeBRs at the same accuracy as our approach. \subsubsection{LBVs} The LBV sample only shows a clear gain with an increased training sample for SVC, which is the most efficient method for recovering this class. When in quiescence, LBVs share similar observables (e.g., colors and spectral features) with BSGs, WRs, and BeBRs \citep[e.g.,][]{Weis2020, Smith2014}. Therefore, it is quite challenging to separate them, and the only way to certify the nature of these candidate LBVs is when they actually enter an active outburst phase. During this phase, the released material obstructs the central part of the star, changing its spectral appearance from an O/B type to an A/F type (which in turn would mix them with the YSG sources), while they can significantly brighten in the optical ($>2\,\rm{mag}$, but at constant bolometric luminosity; \citealt{Clark2005}). In order to form the most secure LBV sample, we excluded all candidate LBVs (some of which more closely resemble BeBRs; \citealt{Kraus2019a}), and we were left with a very small sample of six stars. The LBVs display an additional photometric variability of at least a few tenths of a magnitude \citep{Clark2005}.
This information could be included as a supplementary feature through a variability index (such as $\chi^2$, median absolute deviation, etc.; \citealt{Sokolovsky2017}). However, this is not currently possible, as the data sets we are using are very limited in epoch coverage (for example, at best only a few points are available per band in the Pan-STARRS survey). Furthermore, the optical (Pan-STARRS) and IR (\textit{Spitzer}) data for the same source were obtained at different epochs, which may result in the source's flux being sampled in different modes. This effect, along with their small sample size (far from complete; \citealt{Weis2020}), may well explain the limited prediction capability of our method. On the other hand, \cite{Dorn-Wallenstein2021} took variability into account (using \textit{WISE} light curves) and report a full recovery of LBVs, which might be due to overfitting. Because of the small size of their sample (two sources), they did not discuss it any further. \subsubsection{WRs} In the single-star scenario, LBVs are a phase in the transition of O-type stars before their outer layers are stripped because of the intense mass loss and/or massive eruptions \citep{Smith2014}. Binaries are another channel in which efficient stripping can lead to WRs \citep{Shenar2020}. Depending on the metallicity and their rotation, WRs may also form directly from main-sequence stars \citep{Meynet2005}. As their evolution is highly uncertain, they can originate from either LBV or BSG stars. Stellar evolution is a continuous process that does not display strict boundaries between those groups in the HRD. Therefore, their features (color indices) can be mixed. They are bright sources, and this has enabled the detection of almost the complete population (see \citealt{Neugent2019a} for a review), but the actual numbers are limited due to their rarity. Their small sample size (which actually includes a number of different WR subtypes, such as the nitrogen- or carbon-rich ones, as well as some known binaries with O-type companions) has an impact on our prediction capability, although the result is better than for LBVs. We also note that their recall benefits from the increase in the training sample for SVC and RF, but not much for MLP. \cite{Rosslowe2018} have shown that WRs and LBVs can be better distinguished using the near-IR ($JHK$) bands, a region that is unfortunately excluded from our feature list because of the lack of extensive and consistent surveys for our galaxies (although 2MASS exists, it is not deep enough for our more distant galaxies). In contrast, \cite{Dorn-Wallenstein2021} include these bands, which may explain their improved accuracy for WRs and (possibly) for LBVs. \subsubsection{YSGs} The YSG class contains all sources that are found in between the BSG and the RSG classes. In general, this is a relatively short-lived phase, as the star evolves off the main sequence or evolves back to hotter phases after the RSG phase (e.g., \citealt{Kourniotis2018,Gordon2019}; excluding the contamination by foreground sources, which we minimized by preprocessing with the \textit{Gaia} properties but definitely did not eliminate). However, it is hard (if not impossible) to set strict boundaries in the CMDs between the BSG and the YSG populations, as well as between the YSG and the RSG ones. \cite{Yang2019} present a ranking scheme based on the presence of each source in a number of CMDs (cf. their fig. 16).
With our current work we are able to remove this complexity, as we take into account the information from multiple CMDs (through the color indices) at once. We are able to correctly predict the majority of this sample at $\sim73\%$, in contrast to the $\sim27\%$ from \cite{Dorn-Wallenstein2021}. The major factor in this case is the use of more optical colors, which helps in distinguishing YSGs from BSGs more effectively, while \cite{Dorn-Wallenstein2021} work mainly with IR colors. \subsection{Label uncertainties} Uncertainty in the labels (or classes) can come in two flavors, either because of classification errors (e.g., human bias, instrument limitations) or due to the natural mixing of these sources. After all, there are uncertainties in the evolution of massive stars after the main sequence, as we still lack robust knowledge with respect to the transition of these sources through the various phases. However, a typical prerequisite in supervised machine-learning applications is that the labels are treated as absolute truth, which can lead to inaccurate predictions when they are not. \cite{Dorn-Wallenstein2021} comment specifically on this, as with their refined classes (containing 12 classes) they achieve an accuracy of $\sim53\%$ for the SVC, because their labels for Galactic sources are "derived inhomogeneously, and many are from spectroscopy that is now more than 50 years old." In our case, we have obtained a more homogeneous sample, since we are working with specific galaxies (distance uncertainties are minimized) and the results originate from consistent surveys and modern instruments and facilities. In other words, our labels are more secure and help us achieve a better result. A way to tackle this is by properly handling label uncertainties during the training process itself, which, however, is not a trivial task. \subsection{Feature sensitivity} \begin{figure} \includegraphics[width=\columnwidth]{features-permutation.pdf} \caption{Permutation feature importance (i.e., the difference in accuracy between the original data set and the shuffled one) per feature for each classifier independently and the combined one. The features $r-i$, $y-[3.6]$, and $[3.6]-[4.5]$ consistently appear to be the most important.} \label{f:permutative_features} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{features-remove_all.pdf}\\ \includegraphics[width=\columnwidth]{features-remove_classes.pdf} \caption{Feature importance per removed feature. Each feature is removed in turn and the whole combined model is retrained. The first point corresponds to the full feature set. (Top) Considering the overall accuracy for the combined classifier, the most significant features are $r-i$ and $y-[3.6]$, consistent with the permutation importance test. (Bottom) Recall per class. We see that different features become more significant for BeBRs, GALs, WRs, YSGs, and LBVs, while RSGs and BSGs do not show important changes (see text for more).} \label{f:featimp_remove} \end{figure} During the feature selection (Sect. \ref{s:feature_selection}) we disregarded bands that would significantly decrease our sample and/or introduce noise ($J_{\rm{UK}}$, \textit{Gaia}, \textit{Spitzer} [5.8], [8.0], and [24], Pan-STARRS $g$). The \textit{Spitzer} [3.6] and [4.5] bands are present for all of our sources (by construction of our catalogs), while the availability of the optical ones (Pan-STARRS $r, i, z, y$) varies depending on the source.
In order not to lose any more information, we included all optical bands (except for $g$) and performed missing data imputation whenever necessary. This naturally raises the questions of how sensitive the classifier is to these features and which ones are more important per class. We first investigate how the overall performance of the classifier depends on the features. For this we performed a permutation feature importance test (\texttt{sklearn.inspection.permutation\_importance()}). By shuffling the values of a specific feature we see how much it influences the final result. In this case the metric used is the difference between the accuracy of the original data set and that of the shuffled one\footnote{The process is performed on the training sample, so we included all M31 and M33 sources in it and resampled accordingly.}. For a nonsignificant feature this change will be small, while the opposite holds for an important feature. In Fig. \ref{f:permutative_features} we show the results per classifier as well as for their combined model (``all''). We notice that the most significant features are $r-i$, $y-[3.6]$, and $[3.6]-[4.5]$, while the least important are $i-z$ and $z-y$. This is actually not surprising, since these features are the ones for which we have the largest separation among the averaged lines of the classes (see Fig. \ref{f:SEDs-all}). There are small differences between the individual algorithms, but they are relatively consistent. They show similar sensitivity to the optical colors. The only exception is RF for $y-[3.6]$, which seems less sensitive than the others. One key issue with this approach is that it is more accurate in the case of uncorrelated data. In our case there is some correlation, both because the consecutive color indices share their neighboring bands and because the fluxes in the individual bands are not totally independent from each other. An alternative and more robust way is to test the accuracy of our model by dropping one feature at a time. The general drawback of this approach is the computational time, as the model needs to be retrained from scratch (including resampling) at each iteration, contrary to the previous test, where only the values of one feature change and the model is simply reapplied. Fortunately, our training sample and modeling are neither prohibitively large nor complicated. Thus, using the combined classifier, we iteratively removed one feature at a time. Then we calculated the metric of each iteration with respect to the one for the initial feature set. In Fig. \ref{f:featimp_remove} (upper panel) we plot this accuracy, where $r-i$ and $y-[3.6]$ show (relatively) large deviations and seem to be the most important features. This is in agreement with what we found with the feature permutation approach. Interestingly, $r-i$ is the ``bluest'' feature and seems to be important for the overall classification (the optical part is excluded from the work of \citealt{Dorn-Wallenstein2021}). When examining the results for the recall per class (Fig. \ref{f:featimp_remove}; lower panel) we see that different classes are sensitive to different features. For BeBRs, $i-z$ and $y-[3.6]$ seem to be the most important, although smaller offsets are visible for the rest of the features also (the mean curve of the BeBRs peaks at this feature; see Fig. \ref{f:SEDs-all}). This can be attributed to their overall redder colors because of the dusty environment around these objects.
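For concreteness, the two importance tests described above can be reproduced along the following lines (a minimal sketch, not our actual pipeline; \texttt{clf}, \texttt{X\_train}, \texttt{y\_train}, \texttt{feature\_names}, and \texttt{retrain\_and\_score} are placeholders for the fitted combined classifier, the resampled training data, the five color indices, and a hypothetical helper that retrains and evaluates the model):

\begin{verbatim}
import numpy as np
from sklearn.inspection import permutation_importance

# Permutation importance: shuffle one feature at a time and
# record the drop in accuracy relative to the unshuffled data.
result = permutation_importance(clf, X_train, y_train,
                                scoring="accuracy",
                                n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")

# Drop-one-feature importance: remove a feature, retrain the
# whole model from scratch, and compare the resulting accuracy.
for j, name in enumerate(feature_names):
    X_reduced = np.delete(X_train, j, axis=1)
    acc = retrain_and_score(X_reduced, y_train)  # hypothetical
    print(f"without {name}: {acc:.3f}")
\end{verbatim}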
The GALs are sensitive both to $r-i$, the feature closest to the blue optical part, and to $y-[3.6]$, partly due to the PAH component (GALs display the second strongest peak in Fig. \ref{f:SEDs-all}). Although not as significant, $i-z$ seems to favor WR classification. The WR class is a collection of different flavors of classical (evolved) WRs, including binary systems. The YSGs are more sensitive to $y-[3.6]$ and a bit less to $i-z$, similar to BeBRs, as they also tend to have dusty environments. The BSGs and RSGs are the most populated classes, and they do not show any significant dependence. This might be because, although distinct from the other classes, they contain a wider range of objects that possibly masks significant differences between the bands (see \citealt{Bonanos2009, Bonanos2010}). For example, we included in the BSG class sources with emission lines, such as Be stars, which display redder colors. For LBVs, $i-z$ seems important, but due to their small population the error is quite significant. Also, the redder features lie at zero, which may be due to the inability of our model to predict these sources with higher confidence. If we were to exclude any of these features, we would get poorer results for some of the classes. The inclusion of more colors would benefit the performance of our classifier, as it would help with the sampling of the spectral energy distributions of the sources (going to the blue optical part will not help the redder sources, but it would be valuable for the hotter classes). \section{Summary and conclusions} \label{s:summary} In this work we present the application of machine-learning algorithms to build an ensemble photometric classifier for the classification of massive stars in nearby galaxies. We compiled a \textit{Gaia}-cleaned sample of 932 selected M31 and M33 sources, and we grouped their spectral types into seven classes: BSGs, YSGs, RSGs, B[e]SGs, LBVs, WRs, and background sources (outliers). To address the imbalance of the sample, we employed a synthetic data approach with which we managed to increase the underrepresented classes, although this is always limited by the feature space that the initial sources sample. We used as features the consecutive color indices from the \textit{Spitzer} [3.6] and [4.5] and Pan-STARRS $r, i, z, $ and $y$ bands (not corrected for extinction). We implemented three well-known supervised machine-learning algorithms, SVC, RF, and MLP, to develop our classifier. The application of each of the algorithms yields fairly good overall results (recovery rates): BSGs, GALs, and YSGs from $\sim60\%$ to $\sim80\%$, BeBRs at $\sim73-80\%$, and WRs at $\sim45\%$, with the best results obtained for the RSGs ($\sim94\%$) and the worst for LBVs ($\sim28\%$, for SVC only). These results are on par with or better than the results from \cite{Dorn-Wallenstein2021}, who worked with a much less homogeneous (with respect to the labels) but more populated Galactic sample. Given the similar performance of the three methods, and to maximize our prediction capability, we combined all outputs into a single probability distribution. This final meta-classifier achieved a similar overall (weighted balanced) accuracy ($\sim83\%$) and similarly good results per class. Examining the impact of the training volume size, we noticed that, as expected, the sample size plays a critical role in the accurate prediction of a class. When many sources of a class are available (e.g., RSGs or BSGs), the classifier works efficiently.
In less populated classes (such as BeBRs and WRs), the inclusion of more objects increases the information provided to the classifier and improves the prediction ability. However, we are hampered by low-number statistics, as these classes correspond to rare and/or short-lived phases. Additional information can be retrieved by using more features. We investigated the feature importance and found that, for the current data set, $r-i$ and $y-[3.6]$ are the most important, although different classes are sensitive to different features. Thus, the inclusion of more color indices (i.e., observations at different bands) could improve the separation of the classes. To test our classifier with an independent sample, we used data collected for IC 1613, WLM, and Sextans A sources, some of which ($\sim14\%$) had missing values. We performed data imputation, replacing the missing feature values using means and an iterative imputer. Although the missing values do not significantly affect the results for this particular data set, further tests showed that the iterative imputer can efficiently handle data sets with up to three missing features (out of the total five available). The final obtained accuracy is $\sim70\%$, lower than what we achieved for M31 and M33. The discrepancy can partly be attributed to photometric issues and to the total effect of metallicity, which can modify both the intrinsic colors of the sources and the extinction in the different galactic environments. Despite this, the result from this application is promising. In a follow-up paper we will present in detail the application of our classifier to previously unclassified sources for a large number of nearby galaxies. Currently, the metallicity dependence is impossible to address. For this we need larger samples of well-characterized sources in different metallicity environments. Although this is challenging because of the observing time required at large facilities, the ASSESS team is actively working toward this goal. A number of spectroscopic observing campaigns have been completed or are ongoing; these will provide the ultimate testbed for our classifier's actual performance, along with opportunities for improvement.\\ \small \noindent\textit{Acknowledgements} We thank the anonymous referee for their constructive comments and suggestions that helped us to improve this work. GM, AZB, FT, SdW, MY acknowledge funding support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 772086). GM would like to thank Alkis Simitsis, Thodoris Bitsakis, Elias Kyritsis, Andreas Zezas, Jeff Andrews, and Konstantinos Paliouras for many fruitful discussions on machine learning and beyond. \textit{Facilities:} This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. The UHS is a partnership between the UK STFC, The University of Hawaii, The University of Arizona, Lockheed Martin and NASA. \textit{Software:} This research made use of Numpy \citep{numpy2020}, matplotlib \citep{matplotlib}, sklearn \citep{sklearn}, Jupyter Notebooks \citep{jupyter}, and Mlxtend \citep{mlxtend}. This research made use of TOPCAT, an interactive graphical viewer and editor for tabular data \citep{topcat}. We wish to thank the ``2019 Summer School for Astrostatistics in Crete''\footnote{\url{http://astro.physics.uoc.gr/Conferences/Astrostatistics_School_Crete_2019/}} for providing training on the statistical methods adopted in this work. We also thank Jeff Andrews for organizing the SMAC (Statistical methods for Astrophysics in Crete) seminar\footnote{\url{https://githubhelp.com/astroJeff/SMAC}}. We also acknowledge useful information provided by Jason Brownlee on his site Machine Learning Mastery\footnote{\url{https://machinelearningmastery.com}}. This research has made use of NASA's Astrophysics Data System. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the SVO Filter Profile Service (\url{http://svo2.cab.inta-csic.es/theory/fps/}) supported by the Spanish MINECO through grant AYA2017-84089. \bibliographystyle{aa}
\section{Introduction} The multi-agent path finding ({\bf MAPF}\xspace) problem aims to find paths for multiple agents from their initial locations to their destinations such that no two agents collide with each other while they follow these paths. This problem has been studied under various constraints (e.g., where an upper bound is given on the plan length) or with various objectives (e.g., minimizing the total time taken for all agents to reach their goals, or minimizing the maximum time taken for each agent to reach its goal location). All these variants are NP-hard~\cite{RatnerW86,Surynek10}. We study a dynamic version of the {\bf MAPF}\xspace problem that emerges when changes in the environment take place, e.g., when new agents are added to the team at different times with their own initial and goal locations, or when some obstacles are placed into the environment. We refer to this problem as the Dynamic Multi-Agent Path Finding ({\bf D-MAPF}\xspace) problem. {\bf D-MAPF}\xspace\ has many direct applications in automated warehouses, where teams of hundreds of robots are utilized to prepare dynamic orders in an ever-changing environment~\cite{Wurman}. We propose a new method to solve the {\bf D-MAPF}\xspace problem, which involves replanning for a small set of agents that conflict with each other. When several new agents join the team and some conflicts occur, our objective is to minimize the number of agents that are required to replan to resolve these conflicts. In this way, we avoid having to replan for all agents and instead keep the plans of as many of the existing agents as possible fixed. We identify a minimal set of agents whose paths should be replanned by means of identifying conflicts and then resolving them by replanning. The proposed method utilizes Answer Set Programming (ASP)~\cite{MarekT99,Niemelae99,Lifschitz02} (based on answer sets~\cite{GelfondL88,GelfondL91}) for planning, replanning and identifying a minimal set of agents with conflicts. The ASP formulation used for planning is presented in our earlier study~\cite{ErdemKOS13}, to which we refer the reader for details. In the following, we will focus more on the use of ASP for the latter two problems. \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{\includegraphics{figure1.pdf}} \vspace{-1.\baselineskip} \caption{An illustrative example. (a) 2 agents with their already determined respective paths. (b) A new agent $a_3$ is added to the environment but cannot find a collision-free path. (c) All agents replan their solutions to find a collision-free path. } \label{fig:sh1} \end{figure} \section{Dynamic {\bf MAPF}\xspace} {\bf D-MAPF}\xspace can be thought of as a generalization of the {\bf MAPF}\xspace problem. In the case of {\bf D-MAPF}\xspace, we deal with changes that take place with the passage of time. These changes can include, but are not limited to, the addition of obstacles into the environment, the addition of new agents into the environment, and changes in the objectives of each agent for a given problem. The inputs to a {\bf D-MAPF}\xspace problem are the same as those for the {\bf MAPF}\xspace problem: the initial and goal positions of each agent, the updates or modifications that have taken place in the environment, a restriction on the makespan of each agent, and the paths of the existing agents. One possible representation of such an instance is sketched below.
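For concreteness, the input of a {\bf D-MAPF}\xspace instance can be represented as follows (a minimal Python sketch of one possible encoding; the class and field names are illustrative and not taken from our implementation):

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class DMAPFInstance:
    starts: dict          # agent id -> (x, y) start cell
    goals: dict           # agent id -> (x, y) goal cell
    obstacles: set        # blocked (x, y) cells
    makespan: int         # upper bound on plan length
    # paths of the existing agents: agent id -> [(x, y), ...]
    existing_paths: dict = field(default_factory=dict)
\end{verbatim}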
Figure~\ref{fig:sh1} above gives an example of a {\bf D-MAPF}\xspace problem where a new agent $a_3$ is added to the existing environment, which consists of two agents $a_1$ and $a_2$ whose paths are already determined as in Figure~\ref{fig:sh1}(a). With the makespan of each agent restricted to 3, agent $a_3$ is unable to find a collision-free solution for the given instance, as shown in Figure~\ref{fig:sh1}(b). As the main objective of any {\bf MAPF}\xspace problem is to find a collision-free solution for all agents, replanning is attempted for all agents in the example, as shown in Figure~\ref{fig:sh1}(c). \section{Solving {\bf D-MAPF}\xspace via Conflict Resolution} \label{sec:conflict} We introduce a new method to solve {\bf D-MAPF}\xspace where replanning for all agents is avoided most of the time. This method keeps track of two sets of agents throughout its execution: \textit{nonConflictSet}, which contains the set of agents (and their plans) that do not conflict with each other and, ideally, remain as they are despite the changes in the environment; and \textit{conflictSet}, which contains the set of agents (and their plans) that conflict with each other and for which, ideally, replanning for a minimal subset would resolve the conflicts. Our algorithm applies when some new agents join the team while the existing agents are executing their plans. \begin{enumerate} \item When a set of new agents join the set of existing agents, then try to find a {\bf MAPF}\xspace solution for the new agents so that they do not conflict with each other or with the existing agents. \item If such a solution exists, then include the new agents (with their plans) in \textit{nonConflictSet}. \item Otherwise, include the new agents (with their plans) in \textit{conflictSet}. \item While there is some conflict to resolve, do the following: \begin{enumerate} \item Try to find a minimal(-cardinality) subset of agents in \textit{conflictSet}, such that replanning for them resolves the conflicts in \textit{conflictSet}. \item If such a minimal subset of agents is found, then include all agents (and their plans/replans) from the \textit{conflictSet} in \textit{nonConflictSet}. \item Otherwise, some conflicts exist between some agents in \textit{nonConflictSet} and \textit{conflictSet}; expand \textit{conflictSet} by a set of agents (and their plans) from \textit{nonConflictSet} that cause the minimum number of conflicts. \item Meanwhile, move the agents from \textit{conflictSet} that are not involved in these conflicts to \textit{nonConflictSet}. \end{enumerate} \end{enumerate} Note that, in the worst case, the algorithm above replans for all agents. We use a slight variation of the ASP formulation $\Pi$ for {\bf MAPF}\xspace from our earlier studies~\cite{ErdemKOS13} to find a {\bf MAPF}\xspace solution for the new agents in Step 1 above, by generating plans for the new agents only and by incorporating the plans of the existing agents as facts. In Step 4(a), we enumerate all subsets of \textit{conflictSet} with cardinality $2, 3, \dots$ incrementally, and use a slight variation of $\Pi$ to replan for each such subset of agents only, incorporating the plans of the other agents in \textit{conflictSet} as facts. In Step 4(c), we expand the \textit{conflictSet} by utilizing {\sc ASP}\xspace's noteworthy feature of weak constraints.
In particular, we identify the minimum number of conflicts between agents in the \textit{conflictSet} and those in the \textit{nonConflictSet}: $$ \xleftarrow{\scriptstyle\sim} \ii{plan}(t,a_1,x,y),\ \ii{path}(t,a_2,x,y),\ \ii{conflictSet}(a_1), \ \ii{nonConflictSet}(a_2)\ [1@1,a_1,a_2,t] \qquad (a_1\neq a_2). $$ Here, a penalty of 1 is assigned each time such a conflict is detected. With the addition of this weak constraint, the ASP solver generates several solutions, and one with the lowest penalty cost is chosen. In addition to the weak constraints, note that we still include hard constraints to prevent collisions between agents within the conflict set: $$ \leftarrow \ii{path}(t,a_1,x,y),\ \ii{path}(t,a_2,x,y),\ \ii{conflictSet}(a_1),\ \ii{conflictSet}(a_2) \qquad (a_1\neq a_2). $$ Figure~\ref{fig:sh2} below gives an example of a scenario where our algorithm manages to find a collision-free solution for the agents in the environment without having to replan for all agents. The existing agents ($a_1, a_2, a_3, a_4$) are added to the \textit{nonConflictSet} and their existing paths are stored. The three new agents ($a_5, a_6, a_7$) are added to the environment. The algorithm first attempts to find a solution for the new agents while keeping the paths of the pre-existing agents fixed. Unable to find a solution, it places the new agents in \textit{conflictSet}. It further tries to resolve conflicts within \textit{conflictSet}. Unable to resolve the conflicts, the algorithm expands the conflict set by finding a minimum set of conflicts between agents in \textit{conflictSet} and agents in \textit{nonConflictSet}. The algorithm finds, as shown in Figure~\ref{fig:sh2}(c), that the agents $a_1, a_2, a_4, a_5$ and $a_6$ conflict amongst each other. Then, \textit{conflictSet} is updated to contain these agents only, while $a_7$ is moved to \textit{nonConflictSet}. \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{\includegraphics{figure2.pdf}} \vspace{-1.2\baselineskip} \caption{An illustrative example. (a)~4~agents with their already determined respective paths are added to the \textit{nonConflictSet}. (b)~3~new agents $a_5$, $a_6$, $a_7$ are added to the \textit{conflictSet} and a solution is attempted. (c)~Regions of conflicts amongst agents in the \textit{conflictSet} and the \textit{nonConflictSet} are shown. (d)~New plans for all agents are presented. } \label{fig:sh2} \vspace{-.25\baselineskip} \end{figure} The algorithm then proceeds to resolve the conflicts within $\textit{conflictSet}= \{a_1, a_2, a_4, a_5, a_6\}$. It first enumerates all subsets of size 2 of the agents in the \textit{conflictSet}: $\{a_1,a_2\}, \{a_1,a_4\}, \{a_1,a_5\}, \{a_1,a_6\}, \{a_2,a_4\}$, $\{a_2,a_5\}, \{a_2,a_6\}, \{a_4,a_5\}, \{a_4,a_6\}, \{a_5,a_6\}$. Each subset is selected one at a time, and the algorithm proceeds to determine whether a solution can be found by replanning only for the two agents in the given subset. In this particular case, the algorithm is unable to find a solution for any of the 10 subsets of size 2. Then the algorithm enumerates all subsets of size 3: $\{a_1,a_2,a_4\}, \{a_1,a_2,a_5\}, \{a_1,a_2,a_6\}, \{a_1,a_4,a_5\}$, $\{a_1,a_4,a_6\}, \{a_1,a_5,a_6\}, \{a_2,a_4,a_5\}, \{a_2,a_4,a_6\}, \{a_2,a_5,a_6\}, \{a_4,a_5,a_6\}$. Once again the algorithm attempts to replan for each subset one at a time until a solution is found (a sketch of this enumeration loop is given below).
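This incremental enumeration can be driven from Python as follows (a minimal sketch, not our actual code; \texttt{replan} stands for a hypothetical wrapper that invokes {\sc Clingo}\xspace on the variation of $\Pi$ described above and returns new plans or \texttt{None}):

\begin{verbatim}
from itertools import combinations

def resolve_conflicts(conflict_set, replan):
    """Replan for subsets of conflict_set of increasing
    cardinality; return the plans of the first subset that
    resolves all conflicts, or None if no subset works."""
    agents = sorted(conflict_set)
    for k in range(2, len(agents) + 1):
        for subset in combinations(agents, k):
            # replan() keeps the paths of all other agents
            # fixed and calls the ASP solver for the subset.
            plans = replan(subset)
            if plans is not None:
                return plans
    return None  # fall back to expanding the conflict set
\end{verbatim}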
At size 3, the algorithm is able to find a solution for the subset $\{a_1,a_2,a_5\}$, and replanning is performed only for those agents to devise a collision-free solution for all agents, as shown in Figure~\ref{fig:sh2}(d). \section{Experimental Evaluations} \vspace{-1mm} By means of experiments, we have compared our algorithm for solving {\bf D-MAPF}\xspace with the straightforward approach of replanning for all agents. The algorithm described in the previous section has been implemented using Python~3.6.4 and {\sc Clingo}\xspace~4.5.4, and we have performed experiments on a Linux server with 16~2.4~GHz~Intel~E5-2665 CPU cores and 64~GB memory. \begin{table}[t!] \centering \caption{Experimental evaluations.} \vspace{-.5\baselineskip} \resizebox{1.0\columnwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Initial & \# of new & Makespan & \multicolumn{2}{c}{Replanning for a Subset} & \multicolumn{2}{c}{Replanning for All} & Cardinality of the & Cardinality of the subset & Subset \\ Instance & agents & & CPU time [s] & Solution Found [Y/N] & CPU time [s] & Solution Found [Y/N] & Conflict Set & for which a solution is found & Number\\ \hline \hline & 1 & 38 & 1.90 & Y & 49.40 & Y & 2 & 2 & 1 \\ 1 & 2 & 38 & 4.58 & Y & 50.58 & Y & 4 & 2 & 3 \\ 28 agents & 3 & 38 & 15.16 & Y & 53.49 & Y & 6 & 3 & 16 \\ $20\times 20$ grid & 4 & 38 & 142.66 & Y & 56.57 & Y & 8 & 4 & 120 \\ \hline & 1 & 58 & 7.34 & Y & 180.63 & Y & 2 & 2 & 1 \\ 2 & 2 & 58 & 18.07 & Y & 186.77 & Y & 4 & 2 & 3 \\ 28 agents & 3 & 58 & 57.90 & Y & 247.68 & Y & 6 & 3 & 16 \\ $30\times 30$ grid & 4 & 58 & 551.74 & Y & 261.07 & Y & 8 & 4 & 120 \\ \hline & 1 & 78 & 21.83 & Y & $>1000$ & Y & 2 & 2 & 1 \\ 3 & 2 & 78 & 50.28 & Y & $>1000$ & Y & 4 & 2 & 3 \\ 42 agents & 3 & 78 & 61.72 & Y & $>1000$ & Y & 4 & 2 & 3 \\ $40\times 40$ grid & 4 & 78 & 62.59 & Y & 1440.14 & Y & 5 & 2 & 1 \\ \hline & 1 & 98 & 39.67 & Y & $>2500$ & Y & 2 & 2 & 1 \\ 4 & 2 & 98 & 115.74 & Y & $>2500$ & Y & 4 & 2 & 4 \\ 38 agents & 3 & 98 & 172.40 & Y & $>2500$ & Y & 4 & 2 & 4 \\ $50\times 50$ grid & 4 & 98 & 394.36 & Y & 2766.37 & Y & 6 & 3 & 16 \\ \hline 5 & & & & & & & & & \\ 46 agents & 4 & 138 & 23408.82 & Y & - & N & 8 & 4 & 85 \\ $70\times 70$ grid & & & & & & & & & \\ \hline \end{tabular}}\label{Tab:results} \end{table} Experiments have been carried out for various grid sizes and varying numbers of agents, as shown in Table~\ref{Tab:results}. For each grid size, 4 test cases were run. The number of existing agents and the makespan for a particular grid size have been kept fixed for each of the 4 test cases. For uniformity, the makespan has been selected as the length of the longest path from one corner of the grid to the diagonally opposite corner. The tests have also been carried out with the assumption that no static obstacles exist and that each agent starts at $t=0$. The size of the {\it conflictSet}\xspace along with the size of the subset for which a solution is found is also shown for better analysis. To serve as an example, let us look at the fourth instance in Table~\ref{Tab:results}, with 38 agents on a $50\times 50$ grid and a makespan of 98. When four new agents are added to the environment, a new solution is computed by our algorithm in 394.36 seconds, whereas replanning for all agents requires 2766.37 seconds. The number of agents that were conflicting with each other in this case was 6, and the size of the subset for which a solution was found was 3. For small grid sizes, replanning for all agents outperformed our implementation.
This was expected, however: calling the {\sc ASP}\xspace program with weak constraints generates many more possible configurations, and for such small instances it is more efficient to replan for all agents. The results get more interesting as the grid size and the number of agents increase. When the grid size increases, we obtain the expected results in almost all of the remaining test cases. There were exceptions to the efficiency of our algorithm, as shown by the last test case for the $20 \times 20$ grid: replanning for all agents proved to be more efficient than our version because 120 subsets had to be tried until a solution was found, which is a more time-consuming process. Results for the $40 \times 40$ and $50 \times 50$ instances show how much more effective it can be to replan only for a subset of agents. For the test case with a grid size of $40 \times 40$ and 4 new agents, our algorithm was at least 20 times as efficient as replanning for all agents. As the grid size and the number of agents increase further, there is a notable difference between the time taken to find a solution by our algorithm and by replanning. For the largest grid size of $70 \times 70$, our algorithm found a solution in about 6 hours, whereas replanning for all agents was not possible due to the sheer size of the input. From these results, we observe that the underlying idea of reusing existing solutions may be quite efficient in terms of computation time, in particular for large instances with a large makespan. \section{Related Work} {\em Regarding conflicts}: A sort of conflict resolution has been utilized by the Conflict-Based Search (CBS) algorithm~\cite{SharonSFS15}, introduced to solve the {\bf MAPF}\xspace problem. The approach decomposes a {\bf MAPF}\xspace problem into several constrained single-agent path finding problems. At the high level, the algorithm maintains a binary tree, referred to as a Conflict Tree (CT); it detects conflicts and adds a set of constraints for every agent to each node of the tree. At the low level, the shortest path for every agent with respect to its constraints is searched for. The algorithm then checks whether any conflicts arise with the new paths computed at that node. If conflicts do arise, the algorithm declares the current node a non-goal node. What is interesting about their approach is the way that they deal with conflicts. While we generate all possible subsets of the conflicting agents and attempt to replan for each subset until a solution is found or the conflict set is expanded, the approach of \cite{SharonSFS15} splits the node at which a conflict arises into two child nodes, and both of these nodes are then checked to see whether a solution exists. If a conflict exists between two agents $\alpha_i$ and $\alpha_j$, each child node adds to the constraints of its parent node an additional constraint, for either $\alpha_i$ or $\alpha_j$. Search is then performed only for the agent associated with the new constraint, while the paths of all other agents are kept fixed. When conflicts arise amongst more than 2 agents, focus is placed on the first two agents and the same procedure as described above is followed. Further conflicts are dealt with at a deeper level of the tree.
{\em Regarding dynamic {\bf MAPF}\xspace}: Online {\bf MAPF}\xspace~\cite{svancara2019} considers the addition of new agents to the team while a plan is being executed, under the assumptions that agents disappear when they reach their goals and that new agents may wait before entering their initial locations in the environment. These assumptions relax the {\bf D-MAPF}\xspace problem: the new agents may enter the environment one at a time, and they provide more space for the other agents when they disappear. To solve online {\bf MAPF}\xspace with these assumptions, Svancara et al. investigate algorithms that rely on replanning (e.g., for all agents) and conflict-resolution (e.g., planning for the new agents one at a time ignoring the others, and then resolving conflicts by replanning). Our approach does not rely on such assumptions and tries to resolve conflicts by identifying a minimal set of agents that cause conflicts. In an earlier study~\cite{BogatarkanP019}, we introduced an alternative method for {\bf D-MAPF}\xspace that does not rely on conflict-resolution. The idea is to revise the traversals of the paths of the existing agents (up to the given upper bound on the makespan) while computing new plans for the new agents so that there is no conflict between any two agents. If a solution cannot be found, then replanning is applied for all agents. As part of our ongoing studies, we plan to use the method based on conflict-resolution in combination with this revise-and-augment method to further reduce the number of replannings. \section{Conclusion} Our approach of minimizing the number of agents that are required to replan their solutions has been shown to be efficient, as detailed above. Replanning for all agents tends to become expensive very quickly once the environment becomes larger or more congested. An alternative approach, as described by our algorithm, can help reduce the cost of performing such a search while minimizing the modifications applied to the paths of the agents that already exist. \section*{Acknowledgements} This work has been partially supported by Tubitak Grant 188E931. \bibliographystyle{eptcs}
\section{Introduction} Reflected forms of regular Dirichlet forms were introduced by Silverstein in \cite{Sil1} to study the boundary behavior of Markov processes. More precisely, it is an important question to characterize all extensions of a given Markov process beyond its lifetime whose sample paths show the same local behavior as the original process. In terms of the associated Dirichlet forms, Silverstein observed in \cite{Sil,Sil2} that such processes are encoded by Silverstein extensions (with an additional condition on the generator) of the Dirichlet form of the given process and that the active\footnote{The adjective active refers to the fact that it lives on the space of square integrable functions. In the literature the reflected Dirichlet form is a form on all functions and therefore NOT a Dirichlet form; the active reflected Dirichlet form is then the restriction of the reflected Dirichlet form to square integrable functions. Since this can be a source of confusion, we shall write of the active reflected Dirichlet form and of the reflected form (here we drop the word Dirichlet).} reflected Dirichlet form is maximal among all such extensions in the sense of quadratic forms. The name reflected form comes from the following observation. For Brownian motion on a Euclidean domain (with smooth boundary) that is killed upon leaving the domain, the process associated with the active reflected Dirichlet form is Brownian motion reflected at the boundary of the domain, see e.g. \cite{Fuk67}. Silverstein proposed two approaches for constructing the reflected forms, one by extending processes and one by extending forms. However, he did not show that both approaches coincide, and it seems that a proof of the closedness of the reflected forms in his `form approach' is missing. These gaps were closed by Chen in \cite{Che} with probabilistic methods. In \cite{Kuw} Kuwae extended the form theoretic approach to defining reflected forms to quasi-regular Dirichlet forms and obtained an analytic proof of the closedness of the reflected form. A streamlined version of both constructions for quasi-regular Dirichlet forms can now be found in the textbook \cite{CF} by Chen and Fukushima. The known approaches to defining reflected forms have in common that they are rather technical and need some regularity of the underlying space and the Dirichlet form. One relies on the Beurling-Deny decomposition of quasi-regular Dirichlet forms, while the other uses characterizations of harmonic functions through the associated Markov process. It is the main goal of this note to give an `algebraic' construction of reflected Dirichlet forms, which works for all Dirichlet forms. In spirit, a similar approach has recently been used by Robinson in \cite{Rob} for inner regular local Dirichlet forms, but his methods cannot treat the nonlocal case and also need some regularity. Note also that Robinson does not use the name reflected Dirichlet form; one of his extremal forms is the reflected form. Besides greater generality, our new approach has some virtues that lead to new insights even for regular Dirichlet forms. It is claimed in \cite{CF,Kuw} that the active reflected Dirichlet form is always the maximal Silverstein extension of the given quasi-regular Dirichlet form. Unfortunately, the statement and the given proofs are only correct if the form does not have a killing, see Proposition~\ref{proposition:counterexample} for a counterexample.
We construct the reflected form by splitting the given form into its main part and its killing part and extending both parts to the maximal possible domain. The reflected form is then the sum of both extensions. We obtain that the active reflected Dirichlet form is the maximal Silverstein extension if the killing vanishes (as discussed previously, in this case the proofs in \cite{CF,Kuw} are also correct), but additionally prove that the active main part is the maximal form whose resolvent dominates the resolvent of the given form, see Theorem~\ref{theorem:maximality active reflected form}. This seems to be a new observation and leads to the insight that for every Dirichlet form there exists a maximal Dirichlet form whose resolvent dominates the one of the given form. Moreover, our construction allows us to prove that continuous functions are dense in the domain of the active reflected Dirichlet form of a regular Dirichlet form, see Theorem~\ref{theorem:continous functions are dense}. This can then be used to construct the reflected process on a compactification (minus one point) of the underlying space, see Theorem~\ref{theorem:regularity active main part}. To the best of our knowledge this precise topological information on the space where the reflected process can live is new. Previous constructions only show that there exists a locally compact space on which the reflected process lives that contains a quasi-open subset which is quasi-homeomorphic to the given underlying space. As can already be seen for quasi-regular Dirichlet forms, reflected forms leave the realm of Dirichlet forms. They are Markovian forms on all a.e. defined measurable functions and are lower semicontinuous with respect to a.e. convergence. In our construction of reflected forms quadratic forms of this type feature even more prominently. While it is possible to show that they are extended Dirichlet forms after a change of the underlying measure, it is more natural to replace a.e. convergence by local convergence in measure and to consider them instead as closed Markovian forms on the topological vector space $L^0(m)$, so-called energy forms. For localizable measures, energy forms have been introduced by the author in his PhD thesis \cite{Schmi}. There it is shown that they are a common generalization of extended Dirichlet forms and resistance forms in the sense of Kigami \cite{Kig2}. In this note we only deal with $\sigma$-finite measures but with two types of quadratic forms: Dirichlet forms and energy forms. The paper is organized as follows. In Section~\ref{section:preliminaries} we discuss basics about Dirichlet forms and energy forms. In particular, we clarify the relation of Silverstein extensions and form domination. Section~\ref{section:reflected forms} is devoted to the construction and properties of reflected forms. In Section~\ref{section:regular} we apply the developed theory to regular Dirichlet forms. In Appendix~\ref{appendix:closed forms on l0} we discuss the basics of closed forms on metrizable topological vector spaces, while Appendix~\ref{appendix:monotone forms} contains a characterization of monotone forms. \medskip Section~\ref{section:reflected forms} and Appendix~\ref{appendix:closed forms on l0} are based on the author's PhD thesis \cite{Schmi}. \medskip {\bf Acknowledgements:} The author would like to thank Alexander Grigor'yan for his encouragement to write this article.
Large parts of the text were written while the author was enjoying the hospitality of Jun Masamune at Hokkaido University Sapporo. The support of JSPS for this stay is gratefully acknowledged. Since the present text is based on the author's PhD thesis, it also owes greatly to discussions with his former advisor Daniel Lenz. \section{Preliminaries}\label{section:preliminaries} Throughout the paper $(X,\mathcal{B},m)$ is a $\sigma$-finite measure space. The space of real-valued measurable $m$-a.e. defined functions on $X$ is denoted by $L^0(m)$. We equip it with the vector space topology of {\em local convergence in measure}. Recall that a sequence $(f_n)$ {\em converges to $f$ locally in measure} (in which case we write $f_n \overset{m}{\to} f$) if and only if for all sets $U \in \mathcal{B}$ with $m(U) < \infty$ we have $$\int_U |f -f_n| \wedge 1 \,dm \to 0, \text{ as } n\to \infty.$$ Here, for $f,g \in L^0(m)$ we use the notation $f \wedge g$ for the pointwise minimum of $f$ and $g$, and $f \vee g$ for the pointwise maximum. Since $m$ is $\sigma$-finite, the topology of local convergence in measure is metrizable and $f_n \tom f$ if and only if each subsequence of $(f_n)$ has a subsequence that converges $m$-a.e. to $f$, see e.g. \cite[Proposition~245K]{Fre2}. In particular, a.e. convergent sequences are locally convergent in measure. In what follows we shall be concerned with Dirichlet forms on $L^2(m)$ and closed Markovian forms on $L^0(m)$, so-called energy forms. Closed forms on topological vector spaces other than $L^2(m)$ seem to be less well studied. Therefore, we include a short discussion about them in Appendix~\ref{appendix:closed forms on l0}. We also refer to the beginning of this appendix for the general terminology that we use for quadratic forms. For background on Dirichlet forms see e.g. \cite{CF,FOT,MR}. \subsection{Dirichlet forms and domination} Let ${\mathcal E}$ be a {\em Dirichlet form}, i.e., a densely defined closed Markovian quadratic form on $L^2(m)$. We write $\as{\cdot,\cdot}_{\mathcal E}$ for the {\em form inner product} $$\as{f,g}_{\mathcal E} = {\mathcal E}(f,g) + \as{f,g}_2 \text{ for } f,g \in D({\mathcal E}),$$ where $\as{\cdot,\cdot}_2$ is the ordinary $L^2$-inner product on $L^2(m)$. The {\em form norm} is $\|\cdot\|_{\mathcal E} := \as{\cdot,\cdot}_{\mathcal E}^{1/2}$. Recall the following structure properties of the domains of Dirichlet forms, see e.g. \cite[Theorem~I.4.12]{MR}. \begin{lemma} \label{lemma:contraction properties} Let ${\mathcal E}$ be a Dirichlet form. For $f,f_1,\ldots,f_n \in L^2(m)$ the inequalities $$|f(x)| \leq \sum_{k=1}^n|f_k(x)| \text{ and } |f(x)-f(y)| \leq \sum_{k = 1}^n |f_k(x)-f_k(y)| \text{ for } m\text{-a.e. } x,y \in X $$ imply $${\mathcal E}(f)^{1/2} \leq \sum_{k = 1}^n {\mathcal E}(f_k)^{1/2}.$$ In particular, $D({\mathcal E}) \cap L^\infty(m)$ is an algebra and $D({\mathcal E})$ is a lattice. \end{lemma} Let ${\mathcal E}$ and $\tilde{{\mathcal E}}$ be Dirichlet forms on $L^2(m)$ and let $(G_\alpha)_{\alpha > 0}$, respectively $(\tilde G_\alpha)_{\alpha> 0}$, be the associated Markovian resolvents. We say that {\em $\tilde{{\mathcal E}}$ dominates ${\mathcal E}$} if $(\tilde G_\alpha)_{\alpha> 0}$ dominates $(G_\alpha)_{\alpha > 0}$, i.e., if for all $f \in L^2(m)$ and all $\alpha>0$ the inequality $|G_\alpha f| \leq \tilde{G}_\alpha |f|$ holds. Domination of forms can be characterized by ideal properties of their domains. This is discussed next.
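Before turning to this characterization, we mention a guiding example (a sketch; it will not be used in the sequel). If $k:X \to [0,\infty)$ is measurable, the form obtained from ${\mathcal E}$ by adding a killing term,
$${\mathcal E}^k(f) := {\mathcal E}(f) + \int_X f^2 k\, dm, \quad D({\mathcal E}^k) := D({\mathcal E}) \cap L^2(k \cdot m),$$
is a closed Markovian form that, whenever it is densely defined, is a Dirichlet form dominated by ${\mathcal E}$. This can be verified directly with Lemma~\ref{lemma:characterization of domination} below, since $D({\mathcal E}^k)$ is an order ideal in $D({\mathcal E})$ and ${\mathcal E}^k(f,g) \geq {\mathcal E}(f,g)$ for nonnegative $f,g \in D({\mathcal E}^k)$. Note that ${\mathcal E}$ is not an extension of ${\mathcal E}^k$ unless $k$ vanishes a.e., so domination is a strictly more flexible relation than being a Silverstein extension.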
For two subsets $I,S \subseteq L^0(m)$ we say that $I$ is an {\em order ideal in $S$} if $f \in S$, $g \in I$ and $|f| \leq |g|$ implies $f \in I$, and we say that $I$ is an {\em algebraic ideal in $S$} if $f \in S$, $g \in I$ implies $fg \in I$. \begin{lemma} \label{lemma:characterization of domination} Let ${\mathcal E}$ and $\tilde{{\mathcal E}}$ be Dirichlet forms on $L^2(m)$. The following assertions are equivalent. \begin{itemize} \item[(i)] ${\mathcal E}$ is dominated by $\tilde {\mathcal E}$. \item[(ii)] $D({\mathcal E}) \subseteq D(\tilde {\mathcal E})$, $D({\mathcal E})$ is an order ideal in $D(\tilde{{\mathcal E}})$ and % $${\mathcal E}(f,g) \geq \tilde{{\mathcal E}}(f,g)$$ % for all nonnegative $f,g \in D({\mathcal E})$. \item[(iii)] $D({\mathcal E}) \subseteq D(\tilde {\mathcal E})$, $D({\mathcal E})\cap L^\infty(m)$ is an algebraic ideal in $D(\tilde{{\mathcal E}}) \cap L^\infty(m)$ and % $${\mathcal E}(f,g) \geq \tilde{{\mathcal E}}(f,g)$$ % for all nonnegative $f,g \in D({\mathcal E})$. \end{itemize} \end{lemma} \begin{proof} (i) $\Leftrightarrow$ (ii): This follows from \cite[Corollary~4.3]{MVV}, where instead of the resolvents the associated semigroups are considered. It is readily verified that domination of the associated resolvents is equivalent to domination of the associated semigroups. (ii) $\Rightarrow$ (iii): Let $f \in D(\tilde{{\mathcal E}}) \cap L^\infty(m)$ and let $g \in D({\mathcal E})\cap L^\infty(m)$. We obtain $|fg| \leq \|f\|_\infty |g|$. Since $D({\mathcal E})$ is an order ideal in $D(\tilde {\mathcal E})$, this implies $fg \in D({\mathcal E})$. (iii) $\Rightarrow$ (ii): Let $f \in D(\tilde {\mathcal E})$ and let $g \in D({\mathcal E})$ with $|f| \leq |g|$. Since domains of Dirichlet forms are lattices, we can assume $0 \leq f \leq g$. Moreover, since $f \wedge n \to f$, $g \wedge n \to g$, as $n \to \infty$, with respect to the corresponding form norms, see e.g. \cite[Theorem~1.4.2]{FOT}, we can further assume that $f$ and $g$ are bounded. Let $A:= \{(x,y) \in {\mathbb R}^2 \mid 0 \leq x \leq y\}$, $\varepsilon > 0$ and consider % $$C_\varepsilon:A \to {\mathbb R},\, C_\varepsilon(x,y) := x \frac{y}{y+\varepsilon}.$$ % For $(x_i,y_i) \in A$, $i=1,2$, it satisfies % $$|C_\varepsilon(x_1,y_1) - C_\varepsilon(x_2,y_2)| \leq |x_1 - x_2| + |y_1 - y_2|$$ % and $C_\varepsilon(0,0) = 0$. Since $0\leq f \leq g$, we also have % $$C_\varepsilon(f,g) = f \frac{g}{g + \varepsilon} \to f \text{ in } L^2(m),\text{ as }\varepsilon \to 0+,$$ % and the $L^2$-lower semicontinuity of ${\mathcal E}$ implies % $${\mathcal E}(f) \leq \liminf_{\varepsilon\to 0+} {\mathcal E}(C_\varepsilon(f,g)).$$ % Therefore, it suffices to prove that the right-hand side of this inequality is bounded independently of $\varepsilon$. The function $H_\varepsilon:[0,\infty) \to {\mathbb R}, H_\varepsilon(y) = y(y+\varepsilon)^{-1}$ is $\varepsilon^{-1}$-Lipschitz and satisfies $H_\varepsilon(0) = 0$, so that $H_\varepsilon(g) \in D({\mathcal E})$ by Lemma~\ref{lemma:contraction properties}. Since $D({\mathcal E})\cap L^\infty(m)$ is an algebraic ideal in $D(\tilde{{\mathcal E}}) \cap L^\infty(m)$, this implies $C_\varepsilon(f,g) = f H_\varepsilon(g)\in D({\mathcal E})$ for all $\varepsilon > 0$.
The inequality between ${\mathcal E}$ and $\tilde{{\mathcal E}}$ for nonnegative functions in $D({\mathcal E})$ and the inequalities $0 \leq C_{\varepsilon}(f,g) \leq f \leq g$ then show % \begin{align*} {\mathcal E}(C_\varepsilon(f,g)) &= \tilde {\mathcal E}(C_\varepsilon(f,g)) + {\mathcal E}(C_\varepsilon(f,g)) - \tilde {\mathcal E}(C_\varepsilon(f,g)) \\ &\leq \tilde {\mathcal E}(C_\varepsilon(f,g)) + {\mathcal E}(g) - \tilde {\mathcal E}(g). \end{align*} % Moreover, the properties of $C_\varepsilon$ and Lemma~\ref{lemma:contraction properties} imply % $$\tilde {\mathcal E}(C_\varepsilon(f,g))^{1/2} \leq \tilde {\mathcal E} (f)^{1/2} + \tilde {\mathcal E} (g)^{1/2}.$$ % Altogether we obtain that ${\mathcal E}(C_\varepsilon(f,g))$ is bounded independently of $\varepsilon$. This finishes the proof. \end{proof} \begin{remark} For the case when $\tilde {\mathcal E}$ is an extension of ${\mathcal E}$ the author learned the presented proof of (iii) $\Rightarrow$ (ii) in a discussion with Peter Stollmann and Hendrik Vogt in December 2012. Independently, a different proof of the lemma was recently given in \cite{Rob} in the somewhat more restrictive setting that $X$ is a locally compact separable metric space and $m$ is a Radon measure of full support on $X$. \end{remark} If the quadratic form $\tilde {\mathcal E}$ is an extension of ${\mathcal E}$, then $D({\mathcal E}) \subseteq D(\tilde {\mathcal E})$ and the inequality ${\mathcal E}(f,g) \geq \tilde {\mathcal E}(f,g)$ for nonnegative $f,g \in D({\mathcal E})$ are trivially satisfied. If, in this case, the form $\tilde {\mathcal E}$ satisfies any of the three equivalent conditions of the previous lemma, it is called a {\em Silverstein extension} of ${\mathcal E}$. It is one main goal of this paper to construct for a given Dirichlet form the maximal Dirichlet form that dominates it and, if possible, to also construct its maximal Silverstein extension. \begin{remark} Silverstein extensions play an important rôle in the study of the boundary behavior of symmetric Markov processes. Their study was initiated in the books \cite{Sil,Sil2}. A modern treatment can be found in \cite{CF}. \end{remark} \subsection{Energy forms and extended Dirichlet spaces} As discussed in Appendix~\ref{appendix:closed forms on l0}, a quadratic form $E$ on $L^0(m)$ is called {\em closed} if it is lower semicontinuous with respect to local convergence in measure, i.e., if for all sequences $(f_n)$ in $L^0(m)$ and all $f \in L^0(m)$ the convergence $f_n \tom f$ implies $$E(f) \leq \liminf_{n\to \infty} E(f_n).$$ A closed quadratic form $E$ on $L^0(m)$ is called an {\em energy form} if it is {\em Markovian}, i.e., if for each normal contraction $C:{\mathbb R} \to {\mathbb R}$ and all $f \in L^0(m)$ we have $$E(C \circ f) \leq E(f).$$ Clearly, the restriction of an energy form to $L^2(m)$ is a (not necessarily densely defined) Dirichlet form on $L^2(m)$, as its $L^2$-lower semicontinuity follows from the continuity of the embedding $L^2(m) \hookrightarrow L^0(m)$. The theory of extended Dirichlet forms shows that the opposite direction is also possible. A Dirichlet form ${\mathcal E}$ on $L^2(m)$ can be considered as a quadratic form on $L^0(m)$ by letting ${\mathcal E}(f) = \infty$ for $f \in L^0(m) \setminus L^2(m)$. The next lemma shows that this form is closable and its closure is an energy form, the so-called {\em extended Dirichlet form}, which we denote by $\mathcal{E}_{\rm e}$. \begin{lemma} Every Dirichlet form on $L^2(m)$ is closable on $L^0(m)$.
Its closure $\mathcal{E}_{\rm e}$ is an energy form that is given by % $$\mathcal{E}_{\rm e}:L^0(m) \to [0,\infty],\, \mathcal{E}_{\rm e}(f):= \begin{cases} \lim\limits_{n \to \infty} {\mathcal E}(f_n) &\text{if } (f_n) \text{ is }{\mathcal E}\text{-Cauchy with } f_n \tom f,\\ \infty &\text{if there exists no such sequence}. \end{cases} $$ \end{lemma} \begin{proof} For proving closability and the formula for $\mathcal{E}_{\rm e}$ we employ Lemma~\ref{lemma:characterization closability}. Thus, we need to show that ${\mathcal E}$ is lower semicontinuous with respect to local convergence in measure on its domain. To this end, let $(f_n)$ be a sequence in $D({\mathcal E})$ and let $f \in D({\mathcal E})$ with $f_n \tom f$. Without loss of generality we can assume $\liminf_{n \to \infty} {\mathcal E}(f_n) < \infty$, and by passing to a suitable subsequence we can further assume $f_n \to f$ $m$-a.e. It then follows from the main result of \cite{Schmu2} that ${\mathcal E}(f) \leq \liminf_{n \to \infty} {\mathcal E}(f_n).$ For a simplified proof see \cite[Theorem~1.59]{Schmi}. It remains to show that $\mathcal{E}_{\rm e}$ is Markovian. Let $C:{\mathbb R} \to {\mathbb R}$ be a normal contraction and let $f \in D(\mathcal{E}_{\rm e})$. We choose an ${\mathcal E}$-Cauchy sequence $(f_n)$ in $D({\mathcal E})$ with $f_n \tom f$. The lower semicontinuity of $\mathcal{E}_{\rm e}$, the Markov property of ${\mathcal E}$ and the fact that $\mathcal{E}_{\rm e}$ extends ${\mathcal E}$ yield $$\mathcal{E}_{\rm e}(C \circ f) \leq \liminf_{n\to \infty} \mathcal{E}_{\rm e}(C \circ f_n) = \liminf_{n\to \infty} {\mathcal E}(C \circ f_n) \leq \liminf_{n\to \infty} {\mathcal E}( f_n) = \mathcal{E}_{\rm e}(f).$$ This finishes the proof. \end{proof} It follows from the previous lemma and the characterization of local convergence in measure that the domain of $\mathcal{E}_{\rm e}$ coincides with the classical extended Dirichlet space, i.e., $$D(\mathcal{E}_{\rm e}) = \{f \in L^0(m) \mid \text{there exists an ${\mathcal E}$-Cauchy sequence } (f_n) \text{ in } D({\mathcal E}) \text{ with } f_n \to f\, m\text{-a.e.}\}.$$ In particular, $D(\mathcal{E}_{\rm e}) \cap L^2(m) = D({\mathcal E})$, see e.g. \cite[Theorem~1.1.5]{CF}. \begin{remark} We included the previous lemma because it seems that the lower semicontinuity of $\mathcal{E}_{\rm e}$ on $L^0(m)$ is not contained in the literature. We could only find the inequality % $$\mathcal{E}_{\rm e}(f) \leq \liminf_{n\to \infty} \mathcal{E}_{\rm e}(f_n) $$ % for sequences $(f_n)$ with $f_n \to f$ $m$-a.e. (which is almost the same as local convergence in measure) under the additional assumption that either $f \in D(\mathcal{E}_{\rm e})$, see \cite[Corollary~1.9]{CF}, or $f_n \in D({\mathcal E})$, see \cite[Lemma~2]{Schmu2}. In contrast, the Markov property of $\mathcal{E}_{\rm e}$ is contained in the literature, see e.g. \cite[Theorem~1.1.5]{CF}. It seems that our proof, which makes direct use of lower semicontinuity, is a bit simpler. \end{remark} \begin{remark} Energy forms have been introduced in \cite{Schmi} on localizable measure spaces. It can be proven that for a given $\sigma$-finite measure $m$, any energy form on $L^0(m)$ is an extended Dirichlet form of a not necessarily densely defined Dirichlet form (a so-called Dirichlet form in the wide sense). However, the measure $m$ needs to be changed to a suitable equivalent finite measure $m'$ on $X$ for which $L^0(m) = L^0(m')$ as topological vector spaces (e.g.
one can take $m' = g \cdot m$ for some strictly positive $g \in L^1(m)$). For details see \cite[Proposition~3.7]{Schmi}. Below we will construct different energy forms on $L^0(m)$. Since it would be somewhat artificial and technically more complicated to consider them as extended Dirichlet forms with respect to a changed measure, we will directly work in the category of energy forms. Note also that energy forms on non-$\sigma$-finite measure spaces need not be extended Dirichlet forms. For example, resistance forms in the sense of Kigami \cite{Kig2} are energy forms on a set equipped with the counting measure, see the discussion in \cite[Subsection~2.1.3]{Schmi}. \end{remark} It follows from the previous remark that the statements of Lemma~\ref{lemma:contraction properties} are also true for energy forms, see also \cite[Theorem~2.20]{Schmi} for a direct proof. We only need the following consequences, which can be proven directly. \begin{lemma}\label{lemma:bounded approximation} Let $E$ be an energy form on $L^0(m)$. Let $f \in D(E)$ and for $\alpha > 0$ let $f^{(\alpha)} := (f\wedge \alpha)\vee(-\alpha)$. Then $f^{(\alpha)} \in D(E)$ and the following convergence statements hold true. \begin{itemize} \item[(a)] $E(f^{(n)} - f) \to 0$, as $n\to \infty$. \item[(b)] Let $(f_n)$ be a sequence in $D(E)$ with $f_n \tom f$ and $E(f_n - f)\to 0$, as $n \to \infty$. Then, for every $\alpha \geq \|f\|_\infty$, we have $f^{(\alpha)}_n \tom f$ and $E(f^{(\alpha)}_n - f) \to 0$, as $n \to \infty$. \end{itemize} \end{lemma} \begin{proof} (a) + (b): All statements are consequences of the Markov property of $E$ (applied to the normal contraction ${\mathbb R} \to {\mathbb R}, x\mapsto (x \wedge \alpha) \vee(-\alpha)$), its lower semicontinuity and Lemma~\ref{lemma:existence of a weakly convergent subnet}. \end{proof} \begin{lemma}\label{lemma:algebraic properties} Let $E$ be an energy form. Then $D(E)$ is a lattice and for every $f,g \in D(E)$ we have % $$E(f\wedge g)^{1/2} \leq E(f)^{1/2} + E(g)^{1/2} \text{ and } E(f\vee g)^{1/2} \leq E(f)^{1/2} + E(g)^{1/2}.$$ % Moreover, $D(E) \cap L^\infty(m)$ is an algebra and for every $f,g \in D(E)\cap L^\infty(m)$ the inequality % $$E(fg)^{1/2} \leq \|f\|_\infty E(g)^{1/2} + \|g\|_\infty E(f)^{1/2}$$ % holds. \end{lemma} \begin{proof} The first statement follows from the identity $f \wedge g = \frac{f + g - |f-g|}{2}$ and the fact that ${\mathbb R} \to {\mathbb R},x \mapsto |x|$ is a normal contraction. $f \vee g$ can be treated similarly. For the 'Moreover'-statement let $h \in L^1(m)$ with $h> 0$ $m$-a.e. and set $m' = h \cdot m$. Then the embedding $L^2(m') \hookrightarrow L^0(m)$ is continuous and so the restriction of $E$ to $L^2(m')$, which we denote by ${\mathcal E}$, is a (not necessarily densely defined) Dirichlet form. Since $h \in L^1(m)$ and $f,g$ are bounded, we have $f,g \in L^2(m')$ and therefore $f,g \in D({\mathcal E})$. Now the statement follows from \cite[Theorem~1.4.2]{FOT}. \end{proof} We call an energy form $E$ {\em recurrent} if $1 \in \ker E$ and {\em transient} if $\ker E = \{0\}$. These notions are borrowed from Dirichlet form theory, as a Dirichlet form ${\mathcal E}$ is recurrent if and only if $1 \in \ker \mathcal{E}_{\rm e}$ and it is transient if and only if $\ker \mathcal{E}_{\rm e} = \{0\}$, see e.g. \cite[Theorem~1.6.2 and Theorem~1.6.3]{FOT}.
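For orientation, we illustrate these notions with a classical example (a side remark that is not needed below): the extended Dirichlet form of the Dirichlet form of Brownian motion,
$${\mathcal E}(f) = \frac{1}{2} \int_{{\mathbb R}^d} |\nabla f|^2\, dx, \quad D({\mathcal E}) = H^1({\mathbb R}^d),$$
is recurrent if and only if $d \leq 2$ and transient if and only if $d \geq 3$, see e.g. \cite{FOT}.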
\section{Reflected Dirichlet forms} \label{section:reflected forms} In this section we construct the reflected Dirichlet form by splitting the given Dirichlet form into its main part and its killing part and then extending both parts to the maximal possible domain. We prove that the $L^2$-restriction of the main part is the maximal Dirichlet form that dominates the given form, and we prove that it is the maximal Silverstein extension if there is no killing part. Moreover, we give an example of a Dirichlet form with non-vanishing killing part that does not possess a maximal Silverstein extension. Everything in this section is based on \cite[Chapter~3.3]{Schmi}. \subsection{The main part} In this subsection ${\mathcal E}$ is a fixed Dirichlet form. Let $\varphi \in D({\mathcal E})$ with $0 \leq \varphi \leq 1$ be given. We define the functional $\hat{\mathcal{E}}_\varphi:L^0(m) \to [0,\infty]$ by $$\hat{\mathcal{E}}_\varphi(f) := \begin{cases} {\mathcal E}(\varphi f) - {\mathcal E}(\varphi f^2,\varphi) & \text{if } f \in L^\infty(m), \varphi f, \varphi f^2 \in D({\mathcal E}),\\ \infty &\text{else}. \end{cases} $$ On a technical level the following theorem is the main insight of this subsection. We formulate it as a theorem because we think that it has some applications beyond the construction of reflected Dirichlet forms. Recall that the relation $\leq$ on quadratic forms compares the size of their domains; we have $q \leq q'$ for two quadratic forms $q,q'$ if and only if $q(f) \geq q'(f)$ for all $f \in D(q)$, cf. Appendix~\ref{appendix:closed forms on l0}. \begin{theorem}[Properties of truncated forms] \label{theorem:properties of concatenated forms} The functional $\hat{\mathcal{E}}_\varphi$ is a closable quadratic form on $L^0(m)$. Its closure $\mathcal{E}_\varphi$ is a recurrent energy form with the following properties. \begin{itemize} \item[(a)] $\mathcal{E}_{\rm e} \leq \mathcal{E}_\varphi$ and if $\psi \in D({\mathcal E})$ with $\varphi \leq \psi \leq 1$, then ${\mathcal E}_\psi \leq \mathcal{E}_\varphi$. \item[(b)] $D(\mathcal{E}_\varphi) \cap L^\infty(m) = D(\hat{\mathcal{E}}_\varphi) = \{f \in L^\infty(m) \mid \varphi f \in D({\mathcal E})\}$. \item[(c)] If $\tilde {\mathcal E}$ is a Dirichlet form that dominates ${\mathcal E}$, then $\tilde {\mathcal E}_\varphi \leq \mathcal{E}_\varphi$. \end{itemize} \end{theorem} \begin{definition}[Truncated form] The closure $\mathcal{E}_\varphi$ of $\hat{\mathcal{E}}_\varphi$ on $L^0(m)$ is called the {\em truncation of ${\mathcal E}$ with respect to $\varphi$}. \end{definition} % % \begin{remark} Property (b) in the theorem is quite important, as it shows how to compute $\mathcal{E}_\varphi$ and its domain. Namely, if $f \in L^\infty(m)$ with $\varphi f \in D({\mathcal E})$ we have % $$\mathcal{E}_\varphi(f) = \hat{\mathcal{E}}_\varphi(f) = {\mathcal E}(\varphi f) - {\mathcal E}(\varphi f^2,\varphi).$$ % For arbitrary $f \in L^0(m)$ and $n \in {\mathbb N}$ we let $f^{(n)}:= (f\wedge n) \vee(-n).$ It follows from the Markov property and the lower semicontinuity of $\mathcal{E}_\varphi$ that $\mathcal{E}_\varphi(f) = \lim\limits_{n\to \infty}\mathcal{E}_\varphi(f^{(n)})$. Therefore, $f \in D(\mathcal{E}_\varphi)$ if and only if $\varphi f^{(n)} \in D({\mathcal E})$ for each $n \in {\mathbb N}$ and the limit % $$\mathcal{E}_\varphi(f) = \lim_{n\to \infty} {\mathcal E}(\varphi f^{(n)}) - {\mathcal E}(\varphi (f^{(n)})^2,\varphi) $$ % is finite. Note also that the choice $f = 1$ yields $\hat{\mathcal{E}}_\varphi(1) = {\mathcal E}(\varphi) - {\mathcal E}(\varphi,\varphi) = 0$, which is the computation behind the recurrence of $\mathcal{E}_\varphi$ asserted in the theorem. \end{remark} % In order to prove this theorem we need two lemmas.
\begin{lemma} \label{lemma:maximal silverstein extension technical lemma} Let $\varphi \in D({\mathcal E})$ with $0 \leq \varphi \leq 1$ and let $f \in L^\infty(m)$. \begin{itemize} \item[(a)] Let $C:{\mathbb R} \to {\mathbb R}$ be $L$-Lipschitz with $C(0) = 0$ and let $$M:= \sup \{|C(x)| \mid |x|\leq \|f\|_\infty\}.$$ Then % $${\mathcal E}( \varphi\, C(f))^{1/2} \leq L {\mathcal E}(\varphi f)^{1/2} + (M + L\|f\|_\infty) {\mathcal E}(\varphi)^{1/2}.$$ % In particular, $\varphi f \in D({\mathcal E})$ implies $\varphi\, C(f) \in D({\mathcal E})$ and $\varphi f^2 \in D({\mathcal E})$. \item[(b)] If $\psi \in D({\mathcal E})$ with $0 \leq \varphi \leq \psi$, then $\psi f \in D({\mathcal E})$ implies $\varphi f \in D({\mathcal E})$. \end{itemize} \end{lemma} \begin{proof} (a): Let $A :=\{(x,y) \in {\mathbb R}^2 \mid |x| \leq |y| \cdot \|f\|_\infty\}$ and consider the function $$\ow{C}: A \to {\mathbb R}, \quad (x,y) \mapsto \ow{C}(x,y) := \begin{cases}C\left({x}/{y}\right)y &\text{if } y \neq 0\\0 & \text{if } y = 0 \end{cases}.$$ We show that $\ow{C}$ is Lipschitz with appropriate constants. The statement then follows from Lemma~\ref{lemma:contraction properties} and the identity $\varphi\, C(f) = \ow{C}(\varphi f, \varphi)$. For $(x_1,y_1),(x_2,y_2) \in A$ with $y_1,y_2 \neq 0$ we have \begin{align*} |\ow{C}(x_1,y_1) - \ow{C}(x_2,y_2)| &\leq |y_1| |C(x_1/y_1) - C(x_2/y_2)| + |C(x_2/y_2)| |y_1 - y_2|. \end{align*} Since $|x_i|\leq |y_i| \|f\|_\infty$, $i=1,2$, we obtain $|C(x_2/y_2)| \leq M$ and $$|C(x_1/y_1) - C(x_2/y_2)| \leq L \left|\frac{x_1y_2 - x_2 y_1}{y_1y_2}\right| \leq L \left|\frac{x_1 - x_2}{y_1}\right| + L \|f\|_\infty \left|\frac{y_1 - y_2}{y_1}\right|. $$ Altogether, these considerations amount to $$|\ow{C}(x_1,y_1) - \ow{C}(x_2,y_2)| \leq L|x_1 - x_2| + (M + L \|f\|_\infty) |y_1 - y_2|.$$ By the continuity of $\ow{C}$ on $A$, this inequality extends to the case when $y_2 = x_2 = 0$, in which it reads $$|\ow{C}(x_1,y_1)| \leq L|x_1| + (M + L \|f\|_\infty) |y_1|.$$ From Lemma~\ref{lemma:contraction properties} we infer $${\mathcal E}(\varphi\, C(f))^{1/2} = {\mathcal E}(\ow{C}(\varphi f,\varphi))^{1/2} \leq L {\mathcal E}(\varphi f)^{1/2} + (M + L\|f\|_\infty) {\mathcal E}(\varphi)^{1/2}. $$ For the 'In particular'-part we apply the statement to the function $$C:{\mathbb R} \to {\mathbb R},\, x \mapsto x^2 \wedge \|f\|_\infty^2.$$ (b): We let $B:=\{(x,y,z) \in {\mathbb R}^3 \mid z \geq 0, |x| \leq z \cdot \|f\|_\infty \text{ and } |y| \leq z\}$. For $\varepsilon> 0$ we consider the function $$C_\varepsilon:B \to {\mathbb R},\quad (x,y,z) \mapsto C_\varepsilon(x,y,z):= xy/(z + \varepsilon).$$ From the inequality $0 \leq \varphi \leq \psi$ we obtain $$C_\varepsilon(\psi f, \varphi ,\psi) = \varphi \frac{\psi}{\psi + \varepsilon} f \to \varphi f $$ in $L^2(m)$, as $\varepsilon \to 0+$. The lower semicontinuity of ${\mathcal E}$ implies $${\mathcal E}(\varphi f) \leq \liminf_{\varepsilon \to 0+} {\mathcal E}(C_\varepsilon(\psi f ,\varphi,\psi)). $$ Thus, it suffices to prove that the right-hand side of the above inequality is finite. The partial derivatives of $C_\varepsilon$ satisfy $|\partial_x C_\varepsilon| \leq 1$, $|\partial_y C_\varepsilon| \leq \|f\|_\infty$ and $|\partial_z C_\varepsilon| \leq \|f\|_\infty$ in the interior of $B$. This yields $$|C_\varepsilon(x_1,y_1,z_1) - C_\varepsilon(x_2,y_2,z_2)| \leq |x_1 - x_2| + \|f\|_\infty |y_1-y_2| + \|f\|_\infty |z_1 - z_2|,$$ for $(x_i,y_i,z_i), i = 1,2,$ in the interior of $B$. 
Since $C_\varepsilon$ is continuous and any point in $B$ can be approximated by interior points, we can argue similarly as in the proof of assertion (a) to obtain $${\mathcal E}(C_\varepsilon(\psi f, \varphi,\psi))^{1/2} \leq {\mathcal E}(\psi f)^{1/2} + \|f\|_\infty {\mathcal E}(\varphi)^{1/2} + \|f\|_\infty {\mathcal E}(\psi)^{1/2} < \infty. $$ This finishes the proof. \end{proof} \begin{lemma}\label{lemma:properties of eph} Let $\varphi,\psi \in D({\mathcal E})$ with $0\leq \varphi,\psi \leq 1$. \begin{itemize} \item[(a)] $\hat{\mathcal{E}}_\varphi$ is a nonnegative quadratic form on $L^0(m)$. Its domain satisfies % $$D(\hat{\mathcal{E}}_\varphi) = \{f \in L^\infty(m) \mid \varphi f \in D({\mathcal E})\}.$$ % \item[(b)] For every normal contraction $C:{\mathbb R} \to {\mathbb R}$ and every $f \in L^0(m)$ the inequality % $$\hat{\mathcal{E}}_\varphi(C\circ f) \leq \hat{\mathcal{E}}_\varphi(f)$$ % holds. \item[(c)] If $ \varphi \leq \psi$, then for all $f \in L^0(m)$ we have $\hat{\mathcal{E}}_\varphi (f) \leq \hat{{\mathcal E}}_{\psi}(f)$. \item[(d)] For all $f \in D({\mathcal E}) \cap L^\infty(m)$ the inequality $\hat{\mathcal{E}}_\varphi (f) \leq {\mathcal E}(f)$ holds. \item[(e)] If $\tilde {\mathcal E}$ is a Dirichlet form that dominates ${\mathcal E}$, then for all $f \in L^0(m)$ we have $$\hat{\mathcal{E}}_\varphi(f) \leq \hat{\tilde {\mathcal E}}_\varphi(f).$$ \end{itemize} \end{lemma} \begin{proof} (a): Let $f \in L^\infty(m)$. Lemma~\ref{lemma:maximal silverstein extension technical lemma}~(a) shows that $\varphi f\in D({\mathcal E})$ implies $\varphi f^2 \in D({\mathcal E})$. This observation and the fact that ${\mathcal E}$ is a quadratic form then show that $\hat{\mathcal{E}}_\varphi$ is a quadratic form whose domain satisfies $D(\hat{\mathcal{E}}_\varphi) = \{f \in L^\infty(m) \mid \varphi f \in D({\mathcal E})\}$. The nonnegativity of $\hat{\mathcal{E}}_\varphi$ follows from assertion (c) (we shall not use this fact in the rest of the proof). Let $(G_\alpha)$ be the resolvent of ${\mathcal E}$. We denote the corresponding continuous approximating form by ${\mathcal E}^{(\alpha)}$, i.e., $${\mathcal E}^{(\alpha)}:L^2(m) \to [0,\infty),\, {\mathcal E}^{(\alpha)}(f) := \as{f, (I-\alpha G_\alpha)f}.$$ It is shown in \cite{FOT} that for $f \in L^2(m)$ we have $${\mathcal E}(f) = \lim_{\alpha \to \infty} \alpha {\mathcal E}^{(\alpha)}(f),$$ where the limit is infinite on $L^2(m) \setminus D({\mathcal E})$. In particular, for $f \in D(\hat{\mathcal{E}}_\varphi)$ this implies $$\hat{\mathcal{E}}_\varphi(f) = {\mathcal E}(\varphi f) - {\mathcal E}(\varphi f^2,\varphi) = \lim_{\alpha\to \infty} \alpha \left({\mathcal E}^{(\alpha)}(\varphi f) - {\mathcal E}^{(\alpha)}(\varphi f^2,\varphi)\right)= \lim_{\alpha\to \infty} \alpha \hat {\mathcal E}_\varphi^{(\alpha)}(f). $$ Note that since $\hat{\mathcal{E}}_\varphi(f)$ involves taking off-diagonal values of ${\mathcal E}$, this approximation for $\hat{\mathcal{E}}_\varphi$ is only valid for functions in the domain $D(\hat{\mathcal{E}}_\varphi)$. We now prove that assertions (b), (c), (d) and (e) hold true for the continuous Dirichlet form ${\mathcal E}^{(\alpha)}$ and then infer the statement for general forms by an approximation procedure. To simplify notation we write $\hat {\mathcal E}^{(\alpha)}_\varphi$ for the form $\hat{({\mathcal E}^{(\alpha)})}_\varphi$. 
Since $\varphi \in D({\mathcal E}) \subseteq L^2(m),$ for any $f \in L^\infty(m)$ we have $\varphi f \in L^2(m) = D({\mathcal E}^{(\alpha)})$ and so $D(\hat {\mathcal E}^{(\alpha)}_\varphi) = L^\infty(m)$. Any function in $L^\infty(m)$ can be approximated by a sequence of simple functions $(f_n)$ in $L^2(m)$ such that $\varphi f_n$ converges to $\varphi f$ and $\varphi f_n^2$ converges to $\varphi f^2$ in $L^2(m)$. Therefore, it suffices to prove the statements for simple $L^2$-functions. To this end, let $$f = \sum_{j = 1}^n \alpha_j \mathds{1}_{A_j}$$ with pairwise disjoint $A_j$ of finite measure be given. We obtain $$\hat {\mathcal E}^{(\alpha)}_\varphi(f) = \sum_{i,j=1}^n b_{ij}^\varphi (\alpha_i - \alpha_j)^2 + \sum_{i=1}^n c_{i}^\varphi \alpha_i^2,$$ with $$b_{ij}^\varphi = -\hat{\mathcal E}^{(\alpha)}_\varphi(\mathds{1}_{A_i},\mathds{1}_{A_j}) = -{\mathcal E}^{(\alpha)}(\varphi \mathds{1}_{A_i},\varphi\mathds{1}_{A_j}) = \alpha \as{\varphi \mathds{1}_{A_i},G_\alpha (\varphi \mathds{1}_{A_j})} $$ and $$c_{i}^\varphi = \hat{\mathcal E}^{(\alpha)}_\varphi(\mathds{1}_{A_i},\mathds{1}_{\cup_j A_j}) = {\mathcal E}^{(\alpha)}(\varphi \mathds{1}_{A_i},\varphi (\mathds{1}_{\cup_j A_j} - 1)) = \as{\varphi \mathds{1}_{A_i}, \alpha G_\alpha (\varphi \mathds{1}_{X \setminus \cup_j A_j})}.$$ The same computation for ${\mathcal E}^{(\alpha)}$ yields $${\mathcal E}^{(\alpha)}(f) = \sum_{i,j=1}^n b_{ij} (\alpha_i - \alpha_j)^2 + \sum_{i=1}^n c_{i} \alpha_i^2,$$ with $$b_{ij} = -{\mathcal E}^{(\alpha)}(\mathds{1}_{A_i},\mathds{1}_{A_j}) = \alpha \as{ \mathds{1}_{A_i},G_\alpha \mathds{1}_{A_j}} $$ % and % $$c_i = {\mathcal E}^{(\alpha)} (\mathds{1}_{A_i},\mathds{1}_{\cup_j A_j}) = \as{\mathds{1}_{A_i}, \mathds{1}_{\cup_j A_j} - \alpha G_\alpha\mathds{1}_{\cup_j A_j}}. $$ % Since $\alpha G_\alpha$ is Markovian, these identities show % $0 \leq b_{ij}^\varphi \leq b_{ij} \text{ and } 0 \leq c_i^\varphi \leq c_i,$ % and we obtain (b) and (d) for the form ${\mathcal E}^{(\alpha)}$ (cf. the proof of \cite[Theorem~I.4.12]{MR}). If $\psi \in L^2(m)$ with $\varphi \leq \psi \leq 1$, then we also have $b_{ij}^\varphi \leq b_{ij}^\psi$ and $c_i^\varphi \leq c_i^\psi$, proving (c) for the form ${\mathcal E}^{(\alpha)}$. If $\tilde {\mathcal E}$ is a Dirichlet form that dominates ${\mathcal E}$, it follows from the formulas for $b_{ij}^\varphi$ and $c_i^\varphi$ that $ \hat {\mathcal E}^{(\alpha)}_\varphi(f) \leq \hat{\tilde {\mathcal E}}^{(\alpha)}_\varphi(f) $ for each $f \in L^2(m)$. We now prove the statements (b), (c), (d) and (e) for $\hat{\mathcal{E}}_\varphi$ by approximating it with $\hat {\mathcal E}^{(\alpha)}_\varphi$. Since this approximation is only valid on $D(\hat{\mathcal{E}}_\varphi)$, for each statement we still need to verify that the involved functions belong to the correct domain. (b): Let $C:{\mathbb R} \to {\mathbb R}$ be a normal contraction and let $f \in D(\hat{\mathcal{E}}_\varphi)$, so that $\varphi f,\varphi f^2 \in D({\mathcal E})$. Lemma~\ref{lemma:maximal silverstein extension technical lemma} yields $\varphi\, C(f) \in D({\mathcal E})$ and $\varphi\, C(f)^2 \in D({\mathcal E})$. Thus, using (b) for the approximating forms, we obtain % $$\hat{\mathcal{E}}_\varphi(C(f)) = \lim_{\alpha \to \infty} \alpha \hat {\mathcal E}^{(\alpha)}_\varphi(C(f)) \leq \lim_{\alpha \to \infty} \alpha \hat {\mathcal E}^{(\alpha)}_\varphi( f ) = \hat{\mathcal{E}}_\varphi(f).$$ % (c): Let $f \in D(\hat {\mathcal E}_\psi)$.
Since $\psi f,\psi f^2 \in D({\mathcal E})$, Lemma~\ref{lemma:maximal silverstein extension technical lemma} yields $\varphi f,\varphi f^2 \in D({\mathcal E})$. Thus, using (c) for the approximating forms yields % $$\hat{\mathcal{E}}_\varphi( f ) = \lim_{\alpha \to \infty} \alpha \hat {\mathcal E}^{(\alpha)}_\varphi(f) \leq \lim_{\alpha \to \infty} \alpha \hat {\mathcal E}^{(\alpha)}_\psi( f ) = \hat{\mathcal E}_\psi( f ). $$ % (d): Let $f \in D({\mathcal E}) \cap L^\infty(m)$. Since $D({\mathcal E}) \cap L^\infty(m)$ is an algebra, we have $\varphi f \in D({\mathcal E})$ and Lemma~\ref{lemma:maximal silverstein extension technical lemma} yields $\varphi f^2 \in D({\mathcal E})$. Using (d) for the approximating forms shows % $$\hat{\mathcal{E}}_\varphi(f) = \lim_{\alpha \to \infty} \alpha \hat {\mathcal E}^{(\alpha)}_\varphi(f) \leq \lim_{\alpha \to \infty} \alpha \hat {\mathcal E}^{(\alpha)} ( f ) = {\mathcal E}( f ).$$ % (e): Let $f \in D(\hat{\tilde {\mathcal E}}_\varphi)$, i.e., $\varphi f \in D(\tilde {\mathcal E})$. We have $\varphi \in D({\mathcal E})$ and $|\varphi f| \leq \|f\|_\infty \varphi$. Since $D({\mathcal E})$ is an order ideal in $D(\tilde {\mathcal E})$, this implies $\varphi f\in D({\mathcal E})$ and also $\varphi f^2 \in D({\mathcal E})$ by Lemma~\ref{lemma:maximal silverstein extension technical lemma}. Using (e) for the approximating forms yields % $$\hat{\mathcal{E}}_\varphi(f) = \lim_{\alpha \to \infty} \alpha \hat {\mathcal E}^{(\alpha)}_\varphi(f) \leq \lim_{\alpha \to \infty} \alpha \hat{\tilde {\mathcal E}}^{(\alpha)}_\varphi(f) = \hat{\tilde {\mathcal E}}_\varphi(f).$$ % This finishes the proof. \end{proof} We can now prove Theorem~\ref{theorem:properties of concatenated forms} by showing that $\hat{\mathcal{E}}_\varphi$ is closable and that the properties discussed in the previous lemma pass to its closure. \begin{proof}[Proof of Theorem~\ref{theorem:properties of concatenated forms}] We first prove that $\hat{\mathcal{E}}_\varphi$ is closable on $L^0(m)$. Indeed, we show that for any sequence $(f_n)$ in $L^\infty(m)$ and $f \in L^\infty(m)$ the convergence $f_n \tom f$ implies % $$\hat{\mathcal{E}}_\varphi(f) \leq \liminf_{n\to \infty} \hat{\mathcal{E}}_\varphi(f_n).$$ % This means that the restriction of $\hat{\mathcal{E}}_\varphi$ to $L^\infty(m)$ is lower semicontinuous with respect to $L^0(m)$-convergence. It is slightly stronger than closability (cf. Lemma~\ref{lemma:characterization closability}) and is crucial for proving the identity $D(\mathcal{E}_\varphi) \cap L^\infty(m) = D(\hat{\mathcal{E}}_\varphi)$ later on. If $\liminf_{n\to \infty} \hat{\mathcal{E}}_\varphi(f_n) = \infty$ there is nothing to show. Hence, after passing to a suitable subsequence, we can assume that $(f_n)$ lies in $D(\hat{\mathcal{E}}_\varphi)$ and that % $$\lim_{n\to \infty}\hat{\mathcal{E}}_\varphi(f_n) = \liminf_{n\to \infty} \hat{\mathcal{E}}_\varphi(f_n) < \infty.$$ % Moreover, by Lemma~\ref{lemma:properties of eph}~(b) we can further assume $\|f_n\|_\infty \leq \|f\|_\infty$.
Lemma~\ref{lemma:maximal silverstein extension technical lemma}~(a) applied to the function $C:{\mathbb R} \to {\mathbb R}, \, C(x) = x^2 \wedge \|f\|_\infty ^2$ yields % $${\mathcal E}(\varphi f_n^2)^{1/2} = {\mathcal E}(\varphi\, C(f_n))^{1/2} \leq 2 \|f\|_\infty {\mathcal E}(\varphi f_n )^{1/2} + 3 \|f\|_\infty ^2 {\mathcal E}(\varphi)^{1/2}.$$ % From this inequality we infer % \begin{align*} \hat{\mathcal{E}}_\varphi(f_n) &= {\mathcal E}(\varphi f_n) - {\mathcal E}(\varphi f_n^2,\varphi) \\ &\geq {\mathcal E}(\varphi f_n) - {\mathcal E}(\varphi f_n^2)^{1/2}{\mathcal E}(\varphi)^{1/2}\\ &\geq {\mathcal E}(\varphi f_n)^{1/2} \left({\mathcal E}(\varphi f_n)^{1/2} - 2\|f\|_\infty {\mathcal E}(\varphi)^{1/2} \right) - 3\|f\|^2_\infty {\mathcal E}(\varphi). \end{align*} % Therefore, the boundedness of $(\hat{\mathcal{E}}_\varphi(f_n))$ yields the boundedness of $({\mathcal E}(\varphi f_n))$ and this in turn yields the boundedness of $({\mathcal E}(\varphi f_n^2))$. Since $(f_n)$ is uniformly bounded by $\|f\|_\infty$ and since $\varphi \in L^2(m)$, Lebesgue's dominated convergence theorem implies $\varphi f_n \to \varphi f$ and $\varphi f_n^2 \to \varphi f^2$ in $L^2(m)$. Therefore, the $L^2$-lower semicontinuity of ${\mathcal E}$ yields $\varphi f, \varphi f^2 \in D({\mathcal E})$, i.e., $f \in D(\hat{\mathcal{E}}_\varphi)$. From the boundedness of $({\mathcal E}(\varphi f_n^2))$ and the convergence $\varphi f_n^2 \to \varphi f^2$ in $L^2(m)$ we obtain the ${\mathcal E}$-weak convergence $\varphi f_n^2 \to \varphi f^2$, see Lemma~\ref{lemma:existence of a weakly convergent subnet}. This observation and the lower semicontinuity of ${\mathcal E}$ on $L^2(m)$ amount to $$\hat{\mathcal{E}}_\varphi(f) = {\mathcal E}(\varphi f) - {\mathcal E}(\varphi f^2,\varphi) \leq \liminf_{n\to \infty} {\mathcal E}(\varphi f_n) - \lim_{n\to \infty} {\mathcal E}(\varphi f_n^2,\varphi) = \liminf_{n \to \infty} \hat{\mathcal{E}}_\varphi(f_n),$$ and prove the desired lower semicontinuity. Having proven the closability of $\hat{\mathcal{E}}_\varphi$ on $L^0(m)$, we denote its closure by $\mathcal{E}_\varphi$. We now prove that $\mathcal{E}_\varphi$ is Markovian. To this end, let $C:{\mathbb R} \to {\mathbb R}$ be a normal contraction and let $f \in D(\mathcal{E}_\varphi)$. By Lemma~\ref{lemma:characterization closability} there exists an $\hat{\mathcal{E}}_\varphi$-Cauchy sequence $(f_n)$ in $D(\hat{\mathcal{E}}_\varphi)$ with $f_n \tom f$ and $\hat{\mathcal{E}}_\varphi(f_n) \to \mathcal{E}_\varphi(f)$. Since $C$ is a normal contraction, we also have $C(f_n) \tom C(f)$. The lower semicontinuity of $\mathcal{E}_\varphi$ and Lemma~\ref{lemma:properties of eph}~(b) yield $C(f_n) \in D(\hat{\mathcal{E}}_\varphi)$ and $$\mathcal{E}_\varphi(C(f)) \leq \liminf_{n\to \infty} \mathcal{E}_\varphi(C(f_n)) = \liminf_{n\to \infty} \hat{\mathcal{E}}_\varphi(C(f_n)) \leq \liminf_{n\to \infty} \hat{\mathcal{E}}_\varphi( f_n ) = \mathcal{E}_\varphi(f).$$ Altogether, we have proven that $\mathcal{E}_\varphi$ is an energy form. Its recurrence follows from the fact that $1 \in D(\hat{\mathcal{E}}_\varphi)$ and $$\mathcal{E}_\varphi(1) = \hat{\mathcal{E}}_\varphi(1) = {\mathcal E}(\varphi \cdot 1) - {\mathcal E}(\varphi \cdot 1^2,\varphi) = {\mathcal E}(\varphi) - {\mathcal E}(\varphi,\varphi) = 0.$$ (b): Since $\mathcal{E}_\varphi$ is an extension of $\hat{\mathcal{E}}_\varphi$ and since $D(\hat{\mathcal{E}}_\varphi) \subseteq L^\infty(m)$, the inclusion $D(\hat{\mathcal{E}}_\varphi) \subseteq D(\mathcal{E}_\varphi) \cap L^\infty(m)$ is trivial.
Let $f \in D(\mathcal{E}_\varphi) \cap L^\infty(m)$ be given. According to Lemma~\ref{lemma:characterization closability}, there exists an $\hat{\mathcal{E}}_\varphi$-Cauchy sequence $(f_n)$ in $D(\hat{\mathcal{E}}_\varphi)$ with $f_n \tom f$ and $\mathcal{E}_\varphi(f) = \lim_{n \to \infty}\hat{\mathcal{E}}_\varphi(f_n)$. The lower semicontinuity property that we proved above for $\hat{\mathcal{E}}_\varphi$ yields $$\hat{\mathcal{E}}_\varphi(f) \leq \liminf_{n\to\infty} \hat{\mathcal{E}}_\varphi(f_n) = \mathcal{E}_\varphi(f) < \infty. $$ This shows $f \in D(\hat{\mathcal{E}}_\varphi)$. (a): Let $f \in D(\mathcal{E}_{\rm e})$ and let $(f_n)$ be an ${\mathcal E}$-Cauchy sequence with $f_n \tom f$. Since $D({\mathcal E}) \cap L^\infty(m)$ is dense in $D({\mathcal E})$, see e.g. \cite[Theorem~1.4.2]{FOT}, we can choose the $(f_n)$ to be essentially bounded so that $f_n \in D(\hat{\mathcal{E}}_\varphi)$. The $L^0(m)$-lower semicontinuity of $\mathcal{E}_\varphi$ and Lemma~\ref{lemma:properties of eph}~(d) yield $$\mathcal{E}_\varphi(f) \leq \liminf_{n\to \infty}\mathcal{E}_\varphi(f_n) = \liminf_{n\to \infty}\hat{\mathcal{E}}_\varphi(f_n) \leq \liminf_{n\to \infty} {\mathcal E}(f_n) = \mathcal{E}_{\rm e}(f).$$ This proves $\mathcal{E}_{\rm e} \leq \mathcal{E}_\varphi$. Now, let $\psi \in D({\mathcal E})$ with $\varphi \leq \psi \leq 1$. For $f \in D({\mathcal E}_\psi)$ we choose an $\hat {\mathcal E}_\psi$-Cauchy sequence $(f_n)$ with $f_n \tom f$. Lemma~\ref{lemma:properties of eph}~(c) yields $f_n \in D(\hat{\mathcal{E}}_\varphi)$ and $$\mathcal{E}_\varphi(f) \leq \liminf_{n \to \infty} \mathcal{E}_\varphi(f_n) = \liminf_{n \to \infty} \hat{\mathcal{E}}_\varphi(f_n) \leq \liminf_{n \to \infty} \hat {\mathcal E}_\psi(f_n) = {\mathcal E}_\psi(f).$$ (c): Let $\tilde {\mathcal E}$ be a Dirichlet form that dominates ${\mathcal E}$. For $f \in D(\tilde {\mathcal E}_\varphi)$ we choose an $\hat{\tilde {\mathcal E}}_\varphi$-Cauchy sequence $(f_n)$ with $f_n \tom f$ and $\hat{\tilde {\mathcal E}}_\varphi(f_n) \to \tilde {\mathcal E}_\varphi(f)$. The $L^0$-lower semicontinuity of $\mathcal{E}_\varphi$ and Lemma~\ref{lemma:properties of eph}~(e) imply $$\mathcal{E}_\varphi(f) \leq \liminf_{n \to \infty} \mathcal{E}_\varphi(f_n) = \liminf_{n \to \infty} \hat{\mathcal{E}}_\varphi(f_n) \leq \liminf_{n \to \infty}\hat{\tilde {\mathcal E}}_\varphi(f_n) \leq \tilde {\mathcal E}_\varphi(f). $$ This finishes the proof. \end{proof} \begin{definition}[Main part] The {\em main part of ${\mathcal E}$} is defined by $$\mathcal{E}^{(M)}:L^0(m) \to [0,\infty],\, \mathcal{E}^{(M)}(f) := \sup\{ \mathcal{E}_\varphi(f) \mid \varphi \in D({\mathcal E}) \text{ with } 0 \leq \varphi \leq 1\}.$$ The restriction of $\mathcal{E}^{(M)}$ to $L^2(m)$ is called the {\em active main part of ${\mathcal E}$} and is denoted by $\mathcal{E}^{(M)}_a$. \end{definition} \begin{theorem}[Maximality of the main part]\label{theorem:maximality main part} The main part $\mathcal{E}^{(M)}$ is a recurrent energy form that satisfies $\mathcal{E}_{\rm e} \leq \mathcal{E}^{(M)}$ and the active main part $\mathcal{E}^{(M)}_a$ is a Dirichlet form that satisfies ${\mathcal E} \leq \mathcal{E}^{(M)}_a$. Moreover, if $\tilde{{\mathcal E}}$ is a Dirichlet form that dominates ${\mathcal E}$, then also $\tilde{{\mathcal E}}_{\rm e} \leq \mathcal{E}^{(M)}$ and $\tilde{{\mathcal E}} \leq \mathcal{E}^{(M)}_a$. \end{theorem} \begin{proof} As the supremum of lower semicontinuous functions on $L^0(m)$ the functional $\mathcal{E}^{(M)}$ is lower semicontinuous. 
Moreover, the Markov property of $\mathcal{E}^{(M)}$ and its recurrence follow from the Markov property and the recurrence of the $\mathcal{E}_\varphi$. Next we prove that $\mathcal{E}^{(M)}$ is a quadratic form. The homogeneity of $\mathcal{E}^{(M)}$ follows easily from the homogeneity of $\mathcal{E}_\varphi$. We let $I := \{\varphi \in D({\mathcal E}) \mid 0 \leq \varphi \leq 1\}$. For fixed $f \in L^0(m)$ the map $I \to [0,\infty],\, \varphi \mapsto \mathcal{E}_\varphi(f)$ is monotone increasing and if $\varphi,\psi \in I$, then also $\varphi \vee \psi \in I$. This monotonicity implies that for all $f,g \in L^0(m)$ we have \begin{align*} \mathcal{E}^{(M)}(f+g) + \mathcal{E}^{(M)}(f-g) &= \sup_{\varphi \in I} \mathcal{E}_\varphi(f+g) + \sup_{\varphi \in I}\mathcal{E}_\varphi(f-g) \\ &= \sup_{\varphi \in I} \left( \mathcal{E}_\varphi(f+g) + \mathcal{E}_\varphi(f-g) \right)\\ &= \sup_{\varphi \in I} \left( 2\mathcal{E}_\varphi(f) + 2\mathcal{E}_\varphi(g) \right)\\ &= 2 \sup_{\varphi \in I} \mathcal{E}_\varphi(f) + 2\sup_{\varphi \in I}\mathcal{E}_\varphi(g) \\ &= 2\mathcal{E}^{(M)}(f) + 2\mathcal{E}^{(M)}(g). \end{align*} Therefore, $\mathcal{E}^{(M)}$ is a quadratic form. Theorem~\ref{theorem:properties of concatenated forms} implies $\mathcal{E}_{\rm e} \leq \mathcal{E}_\varphi$ for all $\varphi \in I$ and therefore $\mathcal{E}_{\rm e} \leq \mathcal{E}^{(M)}$. As the $L^2$-restriction of an energy form, $\mathcal{E}^{(M)}_a$ is clearly a Dirichlet form. The inequality ${\mathcal E} \leq \mathcal{E}^{(M)}_a$ follows from the statement for the extended Dirichlet form $\mathcal{E}_{\rm e}$ and the identity $D(\mathcal{E}_{\rm e}) \cap L^2(m) = D({\mathcal E})$. If $\tilde {\mathcal E}$ is a Dirichlet form that dominates ${\mathcal E}$, Theorem~\ref{theorem:properties of concatenated forms} also shows $\tilde {\mathcal E}_\varphi \leq \mathcal{E}_\varphi$ for all $\varphi \in I$. Moreover, what we have already proven applied to the form $\tilde {\mathcal E}$ yields $\tilde {\mathcal E}_{\rm e} \leq \tilde {\mathcal E}_{\varphi}$ for any $\varphi \in I$. Combining these inequalities and taking a supremum over $\varphi$ shows $\tilde {\mathcal E}_{\rm e} \leq \mathcal{E}^{(M)}$. With the same argument as above for the form ${\mathcal E}$, we also obtain $\tilde{{\mathcal E}} \leq \mathcal{E}^{(M)}_a$. \end{proof} \begin{remark} The previous theorem shows that $\mathcal{E}^{(M)}_a$ is larger (in the sense of quadratic forms) than any Dirichlet form dominating ${\mathcal E}$. Below we will also prove that $\mathcal{E}^{(M)}_a$ is a Dirichlet form that dominates ${\mathcal E}$ so that $\mathcal{E}^{(M)}_a$ is the maximal element in the cone of Dirichlet forms that dominate ${\mathcal E}$. \end{remark} For later purposes we note the following lower semicontinuity of $\mathcal{E}_\varphi$ in the parameter $\varphi$. \begin{lemma}\label{lemma:lsc in varphi} Let $(\varphi_n)$ be a sequence in $D({\mathcal E})$ and let $\varphi \in D({\mathcal E})$ with $0\leq \varphi_n,\varphi \leq 1$. If $\varphi_n \to \varphi$ with respect to $\|\cdot\|_{\mathcal E}$, then for all $f \in L^0(m)$ we have % $$\mathcal{E}_\varphi(f) \leq \liminf_{n\to \infty} {\mathcal E}_{\varphi_n}(f).$$ % \end{lemma} \begin{proof} We first prove the statement for $f \in L^\infty(m)$.
Without loss of generality we can assume % $$\lim_{n\to \infty} {\mathcal E}_{\varphi_n}(f) = \liminf_{n\to \infty} {\mathcal E}_{\varphi_n}(f) < \infty.$$ % Since $D({\mathcal E}_{\varphi_n}) \cap L^\infty(m) = D(\hat {\mathcal E}_{\varphi_n})$, this implies $f \in D(\hat {\mathcal E}_{\varphi_n})$ and ${\mathcal E}_{\varphi_n}(f) = \hat {\mathcal E}_{\varphi_n}(f)$ for each $n \in {\mathbb N}$. As in the proof of Theorem~\ref{theorem:properties of concatenated forms} we obtain the inequalities % $${\mathcal E}(\varphi_n f^2)^{1/2} \leq 2 \|f\|_\infty {\mathcal E}(\varphi_n f )^{1/2} + 3 \|f\|_\infty ^2 {\mathcal E}(\varphi_n)^{1/2},$$ % and % $${\mathcal E}(\varphi_n f)^{1/2} \left({\mathcal E}(\varphi_n f)^{1/2} - 2\|f\|_\infty {\mathcal E}(\varphi_n)^{1/2} \right) - 3\|f\|^2_\infty {\mathcal E}(\varphi_n) \leq \hat {\mathcal E}_{\varphi_n}(f).$$ % Hence, the boundedness of $({\mathcal E}(\varphi_n))$ and $(\hat {\mathcal E}_{\varphi_n}(f))$ yields the boundedness of $({\mathcal E}(\varphi_n f))$ and $({\mathcal E}(\varphi_n f^2))$. Since $\varphi_n f \to \varphi f$ in $L^2(m)$, the $L^2$-lower semicontinuity of ${\mathcal E}$ yields $\varphi f \in D({\mathcal E})$ and so $f \in D(\hat{\mathcal{E}}_\varphi)$ by Lemma~\ref{lemma:properties of eph}. Moreover, the $L^2$-convergence $\varphi_n f^2 \to \varphi f^2$ and the ${\mathcal E}$-boundedness of $(\varphi_n f^2)$ yield $\varphi_n f^2 \to \varphi f^2$ ${\mathcal E}$-weakly. Since also $\varphi_n \to \varphi$ in form norm, we obtain % $${\mathcal E}(\varphi_n f^2,\varphi_n) \to {\mathcal E}(\varphi f^2,\varphi) \text{, as }n\to \infty.$$ % The $L^2$-lower semicontinuity of ${\mathcal E}$ and this observation yield % $$\mathcal{E}_\varphi(f) = \hat{\mathcal{E}}_\varphi(f) = {\mathcal E}(\varphi f) - {\mathcal E}(\varphi f^2,\varphi) \leq \liminf_{n \to \infty} {\mathcal E}(\varphi_n f) - \lim_{n\to \infty} {\mathcal E}(\varphi_n f^2,\varphi_n) = \liminf_{n\to \infty} {\mathcal E}_{\varphi_n}(f).$$ % For general $f \in L^0(m)$ we consider $f^{(k)} := (f\wedge k) \vee (-k)$. Using the $L^0$-lower semicontinuity of $\mathcal{E}_\varphi$, what we have already proven for bounded functions and the Markov property of ${\mathcal E}_{\varphi_n}$, we obtain % $$\mathcal{E}_\varphi(f) \leq \liminf_{k\to \infty} \mathcal{E}_\varphi(f^{(k)}) \leq \liminf_{k\to \infty} \liminf_{n\to \infty}{\mathcal E}_{\varphi_n}(f^{(k)}) \leq \liminf_{n\to \infty}{\mathcal E}_{\varphi_n}(f).$$ % This finishes the proof. \end{proof} For a measurable subset $F$ of $X$ we let $D({\mathcal E})_F = \{f \in D({\mathcal E}) \mid f\mathds{1}_{X \setminus F} = 0\}$. Following \cite{AH}, we call an ascending sequence of measurable subsets $(F_n)_{n \in {\mathbb N}}$ of $X$ a {\em (measurable) ${\mathcal E}$-nest} if \begin{itemize} \item for each $n\in {\mathbb N}$ there exists $\varphi_n \in D({\mathcal E})$ with $\varphi_n \geq \mathds{1}_{F_n},$ \item $\bigcup_{n \in {\mathbb N}} D({\mathcal E})_{F_n}$ is dense in $D({\mathcal E})$ with respect to $\|\cdot\|_{\mathcal E}$. \end{itemize} According to \cite[Lemma~3.1]{AH} there always exists a nest. For the discussion in Section~\ref{section:regular} the following alternative formula for $\mathcal{E}^{(M)}$ will be important. It shows that the supremum in the definition of $\mathcal{E}^{(M)}$ can be taken along a suitable increasing sequence of functions.
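Before stating it, we note a standard example of an ${\mathcal E}$-nest; it anticipates the setting of Section~\ref{section:regular} and is only included for illustration. If ${\mathcal E}$ is a regular Dirichlet form on a locally compact separable metric measure space $X$ and $(G_n)$ is a sequence of relatively compact open sets with $\overline{G_n} \subseteq G_{n+1}$ and $\bigcup_n G_n = X$, then $F_n := \overline{G_n}$ is an ${\mathcal E}$-nest: by regularity there exist $\varphi_n \in D({\mathcal E}) \cap C_c(X)$ with $\varphi_n \geq \mathds{1}_{F_n}$, cf. \cite[Exercise~1.4.1]{FOT}, and $\bigcup_n D({\mathcal E})_{F_n}$ contains $D({\mathcal E}) \cap C_c(X)$, which is dense in $D({\mathcal E})$ with respect to $\|\cdot\|_{\mathcal E}$, since every compactly supported function vanishes outside some $F_n$.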
\begin{lemma} \label{lemma:alternative formula em} Let $(F_n)$ be an ${\mathcal E}$-nest and let $(\chi_n)$ be an increasing sequence in $D({\mathcal E})$ with $\mathds{1}_{F_n} \leq \chi_n \leq 1$. For all $f \in L^0(m)$ we have % $$\mathcal{E}^{(M)}(f) = \lim_{n \to \infty} {\mathcal E}_{\chi_n}(f).$$ % \end{lemma} \begin{proof} It follows from the monotonicity of $\mathcal{E}_\varphi$ in the parameter $\varphi$ that for any $f \in L^0(m)$ the limit $\lim_{n \to \infty} {\mathcal E}_{\chi_n}(f)$ exists and satisfies % $$\lim_{n \to \infty} {\mathcal E}_{\chi_n}(f) \leq \mathcal{E}^{(M)}(f).$$ % Thus, it suffices to prove the opposite inequality by showing that for any $\varphi \in D({\mathcal E})$ with $0 \leq \varphi \leq 1$ and any $f \in L^0(m)$ we have % $$\mathcal{E}_\varphi(f) \leq \lim_{n \to \infty} {\mathcal E}_{\chi_n}(f).$$ % To this end, we use that $(F_n)$ is a nest and choose a sequence $(\varphi_k)$ in $\bigcup_n D({\mathcal E})_{F_n}$ such that $\varphi_k \to \varphi$ with respect to $\|\cdot\|_{\mathcal E}$. Without loss of generality we can assume $0 \leq \varphi_k \leq 1$. By the choice of $\varphi_k$ there exists $n_k$ such that $\varphi_k$ vanishes outside $F_{n_k}.$ Since $\chi_{n_k} \geq \mathds{1}_{F_{n_k}}$, this implies $\varphi_k \leq \chi_{n_k}$ and we obtain % $${\mathcal E}_{\varphi_k}(f) \leq {\mathcal E}_{\chi_{n_k}}(f) \leq \lim_{n \to \infty} {\mathcal E}_{\chi_n}(f).$$ % With this at hand, Lemma~\ref{lemma:lsc in varphi} yields % $$\mathcal{E}_\varphi(f) \leq \liminf_{k \to \infty} {\mathcal E}_{\varphi_k}(f) \leq \lim_{n \to \infty} {\mathcal E}_{\chi_n}(f).$$ % This finishes the proof. \end{proof} \begin{example}[Weighted manifolds]\label{example:mainfolds} For the notation used in this example we refer the reader to \cite{Gri}. Let $(M,g,\mu)$ be a weighted Riemannian manifold and let $V \in L^1_{\rm loc}(\mu)$ be nonnegative. We define the quadratic form ${\mathbb D}_0$ by letting $D({\mathbb D}_0) = C_c^\infty(M)$ on which it acts by % $${\mathbb D}_0(f) = \int_M g(\nabla f, \nabla f) d\mu + \int_M V f^2 d\mu.$$ % The closure ${\mathbb D}$ of ${\mathbb D}_0$ in $L^2(\mu)$ is a regular Dirichlet form with domain $D({\mathbb D}) = W_0^1(M) \cap L^2(V \cdot \mu)$. The domain of the main part of ${\mathbb D}$ is $D({\mathbb D}^{(M)}) = \{f \in L^2_{\rm loc}(\mu) \mid \nabla f \in \vec L^2(\mu)\}$ on which it acts by % $${\mathbb D}^{(M)}(f) = \int_M g(\nabla f, \nabla f) d\mu.$$ % Therefore, the active main part of ${\mathbb D}$ is the Dirichlet form of the weighted Laplacian on $(M,g,\mu)$ with Neumann boundary conditions at infinity, i.e., $D({\mathbb D}^{(M)}_a) = W^1(M) = \{f \in L^2(\mu) \mid \nabla f \in \vec L^2(\mu)\}$. \begin{proof} We first prove $\{f \in L^2_{\rm loc}(\mu) \mid \nabla f \in \vec L^2(\mu)\} \subseteq D({\mathbb D}^{(M)})$ and the formula % $${\mathbb D}^{(M)}(f) = \int_M g(\nabla f, \nabla f) d\mu$$ % for $f \in L^2_{\rm loc}(\mu)$ with $\nabla f \in \vec L^2(\mu)$. For $f \in L^2_{\rm loc}(\mu)$ with $\nabla f \in \vec L^2(\mu)$ and $\varphi \in C_c^\infty(M)$ with $0 \leq \varphi \leq 1$ we have $\varphi f \in L^2(\mu)$ and it follows from the Leibniz rule for weak derivatives that $\nabla(\varphi f) \in \vec L^2(\mu)$. Since $\varphi f$ has compact support, \cite[Lemma~5.5]{Gri} implies $\varphi f \in W^1_0(M)$. If, additionally, $f \in L^\infty(\mu)$, we also have $\varphi f \in L^2(V \cdot \mu)$ and therefore $\varphi f \in D({\mathbb D})$.
In this case, another application of the Leibniz rule shows % $${\mathbb D}_\varphi(f) = {\mathbb D}(\varphi f) - {\mathbb D}(\varphi f^2,\varphi) = \int_M \varphi^2 g(\nabla f,\nabla f) d\mu.$$ % Let now $f \in L^2_{\rm loc}(\mu)$ with $\nabla f \in \vec L^2(\mu)$ and for $n \in {\mathbb N}$ set $f^{(n)} = (f \wedge n) \vee(-n)$. Since ${\mathbb D}_\varphi$ is an energy form, we have ${\mathbb D}_\varphi(f) = \lim_n {\mathbb D}_\varphi(f^{(n)})$. Moreover, $\nabla f^{(n)} \to \nabla f$ in $\vec L^2_{\rm loc}(\mu)$. Therefore, the above identity extends to unbounded functions as well. Letting $\varphi \nearrow 1$ and using Lemma~\ref{lemma:alternative formula em} shows $\{f \in L^2_{\rm loc}(\mu) \mid \nabla f \in \vec L^2(\mu)\} \subseteq D({\mathbb D}^{(M)})$ and the desired formula. It remains to prove $D({\mathbb D}^{(M)}) \subseteq \{f \in L^2_{\rm loc}(\mu) \mid \nabla f \in \vec L^2(\mu)\}$. Let $f \in D({\mathbb D}^{(M)}) \cap L^\infty(\mu)$. Then for each $\varphi \in C_c^\infty(M)$ with $0 \leq \varphi \leq 1$ we have $\varphi f \in D({\mathbb D})$ and therefore $\nabla(\varphi f) \in \vec L^2(\mu)$. This shows $\nabla f \in \vec L^2_{\rm loc}(\mu)$. Similar computations as above yield % $${\mathbb D}_\varphi(f) = \int_M \varphi^2 g(\nabla f, \nabla f) d\mu$$ % and after letting $\varphi \nearrow 1$ we arrive at $\nabla f \in \vec L^2(\mu)$. For unbounded $f \in D({\mathbb D}^{(M)})$ what we have already proven and Lemma~\ref{lemma:bounded approximation} yield % $$\sup_n \int_M g(\nabla f^{(n)},\nabla f^{(n)}) d \mu = \sup_n {\mathbb D}^{(M)}(f^{(n)}) = {\mathbb D}^{(M)}(f) < \infty.$$ % With this at hand, a local Poincaré inequality yields $f \in L^2_{\rm loc}(\mu)$ and $\nabla f \in \vec L^2(\mu)$ (for more details on this conclusion see \cite[Proof of Proposition~7.1]{HKLMS}). \end{proof} \end{example} \begin{example}[Weighted graphs] For a background on Dirichlet forms associated with weighted graphs we refer the reader to \cite{KL}. Let $X$ be a countably infinite set. Any function $m:X \to (0,\infty)$ induces a measure of full support on $X$ by letting $$m(A) = \sum_{x \in A}m(x).$$ A weighted graph over $X$ is a pair $(b,c)$ consisting of a function $c:X \to [0,\infty)$ and a function $b:X \times X \to [0,\infty)$ with \begin{itemize} \item[(b0)] $b(x,x) = 0$ for all $x \in X$, \item[(b1)] $b(x,y) = b(y,x)$ for all $x,y \in X$, \item[(b2)] $\sum_{y\in X} b(x,y) < \infty$ for all $x \in X$. \end{itemize} The value $b(x,y)$ is considered to be the edge weight of the edge $(x,y)$. We define the form $Q_0$ by letting $D(Q_0) = C_c(X)$, the finitely supported functions on $X$, on which $Q_0$ acts by $$Q_0(f) = \sum_{x,y\in X} b(x,y)(f(x)-f(y))^2 + \sum_{x \in X} c(x) f(x)^2. $$ The closure $Q$ of $Q_0$ on $\ell^2(X,m)$ is a regular Dirichlet form. Then the domain of the main part of $Q$ is $$D(Q^{(M)}) = \{f :X \to {\mathbb R} \mid \sum_{x,y\in X} b(x,y)(f(x)-f(y))^2 < \infty \}$$ on which it acts by % $$Q^{(M)}(f) = \sum_{x,y\in X} b(x,y)(f(x)-f(y))^2.$$ % \begin{proof} It follows from the definitions and a simple approximation argument that for $\varphi \in C_c(X)$ with $0\leq \varphi \leq 1$ the domain $D(Q_\varphi)$ consists of all functions $f:X\to {\mathbb R}$, on which it acts by % $$Q_\varphi(f) = \sum_{x,y\in X} b(x,y) \varphi(x) \varphi(y) (f(x) - f(y))^2.$$ % This observation and Lemma~\ref{lemma:alternative formula em} imply the claim after letting $\varphi \nearrow 1$.
\end{proof} \end{example} \subsection{The killing part and the reflected Dirichlet form} The given examples show that in general $\mathcal{E}^{(M)}$ is not an extension of $\mathcal{E}_{\rm e}$ and $\mathcal{E}^{(M)}_a$ is not an extension of ${\mathcal E}$. However, on $D({\mathcal E})$ the form ${\mathcal E}$ is a perturbation of $\mathcal{E}^{(M)}$ by a monotone quadratic form. In this subsection we prove this monotonicity and employ it to construct the killing part of ${\mathcal E}$ and to discuss its properties. We define the {\em preliminary killing part of ${\mathcal E}$} by $$\hat{\mathcal{E}}^{(k)}:L^0(m) \to [0,\infty],\, \hat{\mathcal{E}}^{(k)}(f) := \begin{cases} {\mathcal E}(f) - \mathcal{E}^{(M)}(f) &\text{if } f \in D({\mathcal E}), \\ \infty &\text{else.} \end{cases} $$ It follows from the considerations in the previous subsection that $\hat{\mathcal{E}}^{(k)}$ is a nonnegative quadratic form with domain $D({\mathcal E})$. In contrast to $\hat{\mathcal{E}}_\varphi$ it is in general not closable on $L^0(m)$, see \cite[Remark~3.50]{Schmi}. Instead, we will use the following monotonicity property of $\hat{\mathcal{E}}^{(k)}$ to extend it to a larger domain. \begin{lemma}\label{lemma:monotonicity of ekh} For $f,g \in D({\mathcal E})$ the inequality $|f| \leq |g|$ implies $\hat{\mathcal{E}}^{(k)}(f) \leq \hat{\mathcal{E}}^{(k)}(g)$. \end{lemma} \begin{proof} Let $f,g \in D({\mathcal E}) \cap L^\infty(m)$ with $|f|\leq |g|$ be given. For $\varepsilon > 0$ we set $f_\varepsilon := f - (f \wedge \varepsilon) \vee (-\varepsilon)$, $g_\varepsilon := g - (g \wedge \varepsilon) \vee (-\varepsilon)$ and $\varphi_\varepsilon := (\varepsilon^{-1}|g|) \wedge 1$, so that $0 \leq \varphi_\varepsilon \leq 1$ and $\varphi_\varepsilon = 1$ on $\{f_\varepsilon \neq 0\}$ and on $\{g_\varepsilon \neq 0\}$. According to \cite[Theorem~1.4.2]{FOT}, we have $f_\varepsilon \to f$ and $g_\varepsilon \to g$ with respect to ${\mathcal E}$, as $\varepsilon \to 0+$. Since $\hat{\mathcal{E}}^{(k)}$ is continuous with respect to ${\mathcal E}$-convergence, this implies % \begin{align*} \hat{\mathcal{E}}^{(k)}(g) - \hat{\mathcal{E}}^{(k)}(f) &= \lim_{\varepsilon \to 0+} \left( \hat{\mathcal{E}}^{(k)}(g_\varepsilon) - \hat{\mathcal{E}}^{(k)}(f_\varepsilon)\right)\\ &= \lim_{\varepsilon \to 0+} \left( {\mathcal E}(g_\varepsilon) - \mathcal{E}^{(M)}(g_\varepsilon) - {\mathcal E}(f_\varepsilon) + \mathcal{E}^{(M)}(f_\varepsilon)\right)\\ &= \lim_{\varepsilon \to 0+}\lim_{ \varphi \in D({\mathcal E}),\, \varphi \nearrow 1} \left( {\mathcal E}(g_\varepsilon) - \mathcal{E}_\varphi(g_\varepsilon) - {\mathcal E}(f_\varepsilon) + \mathcal{E}_\varphi(f_\varepsilon)\right). \end{align*} % Moreover, for any $\varphi \in D({\mathcal E})$ with $\varphi_\varepsilon \leq \varphi \leq 1$ we have % $$ {\mathcal E}(g_\varepsilon) - \mathcal{E}_\varphi(g_\varepsilon) - {\mathcal E}(f_\varepsilon) + \mathcal{E}_\varphi(f_\varepsilon) = {\mathcal E}(g_\varepsilon) - \hat{\mathcal{E}}_\varphi(g_\varepsilon) - {\mathcal E}(f_\varepsilon) + \hat{\mathcal{E}}_\varphi(f_\varepsilon)= {\mathcal E}(g_\varepsilon^2 - f_\varepsilon^2,\varphi). $$ Since $g_\varepsilon^2 - f_\varepsilon^2 \geq 0$ and $\varphi = 1$ on $\{g_\varepsilon^2 - f_\varepsilon^2 > 0\}$, for any $\alpha > 0$ we obtain $${\mathcal E}(\varphi) = {\mathcal E}\left( \left(\varphi +\alpha(g_\varepsilon^2 - f_\varepsilon^2) \right) \wedge 1 \right) \leq {\mathcal E}(\varphi) + 2\alpha {\mathcal E}(g_\varepsilon^2 - f_\varepsilon^2,\varphi) + \alpha^2 {\mathcal E}(g_\varepsilon^2 - f_\varepsilon^2).
$$ Rearranging this inequality and letting $\alpha \to 0+$ yields ${\mathcal E}(g_\varepsilon^2 - f_\varepsilon^2,\varphi) \geq 0$ and proves the claim for bounded $f$ and $g$. For general $f,g \in D({\mathcal E})$ with $|f| \leq |g|$, the truncations satisfy $|f^{(n)}| \leq |g^{(n)}|$ and converge to $f$ and $g$ with respect to ${\mathcal E}$, see e.g. \cite[Theorem~1.4.2]{FOT}, so that the claim follows from the bounded case and the continuity of $\hat{\mathcal{E}}^{(k)}$ with respect to ${\mathcal E}$-convergence. \end{proof} \begin{definition}[Killing part] The {\em killing part} of ${\mathcal E}$ is defined by $$\mathcal{E}^{(k)}:L^0(m) \to [0,\infty],\, \mathcal{E}^{(k)}(f):= \begin{cases} \sup\{\hat{\mathcal{E}}^{(k)}(g) \mid g \in D({\mathcal E}) \text{ with } |g| \leq |f|\} &\text{if } f \in D(\mathcal{E}^{(M)}),\\ \infty &\text{else}. \end{cases}$$ \end{definition} It follows from the previous lemma that $\mathcal{E}^{(k)}$ is an extension of $\hat{\mathcal{E}}^{(k)}$. In order to prove further properties of $\mathcal{E}^{(k)}$ we need the following alternative formula. \begin{lemma} \label{lemma:alternative formula ek} For all $f \in D(\mathcal{E}^{(M)})$ we have % $$\mathcal{E}^{(k)}(f) = \sup\{\hat{\mathcal{E}}^{(k)}(\varphi f^{(n)}) \mid n \in {\mathbb N}, \varphi \in D({\mathcal E}) \text{ with } 0 \leq \varphi \leq 1\},$$ % where $f^{(n)} = (f \wedge n) \vee (-n)$. \end{lemma} \begin{proof} For $f \in D(\mathcal{E}^{(M)})$ we let $$k(f) := \sup\{\hat{\mathcal{E}}^{(k)}(\varphi f^{(n)}) \mid n \in {\mathbb N}, \varphi \in D({\mathcal E}) \text{ with } 0 \leq \varphi \leq 1\}.$$ Since $D({\mathcal E}) \cap L^\infty(m)$ is an algebraic ideal in $D(\mathcal{E}^{(M)}) \cap L^\infty(m)$ and $|\varphi f^{(n)}| \leq |f|$, the monotonicity of $\hat{\mathcal{E}}^{(k)}$ on $D({\mathcal E})$ implies $k(f) \leq \mathcal{E}^{(k)}(f)$. In order to deduce the opposite inequality, we use the monotonicity of $\hat{\mathcal{E}}^{(k)}$ and that $D({\mathcal E})$ is a lattice to choose an increasing sequence of nonnegative functions $(f_l)$ in $D({\mathcal E})$ with $f_l \leq |f|$ such that $\mathcal{E}^{(k)}(f) = \sup_l \hat{\mathcal{E}}^{(k)}(f_l)$. Since $f_l^{(n)}\overset{n\to \infty}{\longrightarrow} f_l$ with respect to ${\mathcal E}$, see e.g. \cite[Theorem~1.4.2]{FOT}, and since $\hat{\mathcal{E}}^{(k)}$ is continuous with respect to ${\mathcal E}$-convergence, we can further assume the $(f_l)$ to be bounded. For $\varepsilon > 0$ we let $\varphi_{l,\varepsilon} := f_l/(\varepsilon + f_l)$. It follows from Lemma~\ref{lemma:contraction properties} that $\varphi_{l,\varepsilon} \in D({\mathcal E})$ and that $\varphi_{l,\varepsilon} f_l$ satisfies $${\mathcal E}(\varphi_{l,\varepsilon}f_l) \leq {\mathcal E}(f_l).$$ This inequality together with $\varphi_{l,\varepsilon}f_l \tom f_l$ as $\varepsilon \to 0 +$ yields $\varphi_{l,\varepsilon}f_l \to f_l$ with respect to ${\mathcal E}$, see Lemma~\ref{lemma:existence of a weakly convergent subnet}. Since $\hat{\mathcal{E}}^{(k)}$ is monotone and continuous with respect to ${\mathcal E}$-convergence, we obtain $$\hat{\mathcal{E}}^{(k)}(f_l) = \lim_{\varepsilon \to 0+} \hat{\mathcal{E}}^{(k)}(\varphi_{l,\varepsilon} f_l) \leq k(f).$$ This finishes the proof. \end{proof} The following theorem summarizes the properties of $\mathcal{E}^{(k)}$. \begin{theorem} \label{lemma:properties of ek} The functional $\mathcal{E}^{(k)}$ is a monotone quadratic form on $L^0(m)$ that extends $\hat{\mathcal{E}}^{(k)}$. Its domain satisfies $D(\mathcal{E}^{(k)}) \subseteq D(\mathcal{E}^{(M)})$.
If $(f_n)$ is an $\mathcal{E}^{(M)}$-bounded sequence in $D(\mathcal{E}^{(M)})$ with $f_n \tom f$, then $$\mathcal{E}^{(k)}(f) \leq \liminf_{n\to \infty}\mathcal{E}^{(k)}(f_n).$$ \end{theorem} \begin{proof} The monotonicity of $\mathcal{E}^{(k)}$ and the fact that it extends $\hat{\mathcal{E}}^{(k)}$ are immediate from its definition and the monotonicity of $\hat{\mathcal{E}}^{(k)}$. Next we show that $\mathcal{E}^{(k)}$ is a quadratic form. For $\lambda \in {\mathbb R}$ and $f \in D(\mathcal{E}^{(M)})$ the identity % $$\mathcal{E}^{(k)}(\lambda f) = |\lambda|^2\mathcal{E}^{(k)}(f)$$ follows easily from the definition of $\mathcal{E}^{(k)}$ and the corresponding statement for $\hat{\mathcal{E}}^{(k)}$. We employ Lemma~\ref{lemma:alternative formula ek} to verify the parallelogram identity. Let $f,g \in D(\mathcal{E}^{(M)})$, $\varphi \in D({\mathcal E})$ with $0 \leq \varphi \leq 1$ and $n \in {\mathbb N}$. The inequalities $|f + g| \geq |\varphi(f^{(n)} + g^{(n)})|$ and $|f - g| \geq |\varphi(f^{(n)} - g^{(n)})|$, the monotonicity of $\mathcal{E}^{(k)}$ and the fact that $\hat{\mathcal{E}}^{(k)}$ is a quadratic form yield % \begin{align*} \mathcal{E}^{(k)}(f + g) + \mathcal{E}^{(k)}(f-g) &\geq \hat{\mathcal{E}}^{(k)}(\varphi(f^{(n)} + g^{(n)})) + \hat{\mathcal{E}}^{(k)}(\varphi(f^{(n)} - g^{(n)}))\\ &= 2 \hat{\mathcal{E}}^{(k)}(\varphi g^{(n)}) + 2 \hat{\mathcal{E}}^{(k)}(\varphi f^{(n)}). \end{align*} % Lemma~\ref{lemma:alternative formula ek} implies % $$\mathcal{E}^{(k)}(f + g) + \mathcal{E}^{(k)}(f-g) \geq 2\mathcal{E}^{(k)}(f) + 2\mathcal{E}^{(k)}(g).$$ % According to Lemma~\ref{lemma:characterization quadratic forms} this inequality and the homogeneity are sufficient for proving that $\mathcal{E}^{(k)}$ is a quadratic form. % It remains to show the statement on lower semicontinuity. To this end, let $(f_n)$ be an $\mathcal{E}^{(M)}$-bounded sequence in $D(\mathcal{E}^{(M)})$ with $f_n \tom f$. Since $\mathcal{E}^{(M)}$ is closed, we have $f \in D(\mathcal{E}^{(M)})$. Therefore, Lemma~\ref{lemma:alternative formula ek} implies that it suffices to show % $$\hat{\mathcal{E}}^{(k)}(\varphi f^{(l)} ) \leq \liminf_{n\to \infty}\hat{\mathcal{E}}^{(k)}(\varphi f_n^{(l)})$$ % for every $l \in {\mathbb N}$ and $\varphi \in D({\mathcal E})$ with $0 \leq \varphi \leq 1$. The monotonicity of $\hat{\mathcal{E}}^{(k)}$ and Lemma~\ref{lemma:algebraic properties} yield % $${\mathcal E}(\varphi f_n^{(l)}) = \mathcal{E}^{(M)}(\varphi f_n^{(l)}) + \hat{\mathcal{E}}^{(k)}(\varphi f_n^{(l)}) \leq 2 l^2 \mathcal{E}^{(M)}(\varphi) + 2 \mathcal{E}^{(M)}(f_n) + l^2 \hat{\mathcal{E}}^{(k)}(\varphi).$$ % Since $(f_n)$ is $\mathcal{E}^{(M)}$-bounded, we obtain that $(\varphi f_n^{(l)})$ is ${\mathcal E}$-bounded and $\mathcal{E}^{(M)}$-bounded. Furthermore, we have the $L^2$-convergence $\varphi f_n^{(l)} \to \varphi f^{(l)}$, as $n\to \infty$, and so Lemma~\ref{lemma:existence of a weakly convergent subnet} shows $\varphi f_n^{(l)} \to \varphi f^{(l)}$ ${\mathcal E}$-weakly and $\mathcal{E}^{(M)}$-weakly, i.e., $\varphi f_n^{(l)} \to \varphi f^{(l)}$ $\hat{\mathcal{E}}^{(k)}$-weakly. Quadratic forms are lower semicontinuous with respect to weak convergence of the induced bilinear form. Therefore, we arrive at the desired inequality.
\end{proof} \begin{example}[Weighted manifolds, continued] We prove that the domain of ${\mathbb D}^{(k)}$ is given by $D({\mathbb D}^{(k)}) = \{f \in L^2_{\rm loc}(\mu) \mid \nabla f \in \vec L^2(\mu) \text{ and } \int_M V f^2 d \mu < \infty \}$ on which it acts by % $${\mathbb D}^{(k)}(f) = \int_M V f^2 d\mu.$$ % \begin{proof} It follows from what we have already proven on ${\mathbb D}^{(M)}$ that % $$\hat {\mathbb D}^{(k)}(f) = {\mathbb D}(f) - {\mathbb D}^{(M)}(f) = \int_M V f^2 d\mu$$ % for $f \in D({\mathbb D})$. From this identity and the monotone convergence theorem it easily follows that for $f \in D({\mathbb D}^{(M)}) = \{g \in L^2_{\rm loc}(\mu) \mid \nabla g \in \vec L^2(\mu)\}$ the supremum % $$ \sup \{ \hat{\mathbb D}^{(k)}(h) \mid h \in D({\mathbb D}) \text{ with } |h| \leq |f|\} $$ % is finite if and only if $\int_M V f^2 d\mu < \infty$, in which case it equals $\int_M V f^2 d\mu$. This finishes the proof. \end{proof} \end{example} \begin{example}[Weighted graphs, continued] With the same arguments as for manifolds it follows that the domain of the killing part of $Q$ satisfies % $$D(Q^{(k)}) = \{f :X \to {\mathbb R}\mid \sum_{x,y\in X} b(x,y) (f(x)- f(y))^2 + \sum_{x \in X} c(x) f(x)^2 < \infty\}$$ % on which it acts by % $$Q^{(k)}(f) = \sum_{x\in X} c(x) f(x)^2.$$ % \end{example} \subsection{The (active) reflected Dirichlet form and its maximality} In this subsection we use the main part and the killing part to introduce the reflected Dirichlet form. We show that it is a Silverstein extension and discuss its maximality among all Silverstein extensions. As a main result we obtain that the active main part is the maximal form among all Dirichlet forms that dominate the given form. In contrast, we give an example of a Dirichlet form which does not possess a maximal Silverstein extension. \begin{definition}[Reflected Dirichlet form] Let ${\mathcal E}$ be a Dirichlet form. The {\em reflected form of ${\mathcal E}$} is $\mathcal{E}^{\rm ref} := \mathcal{E}^{(M)} + \mathcal{E}^{(k)}$. The restriction of $\mathcal{E}^{\rm ref}$ to $L^2(m)$ is called the {\em active reflected Dirichlet form of ${\mathcal E}$} and is denoted by $\mathcal{E}^{\rm ref}_a$. \end{definition} The results of the previous subsection culminate in the following theorem. \begin{theorem}\label{theorem:er is a silverstein extension} $\mathcal{E}^{\rm ref}$ is an energy form that extends $\mathcal{E}_{\rm e}$ and the Dirichlet form $\mathcal{E}^{\rm ref}_a$ is a Silverstein extension of ${\mathcal E}$. \end{theorem} \begin{proof} The closedness of $\mathcal{E}^{\rm ref}$ on $L^0(m)$ follows from the closedness of $\mathcal{E}^{(M)}$ on $L^0(m)$ and Theorem~\ref{lemma:properties of ek}. That it is Markovian is a consequence of the Markov property of $\mathcal{E}^{(M)}$ and the monotonicity of $\mathcal{E}^{(k)}$. Furthermore, by the very definition of $\hat{\mathcal{E}}^{(k)}$ the forms $\mathcal{E}^{\rm ref}$ and ${\mathcal E}$ agree on $D({\mathcal E})$. Next we prove that $\mathcal{E}^{\rm ref}$ extends $\mathcal{E}_{\rm e}$. To this end, let $f \in D(\mathcal{E}_{\rm e})$ and let $(f_n)$ be an ${\mathcal E}$-Cauchy sequence in $D({\mathcal E})$ with $f_n \tom f$. The form $\mathcal{E}^{\rm ref}$ is closed and so its domain $D(\mathcal{E}^{\rm ref})$ is complete with respect to the form topology, cf. Lemma~\ref{lemma:completeness v.s. lower semicontinuity} and the discussion preceding it.
Since $\mathcal{E}_{\rm e}$ and $\mathcal{E}^{\rm ref}$ agree on $D({\mathcal E})$, the sequence $(f_n)$ is a Cauchy sequence in the form topology of $\mathcal{E}^{\rm ref}$. By completeness this implies $f \in D(\mathcal{E}^{\rm ref})$ and % $$\mathcal{E}^{\rm ref}(f) = \lim_{n \to \infty} \mathcal{E}^{\rm ref}(f_n) = \lim_{n \to \infty} {\mathcal E}(f_n) = \mathcal{E}_{\rm e}(f).$$ % For proving that $\mathcal{E}^{\rm ref}_a$ is a Silverstein extension of ${\mathcal E}$ we show that $D({\mathcal E}) \cap L^\infty(m)$ is an algebraic ideal in $D(\mathcal{E}^{\rm ref}_a) \cap L^\infty(m)$. Let $\varphi \in D({\mathcal E}) \cap L^\infty(m)$ and $f \in D(\mathcal{E}^{\rm ref}_a) \cap L^\infty(m)$. Since $D({\mathcal E})$ is a lattice it suffices to consider $0 \leq \varphi \leq 1$. By definition we have % $$D(\mathcal{E}^{\rm ref}_a) \cap L^\infty(m) \subseteq D(\mathcal{E}^{(M)}) \cap L^\infty(m) \subseteq D(\mathcal{E}_\varphi)\cap L^\infty(m) = \{g \in L^\infty(m) \mid \varphi g \in D({\mathcal E})\},$$ where we used Theorem~\ref{theorem:properties of concatenated forms} for the last equality. This shows $f\varphi \in D({\mathcal E})$ and finishes the proof. \end{proof} New approaches towards defining reflected forms naturally raise the question of whether these approaches coincide. The following maximality statement about the main part partially settles this issue. It is the main structural result about the active main part. \begin{theorem}\label{theorem:maximality active reflected form} Let ${\mathcal E}$ be a Dirichlet form. Its active main part $\mathcal{E}^{(M)}_a$ is the maximal Dirichlet form that dominates ${\mathcal E}$. In particular, if $\mathcal{E}^{(k)}= 0$, then $\mathcal{E}^{\rm ref}_a = \mathcal{E}^{(M)}_a$ is the maximal Silverstein extension of ${\mathcal E}$. \end{theorem} \begin{proof} We proved in Theorem~\ref{theorem:maximality main part} that $\mathcal{E}^{(M)}_a$ is larger than any Dirichlet form that dominates ${\mathcal E}$. Hence, it only remains to prove that $\mathcal{E}^{(M)}_a$ dominates ${\mathcal E}$. To this end, we employ Lemma~\ref{lemma:characterization of domination}. It follows along the same lines as at the end of the proof of Theorem~\ref{theorem:er is a silverstein extension} that $D({\mathcal E}) \cap L^\infty(m)$ is an algebraic ideal in $D(\mathcal{E}^{(M)}_a) \cap L^\infty(m)$. For nonnegative $f,g \in D({\mathcal E})$ we infer $${\mathcal E}(f,g) - \mathcal{E}^{(M)}_a(f,g) = \hat{\mathcal{E}}^{(k)}(f,g) \geq 0$$ from Lemma~\ref{lemma:monotonicity of ekh} and Lemma~\ref{lemma:characterization monotonicity}, which we can apply since $D(\mathcal{E}^{(M)})$ is a lattice, see Lemma~\ref{lemma:algebraic properties}. The 'In particular' part follows from the identity $\mathcal{E}^{(M)}_a = \mathcal{E}^{\rm ref}_a$, valid when $\mathcal{E}^{(k)}=0$, together with Theorem~\ref{theorem:er is a silverstein extension}. This finishes the proof. \end{proof} \begin{remark} It is proven in \cite[Theorem~6.6.9]{CF} for quasi-regular Dirichlet forms that the active reflected Dirichlet form constructed there is always the maximal Silverstein extension of the given form. While the proof of \cite[Theorem~6.6.9]{CF} contains a mistake (see also the counterexample below), the statement is true when the given form has no killing (in the sense of the Beurling-Deny decomposition of quasi-regular forms).
Since the maximal Silverstein extension is unique, in this case we obtain from Theorem~\ref{theorem:maximality active reflected form} that the active reflected Dirichlet form in the sense of \cite{CF} coincides with the active reflected Dirichlet form (and the active main part) in our sense. With the theory developed in this paper it is also not so difficult to prove that both approaches to reflected Dirichlet spaces agree for all quasi-regular Dirichlet forms. Our main part corresponds to the sum of the extended strongly local part and the extended jump part (in the sense of the Beurling-Deny decomposition) and our killing part coincides with the killing part of the Beurling-Deny decomposition. However, spelling out all the definitions and details is a bit lengthy and so we leave them as an exercise to the reader. For weighted manifolds and weighted graphs we have given the necessary arguments in the examples. In \cite{Rob} there is another construction for maximal Dirichlet forms that dominate a given inner regular local Dirichlet form. It follows from \cite[Corollary~4.3]{Rob} that our form $\mathcal{E}^{\rm ref}_a$ coincides with the form ${\mathcal E}_m$ constructed in \cite{Rob} if ${\mathcal E}$ has no killing. \end{remark} \begin{remark} Given a Dirichlet form ${\mathcal E}$ we have proven that $\mathcal{E}^{(M)}_a$ is the maximal element (in the sense of quadratic forms) in the cone of Dirichlet forms that dominate ${\mathcal E}$. It would be interesting to know whether it is also the maximal element among all quadratic forms that dominate ${\mathcal E}$. Note that any form that dominates ${\mathcal E}$ has a positivity preserving resolvent and therefore satisfies the first Beurling-Deny criterion, while it need not satisfy the second Beurling-Deny criterion. \end{remark} In general $\mathcal{E}^{\rm ref}_a$ is not the maximal Silverstein extension of ${\mathcal E}$. In fact, the following example shows that there are regular Dirichlet forms that do not possess a maximal Silverstein extension. In particular, it is a counterexample to the (wrong) claims of \cite[Theorem~5.1]{Kuw} and \cite[Theorem~6.6.9]{CF}, which state for quasi-regular Dirichlet forms that the active reflected Dirichlet form is always the maximal Silverstein extension. We let $\lambda$ be the Lebesgue measure on all Borel subsets of the open interval $(-1,1)$ and let $$W^1((-1,1)) = \{f \in L^2(\lambda) \mid f' \in L^2(\lambda)\}$$ be the usual Sobolev space of first order. It is folklore that $W^1((-1,1))$ equipped with the norm $$ \|f\|_{W^1} := \sqrt{\|f\|_2^2 + \|f'\|_2^2} $$ is a Hilbert space, which continuously embeds into $(C([-1,1]),\|\cdot\|_\infty)$. In particular, any function in $W^1((-1,1))$ can be uniquely extended to the boundary points $-1$ and $1$. Then $$W_0^1((-1,1)) = \{ f\in W^1((-1,1)) \mid f(-1) = f(1) = 0\} $$ coincides with the closure of $C_c((-1,1)) \cap W^1((-1,1))$ in the space $(W^1((-1,1)),\|\cdot\|_{W^1})$. \begin{proposition}\label{proposition:counterexample} The Dirichlet form $${\mathcal E}:L^2(\lambda) \to [0,\infty], {\mathcal E}(f) = \begin{cases} \int_{-1}^1|f'|^2 d\lambda + f(0)^2 & \text{ if } f \in W_0^1((-1,1))\\ \infty &\text{ else} \end{cases} $$ is regular and does not possess a maximal Silverstein extension. \end{proposition} \begin{proof} The regularity of ${\mathcal E}$ follows from the properties of the Sobolev space $W_0^1((-1,1))$. Assume that $\ow{{\mathcal E}}$ were the maximal Silverstein extension of ${\mathcal E}$.
The Dirichlet forms % $${\mathcal E}_1:L^2(\lambda) \to [0,\infty],{\mathcal E}_1(f) = \begin{cases} \int_{-1}^1|f'|^2 d \lambda + (f(0)-f(1))^2 & \text{ if } f \in W^1((-1,1))\\ \infty &\text{ else} \end{cases} $$ and % $${\mathcal E}_2:L^2(\lambda) \to [0,\infty],{\mathcal E}_2(f) = \begin{cases} \int_{-1}^1|f'|^2 d \lambda + f(0)^2 & \text{ if } f \in W^1((-1,1))\\ \infty &\text{ else} \end{cases} $$ are Silverstein extensions of ${\mathcal E}$, as obviously $D({\mathcal E})$ is an order ideal in $D({\mathcal E}_1)$ and in $D({\mathcal E}_2)$. The maximality of $\ow{{\mathcal E}}$ implies $W^1((-1,1)) = D({\mathcal E}_1) \subseteq D(\ow{{\mathcal E}})$ and $$\ow{{\mathcal E}}(1) \leq {\mathcal E}_1(1) = 0.$$ We choose a function $f \in W_0^1((-1,1))$ with $f(0) = 1$. The equality $\ow{{\mathcal E}}(1) = 0$ and the maximality of $\ow{{\mathcal E}}$ yield \begin{align*} \int_{-1}^1|f'|^2 d \lambda + 1 = {\mathcal E}(f) = \ow{{\mathcal E}}(f) = \ow{{\mathcal E}}(f - 1) \leq {\mathcal E}_2(f-1) = \int_{-1}^1|f'|^2 d\lambda, \end{align*} a contradiction. This shows that ${\mathcal E}$ does not possess a maximal Silverstein extension. \end{proof} \section{Reflected regular Dirichlet forms}\label{section:regular} In this section we employ our new construction of the reflected form to prove that for regular forms the space of continuous functions is dense in the domain of the active reflected Dirichlet form. We then briefly sketch how this observation leads to a construction of the associated reflected process on a compactification of the underlying space. Let $(X,d)$ be a separable locally compact metric space and let $m$ be a Radon measure of full support on $X$. By $C_c(X)$ we denote the space of compactly supported continuous functions on $X$. Recall that a Dirichlet form ${\mathcal E}$ on $L^2(m)$ is called {\em regular} if $C_c(X) \cap D({\mathcal E})$ is dense \begin{itemize} \item in $D({\mathcal E})$ with respect to the form norm, \item in $C_c(X)$ with respect to the uniform norm. \end{itemize} In what follows ${\mathcal E}$ is a fixed regular Dirichlet form on the metric measure space $(X,d,m)$. \begin{lemma}[Regular partitions of unity]\label{lemma:regular partitions of unity} For every sequence of relatively compact open sets $(G_n)$ with $\overline{G_n} \subseteq G_{n+1}$, for all $n \in {\mathbb N}$, and $\bigcup_n G_n = X$ there exist functions $\psi_n \in D({\mathcal E})\cap C_c(X)$ with $0\leq \psi_n \leq 1$, ${\mathrm {supp}\,} \psi_n \subseteq G_{n + 1} \setminus \overline{G_{n-1}}$ (with the convention $G_0 := \emptyset$) and $$\sum_{n = 1}^\infty \psi_n = 1.$$ \end{lemma} \begin{proof} Set $\Omega_n := G_{n+1} \setminus \overline{G_{n-1}}$. Since $X$ is locally compact, for each $n \in {\mathbb N}$ we can choose an open set $V_n$ with $\overline{G_n} \subseteq V_n \subseteq \overline{V_n} \subseteq G_{n+1}$, and we set $V_0 := \emptyset$. The sets $K_n := \overline{V_n} \setminus V_{n-1}$ are compact, satisfy $K_n \subseteq \Omega_n$ (as $\overline{G_{n-1}} \subseteq V_{n-1}$) and cover $X$ (if $n$ is minimal with $x \in V_n$, then $x \in K_n$). Since ${\mathcal E}$ is regular, there exist functions $\varphi_n \in D({\mathcal E}) \cap C_c(X)$ with $0 \leq \varphi_n \leq 1$, $\varphi_n = 1$ on $K_n$ and ${\mathrm {supp}\,} \varphi_n \subseteq \Omega_n$, cf. \cite[Exercise~1.4.1]{FOT}. We let % $$\varphi:= \sum_{n = 1}^\infty \varphi_n \text{ and } \psi_n := \frac{\varphi_n}{\varphi}. $$ % Since $(\Omega_n)$ is a locally finite cover of $X$, the function $\varphi$ is bounded and continuous. Moreover, since the sets $K_n$ cover $X$, it satisfies $\varphi \geq 1$. Therefore, $\psi_n \in C_c(X)$ and % $$\sum_{n=1}^\infty \psi_n = 1.$$ % It remains to prove $\psi_n \in D({\mathcal E})$. The property ${\mathrm {supp}\,} \varphi_n \subseteq \Omega_n$ yields (with $\varphi_0 := 0$) % $$\psi_n = \begin{cases} \frac{\varphi_n}{\varphi_{n-1} + \varphi_n + \varphi_{n+1}} &\text{on } \{\varphi_n > 0\},\\ 0&\text{on } \{\varphi_n = 0\}.
\end{cases} $$ % Hence, $\psi_n$ is of the form $g^{-1}f$ with $f,g \in D({\mathcal E})$, $0 \leq f \leq g$ and $g \geq 1$ on $\{f > 0\}$, where we use the convention $g^{-1}f = 0$ on $\{f = 0\}$. Such functions satisfy % $$\left|\frac{f(x)}{g(x)}\right| \leq |f(x)| \text{ and } \left|\frac{f(x)}{g(x)} - \frac{f(y)}{g(y)}\right| \leq |f(x) - f(y)| + |g(x) - g(y)|.$$ % Therefore, Lemma~\ref{lemma:contraction properties} yields $g^{-1}f \in D({\mathcal E})$ and the claim is proven. \end{proof} The following lemma is a variant of \cite[Corollary~2.3.1]{FOT} and we include a proof for the convenience of the reader. For a discussion of the related question of Kac regularity we refer to the recent \cite{Wir}. \begin{lemma}\label{lemma:help domains} Let $\Omega \subseteq X$ be relatively compact and open. Then $\{\psi \in D({\mathcal E}) \cap C_c(X) \mid {\mathrm {supp}\,} \psi \subseteq \Omega\}$ is dense in $ \{f \in D({\mathcal E}) \mid \text{there exists a compact } K\subseteq \Omega \text{ with } f \mathds{1}_{X \setminus K} = 0\}$ with respect to the form norm. \end{lemma} \begin{proof} Let $f\in D({\mathcal E})$ and let $K \subseteq \Omega$ be compact such that $f \mathds{1}_{X \setminus K} = 0$. Without loss of generality we assume $0 \leq f \leq 1$. Since ${\mathcal E}$ is regular, there exists $\psi \in D({\mathcal E}) \cap C_c(X)$ with $\psi = 1$ on $K$ and $\psi = 0$ on $X \setminus \Omega$, cf. \cite[Exercise~1.4.1]{FOT}. Now let $(f_n)$ be a sequence in $D({\mathcal E})\cap C_c(X)$ with $f_n \to f$ with respect to the form norm and set $g_n := \psi \cdot ((f_n \vee 0) \wedge 1)$. Then $(g_n)$ is an ${\mathcal E}$-bounded sequence in $D({\mathcal E}) \cap C_c(X)$ with ${\mathrm {supp}\,} g_n \subseteq \Omega$ that converges in $L^2(m)$ towards $f$. By Lemma~\ref{lemma:existence of a weakly convergent subnet} it also converges ${\mathcal E}$-weakly. Now the Banach-Saks theorem implies the desired statement. \end{proof} The following density statements are the main insights of this section. Their proofs are based on the ideas of \cite{MeSe}. \begin{theorem}\label{theorem:continous functions are dense} \begin{itemize} \item[(a)] $D(\mathcal{E}^{(M)}) \cap C(X)$ is dense in $D(\mathcal{E}^{(M)})$ with respect to $\mathcal{E}^{(M)}$. \item[(b)] $D(\mathcal{E}^{\rm ref}) \cap C(X)$ is dense in $D(\mathcal{E}^{\rm ref})$ with respect to $\mathcal{E}^{\rm ref}$. \item[(c)] $D(\mathcal{E}^{(M)}_a) \cap C(X)$ is dense in $D(\mathcal{E}^{(M)}_a)$ with respect to the form norm of $\mathcal{E}^{(M)}_a$. \item[(d)] $D(\mathcal{E}^{\rm ref}_a) \cap C(X)$ is dense in $D(\mathcal{E}^{\rm ref}_a)$ with respect to the form norm of $\mathcal{E}^{\rm ref}_a$. \end{itemize} \end{theorem} \begin{proof} Starting with $f \in D(\mathcal{E}^{(M)})$, we construct a continuous function $g$ which is close to $f$ with respect to $\mathcal{E}^{(M)}$. We then show that under the additional assumption $f \in D(\mathcal{E}^{\rm ref})$, the previously constructed function $g$ is also close to $f$ with respect to $\mathcal{E}^{\rm ref}$. The $L^2$-statements are treated similarly. (a): Since bounded functions are dense in domains of energy forms, see Lemma~\ref{lemma:bounded approximation}, it suffices to prove that each function $f \in D(\mathcal{E}^{(M)}) \cap L^\infty(m)$ can be approximated by continuous ones.
Let $(G_n)$ be a sequence of relatively compact open sets with $\overline{G_n} \subseteq G_{n+1}$, for all $n \in {\mathbb N}$, and $\bigcup_n G_n = X$, and let $(\psi_n)$ in $D({\mathcal E}) \cap C_c(X)$ be a corresponding partition of unity as in Lemma~\ref{lemma:regular partitions of unity}. We set $\Omega_n := G_{n+1} \setminus \overline{G_{n-1}}$. Let $\varepsilon > 0$. By construction $D({\mathcal E}) \cap L^\infty(m)$ is an algebraic ideal in $D(\mathcal{E}^{(M)}) \cap L^{\infty}(m)$, cf. the proof of Theorem~\ref{theorem:er is a silverstein extension}. Hence, it follows that $f \psi_n \in D({\mathcal E})$ with $f \psi_n = 0$ a.e. on $({\mathrm {supp}\,} \psi_n)^c$. By definition ${\mathrm {supp}\,} \psi_n $ is a compact subset of $\Omega_n$ and so Lemma~\ref{lemma:help domains} yields a function $g_n \in C_c(X) \cap D({\mathcal E})$ with ${\mathrm {supp}\,} g_n \subseteq \Omega_n$, % $${\mathcal E}(f \psi_n - g_n) \leq \frac{\varepsilon}{4^{n}} \text{ and } \|f\psi_n - g_n\|_2 \leq \frac{\varepsilon}{2^{n}}.$$ % Moreover, $g_n$ can be chosen to also satisfy $\|g_n\|_\infty \leq \|f\|_\infty$, see Lemma~\ref{lemma:bounded approximation}. Let $g := \sum_{n = 1}^\infty g_n$. Since $(\Omega_n)$ is a locally finite cover of $X$, we obtain that $g$ is a bounded continuous function on $X$. Now let $\varphi \in C_c(X) \cap D({\mathcal E})$ with $0 \leq \varphi \leq 1$. There exists $N$ such that ${\mathrm {supp}\,} \varphi \subseteq G_N$. Using that $\psi_n$ and $g_n$ are supported on $\Omega_n = G_{n+1}\setminus \overline{G_{n-1}}$, we infer % $$\sum_{n = 1}^{N+2} \psi_n = 1 \text{ and } \sum_{n = 1}^{N+2} g_n = g \text{ on } G_N , $$ % so that % $$f-g = \sum_{n = 1}^{N + 2} ( f \psi_n - g_n) \text{ on } {\mathrm {supp}\,} \varphi.$$ % It follows from the definition of $\mathcal{E}_\varphi$ that the value of $\mathcal{E}_\varphi(f-g)$ only depends on the function $f-g$ on ${\mathrm {supp}\,} \varphi$. We obtain % \begin{align*} \mathcal{E}_\varphi(f - g)^{1/2} &= \mathcal{E}_\varphi\left(\sum_{n = 1}^{N + 2} ( f \psi_n - g_n)\right)^{1/2} \leq \sum_{n = 1}^{N + 2}{\mathcal E}(f\psi_n - g_n)^{1/2} \leq \varepsilon^{1/2}, \end{align*} where we used Theorem~\ref{theorem:properties of concatenated forms} for the first inequality. Since $\varphi \in C_c(X) \cap D({\mathcal E})$ was arbitrary and ${\mathcal E}$ is regular, it follows from Lemma~\ref{lemma:alternative formula em} that $\mathcal{E}^{(M)}(f-g) \leq \varepsilon.$ (b): Now let $f \in D(\mathcal{E}^{\rm ref}) \cap L^\infty(m)$ and let $g$ be as in the proof of (a). We prove $\mathcal{E}^{(k)}(f-g) \leq \varepsilon$. For this estimate we employ Lemma~\ref{lemma:alternative formula ek}. Let $\varphi \in D({\mathcal E})$ with $0 \leq \varphi \leq 1$ be arbitrary and choose a sequence $\varphi_n \in D({\mathcal E}) \cap C_c(X)$ with $0 \leq \varphi_n \leq 1$ and $\varphi_n \to \varphi$ with respect to the form norm of ${\mathcal E}$. Since $f$ and $g$ are bounded, Lemma~\ref{lemma:algebraic properties} yields % \begin{align*} \mathcal{E}^{(M)}(\varphi_n(f-g))^{1/2} &\leq \mathcal{E}^{(M)}(f-g)^{1/2} + \|f - g\|_\infty \mathcal{E}^{(M)}(\varphi_n)^{1/2} \\&\leq \mathcal{E}^{(M)}(f-g)^{1/2} + \|f - g\|_\infty {\mathcal E}(\varphi_n)^{1/2}. \end{align*} % In particular, the sequence $(\varphi_n(f-g))$ is $\mathcal{E}^{(M)}$-bounded and so Theorem~\ref{lemma:properties of ek} shows % $$\mathcal{E}^{(k)}(\varphi(f-g)) \leq \liminf_{n\to \infty} \mathcal{E}^{(k)}(\varphi_n (f-g)).
$$ % Since $f-g \in D(\mathcal{E}^{(M)}) \cap L^\infty(m)$, this computation and Lemma~\ref{lemma:alternative formula ek} show that it suffices to prove the estimate % $\mathcal{E}^{(k)}(\psi(f-g)) \leq \varepsilon$ % for each $\psi \in D({\mathcal E}) \cap C_c(X)$ with $0\leq \psi \leq 1$. Since ${\mathrm {supp}\,} \psi$ is compact, it is included in $G_N$ for some $N \in {\mathbb N}$. The properties of $\psi_n$ and $g_n$, cf. (a), yield % \begin{align*} \mathcal{E}^{(k)}(\psi (f-g))^{1/2} &= \mathcal{E}^{(k)} \left(\psi \sum_{n = 1}^{N + 2} ( f \psi_n - g_n) \right)^{1/2} \\&\leq \sum_{n = 1}^{N + 2} \mathcal{E}^{(k)} \left(\psi ( f \psi_n - g_n) \right)^{1/2}\\ &\leq \sum_{n = 1}^{N + 2} {\mathcal E} (f \psi_n - g_n)^{1/2}\\ &\leq \varepsilon^{1/2}. \end{align*} % For the second inequality we used the monotonicity of $\mathcal{E}^{(k)}$ and that $\mathcal{E}^{(k)}(g) \leq {\mathcal E}(g)$ for $g \in D({\mathcal E})$. This proves (b). (c) + (d): Let $f \in D(\mathcal{E}^{(M)}_a) = D(\mathcal{E}^{(M)})\cap L^2(m)$ and let $g$ be as in (a). It remains to prove $g \in L^2(m)$ and to estimate the $L^2$-norm of $f-g$. Fatou's lemma and the properties of $\psi_n,g_n$, cf. (a), yield % $$\|f - g\|_2 \leq \sum_{n = 1}^\infty\|f\psi_n - g_n\|_2 \leq \varepsilon.$$ % Since $f \in L^2(m)$, this implies $g \in L^2(m)$ and the claim is proven. \end{proof} \begin{remark} It seems to be a new observation that continuous functions are dense in the domain of the reflected form and the main part. With the same arguments this statement can even be strengthened. If $\mathcal{C}$ is a special standard core for ${\mathcal E}$ in the sense of \cite[Section~1.1]{FOT} and $\mathcal B \subseteq C(X)$ is an algebra such that $\mathcal{C}$ is an algebraic ideal in $\mathcal{B}$ and % $$\sum_{n = 1}^\infty {\psi_n} \in \mathcal{B}$$ % for all sequences $(\psi_n)$ in $\mathcal{C}$ for which there exist neighborhoods $U_n$ of ${\mathrm {supp}\,} \psi_n$ that form a locally finite cover of $X$, then ${\mathcal B} \cap D(\mathcal{E}^{(M)})$ is dense in $D(\mathcal{E}^{(M)})$ with respect to $\mathcal{E}^{(M)}$. Similar statements also hold for $\mathcal{E}^{(M)}_a$, $\mathcal{E}^{\rm ref}$ and $\mathcal{E}^{\rm ref}_a$ with the corresponding form norms. In the example of weighted manifolds, see Example~\ref{example:mainfolds}, the algebras $C_c^\infty(M)$ and $C^\infty(M)$ satisfy the above conditions. Therefore, $C^\infty(M)$ is dense in $D({\mathbb D}^{(M)})$ with respect to ${\mathbb D}^{(M)}$ and in $D({\mathbb D}^{(M)}_a)$ with respect to the form norm. The latter assertion is nothing more than the well-known fact that $C^\infty(M)$ is dense in $W^1(M)$, which is the main result of \cite{MeSe}. \end{remark} With the help of the previous theorem and the Gelfand theory of commutative $C^*$-algebras we can construct a compactification of the underlying space such that $\mathcal{E}^{\rm ref}_a$ (and also $\mathcal{E}^{(M)}_a$) can be considered to be a regular Dirichlet form on this compactification (or the compactification minus one point). Since this type of construction is essentially known, see e.g. \cite[Proof of Theorem~6.6.5]{CF} or \cite{KLSS} for Dirichlet forms on graphs, we only give a brief sketch and leave the details to the reader. Let ${\mathcal B}$ be a countably generated subalgebra of $D(\mathcal{E}^{\rm ref}_a) \cap C_b(X)$ that is dense in $D(\mathcal{E}^{\rm ref}_a)$ with respect to the form norm and in $C_c(X)$ with respect to the uniform norm.
The existence of such an algebra follows from Theorem~\ref{theorem:continous functions are dense} and the separability of $(X,d)$ and $L^2(m)$. The (complexification of the) uniform closure of ${\mathcal B}$ equipped with the uniform norm is a commutative $C^*$-algebra, which we denote by ${\mathcal A}$. By construction $C_c(X) \subseteq {\mathcal A}$ and so $C_0(X)$, the space of continuous functions that vanish at infinity, is also contained in ${\mathcal A}$. The Gelfand theory of commutative $C^*$-algebras implies that there exists a compactification $\hat X$ of $X$ such that either $C(\hat X) \simeq {\mathcal A}$ (this happens if $1 \in {\mathcal A}$) or there exists a point $\infty \in \hat X$ such that ${\mathcal A} \simeq C_0(\hat X \setminus \{\infty\})$ (this happens if $1 \not \in {\mathcal A}$). Moreover, the corresponding isomorphisms are given by $C(\hat X) \to {\mathcal A}$, $f \mapsto f|_X$ or $C_0(\hat X \setminus \{\infty\}) \to {\mathcal A}$, $f \mapsto f|_X$, respectively. Since ${\mathcal A}$ is separable, $\hat X$ is a separable metric space and if $1 \not \in {\mathcal A}$, then $\hat X \setminus \{\infty\}$ is locally compact. It can be proven that $1 \in {\mathcal A}$ if and only if $\mathcal{E}^{(k)}(1) < \infty$ and $m$ is finite. To simplify notation, in what follows we let $$\hat X' := \begin{cases} \hat X &\text{if } 1 \in {\mathcal A},\\ \hat X \setminus \{\infty\} &\text{if }1 \not \in {\mathcal A}. \end{cases} $$ With this convention ${\mathcal A}$ is isomorphic to $C_0(\hat X')$ and the isomorphism is given by $C_0(\hat X') \to {\mathcal A},$ $f \mapsto f|_X$. We equip $\hat X'$ with the Borel $\sigma$-algebra and extend the measure $m$ to a measure $\hat m$ on $\hat X'$ by letting $\hat m (\hat X' \setminus X) = 0$. Then $\hat m$ is a Radon measure of full support on $\hat X'$. The embedding map $\tau:X \to \hat X'$, $x \mapsto x$ induces a unitary operator $T: L^2(\hat X', \hat m) \to L^2(X, m)$, $f \mapsto f \circ \tau$. We define the quadratic forms $\hat {\mathcal E} :L^2(\hat X',\hat m) \to [0,\infty]$, $\hat {\mathcal E}(f) = {\mathcal E}(Tf)$ and $\hat \mathcal{E}^{\rm ref}_a:L^2(\hat X', \hat m) \to [0,\infty]$, $\hat \mathcal{E}^{\rm ref}_a (f) = \mathcal{E}^{\rm ref}_a(Tf)$. \begin{theorem} \label{theorem:regularity active main part} $\hat \mathcal{E}^{\rm ref}_a$ is a regular Dirichlet form on $L^2(\hat X',\hat m)$. \end{theorem} \begin{proof} Since $T$ is unitary and defined via the pointwise map $\tau$, it follows immediately that $\hat \mathcal{E}^{\rm ref}_a$ is a Dirichlet form. Since ${\mathcal A}$ is isomorphic to $C_0(\hat X')$ and the isomorphism is given by $C_0(\hat X') \to {\mathcal A}, f \mapsto f|_X$, the operator $T^{-1}$ maps the algebra ${\mathcal B}$ to a uniformly dense subalgebra of $C_0(\hat X')$. Since $T^{-1} {\mathcal B} \subseteq D(\hat \mathcal{E}^{\rm ref}_a)$, this implies that $D(\hat \mathcal{E}^{\rm ref}_a) \cap C_0(\hat X') \supseteq T^{-1} {\mathcal B}$ is uniformly dense in $C_0(\hat X')$. The properties of ${\mathcal B}$ also imply that $T^{-1} {\mathcal B}$ is dense in $D(\hat \mathcal{E}^{\rm ref}_a)$ with respect to the form norm. The regularity of $\hat \mathcal{E}^{\rm ref}_a$ now follows from \cite[Lemma~1.4.2]{FOT}. \end{proof} \begin{remark} For the potential theoretic notions in this remark we refer to \cite[Chapter~2]{FOT}. The form $\hat \mathcal{E}^{\rm ref}_a$ is regular and so there exists an associated $\hat m$-symmetric Hunt process on $\hat X'$, see \cite[Theorem~7.2.1]{FOT}.
Since functions in $D(\hat {\mathcal E}) \cap C_c(X)$ (viewed as functions on $\hat X'$) vanish on $\hat X' \setminus X$ and are dense in $D(\hat {\mathcal E})$ with respect to the form norm, it is also not difficult to prove the identity % $$D(\hat {\mathcal E}) = \{f \in D(\hat \mathcal{E}^{\rm ref}_a) \mid \tilde f = 0 \text{ q.e. on } \hat X' \setminus X\},$$ % where $\tilde f$ is a quasi-continuous version of $f$. This situation is generic for Silverstein extensions of quasi-regular Dirichlet forms: Given a quasi-regular Dirichlet form ${\mathcal E}$ on some space $X$ and a Silverstein extension $\tilde {\mathcal E}$, it follows from \cite[Theorem~6.6.5]{CF} that there exist a locally compact separable metric space $\hat X$, a quasi-open subset $\tilde X$ of $\hat X$ that is quasi-homeomorphic to $X$ and a pointwise transformation (almost as above) such that $\tilde {\mathcal E}$ can be considered to be a regular Dirichlet form on $\hat X$ and % $$D({\mathcal E}) = \{f \in D(\tilde {\mathcal E}) \mid \tilde f = 0 \text{ q.e. on } \hat X \setminus \tilde X\}.$$ % The new information here is that if ${\mathcal E}$ is regular and $\tilde {\mathcal E} = \mathcal{E}^{\rm ref}_a$, then $\hat X$ can be chosen to be a compactification (minus one point) of $X$. For this insight the information of Theorem~\ref{theorem:continous functions are dense} is essential. \end{remark}
\section{Introduction} \label{sec:intro} Conventional imaging sensors detect signals lying on regular grids. On the other hand, recent advances and proliferation in sensing have led to new imaging signals lying on irregular domains. An example is brain imaging data such as Electroencephalography (EEG) and Magnetoencephalography (MEG). An example of the MEG data used in our experiments is shown in Figure \ref{fig1}(a). The color in Figure \ref{fig1}(a) indicates the intensity and the influx / outflux of the magnetic fields. The data differ from conventional 2D image data in that they lie irregularly on the brain structure. The data are captured by a recumbent Elekta MEG scanner with 306 sensors distributed across the scalp to record the cortical activations for 1100 milliseconds (Figure \ref{fig1}(b)). Therefore, MEG data are high-dimensional spatiotemporal data often degraded by complex, non-Gaussian noise. For reliable analysis of MEG data, it is important to learn discriminative, low-dimensional intrinsic representations of the recorded data \cite{mwangi:2014,hossein2016}. \begin{figure}[htb] \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=3.0cm]{figure1_1}} \vspace{0.3cm} \centerline{(a) Top view of MEG brain imaging. }\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=3.0cm]{figure1_2}} \vspace{0.3cm} \centerline{(b) Top view with the sensors. }\medskip \end{minipage} \caption{Example of MEG brain imaging data. The color indicates the intensity and direction of the magnetic fields. The nodes in (b) represent the sensors.} \label{fig1} \end{figure} Several methods have been applied to perform dimensionality analysis of brain imaging data, e.g., principal component analysis (PCA) and its numerous variants (see \cite{mwangi:2014} for a recent review). In addition, it has been recognized that there are patterns of anatomical links, statistical dependencies or causal interactions between distinct units within a nervous system~\cite{bullmore2009complex,hyde2012cross,brovelli2004beta}. By modeling brain imaging data as signals residing on {\em brain connectivity graphs}, some methods have been proposed that apply recent graph signal processing \cite{shuman2013emerging} to analyze brain imaging data \cite{behjat2015anatomically,huang2016graph,rui2016dimensionality,rui:17}. Deep learning, on the other hand, has achieved breakthroughs in image and video analysis, thanks to its hierarchical neural network structures with layer-wise non-linear activations and high capacity \cite{lecun2015deep}. As important deep learning models, autoencoders (AE) and stacked autoencoders (SAE) have achieved state-of-the-art performance in extracting meaningful low-dimensional representations of input data in an unsupervised way \cite{vincent2010stacked}. However, conventional SAEs fail to take advantage of the graph information when the inputs are modeled as graph signals. In this work, we propose new AE-like neural networks that tightly integrate graph information for the analysis of high-dimensional graph signals such as brain imaging data. In particular, our AE networks directly integrate graph models to extract meaningful representations. Our work leverages efficient graph filter design using Chebyshev polynomials \cite{hammond2011wavelets} and recent work on deep learning on graph-structured data \cite{scarselli2009graph, bruna2013spectral, defferrard2016convolutional, kipf2016semi}.
Among these models, Convolutional Networks (ConvNets) are of great interest since they achieve state-of-the-art performance for images \cite{krizhevsky2012imagenet, he2016deep} by extracting local features to build hierarchical representations. Image signals residing on regular grids are well suited for ConvNets. However, generalizing ConvNets to signals on irregular domains, i.e., graphs, is challenging \cite{bruna2013spectral, defferrard2016convolutional, niepert2016learning}. \cite{niepert2016learning} proposed to convert the vertices of a graph into a sequence and to extract locally connected regions from the graph, so that the convolution is performed in the spatial domain. In contrast, the convolution in \cite{bruna2013spectral} is performed in the spectral domain using recent graph signal processing theory \cite{shuman2013emerging}. \cite{defferrard2016convolutional} presented a spectral formulation of ConvNets on graph and proposed fast localized convolutional filters. The filters are polynomial Chebyshev expansions whose polynomial coefficients are the parameters to be learned. \cite{kipf2016semi} applied the first-order approximation of \cite{defferrard2016convolutional} and achieved good results on semi-supervised classification tasks on social networks. \begin{figure}[htb] \centering \centerline{\includegraphics[width=8.5cm]{figure2}} \caption{The structure of the proposed method. 306 MEG sensors are used to record the cortical activations evoked by two categories of visual stimuli: face and object. The recorded high-dimensional MEG measurements and the prior estimated graph are the inputs to the proposed ConvNets on graph. This is followed by an autoencoder with fully connected layers of various sizes. The entire network is trained end-to-end with the mean square error. During testing, we extract the activation of the innermost hidden layer and feed it to a linear SVM to predict whether the subject views a face or an object.} \label{fig2} \end{figure} This work is inspired by \cite{defferrard2016convolutional, kipf2016semi} but focuses on new AE-like networks that extract meaningful representations in an unsupervised manner. The proposed method is depicted in Figure \ref{fig2}. First, brain imaging data are modeled as signals residing on connectivity graphs estimated with causality analysis. Then, the graph signals are processed by the ConvNets on graph, which output high-dimensional, rich feature maps of the graph signals. Subsequently, fully connected layers are used to extract low-dimensional representations. During testing, these low-dimensional representations are fed to a linear SVM classifier to evaluate how much discriminative information they retain. Similar to \cite{kipf2016semi}, we also use the first-order approximation of the Chebyshev expansion \cite{hammond2011wavelets, defferrard2016convolutional}. However, our network structure is different in that we propose an integration of ConvNets on graph with an SAE. The entire network is trained end-to-end in an unsupervised way to learn low-dimensional representations of the input brain imaging data. In other words, our work is a method of dimensionality reduction. The authors of \cite{Jia2015250} propose to use the graph Laplacian to regularize the learning of an autoencoder. Their work uses a {\em sample graph} to model the underlying data manifold. Their approach is significantly different from ours, which integrates the graph structure into the network.
Moreover, it is non-trivial to apply their method to our problem, which encodes sensor correlations with a {\em feature graph}. Our contributions are threefold. First, we model the brain imaging data as graph signals with suitable brain connectivity graphs. Second, we propose a new AE-like network structure that integrates ConvNets on graph with the SAE; the system is trained end-to-end in an unsupervised way. Third, we perform extensive experiments to demonstrate that our model can extract more robust and discriminative representations for brain imaging data. The proposed method can be useful for other high-dimensional graph signals. \section{Proposed Method} \label{sec:proposed method} We first discuss the main results from graph signal processing and ConvNets on graph. Then we discuss our proposed method. \subsection{GSP and convolution on graph} \label{ssec:GSP and GCN} In conventional ConvNets, local filters are convolved with signals on regular grids and the filter parameters are learned by back-propagation. To extend convolution from image / audio signals on regular grids to graph-structured data on irregular domains, recent graph signal processing \cite{shuman2013emerging} provides theoretical results. In particular, we consider an undirected, connected, weighted graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, W\}$, which has a number of vertices $|\mathcal{V}| = N$ and an edge set $\mathcal{E}$. $W$ is the symmetric weighted adjacency matrix encoding the edge weights. The graph Laplacian, or combinatorial Laplacian, is defined as $L = D - W $, where $D$ is the diagonal degree matrix with diagonal elements $D_{ii} = \sum_{j=1}^{N}W_{ij}$. Since $L$ is a symmetric matrix, it can be eigen-decomposed as $L = U\Lambda U^T$; it has a complete set of orthonormal eigenvectors, denoted as $u_l$, for $l = 0, 1, ..., N-1$, with sorted real associated eigenvalues $\lambda_l$, known as the graph frequencies. In other words, we have $Lu_l = \lambda_lu_l$ for $l = 0, 1, ..., N-1$ and $0 = \lambda_0 < \lambda_1 \leq ... \leq \lambda_{N-1}$, where the first inequality is strict because the graph is connected. The normalized graph Laplacian, defined as $\tilde{L} = D^{-\frac{1}{2}}LD^{-\frac{1}{2}} = I - D^{-\frac{1}{2}}WD^{-\frac{1}{2}}$, is also widely used due to the property that all of its eigenvalues lie in the interval $[0, 2]$. $\{u_l\}$ acts like the Fourier basis, in analogy to the eigenfunctions of the Laplace operator in classical signal processing. The graph Fourier transform (GFT) for a signal $\bm{\mathrm{x}}\in \mathbb{R}^N$ on the vertices of the graph $\mathcal{G}$ is defined as $\tilde{x}(\lambda_l) = \langle u_l, \bm{\mathrm{x}}\rangle = u_l^T\bm{\mathrm{x}}$. The GFT plays a fundamental role in defining filtering and convolution operations for graph signals. The convolution theorem \cite{mallat1999wavelet} states that convolution in the spatial domain equals element-wise multiplication in the spectral domain. Given the signal $\bm{\mathrm{x}}$ and a filter $\bm{\mathrm{h}}\in \mathbb{R}^N$ on the graph $\mathcal{G}$, the convolution $\ast_{\mathcal{G}}$ between $\bm{\mathrm{x}}$ and $\bm{\mathrm{h}}$ is \begin{equation}\label{eq1} \bm{\mathrm{x}}\ast_{\mathcal{G}}\bm{\mathrm{h}} = U((U^T\bm{\mathrm{h}})\odot(U^T\bm{\mathrm{x}})), \end{equation} where $\odot$ indicates element-wise multiplication. In \cite{bruna2013spectral}, the authors proposed spectral neural networks that learn the filters $\bm{\mathrm{h}}$ in the spectral domain. This approach has two limitations. First, it is computationally intensive to perform the GFT and inverse GFT in each feed-forward pass.
Second, the learned filters are not explicitly localized, which differs from the filters in conventional ConvNets on images. To overcome these limitations, the authors of \cite{defferrard2016convolutional} proposed to use polynomial filters and Chebyshev expansions \cite{hammond2011wavelets}: \begin{equation}\label{eq2} \bm{\mathrm{x}}\ast_{\mathcal{G}}\bm{\mathrm{h}} \approx \sum_{k=0}^{K-1} \theta'_kT_k(\hat{L})\bm{\mathrm{x}}, \end{equation} where $\theta'_k$ are the polynomial filter coefficients to be learned, $\hat{L} = \frac{2}{\lambda_{max}}\tilde{L}-I_N$, and $T_k(\cdot)$ is the Chebyshev polynomial of order $k$, generated recursively. $K$ is the order of the polynomial expansion, which means that the filter is $K$-hop localized. See \cite{hammond2011wavelets,defferrard2016convolutional} for further details. \subsection{Model structure} \label{details about GCN-AE} Our proposed networks use ConvNets on graph to compute rich features for the input graph signals. In particular, ConvNets on graph leverage the underlying graph structure of the data to extract local features. Then, we use fully-connected layers in an AE-like structure to extract intrinsic representations from the features. \subsubsection{ConvNets on graph} The structure of the ConvNets on graph, which integrates the graph information into the neural network, is shown in Figure \ref{fig3}. We use the first-order approximation of Equation \eqref{eq2} \cite{kipf2016semi}. Since we use the normalized Laplacian, whose eigenvalues all lie in the interval [0, 2], we let $\lambda_{max}\approx 2$. Further, we restrict $\theta=\theta'_0 = -\theta'_1$ to reduce overfitting and computation cost. We also use a renormalization technique proposed in \cite{kipf2016semi}, which converts $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ ($A$ is the adjacency matrix) into $\widehat{D}^{-\frac{1}{2}}\widehat{A}\widehat{D}^{-\frac{1}{2}}$, where $\widehat{A} = A +I_N$ and $\widehat{D}$ is the corresponding degree matrix of $\widehat{A}$. The reason for the renormalization is that the eigenvalues of $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ lie in the interval [0, 2], which makes the training of this neural network unstable due to exploding gradients \cite{kipf2016semi}. After the renormalization, we have \cite{kipf2016semi} \begin{equation} \label{eq3} \bm{\mathrm{x}}\ast_{\mathcal{G}}\bm{\mathrm{h}_{\theta}} \approx \theta \widetilde{A} \bm{\mathrm{x}}, \end{equation} where $\widetilde{A} = \widehat{D}^{-\frac{1}{2}}\widehat{A}\widehat{D}^{-\frac{1}{2}}$ is the new normalized adjacency matrix for the graph, which takes self-connections into consideration. $\bm{\mathrm{h}_{\theta}}$ indicates that the filter $\bm{\mathrm{h}}$ is parameterized by $\theta$, which transforms the graph signal from one channel to another. \begin{figure}[htb] \centering \centerline{\includegraphics[width=6.5cm]{figure3}} \caption{Network structure of the ConvNets on graph. $C_k$ and $C_{k+1}$ are the numbers of channels at the $k$-th and $(k+1)$-th layers, respectively.} \label{fig3} \end{figure} Recent work \cite{kipf2016semi} uses ConvNets on graph for semi-supervised classification tasks, e.g., semi-supervised document classification in citation networks. The entire dataset (e.g., the full dataset of documents) is modeled as a {\em sample graph} with each vertex representing a sample (e.g., a labeled or unlabeled document). Therefore, the number of vertices equals the number of samples.
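Returning to (\ref{eq3}), the renormalization and the first-order filtering step can be sketched in a few lines of NumPy. The following is our illustrative sketch rather than the code used in the experiments; the symmetric matrix and the coefficient $\theta$ below are random stand-ins for the estimated connectivity and a learned parameter:

\begin{verbatim}
import numpy as np

def renormalized_adjacency(A):
    # A_tilde = D_hat^{-1/2} A_hat D_hat^{-1/2} with A_hat = A + I, cf. Eq. (3)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
A = rng.random((306, 306))
A = (A + A.T) / 2                 # placeholder symmetric connectivity
np.fill_diagonal(A, 0.0)
A_tilde = renormalized_adjacency(A)

x = rng.standard_normal(306)      # one single-channel graph signal
theta = 0.7                       # stand-in for a learned filter coefficient
y = theta * (A_tilde @ x)         # first-order graph convolution, Eq. (3)
\end{verbatim}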
In \cite{kipf2016semi}, two-layer ConvNets on graph are applied to compute a feature vector for each vertex, which is then used to classify an unlabeled vertex. In particular, their network processes the whole graph (e.g., the entire dataset of documents) as a full batch. It is unclear how to scale this design to large datasets. In contrast, our network processes individual graph signals in separate passes. The graph signals are modeled by a {\em feature graph} that encodes the correlation between features. The feature graph has $N$ vertices, with $N$ being the dimensionality of a graph signal (for MEG brain imaging data, $N=306$, the number of sensors). The individual low-dimensional representations of the graph signals are then classified independently. In our design, the $k$-th network layer takes as input a graph signal $\bm{\mathrm{x_k}} \in \mathbb{R}^{N\times C_k}$, which means that this signal lies on a graph with $N$ vertices and has $C_k$ channels on each vertex. The output is a graph signal $\bm{\mathrm{x_{k+1}}} \in \mathbb{R}^{N\times C_{k+1}}$. The transformation equation for the $k$-th network layer is \begin{equation} \label{eq4} \bm{\mathrm{x_{k+1}}} = \sigma\bigg(\widetilde{A}\bm{\mathrm{x_k}}\Theta\bigg). \end{equation} Here $\sigma (\cdot)$ is the element-wise non-linear activation function; $\Theta \in \mathbb{R}^{C_k\times C_{k+1}}$ is the parameter matrix to be learned. Note that $\Theta$ generalizes the $\theta$ in (\ref{eq3}) to multiple channels. $\Theta$ has dimension $C_k\times C_{k+1}$: the input signal with $C_k$ channels is transformed into one with $C_{k+1}$ channels. With the normalized adjacency matrix $\widetilde{A}$ in (\ref{eq4}), the network layer considers correlations between individual vertices and their 1-hop neighbours. To take $m$-hop neighbours into account, $m$ layers need to be stacked. In our experiments, we stack only two ConvNets on graph layers, which already shows competitive performance. Note that $\widetilde{A}$ plays the role of specifying the receptive field of one feature: a feature is convolved with its neighbours on the graph with different weights, which are determined by the nonzero values of $\widetilde{A}$. This is different from conventional ConvNets for images, where the weights are learned by back-propagation. In our work, the neural network instead learns the weights for transforming the channels of the input graph signal. Note that with the non-linear activation function, the transformation in each network layer is not simply a matrix multiplication. In comparison, conventional neural networks can also expand or compress the number of channels with $1\times 1$ convolutions. Specifically, this corresponds to the ConvNets on graph with $\widetilde{A} = I$, where $I$ is the identity matrix; it is a limited model due to the small kernel size. In fact, when $\widetilde{A}=I$, the ConvNets on graph reduce to the fully connected layers of a conventional AE. Similarly, removing the non-linear activation function limits the model capacity: even with a larger receptive field, the output of a feature is merely a linear combination of its neighbours on the graph. We observe in our experiments (Section \ref{sec:Experiment}) that without $\widetilde{A}$ and the non-linear activation function, our design has performance similar to conventional AEs. \subsubsection{Fully connected layers and loss function} After $k$ layers of ConvNets on graph, we obtain a graph signal $\bm{\mathrm{x_k}} \in \mathbb{R}^{N\times C_k}$ of features.
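To make the layer-wise transformation (\ref{eq4}) concrete, the following self-contained sketch (again ours; a ReLU is assumed for $\sigma$ and the parameter matrices are random stand-ins for the weights learned by back-propagation) stacks two such layers with the channel sizes used in Section~\ref{sec:Experiment}:

\begin{verbatim}
import numpy as np

def renorm(A):
    # same renormalization as in the previous sketch
    A_hat = A + np.eye(A.shape[0])
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d[:, None] * d[None, :]

def graph_conv_layer(A_tilde, x, Theta):
    # Eq. (4): x_{k+1} = sigma(A_tilde x_k Theta), here with sigma = ReLU
    return np.maximum(A_tilde @ x @ Theta, 0.0)

rng = np.random.default_rng(1)
A = rng.random((306, 306)); A = (A + A.T) / 2   # stand-in for the GCC estimate
A_tilde = renorm(A)

x0 = rng.standard_normal((306, 1))    # N = 306 vertices, C_0 = 1 channel
Theta1 = rng.standard_normal((1, 16))
Theta2 = rng.standard_normal((16, 5))
x1 = graph_conv_layer(A_tilde, x0, Theta1)
x2 = graph_conv_layer(A_tilde, x1, Theta2)      # x2 lies in R^{306 x 5}
x_flat = x2.reshape(-1)   # concatenated rows, the input to the SAE part
\end{verbatim}

Stacking the two layers corresponds to a 2-hop receptive field on the feature graph, in line with the discussion above.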
Each row vector of $\bm{\mathrm{x_k}}$ is the multichannel feature of one vertex. We concatenate the row vectors and obtain $\bm{\mathrm{x_k}} \in \mathbb{R}^{N\cdot C_k} $ as the output of the ConvNets on graph. Since our goal is to extract low-dimensional and semantically discriminative representations for each signal in an unsupervised way, we introduce a stacked autoencoder (SAE) \cite{vincent2010stacked} here. Recent research has shown that SAEs consistently produce high-quality semantic representations on several real-world datasets \cite{le2013building}. The difference between our work and a plain SAE is that the SAE takes the original signal as input, while our network takes as input the high-dimensional, rich feature map of the graph signal, i.e., the output of the ConvNets on graph. The dimension of the SAE output $\bm{\mathrm{y}}$ is the same as that of the original signal. The entire network is trained end-to-end by minimizing the mean square error between the input $\bm{\mathrm{x}}$ and $\bm{\mathrm{y}}$, i.e., $\lVert \bm{\mathrm{x}} - \bm{\mathrm{y}} \rVert^2_2$. \section{Experiment} \label{sec:Experiment} \subsection{Datasets} We test our model on real MEG signal datasets. The MEG signals record the brain responses to two categories of visual stimuli: human faces and objects. The subjects were randomly shown 322 human-face and 197 object images while the MEG signals were collected by 306 sensors on the scalp. The signals were recorded from 100 ms before until 1000 ms after the stimulus onset. Each image was shown to the subjects for 300 ms. We focus on the MEG data from 96 ms to 110 ms after the visual stimulus onset, as it has been recognized that the cortical activities in this period contain rich information \cite{thorpe:1996}. We model the MEG signals as graph signals by regarding the 306 sensor measurements as signals on a graph with 306 vertices. The underlying graph, which represents the complex brain network \cite{guye2010graph}, is estimated by Granger Causality connectivity (GCC) analysis using the Matlab open-source toolbox BrainStorm \cite{tadel2011brainstorm}. Note that we have to renormalize the connectivity matrix following our discussion in Section \ref{details about GCN-AE}. \subsection{Implementation} We use TensorFlow \cite{abadi2016tensorflow} to implement our networks. The numbers of channels for the two-layer ConvNets on graph are set to 16 and 5. The subsequent fully-connected layers have dimensions $d-2000-50-2000-306$, where $d$ is the dimension after concatenation of the row vectors of the ConvNets output. Adam \cite{kingma2014adam} is adopted to minimize the MSE with learning rate 0.001. Dropout \cite{srivastava2014dropout} is used to avoid overfitting. We also include an $L_2$ regularization term in the loss function for the fully connected layers. For comparison, we train two different SAEs with the same schemes. After training all the networks for 300 epochs, we use a linear SVM to predict whether the subject viewed a face or an object based on the 50-dimensional representation of the original MEG imaging data. We use 10-fold cross validation and report the average accuracy. All experiments are performed on each subject separately. \subsection{Results} We compare our results with several unsupervised dimensionality reduction methods: PCA, GBF, Robust PCA and SAE. PCA is a commonly used dimensionality reduction technique that projects the data onto the $n$ directions of largest variance.
GBF \cite{egilmez2014spectral,rui2016dimensionality} projects the MEG signals onto the linear subspace spanned by the first $n$ eigenvectors of the normalized graph Laplacian. Robust PCA (RPCA) \cite{candes2011robust} decomposes the data into two parts: a low-rank representation and a sparse perturbation. For non-linear transformations, we test two SAEs, one with the symmetric structure $306-2000-50$ and the other with $306-5000-1500-2000-50$. \begin{table}[htb] \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{-10pt} \centering \caption{Average classification accuracy with different methods on MEG brain imaging data.} \label{tab1} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Accuracy} \\ \cline{2-4} & subject A & subject B & subject C \\ \hline original data & 0.6482 & 0.6015 & 0.6338 \\ \hline PCA & 0.6529 & 0.5957 & 0.6100 \\ \hline RPCA & 0.6656 & 0.5925 & 0.6186 \\ \hline GBF & 0.6638 & 0.6026 & 0.5970 \\ \hline 2-layer AE & 0.6610 & 0.5983 & 0.6302 \\ \hline 4-layer AE & 0.6693 & 0.5939 & 0.6323 \\ \hline proposed model & \textbf{0.6833} & \textbf{0.6414} & \textbf{0.6435} \\ \hline \end{tabular} \end{table} The results are shown in Table \ref{tab1}. It can be observed that the accuracy for the original 306-dimensional data is inferior or comparable to that of the other methods. Thus, it is advantageous to perform dimensionality reduction and feature extraction. The improvement using PCA is limited, as it is not robust to the present non-Gaussian noise. For subjects A and B, RPCA achieves results similar to GBF, which leverages the Granger Causality connectivity (GCC) of the subjects' brains as side information. PCA, RPCA and GBF are linear transformations that fail to capture the non-linear nature of the brain imaging data, which limits their performance. The SAEs with 2 layers and 4 layers also outperform PCA by introducing non-linear transformations. \cite{he2016deep} has shown that increasing the depth of networks can improve performance by a large margin. Nevertheless, the results are similar for the two SAEs. We conjecture that the optimization stops at saddle points or local minima \cite{dauphin2014identifying}. Our proposed model achieves the highest accuracy compared to the other methods. The reasons are that our approach 1) considers the connectivity as prior side information and 2) uses neural networks with high capacity to learn discriminative representations. \subsection{Discussion} \subsubsection{Contribution of the graph} We may ask whether the graph information is truly helpful and necessary for this task. To answer this question and better understand the importance of incorporating the graph information into the neural networks, we replace the graph adjacency matrix estimated by GCC with an identity matrix and with a random symmetric matrix and train the model. Table \ref{tab2} shows that GCC indeed helps the networks to extract expressive features. Replacing GCC with the identity matrix ignores the prior feature correlations, resulting in accuracy similar to that of the SAEs. A random symmetric matrix confuses the neural networks, and thus the accuracy drops drastically.
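For concreteness, the two control adjacency matrices can be generated as in the following hypothetical sketch (not taken from our implementation); both are subsequently renormalized exactly like the GCC estimate before training:

\begin{verbatim}
import numpy as np

N = 306
rng = np.random.default_rng(2)

A_identity = np.eye(N)        # discards all prior sensor correlations
R = rng.random((N, N))
A_random = (R + R.T) / 2      # random symmetric "connectivity"
np.fill_diagonal(A_random, 0.0)
\end{verbatim}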
\begin{table}[htb] \centering \caption{Classification accuracy with different adjacency matrices.} \label{tab2} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Graph} & \multicolumn{3}{c|}{Accuracy} \\ \cline{2-4} & subject A & subject B & subject C \\ \hline GCC & 0.6833 & 0.6414 & 0.6435 \\ \hline Identity Matrix & 0.6616 & 0.6052 & 0.6213 \\ \hline Random Matrix & 0.5941 & 0.5589 & 0.5332 \\ \hline \end{tabular} \end{table} \subsubsection{Contribution of the nonlinear transformation} Since we expand our single-channel MEG data to multiple channels, one may be concerned that the transformation in the graph ConvNets is a trivial multiplication by a scalar. Therefore, in this experiment, we remove the non-linear activation function in the ConvNets on graph. By doing this, the outputs of the graph ConvNets become averages of the inputs weighted by the graph adjacency matrix, i.e., linear combinations of the inputs. Thus, the accuracy should be similar to that of the SAEs. This can be observed in Table \ref{tab3}. With the non-linear activation function, the ConvNets on graph can fully exploit the graph information. \begin{table}[htb] \centering \caption{Classification accuracy with different activation functions.} \label{tab3} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Activation Function} & \multicolumn{3}{c|}{Accuracy} \\ \cline{2-4} & subject A & subject B & subject C \\ \hline Non-linear & 0.6833 & 0.6414 & 0.6435 \\ \hline Linear & 0.6656 & 0.6016 & 0.6132 \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{conclusion} In this work, we propose an AE-like deep neural network that integrates ConvNets on graph with fully-connected layers. The proposed network is used to learn low-dimensional, discriminative representations for brain imaging data. Experiments on real MEG datasets suggest that our design extracts more discriminative information than other advanced methods such as RPCA and autoencoders. The improvement is due to the exploitation of the graph structure as side information. For future work, we will apply recent graph learning techniques \cite{DiscGlassoJYK,SparsityDictionaryHPM} to improve the estimation of the underlying connectivity graph. Moreover, we will address the problem of deploying the networks for real-time analysis in brain computer interface applications. Furthermore, we will explore applications of our ConvNets on graph integrated AE to other image / video applications \cite{cheung:07,fang:14}. \bibliographystyle{IEEEbib} { \footnotesize
\section{Introduction} Rydberg atoms are highly susceptible to external fields and, amongst other remarkable properties, show a strong mutual interaction \cite{gallagher94}. Combining the extraordinary properties of Rydberg atoms, which originate from the large displacement of the valence electron from the remaining ionic core, with the plethora of techniques known from the preparation and manipulation of ultracold gases enables remarkable observations such as the excitation blockade between two single atoms a few $\mu$m apart \cite{Urban2009,Gaetan2009}. Moreover, interacting Rydberg atoms serve as flexible tools for various purposes. For example, their strong dipole-dipole interaction renders Rydberg atoms interesting candidates for the realization of two-qubit quantum gates \cite{PhysRevLett.85.2208,PhysRevLett.87.037901} or efficient multiparticle entanglement \cite{saffman:240502,muller:170502,PhysRevLett.103.185302}. In fact, only very recently a {\sc cnot} gate between two individually addressed neutral atoms and the generation of entanglement have been demonstrated experimentally by employing the Rydberg blockade mechanism \cite{isenhower10,wilk10}. Other proposals utilize the peculiar properties of an ensemble of interacting Rydberg atoms by employing an off-resonant laser coupling that dresses ground state atoms with Rydberg states. For example, a method has been proposed for creating a polarized atomic dipolar gas by coupling to an electrically polarized Rydberg state \cite{santos00}. The resulting long-ranged dipole-dipole interactions in such gases are predicted to give rise to dipolar crystals and novel supersolid phases \cite{pupillo10,cinti10}. In a similar manner, the Rydberg dressing of ground state atoms is expected to entail a roton-maxon excitation spectrum in three-dimensional Bose-Einstein condensates \cite{pohl10} and collective many-body interactions \cite{honer10}. Here, we discuss a further application of Rydberg states, namely, how they can be employed for substantially manipulating the trapping potentials of magnetically trapped $^{87}$Rb atoms in a controlled manner. Inhomogeneous magnetic trapping fields are omnipresent in experiments dealing with ultracold atoms. As a promising alternative to optical approaches, even one- and two-dimensional lattices of magnetic microtraps have been realized experimentally \cite{guenter05,singh08,gerritsma07,whitlock09}. The issue of trapping Rydberg atoms in magnetic traps -- primarily of the Ioffe-Pritchard kind -- has been studied extensively, demonstrating that Rydberg atoms can be tightly confined \cite{hezel:223001,hezel:053417} and that one-dimensional Rydberg gases can be created and stabilized by means of an additional electric field \cite{mayle:113004}. In particular, in a previous work the authors demonstrated that the trapping potentials of $^{87}$Rb Rydberg atoms in low angular momentum electronic states (i.e., $l\leq2$) deviate considerably from the behavior known for ground state atoms \cite{Mayle2009}. This effect is due to the \emph{composite} nature of Rydberg atoms, i.e., the fact that they consist of an outer valence electron far apart from the ionic core. In the present work we demonstrate how the peculiar properties of the Rydberg trapping potential can be utilized to manipulate the trapping potential of the ground state. To this end, an off-resonant two-photon laser transition is employed that dresses the ground state atoms with their Rydberg states.
We thoroughly discuss the coupling scheme previously employed in \cite{mayle:041403} and systematically study the resulting dressed potentials. In particular, it is demonstrated how the delicate interplay between the spatially varying quantization axis of the Ioffe-Pritchard field and the fixed polarizations of the laser transitions greatly influences the actual shape of the trapping potentials -- a mechanism that has also been employed very recently to create versatile atom traps by means of a Raman type setup \cite{middelkamp10}. Moreover, the employed scheme allows us to map the Rydberg trapping potential onto the ground state. In detail, we proceed as follows. Section \ref{sec:rydsurf} briefly reviews the properties of Rydberg atoms in a magnetic Ioffe-Pritchard trap as derived in \cite{Mayle2009}; the resulting trapping potentials are contrasted with the ones belonging to ground state atoms. Section \ref{sec:scheme} then introduces the off-resonant two-photon laser coupling scheme that dresses the ground state with the Rydberg state. In section \ref{sec:elimination} we establish a simplified three-level scheme (as opposed to the 32-level scheme that is needed to fully describe the excitation dynamics) that allows us to derive analytical expressions for the dressed potentials. Section \ref{sec:dressedsurf} finally contains a thorough discussion of the dressed ground state trapping potentials for a variety of field and laser configurations. \section{Review of the Rydberg Trapping Potentials} \label{sec:rydsurf} Let us start by briefly recapitulating the results from \cite{Mayle2009} concerning the trapping potentials of alkali Rydberg atoms in their $nS$, $nP$, and $nD$ electronic states. As the basic ingredient for magnetically trapping Rydberg atoms, we consider the Ioffe-Pritchard field configuration given by $\mathbf{B}(\mathbf{x})=\mathbf B_c+\mathbf{B}_l(\mathbf{x})$ with $\mathbf B_c=B\mathbf e_3$, $\mathbf{B}_l(\mathbf{x})= G\left[x_1\mathbf{e}_1-x_2\mathbf{e}_2\right]$. The corresponding vector potential reads $\mathbf{A}(\mathbf{x})= \mathbf{A}_c(\mathbf{x})+\mathbf{A}_l(\mathbf{x})$, with $\mathbf{A}_c(\mathbf{x})= \frac{B}{2}\left[x_1\mathbf{e}_2-x_2\mathbf{e}_1\right]$ and $\mathbf{A}_l(\mathbf{x})=Gx_1x_2\mathbf{e}_3$; $B$ and $G$ are the Ioffe field strength and the gradient, respectively. The mutual interaction of the highly excited valence electron and the remaining closed-shell ionic core of a Rydberg atom is modeled by an effective potential which depends only on the distance between the two particles. After introducing relative and center of mass coordinates ($\mathbf{r}$ and $\mathbf{R}$) and employing the unitary transformation $U=\exp\left[\frac{i}{2}(\mathbf{B}_c\times \mathbf{r}) \cdot \mathbf{R}\right]$, the Hamiltonian describing the Rydberg atom becomes (atomic units are used unless stated otherwise) \begin{eqnarray} \label{eq:hamfinaluni} H&=&H_A+\frac{\mathbf{P}^2}{2M} +\frac{1}{2}[\mathbf L+2\mathbf S]\cdot\mathbf B_c +\mathbf S\cdot\mathbf{B}_l(\mathbf{R+r}) \nonumber\\ &&+\mathbf{A}_l(\mathbf{R+r})\cdot\mathbf{p} +H_\mathrm{corr}\,. \end{eqnarray} Here, $H_A=\mathbf{p}^2/2+V_l(r)+V_{so}(\mathbf{L},\mathbf{S})$ is the field-free Hamiltonian of the valence electron whose core penetration, scattering, and polarization effects are accounted for by the $l$-dependent model potential $V_l(r)$ \cite{PhysRevA.49.982}, while $\mathbf L$ and $\mathbf S$ denote its orbital angular momentum and spin, respectively.
$V_{so}(\mathbf{L},\mathbf{S})=\frac{\alpha^2}{2}\left[1-\frac{\alpha^2}{2}V_l(r)\right]^{-2} \frac{1}{r}\frac{\mathrm d V_l(r)}{\mathrm d r} \mathbf L\cdot\mathbf S$ denotes the spin-orbit interaction that couples $\mathbf L$ and $\mathbf S$ to the total electronic angular momentum $\mathbf J = \mathbf L + \mathbf S$; the term $\left[1-\alpha^2V_l(r)/2\right]^{-2}$ has been introduced to regularize the nonphysical divergence near the origin \cite{condon35}. $H_\mathrm{corr}=-\boldsymbol{\mu}_c\cdot \mathbf{B(R)}+\frac{1}{2}\mathbf A_c(\mathbf r)^2+ \frac{1}{2}\mathbf A_l(\mathbf{R+r})^2 +\frac{1}{M}\mathbf B_c\cdot(\mathbf{r\times P}) +U^\dagger[V_l(r)+V_{so}(\mathbf{L},\mathbf{S})]U -V_l(r)-V_{so}(\mathbf{L},\mathbf{S})$ comprises small corrections that are neglected in the parameter regime we are focusing on; the magnetic moment of the ionic core is connected to the nuclear spin $\mathbf I$ according to $\boldsymbol{\mu}_c=-\frac{1}{2}g_I\mathbf I$, with $g_I$ being the nuclear g-factor. In order to solve the resulting coupled Schr\"odinger equation, we employ a Born-Oppenheimer separation of the center of mass motion and the electronic degrees of freedom. We are thereby led to an electronic Hamiltonian for fixed center of mass position of the atom, whose eigenvalues $E_\kappa(\mathbf R)$ depend parametrically on the center of mass coordinates. These adiabatic electronic surfaces serve as trapping potentials for the quantized center of mass motion. For fixed total electronic angular momentum $\mathbf J=\mathbf L+\mathbf S$, approximate expressions for the adiabatic electronic energy surfaces can be derived by applying the spatially dependent transformation $U_r=e^{-i \gamma (L_x+S_x)}e^{-i \beta (L_y+S_y)}$ that rotates the local magnetic field vector into the $z$-direction of the laboratory frame of reference. The corresponding rotation angles are defined by $\sin\gamma=-GY/\sqrt{B^2+G^2(X^2+Y^2)}$, $\sin\beta=-GX/\sqrt{B^2+G^2X^2}$, $\cos\gamma =\sqrt{B^2+G^2X^2}/\sqrt{B^2+G^2(X^2+Y^2)}$, and $\cos\beta =B/\sqrt{B^2+G^2X^2}$. In second order perturbation theory, the adiabatic electronic energy surfaces read \begin{equation}\label{eq:dressealpha} E_\kappa(\mathbf R)= E_\kappa^{(0)}(\mathbf R)+E_\kappa^{(2)}(\mathbf R), \end{equation} where \begin{equation}\label{eq:ealpha0} E_\kappa^{(0)}(\mathbf R) = E_\kappa^{el}+\frac{1}{2}g_jm_j\sqrt{B^2+G^2(X^2+Y^2)} \end{equation} represents the coupling of a point-like particle to the magnetic field via its magnetic moment $\boldsymbol{\mu}\propto\mathbf J=\mathbf L+\mathbf S$; $\kappa$ represents the electronic state under investigation, i.e., $|\kappa\rangle=|njm_jls\rangle$, $g_j=\frac{3}{2}+\frac{s(s+1)-l(l+1)}{2j(j+1)}$ its Land\'e g-factor, and $E_\kappa^{el}$ the field-free atomic energy levels. $E_\kappa^{(0)}(\mathbf R)$ is rotationally symmetric around the $Z$-axis and confining for $m_j>0$. For small radii ($\rho=\sqrt{X^2+Y^2}\ll B/G$) an expansion up to second order yields a harmonic potential \begin{equation} E_\kappa^{(0)}(\rho)\approx E_\kappa^{el}+\frac{1}{2}g_jm_jB+ \frac{1}{2}M\omega^2\rho^2 \end{equation} with the trap frequency defined by $\omega=G\sqrt{\frac{g_jm_j}{2MB}}$, while we find a linear behavior $E_\kappa^{(0)}(\rho)\approx E_\kappa^{el}+\frac{1}{2}g_jm_jG\rho$ when the center of mass is far from the $Z$-axis ($\rho\gg B/G$). The second order contribution $E_\kappa^{(2)}(\mathbf R)$ stems from the composite nature of the Rydberg atom, i.e., the fact that it consists of an outer Rydberg electron far apart from the ionic core.
It reads \begin{equation}\label{eq:ealpha2} E_\kappa^{(2)}(\mathbf R)=C G^2X^2Y^2\,, \end{equation} where the coefficient $C$ depends on the electronic state $\kappa$ under investigation. Since $C$ is generally negative \cite{Mayle2009}, a de-confining behavior of the energy surface is found for large center of mass coordinates close to the diagonal ($X=Y$). For a detailed derivation and discussion of the Rydberg trapping potentials (\ref{eq:dressealpha}-\ref{eq:ealpha2}) we refer the reader to \cite{Mayle2009}. \subsection*{Trapping Potentials of Ground State Atoms} When considering the trapping of ground state atoms, the coupling mechanism relies on the point-like interaction of the atomic magnetic moment $\boldsymbol{\mu}$ with the external field. Since the hyperfine interaction easily overcomes the Zeeman splitting in the regime of magnetic field strengths we are interested in, we include the hyperfine interaction in our theoretical considerations and assume the atom to couple via its total angular momentum $\boldsymbol{\mu}\propto\mathbf F=\mathbf J+\mathbf I$ to the magnetic field ($\mathbf I$ being the nuclear spin). The ground state trapping potentials correspondingly read \begin{equation} E_\kappa(\mathbf R)=E_\kappa^{el}+\frac{1}{2}g_Fm_F|\mathbf{B(R)}|\,, \end{equation} where $E_\kappa^{el}$ includes the hyperfine as well as spin-orbit effects, and \begin{equation} g_F=g_j\frac{F(F+1)+j(j+1)-I(I+1)}{2F(F+1)}\,. \end{equation} Let us note that for Rydberg atoms the hyperfine interaction $H_\mathrm{hfs}=A\mathbf I\cdot\mathbf J$ only plays a minor role since the hyperfine constant $A$ scales as $n^{-3}$ \cite{li:052502}. For a wide range of field strengths it is therefore sufficient to treat the hyperfine interaction perturbatively, giving rise to a mere splitting of the Rydberg trapping potentials (\ref{eq:dressealpha}) according to $W_\mathrm{hfs}=Am_Im_j$ \cite{armstrong71}. Correspondingly, we continue to label the Rydberg states by their $j$, $m_j$, and $m_I$ quantum numbers rather than the $F$, $m_F$ ones. In particular, for characterizing the Rydberg trapping potentials the $j$, $m_j$ quantum numbers are sufficient. In our numerical calculations, on the other hand, we fully incorporate the hyperfine interaction $H_\mathrm{hfs}$ of the Rydberg state. Moreover, we also include the coupling of the magnetic moment of the ionic core, $\boldsymbol{\mu}_c\propto\mathbf I$, to the field. Finally, we remark that -- except for the electronic energy offset $E_\kappa^{el}$ -- the zeroth order Rydberg trapping potential $E_{nS_{1/2}, m_j=1/2}^{(0)}(\mathbf R)$ and the $5S_{1/2}$ ground state energy surface are identical for $F=m_F=2$. \section{Off-Resonant Coupling Scheme} \label{sec:scheme} In this section, we discuss the coupling scheme of the ground and Rydberg states that arises for a two-photon off-resonant laser excitation in the presence of the Ioffe-Pritchard trap. The off-resonant coupling results in a dressed ground state atom to which the Rydberg state is weakly admixed. In this manner, the ground state atom acquires properties that are specific to the Rydberg atom. In particular, the peculiar properties of the Rydberg trapping surfaces can be exploited for substantially manipulating the trapping potentials of ground state atoms.
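To give a feeling for the surfaces entering this dressing scheme, the confining zeroth-order potential (\ref{eq:ealpha0}) and the de-confining correction (\ref{eq:ealpha2}) can be evaluated with a few lines of code. All numbers below are illustrative placeholders in arbitrary units, not the physical parameters of the preceding analysis:

\begin{verbatim}
import numpy as np

B, G = 1.0, 0.5        # illustrative Ioffe field strength and gradient
gj, mj = 2.0, 0.5      # nS_{1/2}, m_j = 1/2: g_j m_j = 1
C = -0.1               # illustrative (negative) second-order coefficient

def E0(X, Y):
    # zeroth-order surface: point-like Zeeman coupling
    return 0.5 * gj * mj * np.sqrt(B**2 + G**2 * (X**2 + Y**2))

def E2(X, Y):
    # second-order correction: de-confining along the diagonal X = Y
    return C * G**2 * X**2 * Y**2

X, Y = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
E = E0(X, Y) + E2(X, Y)   # total surface, electronic offset omitted
\end{verbatim}

Along the diagonal the quartic correction eventually overcomes the linear confinement of the zeroth-order term, which reproduces the de-confining behavior described above.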
We investigate the excitation scheme that is frequently encountered in experiments \cite{heidemann:163601,reetz-lamour:253001}: laser 1, which is $\sigma^+$ polarized, drives the transition $s\rightarrow p$ detuned by $\Delta_1$, while a second, $\sigma^-$ polarized laser then couples to the Rydberg state $n\equiv nS_{1/2},m_j=1/2, m_I=3/2$, with $s$ denoting the ground state $5S_{1/2}, F=m_F=2$ and $p$ the intermediate state $5P_{3/2},F=m_F=3$. Both lasers propagate along the $\mathbf{e}_3$-axis in the laboratory frame of reference; the complete two-photon transition is supposed to be off-resonant by $\Delta_2$. A sketch of the whole scheme is provided in figure \ref{fig:dressscheme}(a). In a Ioffe-Pritchard trap, however, the quantization axis is spatially dependent and the polarization vectors of the two excitation lasers are only well defined as $\sigma^+$ and $\sigma^-$ at the trap center. As we are going to show in the following, in the rotated frame of reference, i.e., after applying the unitary transformation $U_r$, contributions of all polarizations emerge and the excitation scheme becomes more involved. \begin{figure} \centering \includegraphics[width=8.6cm]{./figure1.eps} \caption{(a) Idealized level scheme for an off-resonant two-photon coupling of the ground and Rydberg states of $^{87}$Rb. In a Ioffe-Pritchard trap, additional atomic levels and polarizations contribute away from the trap center, see text. Note that the hyperfine splittings of the Rydberg level are included in the calculation although not shown in this figure. (b) Atomic energy level scheme of the $5S_{1/2}$ and $5P_{3/2}$ states of $^{87}$Rb including the hyperfine splittings.} \label{fig:dressscheme} \end{figure} In the dipole approximation, the interaction of the atom with the laser fields is given by \begin{equation} H_\mathrm{AF}=-\sum_{i=1}^{2}\mathbf{d\cdot E}_i(t)=\sum_{i=1}^{2}\mathbf{r\cdot E}_i(t)\,, \label{eq:hl} \end{equation} where the sum runs over the two applied excitation lasers. The electric field vectors $\mathbf E_i(t)$ can be decomposed into their positive- and negative-rotating components $\mathbf E_i^{(+)}(t)$ and $\mathbf E_i^{(-)}(t)$ according to \begin{eqnarray} \mathbf E_i(t)&=&\frac{E_{i0}}{2}\big(\boldsymbol{\epsilon}_ie^{-i\omega_i t}+\boldsymbol{\epsilon}_i^*e^{i\omega_i t}\big)\\ &\equiv& \mathbf E_i^{(+)}(t)+\mathbf E_i^{(-)}(t)\,, \end{eqnarray} i.e., $\mathbf E_i^{(\pm)}(t)\propto e^{-i(\pm \omega_i)t}$. The electric field amplitude $E_{i0}$ is connected to the intensity $I_i$ of the $i$th laser via $E_{i0}=\sqrt{2 I_i/c\varepsilon_0}$. We distinguish three different polarization vectors $\boldsymbol{\epsilon}$ of the excitation lasers, namely $\boldsymbol{\epsilon}_\pm=(\mathbf e_1\pm i\mathbf e_2)/\sqrt{2}$ and $\boldsymbol{\epsilon}_0=\mathbf e_3$ for $\sigma^\pm$- and $\pi$-polarized light, respectively. In order to solve the time-dependent Schr\"odinger equation, the Hamiltonian for the atom in the Ioffe-Pritchard trap and the laser interaction must be expressed in the same frame of reference. Hence, the unitary transformations of the previous section must be applied to $H_\mathrm{AF}$ as well. The first one, $U=\exp\left[\frac{i}{2}(\mathbf{B}_c\times \mathbf{r}) \cdot \mathbf{R}\right]$, leaves the interaction Hamiltonian (\ref{eq:hl}) of the atom with the lasers unchanged.
The transformation $U_r=e^{-i \gamma J_x}e^{-i \beta J_y}$ into the rotated frame of reference, on the other hand, yields \begin{equation} U_r\mathbf rU_r^\dagger= \left(\begin{array}{c} x\cos\beta + y\sin\gamma\sin\beta - z\cos\gamma\sin\beta\\ y\cos\gamma + z\sin\gamma\\ x\sin\beta - y\sin\gamma\cos\beta + z\cos\gamma\cos\beta \end{array}\right)\,. \end{equation} That is, the $\sigma^+$ and $\sigma^-$ laser transitions that are depicted in figure \ref{fig:dressscheme}(a) become \begin{eqnarray} \boldsymbol{\epsilon}_\pm\cdot U_r\mathbf rU_r^\dagger&=&\frac{1}{\sqrt{2}}\big[x\cos\beta+y\sin\gamma\sin\beta\nonumber\\ &&\quad\,\,-z\cos\gamma\sin\beta \pm i(y\cos\gamma+z\sin\gamma)\big]\,.\label{eq:dressUeps} \end{eqnarray} Equation (\ref{eq:dressUeps}) can be rewritten in terms of the polarization vectors $\tilde{\boldsymbol{\epsilon}}_\pm$ and $\tilde{\boldsymbol{\epsilon}}_0$ defined in the rotated frame of reference. To this end, we rotate the polarization vector $\boldsymbol{\epsilon}$ and leave the position operator $\mathbf r$ unchanged: $\boldsymbol{\epsilon}\cdot U_r\mathbf rU_r^\dagger\rightarrow(\mathcal R\boldsymbol{\epsilon})\cdot\mathbf r$ with $\mathcal R$ denoting the rotation matrix associated with the transformation $U_r$. $\mathcal R\boldsymbol{\epsilon}$ can then be decomposed into the components $\tilde{\boldsymbol{\epsilon}}_\pm$ and $\tilde{\boldsymbol{\epsilon}}_0$, i.e., $\mathcal R\boldsymbol{\epsilon}=\sum_{i=\pm,0} c_i\tilde{\boldsymbol{\epsilon}}_i$ with $c_i=\tilde{\boldsymbol{\epsilon}}_i^*\cdot\mathcal R\boldsymbol{\epsilon}$. Employing \begin{equation} \mathcal R\boldsymbol{\epsilon}_\pm=\frac{1}{\sqrt{2}} \left(\begin{array}{c} \cos\beta\\ \sin\gamma\sin\beta\pm i\cos\gamma\\ -\cos\gamma\sin\beta\pm i\sin\gamma \end{array}\right) \end{equation} finally yields \begin{eqnarray} \boldsymbol{\epsilon}_+\cdot U_r\mathbf rU_r^\dagger&=& \bigg[\frac{1}{2}(\cos\gamma+\cos\beta-i\sin\gamma\sin\beta)\tilde{\boldsymbol{\epsilon}}_+\nonumber\\ &&\,\,-\frac{1}{2}(\cos\gamma-\cos\beta-i\sin\gamma\sin\beta)\tilde{\boldsymbol{\epsilon}}_-\nonumber\\ &&\,\,-\frac{1}{\sqrt{2}}(\cos\gamma\sin\beta-i\sin\gamma)\tilde{\boldsymbol{\epsilon}}_0\bigg]\cdot\mathbf r\,,\label{eq:dress_epsplus}\\ \boldsymbol{\epsilon}_-\cdot U_r\mathbf rU_r^\dagger&=&(\boldsymbol{\epsilon}_+\cdot U_r\mathbf rU_r^\dagger)^*.\label{eq:dress_epsminus} \end{eqnarray} Thus, in the rotated frame of reference contributions of all polarizations emerge away from the trap center. In particular, the $5S_{1/2},F=m_F=2$ ground state can also couple to $m_F<3$ magnetic sublevels of the $5P_{3/2}$ intermediate state. Moreover, two-photon couplings between the $5S_{1/2},F=m_F=2$ and $5S_{1/2},F=2, m_F<2$ levels via the hyperfine levels of the $5P_{3/2}$ intermediate state emerge if the first excitation laser acquires a significant contribution of $\sigma^-$ or $\pi$ polarization in the rotated frame of reference. On the Rydberg side, the $m_j=-1/2$ states become accessible as well. As a result, the simple three-level excitation scheme $s \leftrightarrow p \leftrightarrow n$ is in general not sufficient and all relevant hyperfine levels must be included in the theoretical treatment. In detail, these are the $F=1$ and $F=2$ hyperfine levels of the $5S_{1/2}$ ground state and of the $nS_{1/2}$ Rydberg state. For the intermediate $5P_{3/2}$ state we have $F\in\{0,1,2,3\}$. Of course, for each $F$ there are in addition $2F+1$ magnetic sublevels with $|m_F|\le F$.
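The decomposition (\ref{eq:dress_epsplus}) is readily verified numerically. The following minimal Python sketch (the angles $\beta$ and $\gamma$ are arbitrary illustrative values) projects $\mathcal R\boldsymbol{\epsilon}_+$ onto the rotated spherical basis and confirms both the analytical coefficients and the completeness relation $\sum_i|c_i|^2=1$:
\begin{verbatim}
import numpy as np

beta, gamma = 0.3, -0.2                  # illustrative rotation angles
e1, e2, e3 = np.eye(3)
basis = [(e1 + 1j*e2)/np.sqrt(2),        # eps~_+
         (e1 - 1j*e2)/np.sqrt(2),        # eps~_-
         e3]                             # eps~_0

# R eps_+ as given above
Reps = np.array([np.cos(beta),
                 np.sin(gamma)*np.sin(beta) + 1j*np.cos(gamma),
                 -np.cos(gamma)*np.sin(beta) + 1j*np.sin(gamma)])/np.sqrt(2)

c = [np.vdot(v, Reps) for v in basis]    # c_i = eps~_i^* . (R eps_+)

# coefficients appearing in the equation above
cp = 0.5*(np.cos(gamma) + np.cos(beta) - 1j*np.sin(gamma)*np.sin(beta))
cm = -0.5*(np.cos(gamma) - np.cos(beta) - 1j*np.sin(gamma)*np.sin(beta))
c0 = -(np.cos(gamma)*np.sin(beta) - 1j*np.sin(gamma))/np.sqrt(2)
assert np.allclose(c, [cp, cm, c0])
assert np.isclose(sum(abs(x)**2 for x in c), 1.0)   # completeness
\end{verbatim}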
Note that the intermediate $5P_{3/2},F<3$ states are split considerably below the $5P_{3/2},F=3$ levels because of the hyperfine interaction, see figure \ref{fig:dressscheme}(b). An even stronger splitting is encountered for the $F=1$ and $F=2$ hyperfine levels of the $5S_{1/2}$ electronic state. Nevertheless, all these states are taken into account in our numerical calculations, yielding in total 32 states. Other electronic states are far off-resonant and thus do not contribute to the excitation dynamics. The resulting multi-level excitation scheme is solved by employing the rotating wave approximation while adiabatically eliminating the intermediate states under a strong off-resonance condition. This procedure results in an effective coupling matrix for the ground and Rydberg states whose diagonalization yields a dressed electronic potential energy surface for the center of mass motion of the ground state atom. In the next section, we derive the coupling matrix for the illustrative example of a simplified three-level system. The generalization to the full level scheme is straightforward, although laborious. \section{Simplified Three-Level Scheme} \label{sec:elimination} In this section, we restrict ourselves to the three-level system $s\leftrightarrow p\leftrightarrow n$, i.e., we include from the transformed dipole interaction (\ref{eq:dress_epsplus}-\ref{eq:dress_epsminus}) only the $\sigma^+$ part for the first laser and the $\sigma^-$ part for the second. Such a simplification allows us to derive analytical solutions of the time-dependent Schr\"odinger equation and therefore constitutes a particularly illustrative example. It is expected to be valid for large Ioffe fields $B$ and/or small gradients $G$, when the quantization axis only shows a weak spatial dependence and the $nS_{1/2},m_j=1/2,m_I=3/2$ Rydberg state is predominantly addressed via the $5P_{3/2},F=m_F=3$ intermediate state. For higher gradients, the polarization vector significantly changes its character throughout the excitation area such that the contributions of other states cannot be neglected anymore. The three-level system can be further simplified by adiabatically eliminating the intermediate state $p$ under a strong off-resonance condition, i.e., assuming $|\Delta_1|\gg\omega_{ps}$ and $|\Delta_1-\Delta_2|\gg\omega_{np}$ with $\omega_{ps}$ and $\omega_{np}$ being the single-photon Rabi frequencies of the first and second laser transition, respectively: \begin{eqnarray} \omega_{ps}&=&\frac{1}{2}(\cos\gamma+\cos\beta-i\sin\gamma\sin\beta)\cdot\omega_{ps}^{(0)}=\omega_{sp}^*\label{eq:rabi1}\,,\\ \omega_{np}&=&\frac{1}{2}(\cos\gamma+\cos\beta+i\sin\gamma\sin\beta)\cdot\omega_{np}^{(0)}=\omega_{pn}^*\,.\label{eq:rabi2} \end{eqnarray} Here, $\omega_{ps}^{(0)}=E_{1,0}\langle p|\tilde{\boldsymbol{\epsilon}}_+\cdot\mathbf r|s\rangle$ and $\omega_{np}^{(0)}=E_{2,0}\langle n|\tilde{\boldsymbol{\epsilon}}_-\cdot\mathbf r|p\rangle$ denote the single-photon Rabi frequencies at the trap center. We remark that in the regime of strong Ioffe fields, where the simplified three-level scheme is valid, the spatial dependencies of (\ref{eq:rabi1}-\ref{eq:rabi2}) are largely negligible. Hence, the single-photon Rabi frequencies are to a good approximation given by their values $\omega_{ps}^{(0)}$ and $\omega_{np}^{(0)}$ at the origin.
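This statement can be made quantitative with a few lines of code. The sketch below assumes the lowest-order form of the Ioffe-Pritchard field, $\mathbf B(\mathbf R)=G(X\mathbf e_1-Y\mathbf e_2)+B\mathbf e_3$, from which the rotation angles follow via the local field direction $\mathbf B/|\mathbf B|=(\sin\beta,-\sin\gamma\cos\beta,\cos\gamma\cos\beta)$, as can be read off the transformation $U_r\mathbf rU_r^\dagger$; the position of $5\,\mu$m along the diagonal is purely illustrative:
\begin{verbatim}
import numpy as np

# |omega_ps / omega_ps^(0)| of the expression above, for the
# lowest-order Ioffe-Pritchard field B(R) = G(X e1 - Y e2) + B e3.
def prefactor(X, Y, B, G):
    n = np.array([G*X, -G*Y, B])
    n = n/np.linalg.norm(n)
    sb = n[0]; cb = np.sqrt(1.0 - sb**2)   # sin(beta), cos(beta)
    sg = -n[1]/cb; cg = n[2]/cb            # sin(gamma), cos(gamma)
    return 0.5*(cg + cb - 1j*sg*sb)

G = 2.5                      # gradient in T/m
r = 5e-6                     # 5 micrometers along the diagonal X = Y
for B in (25e-4, 1e-4):      # Ioffe fields of 25 G and 1 G, in Tesla
    print(B, abs(prefactor(r, r, B, G)))
\end{verbatim}
In the Ioffe-dominated case ($B=25\,$G) the prefactor stays within $\sim10^{-5}$ of unity, whereas for the weak Ioffe field ($B=1\,$G) the deviation grows to the percent level, in line with the discussion above.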
Employing in addition the rotating wave approximation, quasidegenerate van Vleck perturbation theory \cite{shavitt:5711} provides us with an effective two-level Hamiltonian for the ground state $s$ and the Rydberg state $n$: \begin{equation} \mathcal H_{2l}= \left(\begin{array}{cc} \Delta_2+\tilde{E}_n+E_\mathrm{hfs}+V_n& \Omega/2\\ \Omega^*/2&\tilde{E}_s+V_s \end{array}\right). \label{eq:h2l} \end{equation} Here, $E_\mathrm{hfs}$ includes the energy shift due to the hyperfine splitting of the Rydberg state as well as the Zeeman shift of the nuclear spin. For a detailed derivation of Hamiltonian (\ref{eq:h2l}) we refer the reader to the appendix of this work. $\tilde E_n\equiv \frac{1}{2}|\mathbf{B(R)}|+C\cdot G^2X^2Y^2$, $\tilde E_s\equiv \frac{1}{2}|\mathbf{B(R)}|$, and $\tilde E_p\equiv |\mathbf{B(R)}|$ are the trapping potentials of the individual energy levels. Note that the Rydberg state $n$ experiences the same potential energy surface as the ground state $s$, plus the perturbation $E_{nS_{1/2}}^{(2)}(\mathbf R)$ due to its non-pointlike character. The laser detunings are defined by $\Delta_1=E_p^{el}-E_s^{el}-\omega_1$ and $\Delta_2=E_n^{el}-E_s^{el}-\omega_1-\omega_2$. The effective interaction between the ground- and Rydberg state is given by the two-photon Rabi frequency \begin{equation} \Omega=\frac{\omega_{ps}\omega_{np}}{4} \left[\frac{1}{\tilde{E}_s-\tilde{E}_p-\Delta_1}+\frac{1}{\tilde{E}_n-\tilde{E}_p+\Delta_2-\Delta_1+E_\mathrm{hfs}}\right]\,. \label{eq:omega} \end{equation} On the diagonal of Hamiltonian (\ref{eq:h2l}) we find the contributions \begin{eqnarray} V_n&=&-\frac{1}{4}\frac{|\omega_{np}|^2}{\tilde{E}_p-\tilde{E}_n+\Delta_1-\Delta_2-E_\mathrm{hfs}}\,, \label{eq:vn}\\ V_s&=&-\frac{1}{4}\frac{|\omega_{ps}|^2}{\tilde{E}_p-\tilde{E}_s+\Delta_1}\,, \label{eq:vs} \end{eqnarray} which are the light shifts of the Rydberg and ground state, respectively. In the limit $\Delta_1\gg\Delta_2$ and neglecting the energy surfaces $\tilde{E}_i$ -- which corresponds to evaluating at the trap center -- one recovers $\Omega=-\omega_{ps}\omega_{np}/2\Delta_1$, $V_n=-|\omega_{np}|^2/4\Delta_1$, and $V_s=-|\omega_{ps}|^2/4\Delta_1$. The diagonalization of Hamiltonian (\ref{eq:h2l}) yields the \emph{dressed} Rydberg ($+$) and ground state ($-$) energy surfaces, \begin{eqnarray} E_\pm(\mathbf R)&=&\frac{1}{2}\bigg[\tilde{E}_s+V_s+\tilde{E}_n+V_n+\Delta_2+E_\mathrm{hfs}\nonumber\\ &&\quad\pm\sqrt{(\tilde{E}_n+V_n-\tilde{E}_s-V_s+\Delta_2+E_\mathrm{hfs})^2+\Omega^2}\bigg],\label{eq:dressed} \end{eqnarray} which serve as trapping potentials for the external motion. Here, we are mainly interested in the dressed potential for the ground state. For large detunings $\Delta_2\gg\Omega$ one can approximate \begin{equation}\label{eq:dressed2} E_-(\mathbf R)\approx\tilde{E}_s+V_s-\frac{\Omega^2}{4\Delta_2}+\frac{\Omega^2}{4\Delta_2^2}(\tilde{E}_n+V_n-\tilde{E}_s-V_s)\,, \end{equation} i.e., the contribution of the Rydberg surface $\tilde E_n$ to the dressed ground state trapping potential $E_-(\mathbf R)$ is suppressed by the factor $(\Omega/\Delta_2)^2$. Note that any spatial variation in the light shift $V_s$ and in the Rabi frequency $\Omega$ will effectively alter the trapping potential experienced by the dressed ground state atom. \section{Dressed Ground State Trapping Potentials} \label{sec:dressedsurf} In this section, we investigate the dressed ground state trapping potentials arising from the two-photon coupling described in Section \ref{sec:scheme}.
Since the actual shape of these energy surfaces is determined by the interplay of the various parameters belonging to the field configuration ($B$ and $G$) as well as to the laser couplings ($\omega_{ps}^{(0)}$, $\omega_{np}^{(0)}$, $\Delta_1$, and $\Delta_2$), there is a plethora of possible configurations. Nevertheless, one can distinguish essentially two relevant regimes based on the magnetic field parameters. First, there is the regime where the ground state trapping potential is substantially influenced by the admixture of the Rydberg surface. This regime is usually encountered for a Ioffe-dominated magnetic field configuration combined with a relatively strong laser coupling. In contrast, the second regime is obtained for strong gradient fields. In this case, the resulting spatially inhomogeneous light shift determines the characteristics of the ground state trapping potential and the contribution of the Rydberg surface is of minor importance. Exemplary dressed energy surfaces belonging to both regimes are discussed in the following. We stress that for determining the dressed trapping potentials the full 32-level scheme is solved. Comparisons with the analytically obtained result (\ref{eq:dressed}) are provided. Concerning the choice of the Rydberg state $n$, a principal quantum number of $n=40$ is considered throughout this section. \subsection{Dressed Trapping Potentials of the $m_F=2$ State} Let us start by investigating the dressed potential arising for the $5S_{1/2},\,m_F=2$ state of the rubidium atom. As mentioned before, in zeroth order this state gives rise to the same trapping potential as the $nS_{1/2}$ Rydberg state. Hence, when going from the non-dressed to the dressed potential, any changes that arise can be mapped directly to either the higher order properties of the Rydberg trapping potential or the influence of a spatially dependent light shift. In Figures \ref{fig:dressedmf2}(a)-(b) the trapping potential of the dressed ground state atom is illustrated for the configuration $B=25\,$G, $G=2.5\,\mathrm{Tm}^{-1}$, $\omega_{ps}^{(0)}=2\pi\times 100\,$MHz, $\omega_{np}^{(0)}=2\pi\times 130\,$MHz, $\Delta_1=-2\pi\times 40\,$GHz, and $\Delta_2=-2\pi\times 1.5\,$MHz. In this strongly Ioffe-field-dominated case, the contribution $E_{40S}^{(2)}(\mathbf R)$ to the Rydberg trapping potential $E_{40S}(\mathbf R)$ is very strong, cf.\ (\ref{eq:dressealpha}). As a result, the Rydberg potential energy surface is extremely shallow and does not confine even a single center of mass state \cite{Mayle2009}. According to (\ref{eq:dressed2}), this strong deviation from the harmonic confinement of the ground state, $E_{5S}(\mathbf R)\simeq \frac{1}{2}M\omega^2\rho^2$ (up to a constant offset), is consequently mirrored in the dressed ground state potential: Along the diagonal ($X=Y$), where the effect of $E_{40S}^{(2)}(\mathbf R)$ is most pronounced, the trapping potential is gradually lowered compared to the harmonic confinement of the non-dressed ground state, cf.\ figure \ref{fig:dressedmf2}(b). Along the axes ($X=0$ or $Y=0$), on the other hand, $E_\kappa^{(2)}(\mathbf R)$ vanishes and the non-dressed Rydberg and ground state energy surfaces coincide. As a consequence, the continuous azimuthal symmetry of the two-dimensional ground state trapping potential is reduced to a four-fold one, see figure \ref{fig:dressedmf2}(a).
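The qualitative behavior just described can be reproduced directly from the analytical two-level result (\ref{eq:dressed}). The following sketch uses dimensionless, purely illustrative parameters -- the light shifts $V_s$, $V_n$ and the hyperfine contribution $E_\mathrm{hfs}$ are omitted for clarity, and the values do not correspond to the experimental configuration quoted above:
\begin{verbatim}
import numpy as np

# Dressed ground state surface E_- of the two-level result on a grid;
# all parameters are dimensionless illustrative values.
Omega, Delta2, C = 0.2, -1.0, -0.5        # coupling, detuning, C < 0
B0, G = 1.0, 1.0
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x)

Babs = np.sqrt(B0**2 + G**2*(X**2 + Y**2))   # |B(R)| to lowest order
E_s = 0.5*Babs                                # ground state surface
E_n = 0.5*Babs + C*G**2*X**2*Y**2             # Rydberg surface incl. E^(2)

d = E_n - E_s + Delta2
E_minus = 0.5*(E_s + E_n + Delta2 - np.sqrt(d**2 + Omega**2))
# Along the axes (X*Y = 0) the confinement of E_s survives, while
# along the diagonal the C G^2 X^2 Y^2 term lowers the surface,
# reducing the azimuthal symmetry to a four-fold one.
\end{verbatim}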
\begin{figure} \centering \includegraphics[width=8.6cm]{./figure2.eps} \caption{(a) Contour plot of the dressed ground state trapping potential for $B=25\,$G, $G=2.5\,\mathrm{Tm}^{-1}$, $\omega_{ps}^{(0)}=2\pi\times 100\,$MHz, $\omega_{np}^{(0)}=2\pi\times 130\,$MHz, $\Delta_1=-2\pi\times 40\,$GHz, and $\Delta_2=-2\pi\times 1.5\,$MHz. (b) Cut along the diagonal $X=Y$ of the same surface (solid line); the short-dashed line, which lies on top of the solid black curve, corresponds to the analytical solution (\ref{eq:dressed}) of the simplified three-level system. For comparison, the cut along the axis $X=0$ -- where $E_{40S}^{(2)}(\mathbf R)$ does not contribute -- is also illustrated; it corresponds to the trapping potential $E_{5S}(\mathbf R)$ of the ground state (dashed line). (c) and (d) Same as in subfigures (a) and (b), respectively, but for $B=1\,$G. In this case, all three curves coincide on the scale of figure (d). The energy scale of all subfigures is given by the ground state trap frequency $\omega=\sqrt{G^2/MB}$.} \label{fig:dressedmf2} \end{figure} In addition to the full numerical solution, figure \ref{fig:dressedmf2}(b) also illustrates the result of the simplified three-level scheme according to (\ref{eq:dressed}) (short-dashed line). Since in the Ioffe-field-dominated regime the spatial variation of the quantization axis is minor, (\ref{eq:dressed}) agrees very well with the solution of the full 32-level problem (solid line). This allows us to recapitulate the observations made above on the grounds of the analytical expressions available within the reduced level scheme. To this end, let us first consider the single-photon Rabi frequencies as given by (\ref{eq:rabi1}-\ref{eq:rabi2}). Because of the strong Ioffe field, they experience only a weak spatial dependence and are therefore essentially given by their values $\omega_{ps}^{(0)}$ and $\omega_{np}^{(0)}$ at the origin. As a consequence, the light shifts $V_n$ and $V_s$ [cf.\ (\ref{eq:vn}-\ref{eq:vs})] as well as the effective two-photon Rabi frequency $\Omega$ [cf.\ (\ref{eq:omega})] can also be approximated by their values at the origin. Hence, these quantities do not contribute to the particular shape of the dressed ground state energy surface. Omitting in this manner all contributions from (\ref{eq:dressed2}) that merely yield a constant energy offset, one arrives at \begin{equation}\label{eq:eminus} E_-(\mathbf R)=E_{5S}(\mathbf R)+\frac{\Omega^2}{4\Delta_2^2}E_{40S}^{(2)} (\mathbf R)+\mathrm{const}. \end{equation} That is, the deviation of the dressed ground state surface from its non-dressed counterpart is given by \begin{equation} E_-(\mathbf R)-E_{5S}(\mathbf R)=\frac{\Omega^2}{4\Delta_2^2}E_{40S}^{(2)}(\mathbf R)\,. \end{equation} This complies with the observations made before: $E_\kappa^{(2)}(\mathbf{R})$ possesses the envisaged discrete azimuthal symmetry (${\mathbf C}_{4v}$) and contributes mostly close to the diagonals of the two-dimensional trapping surface while vanishing on the axes. Moreover, $E_{40S}^{(2)}(\mathbf R)<0$, in agreement with the lowering of the energy surface. Hence, the regime of strong Ioffe fields in combination with a strong laser coupling allows us to map the specific features of the Rydberg trapping potential onto the ground state. In Figures \ref{fig:dressedmf2}(c)-(d) the same dressed trapping potential as before is illustrated, but now for $B=1\,$G. The reduction of the Ioffe field strength has essentially two effects.
First of all, considering the same spatial range as before, the variation of the quantization axis is stronger. Consequently, the simplified three-level approach starts to slightly deviate from the exact solution, as can be observed in figure \ref{fig:dressedmf2}(d). Secondly, decreasing the Ioffe field influences the dressed potential by altering the Rydberg surface. For $B=1\,$G, $G=2.5\,\mathrm{Tm}^{-1}$, the Rydberg trapping potential is not quite as shallow as for $B=25\,$G, $G=2.5\,\mathrm{Tm}^{-1}$ and now supports a few confined center of mass states \cite{Mayle2009}. Consequently, the deviation between the Rydberg and the ground state surface is not as strong as in the previous case, resulting in a reduced lowering of the energy surface along the diagonal. Considering the two-dimensional trapping potential, the azimuthal symmetry is thus nearly recovered. In view of (\ref{eq:eminus}) this can be understood as follows. While the contribution $E_{40S}^{(2)}(\mathbf R)$ is identical for both cases (it depends only on the magnetic field gradient $G$ rather than on the Ioffe field strength $B$), the spatial dependence of $E_{5S}(\mathbf R)$ is stronger in the case of the weaker Ioffe field. Hence, for a decreasing Ioffe field the importance of $E_{40S}^{(2)}(\mathbf R)$ is diminished and the original behavior of the ground state trapping potential $E_{5S}(\mathbf R)$ is more and more recovered. Note that this does \emph{not} imply a smaller contribution of the Rydberg level to the dressed state. This regime is thus particularly useful if any change of the trapping surface due to the Rydberg dressing is undesirable but the admixture of Rydberg character is still wanted. Regarding the magnetic field parameters, the previous example represents the intermediate regime between the Ioffe-dominated and the gradient-dominated case; we investigate the latter in the following. To achieve a strong gradient Ioffe-Pritchard configuration, we further reduce the Ioffe field to $B=0.25\,$G and leave the magnetic field gradient $G=2.5\,\mathrm{Tm}^{-1}$ unchanged. An important aspect of the strong gradient regime is the contribution of $E_{40S}^{(2)}(\mathbf R)$ to the dressed ground state energy surface. Already in the case of figures \ref{fig:dressedmf2}(b)-(d) it was indicated that the influence of $E_{40S}^{(2)}(\mathbf R)$ is diminished if the gradient field becomes more important. Indeed, for the present field parameters the deviation of the Rydberg trapping potential from the ground state potential is minor and many center of mass states can be confined. Thus, in the spatial domain we are considering, the continuous azimuthal symmetry of the ground state trapping potential is conserved and we present in figure \ref{fig:dressedmf2_2} only cuts along the diagonal of the dressed ground state energy surface. The parameters of the lasers are $\omega_{ps}^{(0)}=2\pi\times750\,$MHz, $\omega_{np}^{(0)}=2\pi\times 100\,$MHz, $\Delta_1=-2\pi\times 75\,$GHz, and $\Delta_2=-2\pi\times 5\,$MHz. The dashed line in figure \ref{fig:dressedmf2_2}(a) represents the non-dressed ground state trapping potential, $E_{5S}(\mathbf R)$. As one can observe, the two-photon dressing (solid line) substantially alters this surface by significantly reducing the trap frequency, namely, from $2\pi\times638\,$Hz to $2\pi\times381\,$Hz.
Although the simplified three-level system derived in Section \ref{sec:elimination} is not able to reproduce this result quantitatively, it nevertheless provides us with a qualitative understanding of the underlying physics, as we shall demonstrate in the following. In the case of a strong gradient field, the light shift $V_s$ experienced by the ground state atom, (\ref{eq:vs}), shows a strong spatial dependence and therefore cannot be omitted in (\ref{eq:dressed2}). Specifically, it can be approximated for small center of mass coordinates by \begin{equation} V_s\approx V_s^{(0)}\cdot\Big(1-\frac{1}{2}\frac{G^2\rho^2}{B^2}\Big)\,, \end{equation} where $V_s^{(0)}=-\frac{1}{4}\frac{|\omega_{ps}^{(0)}|^2}{\tilde E_p-\tilde E_s+\Delta_1}$ denotes the light shift at the origin. Except for constant contributions, the dressed ground state surface then reads \begin{eqnarray}\label{eq:dressedintermediate} E_-(\mathbf R)&\propto& E_{5S}(\mathbf R)-\frac{1}{2}\frac{G^2\rho^2}{B^2}V_s^{(0)}\\ &=&\frac{1}{2}M(\omega^2-\frac{G^2}{M B^2}V_s^{(0)})\rho^2, \end{eqnarray} i.e., one encounters a reduced trap frequency $\tilde\omega^2\equiv \omega^2-\frac{G^2}{M B^2}V_s^{(0)}$. Note that the azimuthal symmetry of $E_-(\mathbf R)$ is conserved; hence, the dressed trapping potentials encountered in this regime are qualitatively different from those of figures \ref{fig:dressedmf2}(a)-(b). We stress that (\ref{eq:dressedintermediate}) only serves our qualitative understanding of the underlying physics. In the given regime, it fails to quantitatively reproduce the dressed potentials. The actual spatial dependence of the light shift is illustrated as the short-dashed line in figure \ref{fig:dressedmf2_2}(b). It has been calculated by solving the full 32-level system but without the contribution of the Ioffe-Pritchard trapping potentials. The combination with the confinement $E_{5S}(\mathbf R)$ (dashed line) finally yields the surface of reduced trap frequency (solid line). The short-dashed line in figure \ref{fig:dressedmf2_2}(a) represents the trapping potential for $\omega_{np}^{(0)}=0$, i.e., in the absence of the second laser that couples to the Rydberg state. Remarkably, turning off the second laser hardly changes the dressed potential. Hence, for the given example it is the interplay between the spatially varying quantization axis of the Ioffe-Pritchard field and the fixed polarization of the first laser that determines the spatially dependent light shift. As in the case of figures \ref{fig:dressedmf2}(c)-(d), this does not mean that the Rydberg state does not contribute to the dressed state. Hence, in the strong gradient regime we have two means to manipulate a ground state atom: with the first laser, one can alter the trapping potential experienced by the dressed atom, while with the second laser one can additionally admix some Rydberg character to the atomic wave function. \begin{figure} \centering \includegraphics[width=8.6cm]{./figure3.eps} \caption{(a) Cut along the diagonal of the dressed ground state trapping potential for $B=0.25\,$G, $G=2.5\,\mathrm{Tm}^{-1}$, $\omega_{ps}^{(0)}=2\pi\times750\,$MHz, $\omega_{np}^{(0)}=2\pi\times 100\,$MHz, $\Delta_1=-2\pi\times 75\,$GHz, and $\Delta_2=-2\pi\times 5\,$MHz. Note that in this strong gradient regime the trapping potential shows a continuous azimuthal symmetry, hence the corresponding contour plot is not provided.
The trap frequency of the dressed surface (solid line) is greatly reduced compared to the trapping potential $E_{5S}(\mathbf R)$ of the ground state (dashed line). Turning off the second laser, i.e., setting $\omega_{np}^{(0)}=0$, hardly changes the shape of the potential surface (short-dashed line, on top of the black solid curve). (b) Spatially dependent light shift (short-dashed line) that in combination with the energy surface of the ground state (dashed line) leads to the trapping potential presented in subfigure (a) (solid line). The energy scale of all subfigures is given by the ground state trap frequency $\omega=\sqrt{G^2/MB}$.} \label{fig:dressedmf2_2} \end{figure} The configuration leading to figure \ref{fig:dressedmf2_2} has one drawback: Since the influence of the spatially dependent light shift on the trapping potential strongly depends on the coupling strength to the intermediate state, the effective lifetime of the dressed state is restricted. The issue of the finite lifetime is discussed in more detail in section \ref{sec:exissues}. For now, let us remark that for the particular case of figure \ref{fig:dressedmf2_2} a lifetime of $\approx 1$\,ms can be achieved. Hence, the proposed scheme is suitable for scenarios where a short-term manipulation of the trapping potential is needed, e.g., for the modulation of the trap frequency on short timescales. \subsection{Dressed Trapping Potentials of the $m_F=0$ State} In the discussion above, we focused on the dressed ground state arising from the $m_F=2$ magnetic sublevel of the $5S_{1/2},F=2$ electronic state. Since ultracold samples of ground state atoms can nowadays be routinely prepared and magnetically trapped in this state, this is a sensible choice. Nevertheless, different magnetic sublevels also merit a closer look. As an example, we consider in the following dressed states of the $m_F=0$ state. Note that the latter is untrapped in a pure Ioffe-Pritchard trap, i.e., without the coupling lasers. Therefore, one can expect the influence of the specific features of the Rydberg trapping potential on the shape of the dressed surface to be much more pronounced than in the case of the $m_F=2$ dressed state. Both examples presented in the following belong to the strong gradient regime where the simplified three-level scheme is not valid anymore and the full 32-level system must be considered. In Figures \ref{fig:dressedmf0}(a)-(b) the trapping potential of the dressed $m_F=0$ ground state atom is illustrated for the configuration $B=1\,$G, $G=10\,\mathrm{Tm}^{-1}$, $\omega_{ps}^{(0)}=2\pi\times 100\,$MHz, $\omega_{np}^{(0)}=2\pi\times 35\,$MHz, $\Delta_1=-2\pi\times 14\,$GHz, and $\Delta_2=-2\pi\times 10\,$MHz. The first thing to note is that -- in contrast to the non-dressed $m_F=0$ state -- the atom experiences a confining potential, which is due to the spatially dependent light shift of the off-resonant laser coupling. Moreover, Figure \ref{fig:dressedmf0}(a) reveals the four-fold symmetry known from the Rydberg trapping potential $E_{40S}(\mathbf R)$. Because the admixed Rydberg surface does not have to compete against a strong magnetic confinement of the ground state according to $\boldsymbol{\mu}_F\cdot\mathbf{B(R)}$, the anti-trapping effect of $E_{40S}^{(2)}(\mathbf R)$ becomes particularly visible in the dressed potential of the $m_F=0$ state. Figure \ref{fig:dressedmf0}(b) once more shows the cut along the diagonal of the dressed potential (solid line).
As expected, the admixture of the Rydberg surface eventually changes the character of the trapping potential from confining to de-confining when going to larger center of mass coordinates. However, for very large coordinates a weak confining behavior is recovered, which can be explained as follows. For such large center of mass coordinates, the contribution $E_{40S}^{(2)}(\mathbf R)$ shifts the Rydberg state far off resonance and thereby diminishes the contribution of the Rydberg level to the dressed state. The slightly confining character of the dressed potential in this regime is a remnant of the spatially dependent light shift. Note that the azimuthally symmetric dressed potential arising in the absence of the second laser (short-dashed line) coincides very well with the two-photon dressed potential along the axes (dashed line). Hence, the first laser can be used to trap and prepare the atoms in the $m_F=0$ ground state. By switching on the second laser, the Rydberg state gets admixed, resulting in the significant change of the trapping potential described above in the vicinity of the diagonals ($X=Y$). Overall, the influence of the Rydberg surface is much more pronounced than in the case of the $m_F=2$ dressed states, cf.\ figure \ref{fig:dressedmf2}. \begin{figure} \centering \includegraphics[width=8.6cm]{./figure4.eps} \caption{(a) Contour plot of the dressed ground state trapping potential for the $5S_{1/2},m_F=0$ state. The parameters are $B=1\,$G, $G=10\,\mathrm{Tm}^{-1}$, $\omega_{ps}^{(0)}=2\pi\times 100\,$MHz, $\omega_{np}^{(0)}=2\pi\times 35\,$MHz, $\Delta_1=-2\pi\times 14\,$GHz, and $\Delta_2=-2\pi\times 10\,$MHz. (b) Cut along the diagonal $X=Y$ (solid line) and along the axis with $X=0$ (dashed line) of the same surface; the short-dashed line corresponds to the single-photon dressing, i.e., $\omega_{np}^{(0)}=0$. For comparison, all curves are offset to zero at the origin. (c) and (d) Same as in subfigures (a) and (b), respectively, but for $B=0.1\,$G, $G=10\,\mathrm{Tm}^{-1}$, $\omega_{ps}^{(0)}=2\pi\times 150\,$MHz, $\omega_{np}^{(0)}=2\pi\times 50\,$MHz, $\Delta_1=-2\pi\times 15\,$GHz, and $\Delta_2=-2\pi\times 10\,$MHz. In subfigure (d) we refrained from offsetting all curves to zero at the origin but rather applied a common offset such that the joint asymptote of the solid and short-dashed lines becomes evident. Note that the detunings are defined in the same way as for the $5S_{1/2},m_F=2$ case. The energy scale in all subfigures is given in terms of the trap frequency. The latter has been obtained by a harmonic fit around the origin, yielding $\omega=2\pi\times 25\,$Hz and $\omega=2\pi\times 187\,$Hz for the first and second configuration, respectively.} \label{fig:dressedmf0} \end{figure} For comparison, we show in figures \ref{fig:dressedmf0}(c)-(d) the dressed trapping potentials of the $m_F=0$ state for a more dominant gradient field. The actual parameters are $B=0.1\,$G, $G=10\,\mathrm{Tm}^{-1}$, $\omega_{ps}^{(0)}=2\pi\times 150\,$MHz, $\omega_{np}^{(0)}=2\pi\times 50\,$MHz, $\Delta_1=-2\pi\times 15\,$GHz, and $\Delta_2=-2\pi\times 10\,$MHz. This configuration results in a much tighter confinement ($\omega=2\pi\times 187\,$Hz compared to $\omega=2\pi\times 25\,$Hz for the previous example) and a deeper trap along the diagonals.
On the other hand, the revival of the weak trapping character as previously observed in figure \ref{fig:dressedmf0}(b) for large center of mass coordinates along the diagonal is lost, since the light shift of the first laser has already reached a constant asymptotic behavior in this regime. As discussed for the previous example, the contribution $E_{40S}^{(2)}(\mathbf R)$ shifts the Rydberg state far off resonance and thereby diminishes the influence of the second laser on the excitation dynamics. Consequently, for large center of mass coordinates close to the diagonal the dressed surface approaches the asymptote of the single-photon dressing of the first laser [short-dashed line in figure \ref{fig:dressedmf0}(d)]. In contrast, along the axes (dashed line) the potential does not reach a constant asymptote but maintains a weak confining behavior. The latter is due to the admixture of predominantly $m_j=1/2$ Rydberg states: Since we assumed the two-photon transition to be blue-detuned, the magnetic field interaction $\propto m_j|\mathbf{B(R)}|$ pushes the $m_j=1/2$ Rydberg state closer to resonance and in the same manner repels its $m_j=-1/2$ counterpart. Hence, the dressed state shows a stronger admixture of trapped than anti-trapped Rydberg states, giving rise to a confining energy surface. We remark that not only the Rydberg state contributes to the dressed state. In fact, the first laser slightly mixes the $5S_{1/2},F=2,m_F=0$ state with $m_F\neq0$ states of the same hyperfine level. However, the $m_F<0$ states are admixed to the same degree as their $m_F>0$ counterparts such that their confining and de-confining characters cancel. We remark that, as in the case of figure \ref{fig:dressedmf2_2}, the effective lifetime of the dressed $m_F=0$ states is restricted to a few ms due to the coupling to the intermediate state. The latter is required for the confining light-shift potential of the otherwise untrapped $m_F=0$ state. \subsection{Experimental Issues} \label{sec:exissues} Let us finish by commenting on the experimental feasibility of the scheme discussed above. The proposed dressed states possess a finite lifetime due to the spontaneous decay of the Rydberg state. The associated effective lifetime can be estimated by \begin{equation}\label{eq:efflife} \tau=\frac{\tau_n}{|c_n|^{2}}\,, \end{equation} $c_n$ being the admixture coefficient of the Rydberg state; within the simplified three-level scheme it evaluates to $|c_{n,\mathrm{3l}}|^2=[\Omega/2\tilde\Delta_2]^{2}$ with $\tilde\Delta_2=\Delta_2+V_n-V_s+E_\mathrm{hfs}$. $\tau_n$ denotes the radiative lifetime of the $nS_{1/2}$ Rydberg atom and can be parameterized as $\tau_n=\tau'(n-\delta)^\gamma$, with $\delta$ denoting the quantum defect, where one finds $\tau'=1.43$ ns and $\gamma=2.94$ for $l=0$, $\tau'=2.76$ ns and $\gamma=3.02$ for $l=1$, and $\tau'=2.09$ ns and $\gamma=2.85$ for $l=2$ \cite{PhysRevA.65.031401}. For the $40S_{1/2}$ Rydberg state, this yields $\tau_{40}=58\,\mu$s. In table \ref{tab:efflife}, the effective lifetimes for the examples presented in this work are tabulated. Because the Rydberg state is only weakly admixed ($|c_n|^2<10^{-2}$ for all examples), effective lifetimes greater than ten milliseconds are obtained. Besides the finite lifetime of the Rydberg state, one additionally needs to account for the decay of the intermediate $5P_{3/2}$ level, which possesses a much shorter radiative lifetime of $\tau_p=26\,$ns. The resulting effective lifetimes together with the coupling coefficients are provided in table \ref{tab:efflife} as well.
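For the configuration of figure \ref{fig:dressedmf2}(a), the numbers quoted above and in table \ref{tab:efflife} are reproduced by the following short calculation; the quantum defect $\delta\approx3.13$ of the rubidium $nS$ series is an assumed input, chosen such that the quoted $\tau_{40}=58\,\mu$s is recovered:
\begin{verbatim}
# Effective lifetime via tau = tau_n/|c_n|^2 for the 40S_{1/2} state;
# the quantum defect delta ~ 3.13 of the Rb nS series is assumed.
tau0, gam, delta = 1.43e-9, 2.94, 3.13    # l = 0 parameters
n = 40
tau_n = tau0*(n - delta)**gam
print(tau_n)                  # ~5.8e-5 s, i.e. tau_40 = 58 us

c_n_sq = 0.004134             # |c_n|^2 from the 32-level calculation
print(tau_n/c_n_sq)           # ~0.014 s = 14 ms
\end{verbatim}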
As it turns out, the decay of the intermediate state constitutes the dominant loss channel, allowing for lifetimes $\gtrsim 1\,$ms. The resulting lifetimes need to be compared with the timescales emerging in actual experiments. In the case of figure \ref{fig:dressedmf2}(a), it is desirable to map out the dressed potential via the external motion. Given the trap frequency of $\omega=2\pi\times63.8\,$Hz, this yields a timescale of about 15 ms, which is of the same order of magnitude as the effective lifetime. For the remaining examples addressed in this work, the effective lifetime is not quite sufficient to cover the timescale of the external motion. Hence, in these cases the proposed scheme is more suitable whenever only a short-term manipulation of the trapping potential is required. Finally, we remark that the effective lifetime can be further prolonged by coupling to Rydberg states with a higher principal quantum number $n$ and by substituting the intermediate $5P_{3/2}$ state by a more long-lived one such as the $6P_{3/2}$ state. \begin{table} \caption{Effective lifetimes of the dressed states considered in this section. The lifetimes are determined according to (\ref{eq:efflife}) using the Rydberg admixture coefficient $c_n$ of the full 32-level system. When applicable, the admixture coefficient $c_{n,\mathrm{3l}}$ of the simplified three-level system is provided for comparison. In addition, the admixture coefficient $c_p$ of the intermediate state and the resulting lifetime are given. \label{tab:efflife}} \begin{ruledtabular} \begin{tabular}{l c c c c c} Configuration & $|c_{n,\mathrm{3l}}|^2$&$|c_n|^2$&$\tau_n$ (ms)&$|c_p|^2$&$\tau_p$ (ms)\\ \hline figure \ref{fig:dressedmf2}(a)&$0.004\,186$&$0.004\,134$&$14.0$&$1.83\times10^{-6}$&$14.2$\\ figure \ref{fig:dressedmf2}(c)&$0.004\,533$&$0.004\,472$&$13.0$&$1.84\times10^{-6}$&$14.1$\\ figure \ref{fig:dressedmf2_2}(a)&$-$&$0.001\,433$&$40.5$&$25.2\times10^{-6}$&$1.03$\\ figure \ref{fig:dressedmf0}(a)&$-$&$0.000\,040$&$1457$&$12.8\times10^{-6}$&$2.03$\\ figure \ref{fig:dressedmf0}(c)&$-$&$0.000\,154$&$377$&$25.2\times10^{-6}$&$1.03$\\ \end{tabular} \end{ruledtabular} \end{table} Similarly to the effective lifetime, the van der Waals interaction of two Rydberg atoms is suppressed, in this case by $|c_n|^4$. The latter interaction results in an energy shift $\Delta_\mathrm{vdW}$ that depends on the interparticle distance and effectively alters the detuning of the two-photon transition. In order to avoid any such effects, $\Delta_\mathrm{vdW}$ should be well below the excitation detuning $\Delta_2$. Taking $c_n=0.1$ and $\Delta_\mathrm{vdW}<2\pi\times 0.1$\,MHz as a (quite restrictive) example yields a minimum interparticle distance of $\approx 1\,\mu$m. \section{Summary} In the present work, we investigated a magnetically trapped rubidium atom that is coupled to its $nS$ Rydberg state via a two-photon laser transition. We studied the off-resonant case where the ground state atom becomes \emph{dressed} by the Rydberg state and vice versa. By this procedure, the peculiar properties of Rydberg atoms become accessible for ground state atoms as well. In particular, we explored how the trapping potential experienced by a ground state atom in a magnetic Ioffe-Pritchard trap can be manipulated by means of such an off-resonant laser coupling. It is demonstrated that in the limit of a strong offset field the four-fold azimuthal symmetry, which is inherent to the trapping potential of the Rydberg atom, is mirrored in the dressed ground state trapping potential.
In this regime, a simplified three-level scheme is derived that facilitates the interpretation of the observed results. In the opposite regime of a strong gradient, the delicate interplay between the spatially varying quantization axis of the Ioffe-Pritchard field and the fixed polarizations of the laser transitions greatly influences the actual shape of the dressed trapping potentials. In this manner, the trapping potentials of ground state atoms can be manipulated substantially. \begin{acknowledgments} This work was supported by the German Research Foundation (DFG) within the framework of the Excellence Initiative through the Heidelberg Graduate School of Fundamental Physics (GSC~129/1). M.M.\ acknowledges financial support from the Landesgraduiertenf\"orderung Baden-W\"urttemberg. Financial support by the DFG through the grant Schm 885/10-3 is gratefully acknowledged. \end{acknowledgments}
\section{Introduction} We study a general class of complex non-linear segmentation energies with high-order regional terms. Such energies are often desirable in the computer vision tasks of image segmentation, co-segmentation and stereo \cite{kkz:iccv03,freedman:pami04,rother:cvpr06,Ismail:LevelSetsWithArea08,ismail:cvpr10,KC11:iccv,linesearchcuts:12,FTR:cvpr13} and are particularly useful when there is prior knowledge about the appearance or the shape of the object being segmented. We focus on energies of the following form: \begin{equation} \label{eq:ERL} \min_{S\in \Omega}E(S) = R(S) + \lambda L(S), \end{equation} where $S$ is a binary segmentation, $L(S)$ is a standard length-based smoothness term, and $R(S)$ is a (generally) non-linear {\em regional functional} discussed below. \begin{figure}[t] \begin{center} \includegraphics[width = 0.4\textwidth]{yuri_lena.pdf} \bf\caption{\rm% Gradient descent ({\em level sets}) and {\em trust region}. \label{fig:yuri_lena}} \end{center} \end{figure} Let $I:\Omega\to\mathbb{R}^m$ be an image defined in $\Omega \subset \mathbb{R}^n$. The most basic type of regional term used in segmentation is a linear functional $U(S)$, which can be represented via an arbitrary scalar function $f:\Omega\to\mathbb{R}$ \begin{align}\label{eq:LinearRegionalFunctional} U(S) = \int_S f\,\text{d}x = \int_\Omega f\cdot 1_S\,\text{d}x =: \vecprod{f}{S}. \end{align} Usually, $f$ corresponds to an {\em appearance model} based on image intensities/colors $I$, e.g.,~$f(x)$ could be a log-likelihood ratio for intensity $I(x)$ given particular object and background intensity distributions. The integral in \eqref{eq:LinearRegionalFunctional} can be seen as a dot product between the scalar function $f$ and $1_S$, the characteristic function of set $S$. We use the notation $\vecprod{f}{S}$ to refer to such linear functionals. A more general \emph{non-linear regional functional} $R(S)$ can be described as follows. Assume $k$ scalar functions $f_1,\ldots,f_k:\Omega\to\mathbb{R}$, each defining a linear functional $\vecprod{f_i}{S}$ of type~\eqref{eq:LinearRegionalFunctional}, and one differentiable non-linear function $F(v_1,\ldots,v_k)\colon\mathbb{R}^k\to\mathbb{R}$ that combines them, \begin{align}\label{eq:NonlinearRegionalFunctional} R(S) = F\left(\vecprod{f_1}{S},\ldots,\vecprod{f_k}{S}\right). \end{align} Such general regional terms could enforce non-linear constraints on the volume or higher-order shape moments of segment $S$. They could also penalize the $L_2$ or other distance metrics between the distribution (or non-normalized bin counts) of intensities/colors inside segment $S$ and some given target. For example, $S$ could be softly constrained to a specific volume $V_0$ via the quadratic functional $$R(S) = (\vecprod{1}{S}-V_0)^2$$ using $f_1(x)=1$ and $F(v)=(v-V_0)^2$, while the Kullback-Leibler (KL) divergence between the intensity distribution in $S$ and a fixed target distribution $q=(q_1,\ldots,q_k)$ could be written as $$R(S) = \sum_{i=1}^k \frac{\vecprod{f_i}{S}}{\vecprod{1}{S}}\log\left(\frac{\vecprod{f_i}{S}}{\vecprod{1}{S}\cdot q_i}\right).$$ Here, the scalar function $f_i(x)$ is an indicator for pixels with intensity $i$ (in bin $i$). More examples of non-linear regional terms $R(S)$ are discussed in Section~\ref{sec:experiments}\@. In general, optimization of non-linear regional terms is NP-hard and cannot be addressed by standard global optimization methods. Some earlier papers developed specialized techniques for particular forms of non-linear regional functionals.
For example, the algorithm in \cite{ismail:cvpr10} was developed for minimizing the Bhattacharyya distance between distributions, and the dual-decomposition approach in \cite{woodford:iccv09} applies to convex piecewise-linear regional functionals \eqref{eq:NonlinearRegionalFunctional}. These methods are outside the scope of our paper since we focus on more general techniques. Combinatorial techniques \cite{kkz:iccv03,rother:cvpr06} apply to general non-linear regional functionals, and they can make large moves by globally minimizing some approximating energy. However, as pointed out in \cite{linesearchcuts:12}, these methods are known to converge to solutions that are not even a local minimum of energy \eqref{eq:ERL}. Level sets are well established in the literature as a gradient descent framework that can address arbitrary differentiable functionals \cite{LevelSetsBook:Ismail11}, and are therefore widely used for high-order terms \cite{Adam2009,Foulonneau2009,BenAyed2009,Michailovich2007,freedman:pami04}. This paper compares two known optimization methods applicable to energy \eqref{eq:ERL} with any high-order regional term \eqref{eq:NonlinearRegionalFunctional}: {\em trust region} and {\em level sets}. To address the non-linear term $R(S)$, both methods use its first-order functional derivative \begin{align}\label{eq:dR} \frac{\partial R}{\partial S} = \sum_{i=1}^k \frac{\partial F}{\partial v_i} \left(\vecprod{f_1}{S},\ldots,\vecprod{f_k}{S}\right) \cdot f_i \end{align} either when computing the gradient flow for \eqref{eq:ERL} in level sets or when approximating energy \eqref{eq:ERL} in trust region. However, despite using the same first derivative $\frac{\partial R}{\partial S}$, level sets and trust region are fairly different approaches to optimizing~\eqref{eq:ERL}. The structure of the paper is as follows. Sections~\ref{sec:ls}-\ref{sec:tr} review the two general approaches that we compare: gradient descent implemented with level sets and trust region implemented with graph cuts. While the general trust region framework \cite{FTR:cvpr13} can be based on a number of underlying global optimization techniques, we specifically choose a graph cut implementation (versus a continuous convex relaxation approach), since it is more appropriate for our CPU-based evaluations. Our experimental results and comparisons are reported in Section~\ref{sec:experiments} and Section~\ref{sec:conclusion} presents the conclusions. \section{Overview of Algorithms}\label{sec:overview} Below we provide general background information on the level set and trust region methods, see Sections \ref{sec:ls}--\ref{sec:tr}. A high-level conceptual comparison of the two frameworks is provided in Section \ref{sec:comp}. \subsection{Standard Level Sets}\label{sec:ls} In the level set framework, minimization of energy $E$ is carried out by solving a partial differential equation (PDE) that governs the evolution of the boundary of $S$. To this end, we derive the Euler-Lagrange equation by embedding segment $S$ in a one-parameter family $S(t)$, $t \in \mathbb{R}^{+}$, and solving a PDE of the general form: \begin{equation} \label{eq:pde} \frac{\partial S}{\partial t} = -\frac{\partial E(S)}{\partial S} = - \frac{\partial R(S)}{\partial S} - \frac{\partial L(S)}{\partial S} \end{equation} where $t$ is an artificial time parameterizing the descent direction.
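Both frameworks build on the functional derivative \eqref{eq:dR}. As a minimal illustration -- a Python/NumPy sketch using the quadratic volume constraint from the introduction, where $F(v)=(v-V_0)^2$ and $f_1\equiv1$ -- the derivative is simply a constant over the image domain:
\begin{verbatim}
import numpy as np

# dR/dS of eq. (dR) for R(S) = (<1,S> - V0)^2:
# dF/dv = 2(v - V0) evaluated at v = <1,S>, times f1 = 1.
def dR_volume(S, V0):
    v = S.sum()                                      # <1, S>
    return 2.0*(v - V0)*np.ones_like(S, dtype=float)

S = np.zeros((64, 64)); S[16:48, 16:48] = 1          # toy binary segment
print(dR_volume(S, V0=900.0)[0, 0])                  # 2*(1024 - 900) = 248
\end{verbatim}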
The basic idea is to describe segment $S$ implicitly via an embedding function $\phi: \Omega \rightarrow \mathbb{R}$: \begin{eqnarray} \label{level_set_region_membership} S &=& \{ x \in \Omega | \phi(x)\leq 0 \} \nonumber \\ \Omega \setminus S &=& \{ x \in \Omega | \phi(x) > 0 \}, \end{eqnarray} and evolve $\phi$ instead of $S$. With the above representation, the terms that appear in energy (\ref{eq:ERL}) can be expressed as functions of $\phi$ as follows \cite{Chan01ip,LevelSetsBook:Ismail11}: \begin{eqnarray}\label{eq:LSlength} L(S) &=& \int_{\Omega} \|\nabla H(\phi)\|dx = \int_{\Omega}\delta (\phi)\|\nabla \phi \|dx \nonumber \\ \vecprod{f_i}{S} &=& \int_\Omega H(\phi) f_i dx. \end{eqnarray} Here, $\delta$ and $H$ denote the Dirac delta function and the Heaviside function, respectively. Therefore, the evolution equation in (\ref{eq:pde}) can be obtained directly by applying the Euler-Lagrange descent equation with respect to $\phi$. This gives the following gradient flow: \begin{equation} \label{curve-flow-withoutSDF} \frac{\partial \phi}{\partial t} = \left[-\frac{\partial R(S)}{\partial S}+\lambda\kappa\right]\delta(\phi) \end{equation} with $\kappa:=\Div\left(\frac{\nabla\phi}{\norm{\nabla\phi}}\right)$ denoting the curvature of $\phi$'s level lines. The first term in (\ref{curve-flow-withoutSDF}) is a regional flow minimizing $R$, and the second is a standard curvature flow minimizing the length of the segment's boundary. In standard level set implementations, it is numerically mandatory to keep the evolving $\phi$ close to a distance function \cite{Li2005a,Osher2002}. This can be done by re-initialization procedures \cite{Osher2002}, which were extensively used in classical level set methods \cite{Caselles97ijcv}. Such procedures, however, rely on several {\em ad hoc} choices and may result in undesirable side effects \cite{Li2005a}. In our implementation, we use an efficient and well-known alternative \cite{Li2005a}, which adds an internal energy term that penalizes the deviation of $\phi$ from a distance function: \begin{equation} \label{distance-function-penalty} \frac{\mu}{2} \int_{\Omega} \left (1 - \|\nabla \phi\| \right)^2 dx . \end{equation} In comparison to re-initialization procedures, the implementation in \cite{Li2005a} allows larger time steps (and therefore faster curve evolution). Furthermore, it can be implemented via simple finite difference schemes, unlike traditional level set implementations, which require complex upwind schemes \cite{Sethian1999}. With the distance-function penalty, the gradient flow in (\ref{curve-flow-withoutSDF}) becomes: \begin{equation} \label{curve-flow-withSDF} \frac{\partial \phi}{\partial t} = \mu \left [ \Delta\phi - \kappa \right ] + \left[-\frac{\partial R(S)}{\partial S}+\lambda \kappa\right]\delta(\phi) \end{equation} For all the experiments in this paper, we implemented the flow in (\ref{curve-flow-withSDF}) using the numerical prescriptions in \cite{Li2005a}. For each point $p$ of the discrete grid, we update the level set function as \begin{equation} \label{Discrete-Level-Set-Updates} \phi^{j+1}(p) = \phi^{j}(p) + \Delta t\cdot A(\phi^{j}(p)), \end{equation} where $\Delta t$ is the discrete time step and $j$ is the iteration number. $A(\phi^{j}(p))$ is a numerical approximation of the right-hand side of (\ref{curve-flow-withSDF}), where the spatial derivatives of $\phi$ are approximated with central differences and the temporal derivative with forward differences.
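For concreteness, the sketch below (Python/NumPy, with simplified boundary handling) implements a single update (\ref{Discrete-Level-Set-Updates}) of the flow (\ref{curve-flow-withSDF}); here {\tt dR} denotes the regional derivative \eqref{eq:dR} evaluated on the grid, and the smoothed Dirac function anticipates the approximation $\delta_\epsilon$ specified in the next paragraph:
\begin{verbatim}
import numpy as np

def delta_eps(phi, eps=1.5):
    # smoothed Dirac delta, cf. the approximation given below
    return np.where(np.abs(phi) <= eps,
                    (1.0 + np.cos(np.pi*phi/eps))/(2.0*eps), 0.0)

def curvature(phi):
    # kappa = div(grad phi / |grad phi|) via central differences
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-10
    _, nxx = np.gradient(gx/norm)
    nyy, _ = np.gradient(gy/norm)
    return nxx + nyy

def laplacian(phi):
    # five-point stencil, periodic boundaries for simplicity
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0*phi)

def level_set_step(phi, dR, dt, lam=1.0, mu=0.05):
    kappa = curvature(phi)
    A = mu*(laplacian(phi) - kappa) + (-dR + lam*kappa)*delta_eps(phi)
    return phi + dt*A
\end{verbatim}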
The Dirac function is approximated by $\delta_{\epsilon}(t) = \frac{1}{2\epsilon} [1+\cos(\frac{\pi t}{\epsilon})]$ for $|t| \leq \epsilon$ and $0$ elsewhere. We use $\epsilon = 1.5$ and $\mu=0.05$. In the context of level set and PDE methods, it is known that the choice of time steps should follow strict numerical conditions to ensure stability of front propagation, e.g., the standard Courant-Friedrichs-Lewy (CFL) conditions \cite{Estellers2012}. These conditions require that $\Delta t$ be smaller than a certain value $\tau$ that depends on the choice of discretization. The level set literature generally uses fixed time steps. For instance, classical upwind schemes \cite{Sethian1999} generally require a small $\Delta t$ for stability, whereas the scheme in \cite{Li2005a} allows relatively larger time steps. The optimal time step is not known a priori, and finding a good $\Delta t < \tau$ via an adaptive search such as back-tracking \cite{Boyd2004} seems attractive. However, to apply a back-tracking scheme, we would have to evaluate the energy at each step. In the case of level sets, this requires a discrete approximation of the original continuous energy. We observed in our experiments that the gradient of such a discrete approximation of the energy does not coincide with the gradient obtained in the numerical updates in~\eqref{Discrete-Level-Set-Updates}. Therefore, with a back-tracking scheme, level sets get stuck very quickly in a local minimum of the discrete approximation of the energy (see the adaptive level set example in Fig. \ref{fig:volume}). We believe that this is the main reason why, to the best of our knowledge, back-tracking approaches are generally avoided in the level-set literature. Therefore, in the following level-set experiments, we use a standard scheme based on a fixed time step $\Delta t$ during each curve evolution, and report the performance at convergence for several values $\Delta t \in \{1 \ldots 10^3\}$. \subsection{Trust Region Framework}\label{sec:tr} Trust region methods are a class of iterative optimization algorithms. In each iteration, an approximate model of the optimization problem is constructed near the current solution. The model is only ``trusted'' within a small region around the current solution called the ``trust region'', since, in general, approximations fit the original non-linear function only locally. The approximate model is then globally optimized within the trust region to obtain a candidate iterate solution. This step is often called the {\em trust region sub-problem}. The size of the trust region is adjusted in each iteration based on the quality of the current approximation. Variants of the trust region approach differ in the kind of approximate model used, the optimizer for the trust region sub-problem, and the protocol for adjusting the trust region size. For a detailed review of trust region methods see \cite{TRreview:Yuan}. Below we outline a general version of a trust region algorithm in the context of image segmentation. The goal is to minimize $E(S)$ in Eq.~\eqref{eq:ERL}. Given solution $S_j$ and distance $d_j$, the energy $E$ is approximated using \begin{equation}\label{eq:TR_subproblem} \widetilde{E}(S) = U_0(S) + L(S), \end{equation} where $U_0(S)$ is the first order Taylor approximation of the non-linear term $R(S)$ near $S_j$. The trust region sub-problem is then solved by minimizing $\widetilde{E}$ within the region given by $d_j$.
Namely, \begin{equation} \label{eq:constrained} S^*= \underset{||S-S_j||<d_j}{\operatorname{argmin}} \widetilde{E}(S). \end{equation} Once a candidate solution $S^*$ is obtained, the quality of the approximation is measured using the ratio between the actual and predicted reduction in energy. The trust region is then adjusted accordingly. For the purpose of our CPU-based evaluations we specifically selected the Fast Trust Region (FTR) implementation \cite{FTR:cvpr13}, which includes the following components of the trust region framework. The non-linear term $R(S)$ is approximated by the first order Taylor approximation $U_0(S)$ in \eqref{eq:TR_subproblem} using the first-order functional derivative \eqref{eq:dR}. The trust region sub-problem in \eqref{eq:constrained} is formulated as an unconstrained Lagrangian optimization, which is globally optimized using one graph cut (we use floating-point precision in the standard code for graph-cuts \cite{BK:PAMI04}). Note that, in this case, the length term $L(S)$ is approximated using the Cauchy-Crofton formula as in \cite{GeoCuts:ICCV03}. More details about FTR can be found in \cite{FTR:cvpr13}. \subsection{Conceptual Comparison}\label{sec:comp} \begin{figure*}[t] \begin{center} \includegraphics[width = 1\textwidth]{volume.pdf} \bf\caption{\rm% Volume constraint with boundary length regularization. We set the weights to $\lambda_{Length}=1$, $\lambda_{Volume}=10^{-4}$. \label{fig:volume}} \end{center} \end{figure*} Some high-level conceptual differences between the {\em level sets} and {\em trust region} optimization frameworks are summarized in Figure \ref{fig:yuri_lena}. Standard level set methods use a fixed $\Delta t$ to make steps $-\Delta t \cdot \frac{\partial E}{\partial S}$ in the gradient descent direction. The trust region algorithm \cite{FTR:cvpr13} moves to the solution $S^*$ minimizing the approximating functional $\tilde E(S)$ within a circle of given size $d$. The trust region size is adaptively changed from iteration to iteration based on the observed approximation quality. The blue line illustrates the spectrum of trust region moves for all values of $d$. The solution $\tilde{S}$ is the global minimum of the approximation $\tilde{E}(S)$. For example, if $\tilde{E}(S)$ is a 2nd-order Taylor approximation of $E(S)$ at point $S_0$, then $\tilde{S}$ would correspond to a Newton step. \section{Experimental Comparison}\label{sec:experiments} In this section, we compare the trust region and level set frameworks in terms of practical efficiency, robustness and optimality. We selected several examples of segmentation energies with non-linear regional constraints. These include: 1) a quadratic volume constraint, 2) a shape prior in the form of the $L_2$ distance between the target and the observed shape moments, and 3) an appearance prior in the form of either the $L_2$ distance, the Kullback-Leibler divergence, or the Bhattacharyya distance between the target and the observed color distributions. In all the experiments below, we optimize an energy of the general form $E(S) = R(S) + L(S)$. To compare optimization quality, we plot energy values for the results of both level sets and trust region. Note that the Fast Trust Region (FTR) implementation uses a discrete formulation based on graph cuts, while level sets are a continuous framework. Thus, the direct comparison of their corresponding energy values should be done carefully. While numerical evaluation of the regional term $R(S)$ is equivalent in both methods, they use completely different numerical approaches to measuring length $L(S)$.
In particular, level sets use the approximation of length given in \eqref{eq:LSlength}, while the graph cut variant of trust region relies on integral geometry and the Cauchy-Crofton formula popularized by \cite{GeoCuts:ICCV03}. \begin{figure*}[t] \begin{center} \includegraphics[width = 0.8\textwidth]{liver.pdf} \bf\caption{\rm% Shape prior constraint with length regularization and log-likelihood models. Target shape moments and appearance models are computed from the provided ellipse. We used 100 intensity bins, moments up to order $l=2$, $\lambda_{Length}=10$, $\lambda_{Shape}=0.01$ and $\lambda_{App}=1$. The continuous energy is plotted starting from the $4^{th}$ iteration to reduce the range of the y-axis. \label{fig:liver}} \end{center} \end{figure*} Since the energy values are not directly comparable, we instead study the robustness of each method independently and compare the resulting segmentations with one another. Note that level sets exhibit much smaller oscillations for small time steps, consistent with the CFL conditions. In each application below we examine the robustness of both the trust region and level set methods by varying the running parameters. In the trust region method we vary the multiplier $\alpha$ used to change the size of the trust region from one iteration to another. For the rest of the parameters we follow the recommendations of~\cite{FTR:cvpr13}. In our implementation of level sets we vary the time-step size $\Delta t$, but keep the parameters $\epsilon=1.5$ and $\mu=0.05$ fixed for all the experiments. The top-left plots in figures \ref{fig:volume}-\ref{fig:mushroom} report energy $E(S)$ as a function of the CPU time. At the end of each iteration, both level sets and trust region require energy updates, which can be computationally expensive. For example, appearance-based regional functionals require re-evaluation of color histograms/distributions at each iteration. This is a time-consuming step. Therefore, for completeness of our comparison, we report in the top-middle plots of each figure energy $E(S)$ versus the number of energy evaluations (number of updates) required during the optimization. \subsection{Volume Constraint} First, we perform image segmentation with a volume constraint with respect to a target volume $V_0$, namely, $$R(S) = (\vecprod{1}{S}-V_0)^2.$$ We choose to optimize this energy on a synthetic image without an appearance term, since the solution to this problem is known to be a circle. Figure \ref{fig:volume} shows that both FTR and level sets converge to good solutions (nearly a circle), with FTR being 25 times faster, requiring 150 times fewer energy updates and exhibiting more robustness to the parameters. \begin{figure*}[t] \begin{center} \includegraphics[width = 0.8\textwidth]{soldier_length.pdf} \bf\caption{\rm% $L_2$ norm between the observed and target color bin counts with length regularization. We used 100 bins per channel, $\lambda_{App} =1$ and $\lambda_{Length} =1$. \label{fig:soldier_L2L}} \end{center} \end{figure*} \subsection{Shape Prior with Geometric Shape Moments} Next, we perform image segmentation with a shape prior constraint in the form of the $L_2$ distance between the geometric shape moments of the segment and a target. Our energy is defined as $E(S) = \lambda_{Shape}R(S) + \lambda_{Length}L(S) + \lambda_{App}D(S)$, where $D(S)$ is a standard log-likelihood unary term based on intensity histograms.
In this case, $R(S)$ is given by $$ R(S) =\sum_{p+q\leq l}(\vecprod{x^py^q}{S}-m_{pq})^2, $$ with $m_{pq}$ denoting the target geometric moment of order $p+q$. Figure \ref{fig:liver} shows an example of liver segmentation with the above shape prior constraint. The target shape moments as well as the foreground and background appearance models are computed from the user-provided input ellipse as in \cite{KC11:iccv,FTR:cvpr13}. We used moments of up to order $l=2$ (including the center of mass and shape covariance but excluding the volume). Both trust region and level sets obtain visually pleasing solutions. The trust region method is two orders of magnitude faster and requires two orders of magnitude fewer energy updates (top-left and top-middle plots). Since the level sets method was forced to stop after 10000 iterations, we show the last solution available for each value of the parameter $\Delta t$. Actual convergence for this method would have required more iterations. In this example, the oscillations of the energy are especially pronounced (top-right plot). \subsection{Appearance Prior} In the experiments below, we apply both methods to optimize segmentation energies where the goal is to match a given target appearance distribution using either the $L_2$ distance between the observed and target color bin counts, or the Kullback-Leibler divergence or Bhattacharyya distance between the observed and target color distributions. Here, our energy is defined as $E(S) = \lambda_{App}R(S) + \lambda_{Length}L(S)$. We let $f_i$ be the indicator function of pixels belonging to bin $i$, and $q_i$ the target count (or probability) for bin $i$. The target appearance distributions for the object and the background were obtained from the ground truth segments. We used 100 bins per color channel. The images in the experiments below are taken from \cite{RKB:SIGGRAPH04}. \paragraph{$L_2$ distance constraint on bin counts:} Figure \ref{fig:soldier_L2L} shows results of segmentation with the $L_2$ distance constraint between the observed and target bin counts regularized by length. The regional term in this case is $$R(S) = \sqrt{\sum_{i=1}^{k} (\vecprod{f_i}{S}-q_i)^2}.$$ Since the level sets method was forced to stop after 15000 iterations for values of $\Delta t=1,5$, we show the last solution available; full convergence would have required more iterations. For higher values of $\Delta t$, we show results at convergence. We observe a difference of two orders of magnitude between the trust region and level sets methods in both speed and the number of energy updates required. \paragraph{Kullback-Leibler divergence:} Figure \ref{fig:llama} shows results of segmentation with the KL divergence constraint between the observed and target color distributions. The regional term in this case is given by $$R(S)= \sum_{i=1}^k\frac{\vecprod{f_i}{S}}{\vecprod{1}{S}}\log\left(\frac{\vecprod{f_i}{S}}{\vecprod{1}{S}q_i}\right).$$ The level sets method converged for time steps $\Delta t=50$ and $1000$, but was forced to stop after $10000$ iterations for the other parameter values. We show the last solution available; full convergence would have required more iterations.
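For reference, all three appearance terms reduce to elementary operations on histograms. A minimal sketch follows (hypothetical names: \texttt{counts[i]} stands for $\vecprod{f_i}{S}$ and \texttt{q} for the target counts or distribution; the Bhattacharyya variant anticipates the next paragraph).
\begin{verbatim}
import numpy as np

# Sketch of the appearance regional terms R(S) used in this section.
# counts[i] plays the role of <f_i, S> (segment pixels falling in bin i);
# q holds target bin counts (for L2) or probabilities (for KL/Bhat.).
def appearance_term(counts, q, kind="L2"):
    counts = np.asarray(counts, dtype=float)
    q = np.asarray(q, dtype=float)
    if kind == "L2":                      # distance between bin counts
        return np.sqrt(np.sum((counts - q) ** 2))
    p = counts / counts.sum()             # observed color distribution
    eps = 1e-12                           # guard against empty bins
    if kind == "KL":
        return np.sum(p * np.log((p + eps) / (q + eps)))
    if kind == "Bhattacharyya":
        return -np.log(np.sum(np.sqrt(p * q)) + eps)
    raise ValueError(kind)
\end{verbatim}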
The trust region method obtains solutions that are closer to the ground truth, runs two orders of magnitude faster and requires fewer energy updates. \begin{figure*}[t] \begin{center} \includegraphics[width = 0.8\textwidth]{llama.pdf} \bf\caption{\rm% KL divergence between the observed and the target color distribution. We used 100 bins per channel, $\lambda_{App}=100$ and $\lambda_{Length}=0.01$. The continuous energy is plotted starting from the fourth iteration to reduce the range of the y-axis. \label{fig:llama}} \end{center} \end{figure*} \paragraph{Bhattacharyya distance:} Figure \ref{fig:mushroom} shows results of segmentation with the Bhattacharyya distance constraint between the observed and target color distributions. The regional term in this case is given by $$R(S)= -\log\left(\sum_{i=1}^k \sqrt{\frac{\vecprod{f_i}{S}}{\vecprod{1}{S}}q_i}\right).$$ For this image too, the level sets method had not converged after 10000 iterations for any parameter setting, so we show the last solution available. Further increasing the parameter $\Delta t$ would increase the oscillations of the energy (see top-right plot). \section{Conclusions}\label{sec:conclusion} For relatively simple functionals \eqref{eq:NonlinearRegionalFunctional}, combining a few linear terms ($k$ is small), such as constraints on volume and low-order shape moments, the quality of the results obtained by both methods is comparable (visually and energy-wise). However, we observe that the number of energy updates required for level sets is two orders of magnitude larger than for trust region. This behavior is consistent with the corresponding CPU running time plots. The segmentation results on shape moments were fairly robust with respect to parameters (time step $\Delta t$ and multiplier $\alpha$) for both methods. The level sets results for volume constraints varied with the choice of $\Delta t$. In general, larger steps caused significant oscillations of the energy in level sets, thereby affecting the quality of the result at convergence. When optimizing appearance-based regional functionals with a large number of histogram bins (corresponding to large $k$), level sets proved to be extremely slow. Convergence would require more than $10^4$ iterations (longer than 1 hour on our machine). In some cases, the corresponding results were far from optimal both visually and energy-wise. This is in contrast to the trust region approach, which consistently converged to plausible solutions with low energy in less than a minute or $100$ iterations. We believe that our results will be useful for many practitioners in computer vision and medical imaging when selecting an optimization technique. \begin{figure*}[t] \begin{center} \includegraphics[width = 0.8\textwidth]{mushroom.pdf} \bf\caption{\rm % Bhattacharyya distance between the observed and the target color distribution. We used 100 bins per channel, $\lambda_{App}=1000$ and $\lambda_{Length}=0.01$. \label{fig:mushroom}} \end{center} \end{figure*} {\small \bibliographystyle{ieee}
\section{Introduction} The possibility of producing translationally ultracold molecules has recently generated great anticipation in the field of molecular dynamics. Attractive applications include the possibility of testing fundamental symmetries \cite{EDM}, the potential of new phases of matter \cite{10, 11a, 11b, 12} and the renewed quest for the control of chemical reactions through ultracold chemistry \cite{Gian, Balak, Bala}. These endeavors are beginning to bear scientific fruit. For example, high-resolution spectroscopic measurements of translationally cold samples of OH should allow improved astrophysical tests of the variation with time of the fine-structure constant \cite{maser}. The recent experimental advances \cite{Cold, Special, Form} have made theoretical studies of the collisional behavior of cold molecules essential \cite{Krems, Weck, Bodo}, both to interpret the data and to suggest future directions. Several approaches have produced cold neutral molecules to date, many of which are described in Ref.\ \onlinecite{Special}. The methods available can be classified into {\em direct methods}, based on cooling of preexisting molecules, and {\em indirect methods}, which create ultracold molecules from ultracold atoms. Among the direct methods, Stark deceleration of dipolar molecules in a supersonic beam \cite{4a, 4b, Boch} and helium buffer-gas cooling \cite{3} are currently leading the way. They reach temperatures of the order of 10~mK, and there is a wide variety of proposals for bridging the temperature gap to below 1~mK. These include evaporative cooling and even direct laser cooling. The idea of {\em sympathetic cooling}, where a hot species is cooled via collisions with a cold one, also seems very attractive and is being pursued by several experimental groups. Sympathetic cooling is a form of {\em collisional cooling}, which works for multiple degrees of freedom simultaneously. It does not rely on specific transitions, which makes it suitable for cooling molecules. Collisional cooling is also the basis for helium buffer-gas cooling. Sympathetic cooling of trapped ions has already been demonstrated \cite{iones}, using a different laser-cooled ionic species as the refrigerant. Cooling of polyatomic molecules to sub-kelvin temperatures with ions has also been reported. This technique is expected to be suitable for cooling molecules of very high mass, including those of biological relevance \cite{biolo}. But the ease with which alkali metal atoms can be cooled to ultracold temperatures makes them good candidates to use as a thermal reservoir to cool other species. They have already been used to cool ``more difficult'' atomic alkali metal partners. For example, BEC for $^{41}$K was achieved by sympathetic cooling of potassium atoms with Rb atoms \cite{Modugno}. A theoretical study of the viability of this cooling technique for molecules is desirable. There have been a number of theoretical studies of collisions of molecules with He \cite{CaH, NH, OH, Forr, otra, what}, in support of buffer-gas cooling, but only a few with alkali metals \cite{Na2, Li2homo, Li2het, K2, PRL}. To our knowledge, no such study has included the effects of hyperfine structure. The main objective of the present work is to study cold collisions of OH with trapped Rb atoms. OH has been successfully slowed by Stark deceleration in at least two laboratories \cite{Meer,maser}. To cool the molecules further, sympathetic cooling by thermal contact with $^{87}$Rb is an attractive possibility.
Rb is easily cooled and trapped in copious quantities and can be considered the workhorse for experiments on cold atoms. Temperatures below 100~$\mu$K are reached even in a MOT environment (70~$\mu$K using normal laser cooling and 7~$\mu$K using techniques such as polarization gradient cooling). The cooling and lifetime of species in the trap depend largely on the ratio of elastic collision rates (which lead to thermalization of the sample) to inelastic ones. The latter can transfer molecules into non-trappable states and/or release kinetic energy, with resultant heating and trap loss. The characterization of the rates of both kinds of process is thus required. Since applied electric and magnetic fields offer the possibility of {\em controlling} collisions, it is very important to know the effects of such fields on the rates. At present, nothing is known about the low-temperature collision cross sections of Rb-OH or any similar system. Rb-OH can be considered as a benchmark system for the study of the feasibility of sympathetic cooling for molecules. Many molecule-alkali metal atom systems have deeply bound electronic states with ion-pair character \cite{17,9} and have collision rates that are enhanced by a ``harpooning'' mechanism. Both the atom and the diatom are open-shell doublet species, and can interact on two triplet and three singlet potential energy surfaces (PES). In addition, the OH radical has fine structure, including lambda-doubling, and both species possess nuclear spins and hence hyperfine structure. Thus Rb-OH is considerably more complicated than other collision systems that have been studied at low temperatures. In previous work we advanced the first estimates of cross sections (for both inelastic and elastic collisions), based on fully {\em ab initio} surfaces, for the collision of OH radicals with Rb atoms in the absence of external fields \cite{PRL}. Here we provide details of the methodology used and discuss the potential surfaces and the state-resolved partial cross sections. This paper is organized as follows: Section II describes the calculation of {\em ab initio} PES for Rb-OH. Details of the electronic structure calculations are given and the methods used for diabatization, interpolation and fitting are described. The general features of the resulting surfaces are analyzed. Section III describes the exact and approximate theoretical methodologies used for the dynamical calculations. Section IV presents the resulting cross sections and discusses the possibility of sympathetic cooling. We also comment on the role expected for the harpooning mechanism. Section V summarizes our results and describes prospects for future work. Further details about the electronic structure calculations and the other channel basis sets used to describe the dynamics are given in Appendixes 1 and 2, respectively. \section{Potential energy surfaces} We have used {\it ab initio} electronic structure calculations to obtain PES for the interaction of OH($^2\Pi$) with Rb. The ground $X^2\Pi$ state of OH has a $\pi^3$ configuration, while the ground $^2S$ state of Rb has a 5s$^1$ configuration. At long range, linear RbOH thus has $^1\Pi$ and $^3\Pi$ states. At nonlinear configurations, $^1\Pi$ splits into $^1A'$ and $^1A^{\prime\prime}$, with even and odd reflection symmetry in the molecular plane, whereas $^3\Pi$ splits into $^3A'$ and $^3A^{\prime\prime}$. At shorter range, the situation is more complicated. The ion-pair threshold Rb$^+$ + OH$^-$ lies only 2.35 eV above the neutral threshold.
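A rough consistency check on where this state becomes important follows from the Coulomb form of the ion-pair curve: equating the Coulomb stabilization to the asymptotic splitting gives a crossing distance \begin{eqnarray} R_{\rm c} \approx \frac{e^2}{4\pi\epsilon_0\,\Delta E} = \frac{1\,E_h a_0}{2.35\ {\rm eV}} \approx 11.6\,a_0 \approx 6.1\ \mbox{\AA}, \nonumber \end{eqnarray} a back-of-envelope estimate (it neglects the covalent well depth and polarization contributions) that anticipates the crossing distance found in the calculations below.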
The corresponding $^1\Sigma^+$ ($^1A'$) ion-pair state drops very fast in energy with decreasing distance because of the Coulomb attraction. Below, Jacobi coordinates ($R,\theta$) will be used, $R$ being the radial distance between the atom and the OH center of mass and $\theta$ the angle between this line and the internuclear axis. At the linear Rb-OH geometry, the ion-pair state crosses the covalent (non-ion-pair) state near $R=6.0$ \AA, as shown in Figure \ref{figcurve}. At nonlinear geometries, the ion-pair state has the same symmetry ($^1A'$) as one of the covalent states, so there is an avoided crossing. There is thus a conical intersection between the two $^1A'$ states at linear geometries, which may have major consequences for the scattering dynamics. \begin{figure} \setlength{\unitlength}{4mm} \begin{picture}(0,0)(0,0) \put(8,-12){\makebox(0,0){{\bf \large{(a)}}}} \put(8,-28.7){\makebox(0,0){{\bf \large{(b)}}}} \put(8,-45.5){\makebox(0,0){{\bf \large{(c)}}}} \end{picture} \vspace{-.5cm} \begin{center} \epsfig{file=rboh-lin-3curves-overview.eps,angle=0,width=0.95\linewidth,clip=} \epsfig{file=rboh-159-5curves-overview.eps,angle=0,width=0.95\linewidth,clip=} \epsfig{file=rboh-159-5curves.eps,angle=0,width=0.95\linewidth,clip=} \end{center} \caption{RbOH adiabatic potential curves from MRCI calculations, showing crossing for the Rb-OH linear geometry (a) and avoided crossing for a slightly nonlinear geometry ($\theta=159^\circ$, (b)). Panel (c) shows an expanded view of the curves in the center panel.} \label{figcurve} \end{figure} The $^1A'$ electronic wavefunctions near the conical intersection are made up from two quite different configurations, so that a multiconfiguration electronic structure approach is essential to describe them. We have therefore chosen to use MCSCF (multiconfiguration self-consistent field) calculations followed by MRCI (multireference configuration interaction) calculations to characterize the surfaces. The electronic structure calculations initially produce {\em adiabatic} (Born-Oppenheimer) surfaces, but these are unsuitable for dynamics calculations both because they are difficult to interpolate (with derivative discontinuities at the conical intersections) and because there are nonadiabatic couplings between them that become infinite at the conical intersections. We have therefore transformed the two $^1A'$ adiabatic surfaces that cross into a {\it diabatic} representation, where there are non-zero couplings between different surfaces but both the potentials and the couplings are smooth functions of coordinates. The electronic structure calculations are carried out using the MOLPRO package \cite{MOLPRO}. It was necessary to carry out an RHF (restricted Hartree-Fock) calculation to provide initial orbital guesses before an MCSCF calculation. It is important that the Hartree-Fock calculation gives good orbitals for both the OH $\pi$ and Rb 5s orbitals at all geometries (even those inside the crossing, where the Rb 5s orbital is unoccupied in the ground state). In addition, it is important that the OH $\pi$ orbitals are doubly occupied in the RHF calculations, as otherwise they are non-degenerate at linear geometries at the RHF level, and the MCSCF calculation is unable to recover the degeneracy. To ensure this, we begin with an RHF calculation on RbOH$^-$ rather than neutral RbOH. For Rb, we use the small-core quasirelativistic effective core potential (ECP) ECP28MWB \cite{ECPs} with the valence basis set from Ref.\ \cite{Sol03}. 
This treats the 4s, 4p and 5s electrons explicitly, but uses a pseudopotential to represent the core orbitals. For O and H, we use the aug-cc-pVTZ correlation-consistent basis sets of Dunning \cite{Dunning} in uncontracted form. Electronic structure calculations were carried out at 275 geometries, constructed from all combinations of 25 intermolecular distances $R$ and 11 angles $\theta$ in Jacobi coordinates. The 25 distances were from 2.0 to 6.0\,\AA\ in steps of 0.25\,\AA, from 6.0 to 9.0\,\AA\ in steps of 0.5\,\AA\ and from 9\,\AA\ to 12\,\AA\ in steps of 1\,\AA. The OH bond length was fixed at $r=0.9706$\,\AA. The 11 angles were chosen to be Gauss-Lobatto quadrature points \cite{Lobatto}, which give optimum quadratures to project out the Legendre components of the potential while retaining points at the two linear geometries. The linear points are essential to ensure that the $A'$ and $A^{\prime\prime}$ surfaces are properly degenerate at linear geometries: if we used Gauss-Legendre points instead, the values of the $A'$ and $A^{\prime\prime}$ potentials at linear geometries would depend on extrapolation from nonlinear points and would be non-degenerate. The Gauss-Lobatto points correspond approximately to $\theta=0$, 20.9, 38.3, 55.6, 72.8, 90, 107.2, 124.4, 141.7, 159.1 and 180$^\circ$, where $\theta=0$ corresponds to the linear Rb-HO geometry. The calculations were in general carried out as angular scans at each distance, since this avoided most convergence problems due to sudden changes in orbitals between geometries. \subsection{Singlet states} We carried out a state-averaged MCSCF calculation of the lowest three singlet states of neutral RbOH (two $^1A'$ and one $^1A^{\prime\prime}$). Molecular orbital basis sets will be described using the notation $(n_{A'},n_{A^{\prime\prime}})$, where the two integers indicate the number of $A'$ and $A^{\prime\prime}$ orbitals included. The orbital energies are shown schematically in Figure \ref{figorb}. The MCSCF basis set includes a complete active space (CAS) constructed from the lowest (10,3) molecular orbitals, with the lowest (5,1) orbitals closed (doubly occupied in all configurations). The MCSCF calculation generates a common set of orbitals for the three states. The calculations were carried out in $C_s$ symmetry, but at linear geometries the two components of the $\Pi$ states are degenerate to within our precision ($10^{-8}\,E_h$). \begin{figure} \begin{center} \epsfig{file=rboh-orbitals.ps,angle=0,width=85mm} \end{center} \caption{Schematic representation of the RbOH molecular orbitals from MCSCF calculations.} \label{figorb} \end{figure} For cold molecule collisions, it is very important to have a good representation of the long-range forces. These include a large contribution from dispersion (intermolecular correlation), and so require a correlated treatment. We therefore use the MCSCF orbitals in an MRCI calculation, again of the lowest three electronic states. The MOLPRO package implements the ``internally contracted'' MRCI algorithm of Werner and Knowles \cite{Wer}. The reference space in the MRCI is the same as the active space for the MCSCF, and single and double excitations are included from all orbitals except oxygen 1s. As described in Appendix 1, the two $^1A'$ states are calculated in a single MRCI block, so that they share a common basis set. We encountered difficulties with non-degeneracy between the two components of the $^1\Pi$ states at linear geometries. These are described in Appendix 1.
However, using the basis sets and procedures described here, the non-degeneracies were never greater than 90 $\mu E_h$ in the total energies for distances $R\ge 2.25$ \AA\ (and considerably less in the interaction energies around the linear minimum). \subsection{Transforming to a diabatic representation} As described above, the two surfaces of $^1A'$ symmetry cross at conical intersections at linear geometries. For dynamical calculations, it is highly desirable to transform the adiabatic states into diabatic states (or, strictly, quasidiabatic states). MOLPRO contains code to carry out diabatization by maximizing overlap between the diabatic states and those at a reference geometry. However, this did not work for our application because we were unable to find reference states that had enough overlap with the lowest adiabats at all geometries. We therefore adopted a different approach, based on matrix elements of angular momentum operators. We use a Cartesian coordinate system with the $z$ axis along the OH bond. At any linear geometry, the $\Pi$ component of $^1A'$ symmetry is uncontaminated by the ion-pair state, and the matrix elements of $\hat L_z$ are \begin{eqnarray} \langle ^1A'(\Pi) | \hat L_z | ^1A^{\prime\prime} \rangle &=& i; \nonumber\\ \langle ^1A'(\Sigma) | \hat L_z | ^1A^{\prime\prime} \rangle &=& 0. \end{eqnarray} At nonlinear geometries, the actual $^1A'$ states can be represented approximately as a mixture of $\Sigma$ and $\Pi$ components, \begin{equation} \left(\begin{matrix}\Psi_{1A'} & \Psi_{2A'}\end{matrix}\right) = \left(\begin{matrix}\Psi_\Pi & \Psi_\Sigma\end{matrix}\right) \left(\begin{matrix}\cos\phi & \sin\phi \\ -\sin\phi & \cos\phi\end{matrix}\right), \label{mix} \end{equation} where the ``singlet" superscripts have been dropped to simplify notation. If Eq.\ \ref{mix} were exact, the matrix elements of $\hat L_z$ would be \begin{eqnarray} \langle 1A' | \hat L_z | A^{\prime\prime} \rangle &=& i\cos\phi; \nonumber\\ \langle 2A' | \hat L_z | A^{\prime\prime} \rangle &=& i\sin\phi. \end{eqnarray} The mixing angle $\phi$ would thus be given by \begin{equation} \phi = \tan^{-1} \frac{\langle 2A'| \hat L_z | A^{\prime\prime} \rangle} {\langle 1A' | \hat L_z | A^{\prime\prime} \rangle}. \label{ratio} \end{equation} In the present work, we have taken the mixing angle to be {\it defined} by Eq.\ \ref{ratio}, using matrix elements of $\hat L_z$ calculated between the MRCI wavefunctions. This gives a mixing angle that, for linear geometries, is $\phi=0$ at long range and $\phi=\pi/2$ at short range (inside the crossing). One complication that arises here is that the signs of the three wavefunctions are arbitrary, and may change discontinuously from one geometry to another. The signs of the matrix elements obtained numerically by MOLPRO are thus completely arbitrary. It was therefore necessary to pick a sign convention for the matrix elements at linear geometries and adjust the signs at other geometries to give a smoothly varying mixing angle. It should be noted that this diabatization procedure is not general, and will fail if there is any geometry where both the numerator and the denominator of Eq.\ \ref{ratio} are small. Fortunately, this was not encountered for RbOH. The sum of squares of the two matrix elements of $\hat L_z$ was never less than 0.99 at distances from $R=3.0$ \AA\ outwards, and never less than 0.7 even at $R=2.0$ \AA. The mixing angles obtained for the singlet states of RbOH are shown as a contour plot in Figure \ref{figmix}. 
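In practice this amounts to choosing the branch of the arctangent that keeps $\phi$ continuous along the grid. A minimal sketch of such a sign-consistent extraction along a radial cut (hypothetical array names; \texttt{lz1} and \texttt{lz2} hold the two matrix elements entering Eq.\ \ref{ratio}):
\begin{verbatim}
import numpy as np

# Sketch of the diabatization angle of Eq. (ratio).  lz1[k] ~ <1A'|Lz|A''>
# and lz2[k] ~ <2A'|Lz|A''> along a radial cut; the overall signs delivered
# by the electronic-structure code are arbitrary, so the branch is chosen
# to keep phi continuous from one grid point to the next.
def mixing_angle(lz1, lz2):
    phi = np.arctan2(np.abs(lz2), np.abs(lz1))   # raw angle in [0, pi/2]
    for k in range(1, len(phi)):
        # Keep the candidate branch closest to the previous point; this
        # mimics the manual sign adjustment described in the text.
        candidates = (phi[k], -phi[k], np.pi - phi[k])
        phi[k] = min(candidates, key=lambda c: abs(c - phi[k - 1]))
    return phi
\end{verbatim}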
As expected, $\phi$ changes very suddenly from 0 to $90^\circ$ at linear and near-linear geometries, but smoothly at strongly bent geometries. \begin{figure} \begin{center} \epsfig{file=rboh-mix.ps,angle=-90,width=0.95\linewidth,clip=} \end{center} \caption{Contour plot of the diabatic mixing angle $\phi$ (in degrees) for the $^1A'$ states of RbOH} \label{figmix} \end{figure} Once a smooth mixing angle has been determined, the diabatic potentials and coupling surface are obtained from \begin{eqnarray} \left(\begin{matrix}H_{11} & H_{12} \\ H_{21} & H_{22}\end{matrix}\right) &=& \left(\begin{matrix}\cos\phi & \sin\phi \cr -\sin\phi & \cos\phi\end{matrix}\right) \nonumber\\ &\times& \left(\begin{matrix}E_{1A'} & 0 \\ 0 & E_{2A'}\end{matrix}\right) \left(\begin{matrix}\cos\phi & -\sin\phi \\ \sin\phi & \cos\phi\end{matrix}\right). \end{eqnarray} \begin{figure} \setlength{\unitlength}{4mm} \begin{picture}(0,0)(0,0) \put(7.5,-3.5){\makebox(0,0){{\bf \large{(a)}}}} \put(7.5,-25){\makebox(0,0){{\bf \large{(b)}}}} \end{picture} \vspace{-.6cm} \begin{center} \epsfig{file=rboh-1ap.ps,angle=-90,width=0.95\linewidth,clip=} \epsfig{file=rboh-1app.ps,angle=-90,width=0.95\linewidth,clip=} \end{center} \caption{Contour plot of the diabatic $^1A'$ (panel (a)) and $^1A^{\prime\prime}$ (panel (b)) covalent (non-ion-pair) potential energy surfaces for RbOH from MRCI calculations. Contours are labeled in cm$^{-1}$.} \label{figsing1} \end{figure} \begin{figure} \setlength{\unitlength}{4mm} \begin{picture}(0,0)(0,0) \put(7.5,-3.5){\makebox(0,0){{\bf \large{(a)}}}} \put(7.5,-25){\makebox(0,0){{\bf \large{(b)}}}} \end{picture} \vspace{-.5cm} \begin{center} \epsfig{file=rboh-ip.ps,angle=-90,width=0.95\linewidth,clip=} \epsfig{file=rboh-12.ps,angle=-90,width=0.947\linewidth,clip=} \end{center} \caption{Contour plot of the $^1A'$ diabatic ion-pair potential energy surface (panel (a)) and the diabatic coupling potential (panel (b)) for RbOH from MRCI calculations. Contours are labeled in cm$^{-1}$.} \label{figsing2} \end{figure} The diabatization was carried out using total electronic energies (not interaction energies). The two singlet diabatic PES that correlate with Rb($^2S$) + OH($^2\Pi$) are shown in Figure \ref{figsing1}, and the diabatic ion-pair surface and the coupling potential are shown in Figure \ref{figsing2}. Some salient characteristics of the surfaces are given in Table \ref{tabpot}. As required, the $^1A^{\prime\prime}$ and $^1A'$ covalent states are degenerate at the two linear geometries, with a relatively deep well (337 cm$^{-1}$) at Rb-OH and a much shallower well at Rb-HO. At bent geometries, it is notable that the potential well is broader and considerably deeper for the $^1A^{\prime\prime}$ state than for the $^1A'$ state; indeed, the $^1A'$ state has a linear minimum, while the $^1A^{\prime\prime}$ has a bent minimum at $\theta=128^\circ$ with a well depth of 405 cm$^{-1}$. In spectroscopic terms, this corresponds to a Renner-Teller effect of type 1(b) in the classification of Pople and Longuet-Higgins \cite{Pop58}. 
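Returning to the transformation above: given $\phi$ and the two adiabatic energies, assembling the $2\times 2$ diabatic matrix is immediate. A sketch (\texttt{E1} and \texttt{E2} are the adiabatic $^1A'$ energies at a single geometry):
\begin{verbatim}
import numpy as np

def to_diabatic(E1, E2, phi):
    """Rotate two adiabatic energies into the 2x2 diabatic matrix H.
    H[0,0] and H[1,1] are the diabats, H[0,1] the coupling potential;
    diagonalizing H recovers the adiabatic energies E1 and E2."""
    U = np.array([[np.cos(phi),  np.sin(phi)],
                  [-np.sin(phi), np.cos(phi)]])
    return U @ np.diag([E1, E2]) @ U.T
\end{verbatim}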
\begin{table} \caption{Characteristics of the diabatic PES for RbOH.} \vspace{4mm} \begin{tabular}{lcccccc} \hline\hline & \ & $^3A^{\prime\prime}$ & $^3A'$ & $^1A^{\prime\prime}$ & $^1A'$ & $^1A'$ \\ &&&&& covalent & ion-pair \\ \hline well depth/cm$^{-1}$ && 615 & 511 & 405 & 337 & 26260 \\ distance of minimum/\AA && 3.185 & 3.170 & 3.226 & 3.230 & 2.407 \\ angle of minimum/ $^\circ$ && 123 & 180 & 128 & 180 & 180 \\ \hline \end{tabular} \label{tabpot} \end{table} The fact that the $^1A^{\prime\prime}$ state is deeper than the $^1A'$ state is somewhat unexpected, and will be discussed in more detail in the context of the triplet surfaces below. The fact that the surface for the covalent $^1A^\prime$ state is slightly repulsive between $R=4$ and 9 \AA\ and between $\theta=40^\circ$ and $150^\circ$ is an artifact of the diabatization procedure and should not be given physical significance: the choice of mixing angle (\ref{ratio}) is an approximation, and a slightly different choice would give different diabats and coupling terms, but of course corresponding to the same adiabatic surfaces. The ion-pair state has a deep well at the RbOH geometry and a rather shallower one at RbHO, as expected from electrostatic considerations. The region around the minimum of this surface has been characterized in more detail by Lee and Wright \cite{Lee03}. It is notable that the coupling potential is quite large, peaking at 2578 cm$^{-1}$ at $R=3.87$ \AA\ and $\theta=83^\circ$, and is thus larger than the interaction energy for the two covalent surfaces at most geometries. It may be seen from Figure \ref{figmix} that the $\Sigma-\Pi$ mixing it induces is significant at most distances less than 8 \AA. \subsection{Triplet states} The triplet states of RbOH are considerably simpler than the singlet states, because there is no low-lying triplet ion-pair state. There is thus no conical intersection, and diabatization is not needed. Single-reference calculations would in principle be adequate for the triplet states. Nevertheless, for consistency with the singlet surfaces, we carried out MCSCF and MRCI for the triplet surfaces as well. The (10,3) active and reference spaces were specified as for the singlet states, except that there is only one relevant $^3A'$ state and the state average in the MCSCF calculation is therefore over the lowest two states. \begin{figure} \setlength{\unitlength}{4mm} \begin{picture}(0,0)(0,0) \put(7.5,-3.5){\makebox(0,0){{\bf \large{ (a)}}}} \put(7.5,-25){\makebox(0,0){{\bf \large{(b)}}}} \end{picture} \vspace{-.5cm} \begin{center} \epsfig{file=rboh-3ap.ps,angle=-90,width=0.95\linewidth,clip=} \epsfig{file=rboh-3app.ps,angle=-90,width=0.95\linewidth,clip=} \end{center} \caption{Contour plot of the $^3A'$ (panel (a)) and $^3A^{\prime\prime}$ (panel (b)) PES for RbOH from MRCI calculations. Contours are labeled in cm$^{-1}$.} \label{figtrip} \end{figure} Contour plots of the $^3A'$ and $^3A^{\prime\prime}$ surfaces are shown in Figure \ref{figtrip}. As for the singlet states, it is notable that the $^3A^{\prime\prime}$ state lies below the $^3A'$ state at nonlinear geometries. This ordering is different from that found for systems such as Ar-OH \cite{Esp90,H93ArOH} and He-OH \cite{Lee00}. In each case, the $A'$ state corresponds to an atom approaching OH {\it in} the plane of the unpaired electron, while the $A^{\prime\prime}$ state corresponds to an atom approaching OH {\it out of} the plane. 
For He-OH and Ar-OH, the $^2A'$ state is deeper than the $^2A^{\prime\prime}$ state simply because there is slightly less repulsion due to a half-filled $\pi$ orbital than due to a doubly filled $\pi$ orbital. Since these systems are dispersion-bound, and the dispersion coefficients are similar for the $^2A'$ and $^2A^{\prime\prime}$ states, the slightly reduced repulsion for the $^2A'$ state produces a larger well depth. Rb-OH is quite different. The long-range coefficients still provide a large part of the binding energy of the covalent states, but the equilibrium distances (around 3.2 \AA, Table \ref{tabpot}) are about 1 \AA\ shorter than would be expected from the sum of the Van der Waals radii of Rb (2.44 \AA) and OH (1.78 \AA, obtained from the Ar value of 1.88 \AA\ and the Ar-OH equilibrium distance of 3.67 \AA\ \cite{H93ArOH}). The qualitative explanation is that at nonlinear geometries there is significant overlap between the Rb 5s orbital and the OH $\pi$ orbital of $a'$ symmetry, forming weakly bonding and antibonding molecular orbitals (MOs) for the RbOH supermolecule. The effect of this is shown in Figure \ref{figbond}. In the $A'$ states, the bonding and antibonding MOs are equally populated and there is no overall stabilization. However, for the $A^{\prime\prime}$ states the bonding MO is doubly occupied and the antibonding MO is singly occupied. This gives a significant reduction in the repulsion compared to a simple overlap-based model. Even at linear Rb-OH geometries, the repulsion is reduced by a similar effect involving the Rb 5s orbital and the highest occupied OH $\sigma$ orbital ($6a'$ in Figure \ref{figorb}). \begin{figure} \begin{center} \epsfig{file=rboh-bond.ps,angle=0,width=0.95\linewidth,clip=} \end{center} \caption{Orbital occupations for the covalent (non-ion-pair) states of RbOH at nonlinear geometries, $90^\circ < \theta < 180^\circ$, showing the origin of the reduced repulsion for $A^{\prime\prime}$ states compared to $A'$ states.} \label{figbond} \end{figure} \subsection{Converting MRCI total energies to interaction energies} The MRCI procedure is not size-extensive, so cannot be corrected for basis-set superposition error (BSSE) using the counterpoise approach \cite{Boy}. In addition, there are Rb 5p orbitals that lie 1.87 eV above the ground state. This is below the ion-pair energy at large $R$, so that a (10,3) active space includes different orbitals asymptotically and at short range. It is thus hard to calculate asymptotic energies that are consistent with the short-range energies directly. Nevertheless, for collision calculations we ultimately need interaction energies, relative to the energy of free Rb($^2S$) + OH($^2\Pi$). To circumvent this problem, we obtained angle-dependent long-range coefficients $C_6(\theta)$ and $C_7(\theta)$ for the {\it triplet} states of RbOH and used these to extrapolate from 12\,\AA\ outwards at each angle. We carried out RCCSD calculations (restricted coupled-cluster with single and double excitations) on the $^3A'$ and $^3A^{\prime\prime}$ states at all angles at distances of 15, 25 and 100\,\AA. RCCSD was chosen in preference to RCCSD(T) for greater consistency with the MRCI calculations. Coupled cluster calculations are size-extensive, so in this case the interaction energies $V(R,\theta)$ for the $^3A'$ and $^3A^{\prime\prime}$ states were calculated including the counterpoise correction \cite{Boy}. 
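For reference, the counterpoise correction \cite{Boy} takes the standard form, with both monomer energies evaluated in the full dimer basis at each geometry: \begin{eqnarray} V(R,\theta) = E_{\rm RbOH}(R,\theta) - E_{\rm Rb}^{\rm RbOH\ basis}(R,\theta) - E_{\rm OH}^{\rm RbOH\ basis}(R,\theta). \nonumber \end{eqnarray}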
The interaction energy at 100\,\AA\ was found to be non-zero (about $-1.38\,\mu E_h$), but was the same to within $10^{-9}\,E_h$ for both $^3A'$ and $^3A^{\prime\prime}$ states and at all angles. For each pair of angles $\theta$ and $\pi-\theta$, the energies were fitted to the form \begin{equation} E(R,\theta) = E_\infty -C_6(\theta) R^{-6} -C_7(\theta)R^{-7} -C_8(\theta) R^{-8}, \label{c678} \end{equation} with the constraints that \begin{eqnarray} C_6(\pi-\theta)&=&C_6(\theta) \\ C_7(\pi-\theta)&=&-C_7(\theta). \end{eqnarray} Sums and differences of the long-range coefficients for the two states, \begin{eqnarray} C_n^0(\theta)&=&\frac{1}{2}\left[C_n^{A^{\prime\prime}}(\theta)+C_n^{A'}(\theta)\right]\\ C_n^2(\theta)&=&\frac{1}{2}\left[C_n^{A^{\prime\prime}}(\theta)-C_n^{A'}(\theta)\right], \end{eqnarray} were then smoothed by fitting to the theoretical functional forms \begin{eqnarray} C_6^0(\theta)&=&C_6^{00} + C_6^{20} P_2^0(\cos\theta) \\ C_6^2(\theta)&=&C_6^{22} P_2^2(\cos\theta) \\ C_7^0(\theta)&=&C_7^{10} P_1^0(\cos\theta) + C_7^{30} P_3^0(\cos\theta) \\ C_7^2(\theta)&=&C_7^{32} P_3^2(\cos\theta), \end{eqnarray} where $P_\lambda^\nu(\cos\theta)$ are associated Legendre functions. The coefficients obtained by this procedure are summarized in Table \ref{tabdisp}. The resulting smoothed values of $C_6^0(\theta)$, $C_6^2(\theta)$, $C_7^0(\theta)$ and $C_7^2(\theta)$ were then used to generate $C_6^{A'}(\theta)$, $C_6^{A^{\prime\prime}}(\theta)$, $C_7^{A'}(\theta)$ and $C_7^{A^{\prime\prime}}(\theta)$. Finally, the MRCI {\it total} energies at $R=10$ and 12\,\AA\ for both singlet and triplet states were refitted to Eq.\ \ref{c678}, with $C_6(\theta)$ and $C_7(\theta)$ held constant at the smoothed values, to obtain an MRCI value of $E_\infty$ for each angle (and surface). These angle-dependent values of $E_\infty$ were used to convert the MRCI total energies into interaction energies. It should be noted that the long-range coefficients in Rb-OH have substantial contributions from induction as well as dispersion. A simple dipole-induced dipole model gives $C_{\rm6,ind}^{00} = C_{\rm6,ind}^{20} = 133\ E_ha_0^6$ and accounts for 40\% of $C_{\rm6}^{00}$ and 90\% of $C_{\rm6}^{20}$. \begin{table} \caption{Long-range coefficients for Rb-OH obtained by fitting to RCCSD calculations on triplet states.} \vspace{4mm} \begin{tabular}{ccccc} \hline\hline $\lambda$& 0 & 1 & 2 & 3 \\ \hline $C_6^{\lambda0}/E_ha_0^6$ & 325.0 & -- & 151.0 & -- \\ $C_6^{\lambda2}/E_ha_0^6$ & -- & -- & 1.9 & -- \\ $C_7^{\lambda0}/E_ha_0^7$ & -- & 1035.4 & -- & 630.0 \\ $C_7^{\lambda2}/E_ha_0^7$ & -- & -- & -- & $-40.1$ \\ \hline \end{tabular} \label{tabdisp} \end{table} \subsection{Interpolation and fitting} The procedures described above produce six potential energy surfaces on a two-dimensional grid of geometries $(R,\theta)$: four surfaces corresponding to covalent states (``c''), with $A'$ or $A''$ symmetry and triplet or singlet multiplicity, one surface with ion-pair character with $^1A'$ symmetry, and finally the non-vanishing coupling of this latter state with the covalent $^1A'$ state. We will denote the six surfaces $V^{^1A'_{\rm c}}$, $V^{^1A''_{\rm c}}$, $V^{^3A'_{\rm c}}$, $V^{^3A''_{\rm c}}$, $V^{^1A'_{\rm i}}$, and $V^{\rm ic}$, respectively. Labels will be dropped below when not relevant to the discussion.
For the covalent states of each spin multiplicity, interpolation was carried out on sum and difference potentials, \begin{eqnarray} V_0(R,\theta)&=&\frac{1}{2}\left[V^{A^{\prime\prime}}(R,\theta)+V^{A'}(R,\theta)\right]\\ V_2(R,\theta)&=&\frac{1}{2}\left[V^{A^{\prime\prime}}(R,\theta)-V^{A'}(R,\theta)\right], \end{eqnarray} with the difference potentials set to zero at $\theta=0$ and $180^\circ$ to suppress the slight non-degeneracy in the MRCI results. Our approach to two-dimensional interpolation follows that of Meuwly and Hutson \cite{H99NeHFmorph} and Sold\'an {\em et al.}\ \cite{Soljcp02a}. The interpolation was carried out first in $R$ (for each surface and angular point) and then in $\theta$. The interpolation in $R$ used the RP-RKHS (reciprocal power reproducing kernel Hilbert space) procedure \cite{Ho96} with parameters $m=5$ and $n=3$. This gives a potential with long-range form \begin{equation} V_q(R,\theta) = -C_6^q(\theta) R^{-6} -C_7^q(\theta)R^{-7} -C_8^q(\theta) R^{-8}. \end{equation} The values of $C_6^q(\theta)$ and $C_7^q(\theta)$ were fixed at the values described in the previous subsection \cite{Ho00,Sol00}. For the ion-pair state, it is the quantity $V^{\rm i}(R,\theta) - E_\infty^{\rm i}$ that is asymptotically zero, where $E_\infty^{\rm i}=0.0863426\,E_h$ is the ion-pair threshold. This was interpolated using RP-RKHS interpolation with parameters $m=0$ and $n=2$, which gives a potential with long-range form \begin{equation} V^{\rm i}(R,\theta) = E_\infty^{\rm i} - C_1(\theta)R^{-1} -C_2(\theta) R^{-2}. \end{equation} The coefficient $C_1(\theta)$ was fixed at the Coulomb value of $1\,E_ha_0^{-1}$. The coupling potential $V^{\rm ic}(R,\theta)$ has no obvious inverse-power form at long range. It was therefore interpolated using the ED-RKHS (exponentially decaying RKHS) approach \cite{Hol99}, with $n=2$ and $\beta=0.77$\,\AA$^{-1}$. This gives a potential with long-range form \begin{equation} V^{\rm ic}(R,\theta) = A(\theta) \exp(-\beta R). \end{equation} The value of $\beta$ was chosen by fitting the values of the coupling potential at $R=10$ and 12\,\AA\ to decaying exponentials. Interpolation in $\theta$ was carried out in a subsequent step. An appropriate angular form is \begin{equation} V_q(R,\theta) = \sum_k V_{kq}(R) \Theta_{kq}(\cos\theta), \label{fit} \end{equation} where $\Theta_{kq}(\cos\theta)$ are normalized associated Legendre functions, \begin{equation} \Theta_{kq}(\cos\theta)= \sqrt{\left(\frac{2k+1}{2}\right) \frac{(k-|q|)!}{(k+|q|)!}} P_k^{|q|}(\cos\theta), \end{equation} and $q=0$ for the sum and ion-pair potentials, 2 for the difference potentials and 1 for the coupling potential. $q$ can thus be used as a label to distinguish the different potentials. The coefficients $V_{kq}(R)$ for $k=q$ to 9 were projected out using Gauss-Lobatto quadrature, with weights $w_i$, \begin{equation} V_{kq}(R) = \sum_i w_i V_q(R,\theta_i) \Theta_{kq}(\cos\theta_i). \end{equation} Since there are fewer coefficients than points, the resulting potential function does not pass exactly through the potential points. However, the error for the covalent states in the well region is no more than 20 $\mu E_h$. We thus arrive at a set of $R$-dependent coefficients $V_{kq}^{\alpha,2S+1}(R)$, with $\alpha={\rm c}$, i or ic labeling the potentials for the pure covalent or ion-pair states or the coupling between them, and $2S+1=1$ or 3 for singlet or triplet states, respectively.
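A sketch of this quadrature projection (hypothetical names: \texttt{V} holds $V_q(R,\theta_i)$ at the 11 Gauss-Lobatto angles for one distance $R$, \texttt{w} the quadrature weights and \texttt{x} the values $\cos\theta_i$; the associated-Legendre convention is that of \texttt{scipy}, which may differ by a phase):
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import lpmv

def theta_kq(k, q, x):
    """Normalized associated Legendre function Theta_kq(x)."""
    norm = np.sqrt((2 * k + 1) / 2.0
                   * factorial(k - abs(q)) / factorial(k + abs(q)))
    return norm * lpmv(abs(q), k, x)

def project_vkq(V, w, x, q, kmax=9):
    """V_kq(R) = sum_i w_i V_q(R, theta_i) Theta_kq(cos theta_i)."""
    return {k: float(np.sum(w * V * theta_kq(k, q, x)))
            for k in range(abs(q), kmax + 1)}
\end{verbatim}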
These coefficients will be used below in evaluating the electronic potential matrix elements which couple collision channels in the dynamical calculations. \vspace{-.2cm} \section{Dynamical methodology} \vspace{-.2cm} \subsection{The basis sets} We carry out coupled-channel calculations of the collision dynamics. The channels are labeled by quantum numbers that characterize the internal states of the colliding partners, plus partial wave quantum numbers that define the way the partners approach each other. It is convenient to distinguish between the laboratory frame, whose $z$ axis is taken to be along the direction of the external field (if any), and the molecule frame, whose $z$ axis lies along the internuclear axis of the OH molecule and whose $xz$ plane contains the triatomic system. This allows us to define {\em external coordinates} that fix the collision plane, and {\em internal coordinates} that describe the relative position of the components on it. As external coordinates, we choose the Euler angles $(\alpha,\beta,\gamma)$ required to change from the laboratory frame to the molecule frame; as internal coordinates we use the system of Jacobi coordinates ($R,\theta$) defined above. We also define the spherical angles ($\theta', \phi'$) that describe the orientation of the intermolecular axis in the external (laboratory) frame. We first focus on the covalent channels OH$(\pi^3,\ ^2\Pi)+{\rm Rb}(5s^1,\ ^2S_{1/2})$. The OH molecule can be described using Hund's case (a) quantum numbers: the internal state is expressed in a basis set given by $ |s_d \sigma \rangle | \lambda \rangle |j m \omega \rangle $, where $s_d$ is the electronic spin of OH and $\sigma$ its projection on the internuclear axis; $\lambda$ is the projection of the electronic orbital angular momentum onto the internuclear axis; $j$ is the angular momentum resulting from the electronic and rotational degrees of freedom, $m$ its projection on the laboratory axis and $\omega=\lambda+\sigma$ its projection on the internuclear axis. The symmetric top wavefunctions that describe the rotation of the diatom in space are defined by $ |j m \omega \rangle = \sqrt{\frac{2j+1}{8 \pi^2}} D_{ m \omega }^{j*}(\alpha,\beta,\gamma) $. At this stage $\lambda$, $\sigma$ and $\omega$ are still signed quantities. However, in zero electric field, energy eigenstates of OH are also eigenstates of parity, labeled $\epsilon=e$ or $f$. These labels refer to the ``$+$'' or ``$-$'' sign taken in the combination of $\omega$ and $-\omega$; the true parity is given by $\epsilon (-1)^{j-s_d}$. To include parity, we can define states $ |s_d \bar{\lambda} \bar{\omega} \epsilon j m \rangle $, where the bar indicates the absolute value of a signed quantity. Finally, we also need to include the nuclear spin degree of freedom: if $i_d$ designates the nuclear spin angular momentum of the diatom, then $j$ and $i_d$ combine to form $f_d$, the total angular momentum of the diatom, which has projection $m_{f_d}$ on the laboratory $z$ axis. The resulting basis set that describes the physical states of the OH molecule is \begin{eqnarray} |s_d \bar{\lambda} \bar{\omega} \epsilon (j i_d )f_d m_{f_d} \rangle. \end{eqnarray} For Rb, the electronic angular momentum is given entirely by the spin $s_a$ of the open-shell electron, which combines with the nuclear spin $i_a$ to form $f_a$, the total angular momentum of the atom. The state of the Rb atom can then be expressed as \begin{eqnarray} |(s_a i_a )f_a m_{f_a} \rangle.
\end{eqnarray} The explicit inclusion of the $s_d$ and $s_a$ quantum numbers allows us to use the same notation for ion-pair channels OH$^-(\pi^4,\ ^1\Sigma^+ )+ {\rm Rb}^+ (^1S_0)$: in this case both partners are closed-shell, so $s_d=s_a=\bar{\lambda}=\bar{\omega}=0$ and $\epsilon=+1$. The resulting basis set for close-coupling calculations on the complete system (designated B1) has the form \begin{eqnarray} | B1 \rangle = |s_d \bar{\lambda} \bar{\omega} \epsilon (j i_d )f_d m_{f_d} \rangle |(s_a i_a )f_a m_{f_a} \rangle | L M_L \rangle, \end{eqnarray} where $| L M_L \rangle $ denotes the partial wave degree of freedom and is a function of the $(\theta',\phi')$ coordinates considered above. The ket $ | L M_L \rangle$ corresponds to $Y_{LM_L}(\theta',\phi')$. Ultimately, S-matrix elements for scattering are expressed in basis set B1. The scattering Hamiltonian is block-diagonal in {\em total angular momentum} and {\em total parity}. The total parity is well defined in the basis set we have selected, and given by $p=\epsilon (-1)^{j-s_d+L}$. It is conserved in the presence of a magnetic field but not an electric field. The total angular momentum is not conserved in the presence of either a magnetic or an electric field. However, the projection of total angular momentum on the laboratory $z$ axis, given by $M_{\mathcal J}=m_{f_d}+m_{f_a}+M_L$, is conserved in the presence of an external field aligned with the laboratory $z$ axis. \vspace{-.2cm} \subsection{Matrix elements of the potential energy} The PES in Section II are diagonal in the total electronic spin, $S$, and in the states of the nuclei, $m_{i_a}$ and $m_{i_d}$, with $m_{i_a}$ and $ m_{i_d}$ the nuclear spin projections of the atom and diatom in the laboratory frame. We therefore find it convenient to define basis sets labeled by these quantum numbers. This allows not only the direct calculation of potential matrix elements, but also the definition of some useful {\em frame transformations} \cite{Fano}. Two other basis sets, B2 and B3, defined with/without parity (B2p/B2w and B3p/B3w), are described in Appendix 2. The corresponding frame transformations are defined in Section \ref{Solve}. The calculation of the matrix elements of the potential energy is most direct in basis set B2w. This is based on Hund's case (b) quantum numbers for the molecule, and is given by \begin{eqnarray} | B2w \rangle = | (s_d s_a) S M_S \rangle | n m_n \lambda \rangle | i_d m_{i_d} \rangle | i_a m_{i_a} \rangle | L M_L \rangle, \end{eqnarray} where $n$ is the total angular momentum excluding spin, with projection $\lambda$ on the internuclear axis, $|S M_S \rangle$ indicates the total spin state of the electrons, and $ | i_a m_{i_a} \rangle | i_d m_{i_d} \rangle $ indicates the states of the nuclei. In order to relate the PES obtained in Section II to the quantum numbers of our channels, it is convenient to recast the electronic wavefunctions for the covalent states of $^{2S+1} A'$ and $^{2S+1} A''$ in terms of functions with definite values of $\lambda$, \begin{eqnarray} |^{1(3)}{+1}_{\rm c}\rangle&& = -\frac{1}{\sqrt{2}}( |^{1(3)} A'_{\rm c}\rangle + i |^{1(3)} A''_{\rm c}\rangle); \nonumber \\ |^{1(3)} {-1}_{\rm c}\rangle&& = \frac{1}{\sqrt{2}}( |^{1(3)} A'_{\rm c}\rangle - i |^{1(3)} A''_{\rm c}\rangle). \end{eqnarray} We can also associate $\lambda=0$ with the ion-pair wavefunction (and denote it $ | ^{1}0 _{\rm i} \rangle $).
Then the multipole index $q$ in the potential expansion (\ref{fit}) is viewed as an angular momentum transfer $q=\lambda' - \lambda $. The B2w basis functions do not explicitly depend on $\theta$. We therefore rotate the $\Theta_{kq}$ functions \cite{Brink} onto the laboratory frame, to which molecular spins and partial waves are ultimately referred. The functions $\Theta_{kq}$ are proportional to renormalized spherical harmonics $C_{kq}(\theta,0)$ \cite{Brink}, for which \begin{eqnarray} C_{kq}(\theta,0)=\sum_\nu D_{\nu q }^k(\alpha,\beta,\gamma) C_{k\nu }(\theta', \phi'). \end{eqnarray} The potential now depends on the same angular coordinates as the B2w basis functions. Integrating and applying the usual relationships, we obtain the matrix elements of the electronic potential in basis set $ | B2w \rangle $, \begin{widetext} \begin{eqnarray} \label{potenenes} \lefteqn{\langle L M_L| \langle(s_d s_a) S M_S | \langle n m_n \lambda | V | n' m'_n \lambda' \rangle | (s'_d s'_a) S' M'_S \rangle| L' M'_L \rangle \nonumber} \hspace{2cm} \\ &&= \delta_{S S'} \delta_{M_S M'_S} \sum_k \sqrt{[n ] [n' ][L ][L' ]}(-1)^{(M_L+m'_n- \lambda')} \left(\begin{array}{ccc} L & k & L' \\ -M_L & \nu & M_L' \end{array}\right)\left(\begin{array}{ccc} L & k & L' \\ 0 & 0 & 0 \end{array}\right) \nonumber \\ &&\times \left(\begin{array}{ccc} n & k & n' \\ m_n & \nu & -m_n' \end{array}\right) \left(\begin{array}{ccc} n & k & n' \\ \lambda & q & -\lambda' \end{array}\right)\sqrt{\frac{(2k+1)}{2}} \kappa V_{k |q|}^{\alpha,2S+1}(R), \end{eqnarray} \end{widetext} where $[A]=2 A+1$, $q=\lambda' - \lambda$ and $\nu=m'_n-m_n$. $\kappa$ is a constant whose value in the case of covalent-covalent or ionic-ionic matrix elements ($q=0, \pm 2$) is 1. For covalent-ionic or ionic-covalent matrix elements ($q = \pm 1$), $\kappa = -\sqrt{1/2}$ or $\kappa = \sqrt{1/2}$ respectively. In these last two cases, $V_{k |q|}^{\alpha,2S+1}(R)=V_{k1}^{\rm ic,1}(R)$. Our aim is to evaluate the matrix elements of the potential in basis set B1. Basis set B3w, defined in Appendix 2, can be considered as an intermediate step between B2w and B1. 
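As an illustration, Eq.\ \ref{potenenes} translates almost line by line into code. The sketch below evaluates the spin-diagonal sum over $k$ for one set of (integer) quantum numbers; \texttt{Vkq} is a hypothetical lookup returning the radial coefficient $V_{k|q|}^{\alpha,2S+1}(R)$ at the current distance, and primed quantum numbers carry the suffix 2:
\begin{verbatim}
from sympy import Integer, sqrt
from sympy.physics.wigner import wigner_3j

def potential_element(L, ML, n, mn, lam, n2, mn2, lam2, L2, ML2,
                      Vkq, kappa=1, kmax=9):
    """Sketch of Eq. (potenenes); the diagonal factors in S and M_S are
    assumed already applied.  kappa is 1 for q = 0, +-2 and -+sqrt(1/2)
    for q = +-1.  Vkq(k, absq) -> radial coefficient at this R."""
    q = lam2 - lam
    nu = mn2 - mn
    total = 0
    for k in range(abs(q), kmax + 1):
        total += (sqrt(Integer((2*n+1)*(2*n2+1)*(2*L+1)*(2*L2+1)))
                  * (-1)**(ML + mn2 - lam2)
                  * wigner_3j(L, k, L2, -ML, nu, ML2)
                  * wigner_3j(L, k, L2, 0, 0, 0)
                  * wigner_3j(n, k, n2, mn, nu, -mn2)
                  * wigner_3j(n, k, n2, lam, q, -lam2)
                  * sqrt(Integer(2*k + 1) / 2)
                  * kappa * Vkq(k, abs(q)))
    return total
\end{verbatim}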
Starting from Eq.\ \ref{potenenes} and changing basis to B3w (see Appendix 2), we obtain \begin{widetext} \begin{eqnarray}\label{elements} \lefteqn{ \langle L M_L| \langle i_a m_{i_{a}} | \langle s_a m_{s_a} | \langle i_d m_{i_{d}} | \langle j m \omega | \langle \lambda | \langle s_d \sigma | V |s_d' \sigma' \rangle | \lambda' \rangle|j' m' \omega' \rangle | i'_d m'_{i_{d}} \rangle |s_a' m'_{s_a} \rangle | i'_a m'_{i_{a}} \rangle | L' M_L'\rangle } \nonumber\hspace{2cm} \\ &=& \delta_{i_a i'_a} \delta_{m_{i_{a}} m'_{i_{a}} } \delta_{i_d i'_d} \delta_{m_{i_{d}} m'_{i_{d}} } \sum_k \sum_{m_{s_d}} \sum_{m'_{s_d}}\sum_{S}(2S+1) \left(\begin{array}{ccc} s_d & s_a & S \\ m_{s_d} & m_{s_a} & -M_S\end{array}\right) \left(\begin{array}{ccc} s_d' & s_a' & S \\ m'_{s_d} & m'_{s_a} & -M_S \end{array} \right) \nonumber \\ &\times& (-1)^{m+m'-\omega-\omega'+M_L+m'_n-\lambda'+j+j'+n+n'+s_d+s_d'} \sqrt{[j ][j' ][L ][L' ]}\sum_{n n'} [n ][n' ] \nonumber \\ &\times& \left(\begin{array}{ccc} j & s_d & n \\ m & -m_{s_d} & -m_n \end{array}\right) \left(\begin{array}{ccc} n & k & n' \\ m_n & \nu & -m_n' \end{array} \right)\left(\begin{array}{ccc} L & k & L' \\ -M_L & \nu & M_L' \end{array}\right) \left(\begin{array}{ccc} j' & s_d' & n' \\ m' & -m'_{s_d} & -m_n' \end{array} \right) \nonumber \\ &\times& \left(\begin{array}{ccc} j & s_d & n \\ \omega & -\sigma & - \lambda \end{array}\right) \left(\begin{array}{ccc} n & k & n' \\ \lambda & q & -\lambda' \end{array} \right)\left(\begin{array}{ccc} L & k & L' \\ 0 & 0 & 0 \end{array}\right) \left(\begin{array}{ccc} j' & s_d' & n' \\ \omega' & -\sigma' & -\lambda' \end{array} \right)\sqrt{\frac{(2k+1)}{2}} \kappa V_{k |q|}^{\alpha,2S+1}(R). \end{eqnarray} \end{widetext} The potential matrix elements in basis sets B2p and B3p are trivially related to the ones in B2w and B3w, respectively, requiring only the change to a parity-symmetrized basis set, built as a superposition of $+\lambda$ and $-\lambda$ or $+\omega$ and $-\omega$ vectors. Finally, the evaluation of the potential in basis set B1 can easily be obtained from that in B3p by taking the standard composition of $j$ and $s_a$ with the respective nuclear angular momenta $i_d$ and $i_a$. The true eigenstates of OH are linear combinations of functions with different values of $\omega$, mixed by spin-uncoupling terms in the Hamiltonian. The mixing is significant even for the rotational ground state: $85\%$ of $\omega=3/2$ and $15\%$ of $\omega=1/2$. We have approximated OH as a pure case (a) molecule for convenience in the present work. The fine and hyperfine energies, taken from the work of Coxon {\em et al.} \cite{Coxon}, were associated with a unique set of case (a) quantum numbers $|s_d \bar{\lambda} \bar{\omega} \epsilon (j i_d )f_d m_{f_d} \rangle $. For Rb, the experimental values are taken from Ref.\ \onlinecite{Arim}. Figure \ref{levelsh} shows the quantum numbers that characterize the internal states of the colliding partners for the 8 lowest asymptotic thresholds, corresponding to the rotational state $\bar{\omega}=3/2, j=3/2$ of OH. \begin{figure} \setlength{\unitlength}{4mm} \centerline{\includegraphics[width=.9\linewidth,height=0.75\linewidth,angle=0]{levels2.eps}} \caption{Threshold diagram for the channels of interest, labeled by quantum numbers that characterize the internal states of both colliding partners. These correspond to the ground rotational state $\bar{\omega}=3/2, j=3/2$ of OH. Thresholds corresponding to the incident channels considered in Section \ref{CR} (C1 and C2) are indicated.
The hyperfine splitting of Rb, and the $\Lambda$-doubling and hyperfine splitting of OH, are indicated. \label{levelsh}} \end{figure} \vspace{-.2cm} \subsection{Solving the Schr{\"o}dinger equation}\label{Solve} The full Hamiltonian operator can be written \begin{eqnarray} \hat{H}&=&-{\hbar^2\over{2\mu}} \left[ R^{-1}\left(\frac{\partial^2}{\partial R^2}\right) R - \frac{{\hat{L}}^2}{R^2} \right] + \hat{V}(R) + \hat{H}_{\rm th}, \end{eqnarray} where $\mu$ is the reduced mass, $\hat{V}(R)$ indicates the {\em potential} matrix containing the electronic potential matrix elements given in the previous section and $\hat{H}_{\rm th}$ is the constant {\em thresholds} matrix. By construction, any constant difference of energy between channels has been relegated to the threshold matrix so that the potential matrix elements die off at long range. The coupled-channel equations that result from introducing this Hamiltonian into the total Schr{\"o}dinger equation are propagated using the log-derivative method of Johnson \cite{Johnson}, modified to take variable step sizes keyed to the local de Broglie wavelength. The log-derivative matrix $Y$ thus obtained is matched to spherical Bessel functions in the usual way to yield the scattering matrix $S$. Using $T$ matrix elements ($T=i(S-1)$) \cite{Mott}, the cross section at incident energy $E$ between internal states $\alpha$ and $\beta$, corresponding to a beam experiment, can be obtained using the expression \begin{eqnarray}\label{nondiag} \sigma_{\alpha \rightarrow \beta} (E) &=& \sum_{L L' M'_L L'' } \frac{\pi}{k^2} i^{L'' - L} \sqrt{(2L+1)(2L'' +1)} \nonumber \\ &\times&T^{*} _{\alpha L0 \rightarrow \beta L' M'_L}(E) T_{\alpha L'' 0 \rightarrow \beta L' M'_L}(E), \end{eqnarray} where the collision axis is chosen along the quantization axis (so that only $M_L=0$ contributes). The labels $\alpha$ and $\beta$ designate sets of quantum numbers specifying the internal states of both colliding partners ($s_d \bar{\lambda} \bar{\omega} \epsilon (j i_d )f_d m_{f_d}, (s_a i_a )f_a m_{f_a}$), and $k=(2\mu E/\hbar^2)^{1/2}$ is the wave number corresponding to incident kinetic energy $E$. Our aim is to extract information on collisional processes involving OH$(j=3/2, ^2\Pi_{3/2})+{\rm Rb}(5s^1,\ ^2S_{1/2})$, in any of their hyperfine states, with translational energies in the range $10^{-6}$ to $1$ K. This range comfortably includes both the temperatures currently reached in buffer-gas or Stark deceleration experiments and the target temperatures of sympathetic cooling. A fully converged calculation would require propagation to large values of $R$ and the inclusion of a huge number of channels. Although both covalent and ion-pair channels should be considered, in this pilot study we include only covalent states, except in Section \ref{harp} below. The large anisotropy of the surfaces included in the calculation makes it necessary to use a large number of OH rotational states and partial waves; the inclusion of rotational states up to $j=11/2$ is required (these states lie $\approx 550$ cm$^{-1}$ ($\bar{\omega}=3/2$) and $\approx 850$ cm$^{-1}$ ($\bar{\omega}=1/2$) above the ground state, numbers comparable to the depth of the covalent potential energy surfaces). The number of partial waves needed for convergence for each rotational state is shown in Table \ref{tabwaves}; it may be seen that fewer partial waves are required for higher rotational states.
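Returning to Eq.\ \ref{nondiag}, its assembly from $T$-matrix elements is a direct double sum over partial waves. A sketch, with a hypothetical container \texttt{T[(L, ML, Lp, MLp)]} holding $T_{\alpha L M_L \rightarrow \beta L' M'_L}(E)$ for fixed internal states $\alpha$ and $\beta$:
\begin{verbatim}
import numpy as np

def cross_section(T, k, Lmax):
    """Sketch of Eq. (nondiag).  Only ML = 0 enters in the incident
    channel; the full sum is real, so any residual imaginary part
    (numerical noise) is discarded.  k is the incident wave number."""
    sigma = 0.0
    for Lp in range(Lmax + 1):
        for MLp in range(-Lp, Lp + 1):
            for L in range(Lmax + 1):
                for L2 in range(Lmax + 1):
                    t1 = T.get((L, 0, Lp, MLp), 0.0)
                    t2 = T.get((L2, 0, Lp, MLp), 0.0)
                    sigma += (1j ** (L2 - L)
                              * np.sqrt((2*L + 1) * (2*L2 + 1))
                              * np.conj(t1) * t2)
    return np.pi / k**2 * np.real(sigma)
\end{verbatim}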
Unfortunately, a fully converged calculation was beyond our computational resources, so we reduced the number of channels to include only $40 \%$ of those in Table \ref{tabwaves}. This gives cross sections accurate to within a factor of 2. \begin{table} \caption[table1]{ Approximate number of partial waves required in a converged calculation for each rotational state ($\bar{\omega}, j$). } \begin{center} \begin{tabular}{ccccccc} \hline &$j=1/2$&$j=3/2$&$j=5/2$&$j=7/2$ &$j=9/2$ & $j=11/2$ \\ \hline $\bar{\omega}=3/2$ & & 70 & 70 & 60 & 50 & 25 \\ $\bar{\omega}=1/2$ & 70 & 70 & 70 & 65 & 50 & 40 \\ \hline \end{tabular} \end{center} \label{tabwaves} \end{table} We consider first the collision of atoms and molecules that are both in their maximally stretched states, $(f_a,m_{f_a}) = (f_d,m_{f_d})=(2,+2)$. The s-wave incident channel has $L=0$ and $M_L=0$, so corresponds to $M_{\mathcal J}=+4$. The set of all $M_{\mathcal J}=+4$ channels with a defined total parity $p$, including all allowed $M_L$ projections, as well as all the $f_a$, $f_d$, $m_{f_a}$ and $m_{f_d}$ quantum numbers (or equivalently $m$, $m_{s_a}$ and $m_{i_a}$, $m_{i_d}$) contains 23433 channels. This makes an exact calculation infeasible. We have therefore introduced two approximations to reduce this number. First, the projection $m_{f_d}$ for channels with $j>5/2$ is fixed to its initial value; the suppressed projections increase the degeneracy and might split rotational Feshbach resonances, but numerical tests show that making this approximation does not substantially alter the overall magnitude of the cross sections reported here. This approximation reduces the number of channels to 10555. Second, for propagation at large $R$ we disregard channels that are ``locally closed'', that is, whose centrifugal barrier exceeds the incident energy by more than a given amount, which is adjusted until convergence is reached. Even with these approximations, and the suppression of ion-pair channels, it is impractical to perform full calculations. However, dividing the radial solution of the Schr{\"o}dinger equation into an inner region ($R<R_0$) and an outer region ($R>R_0$) makes it possible to use different basis sets (``frames'') in each of them. Frame transformations have previously been employed for the simpler problem of alkaline earth+alkaline earth collisions \cite{Burke} and for electron-molecule collisions \cite{Fano}. They will be an essential tool for introducing hyperfine structure into atom-molecule and molecule-molecule collision problems \cite{Nueva}. The calculation is thus divided into two different steps: \begin{itemize} \item At short range, $R< R_0$, the hyperfine interaction is small compared to the depths of the short-range potentials. We therefore represent the Hamiltonian in basis set B3 (see Appendix 2), where the potential is diagonal in the nuclear spin projections $m_{i_{a}}$ and $m_{i_{d}}$. There are 8 such blocks (since $(2i_d+1)(2i_a+1)=8$). We ignore elements of $ \hat{H}_{\rm th}$ that couple different pairs of $m_{i_{a}}$ and $m_{i_{d}}$. This reduces a single $(8N)\times (8N)$ calculation to 8 $(N\times N)$ calculations.
At $R=R_0$ the complete $Y$ matrix can be rebuilt using the partial $Y_{m_{i_{a}},m_{i_{d}}}$ matrices obtained from each subset, spanned by $ |s_d \bar{\lambda} \bar{\omega} \epsilon j m \rangle |s_a m_{s_a} \rangle | L M_L\rangle $: \begin{widetext} \begin{eqnarray} \lefteqn{\langle L M_L | \langle i_a m_{i_{a}} | \langle s_a m_{s_a} | \langle i_d m_{i_{d}} | \langle s_d \bar{\lambda} \bar{\omega} \epsilon j m | Y |s'_d \bar{\lambda}' \bar{\omega}' \epsilon' j' m' \rangle | i_d' m_{i_{d}}' \rangle |s_a' m'_{s_a} \rangle | i_a' m'_{i_a} \rangle| L' M_L'\rangle } \nonumber \hspace{1.5cm}\\ &=& \delta_{i_a i'_a} \delta_{m_{i_{a}} m'_{i_{a}} } \delta_{i_d i'_d} \delta_{m_{i_{d}} m'_{i_{d}} } \langle L M_L | \langle s_a m_{s_a} | \langle s_d \bar{\lambda} \bar{\omega} \epsilon j m | Y_{m_{i_{a}},m_{i_{d}}} |s'_d \bar{\lambda}' \bar{\omega}' \epsilon' j' m' \rangle |s_a' m'_{s_a} \rangle | L' M_L'\rangle. \end{eqnarray} \end{widetext} This $Y$ matrix is then transformed into the asymptotic basis set B1. We have found that this frame transform provides a very accurate way to include the Rb-OH hyperfine structure in reduced calculations using only covalent channels. Moreover, owing to the depth of the short-range potentials, $Y$ is weakly dependent on energy and can be interpolated in the inner region. \item At long range, $R > R_0$, the $Y$ matrix, expressed already in B1, is propagated to large distances to obtain the $S$ matrix. We invoke an alternative approximation: the coupling between different asymptotic rotational and fine-structure states diminishes at longer distances and can be neglected. Thus the subset corresponding to the ground rotational diatomic state ($j=3/2$, $\bar{\omega}=3/2$) can be propagated by itself to asymptotic distances. We reinstate the full hyperfine Hamiltonian in this region. \end{itemize} It is worth noting that basis set B2, defined in Appendix 2, could also be the basis for another frame transformation. Approximate decoupling of singlet and triplet channels in the inner region would allow a partition of the numerical effort into two smaller groups of channels, and the introduction of the ionic channels in the singlet one. \vspace{-.2cm} \section{Scattering cross sections}\label{CR} We have calculated elastic and state-resolved inelastic cross sections for two different incident channels for collisions of Rb atoms with OH molecules. These are shown as C1 and C2 in Figure \ref{levelsh}. Although we do not include the effects of external fields explicitly in this work, we consider states in which both partners are weak-field seeking in a magnetic field, and thus magnetically trappable. The OH hyperfine states that can be trapped at laboratory magnetic fields are $(f_d=2, m_{f_d}=+2,+1,0)$ and $(f_d=1,m_{f_d}=+1)$, while the corresponding states for Rb are $(f_a=2, m_{f_a}=+2,+1,0)$; the Rb state $(f_a=1, m_{f_a}=-1)$ is trappable for fields smaller than $\approx 1250$ gauss. In the first case, designated C1 in Figure \ref{levelsh}, both partners are in maximally stretched states: OH$(\epsilon=f, f_d=2, m_{f_d}=2)$ + Rb$(f_a=2, m_{f_a}=2)$. This case corresponds to the highest threshold correlating with OH in its ground rotational state. In this case, OH is also electrostatically trappable. The second case, designated C2 in Figure \ref{levelsh}, correlates with the lowest asymptotic threshold in the absence of an external field: OH$(\epsilon=e, f_d=1, m_{f_d}=1)$ + Rb$(f_a=1, m_{f_a}=-1)$.
Both partners are again magnetically trappable, although in a field they will no longer be the lowest energy states. Figure \ref{adiab} shows selected adiabatic curves correlating with the lower rotational states for the collision with both partners in maximally stretched states ($p=+1,M_{\mathcal J}=+4$). In our calculations we take $R_0=17\ a_0$. As described above, for $R < R_0$ the hyperfine interaction is partially neglected. For $R > R_0$, only hyperfine channels with OH in its ground rotational state are included. \begin{figure} \setlength{\unitlength}{4mm} \centerline{\includegraphics[width=1.05\linewidth,height=1.2\linewidth,angle=-0]{curvas.ps}} \hspace{-1.5cm} \caption{Adiabatic curves correlating with the lower rotational states for the collision with both partners in maximally stretched states ($p=+1,M_{\mathcal J}=+4$): for $R < 17\ a_0$ the hyperfine interaction is partially neglected; at long range, $R > 17\ a_0$, only channels with OH in its ground rotational state are considered and the full hyperfine structure is included (the inset gives a more detailed view of the latter regime). \label{adiab}} \end{figure} \vspace{-.2cm} \subsection{General behavior: total cross sections} We begin by showing in Figure \ref{crosses0} the total cross sections for incident channels C1 and C2. A brief description of these has been reported previously \cite{PRL}. Below $10^{-4}$~K for incident channel C1, or $10^{-5}$~K for C2, the Wigner threshold law applies. Namely, as the energy goes to zero, cross sections corresponding to elastic and isoenergetic processes approach a constant value, while those for exoergic processes vary as $1/\sqrt{E}$, rapidly exceeding elastic cross sections. No quantitative predictive power is expected in this region. Rather, the values of threshold cross sections are strongly subject to details of the PES, and are typically only uncovered by experiments. At higher energies, above $10^{-2}$\ K, where many partial waves contribute ($l \ge 4$ for C1 and $l \ge 5$ for C2), the behavior of the cross sections changes and inelastic processes are well described by a semiclassical Langevin capture model \cite{17, PRL}, \begin{eqnarray}\sigma_{\rm Langevin}(E)=3 \pi \left( \frac{C_6}{4E} \right)^{1/3}. \end{eqnarray} This cross section is also plotted in Figure \ref{crosses0} (points). The Langevin expression arises as a limit of the exact quantum expression in Eq.\ \ref{nondiag}, with the usual assumptions: the impact parameter takes values in a continuous range, and the height of the centrifugal barrier, determined using only long-range behavior, determines the number of partial waves that contribute for a given energy. Similar behavior has been observed previously in cold collisions, such as $M + M_2$ with $M$ an alkali metal. For Li + Li$_2$, $l>3$ was found to be sufficient for the cross sections to exhibit Langevin behavior \cite{Li2homo}. As can be seen in Figure \ref{crosses0}, the Langevin limit reproduces the general trend across the entire semiclassical energy range. In a Hund's case (a) system like OH, where the electron spin is strongly tied to the internuclear axis, the highly anisotropic PES might be expected to disrupt the spin orientation relative to the laboratory frame completely. As a consequence, inelastic processes are expected to be very likely and the Langevin model should describe well the behavior of Rb-OH and similar systems. A similar upper limit for the elastic cross section is given by four times the inelastic one.
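These unitarity bounds can be made explicit for a single partial wave (a schematic single-channel sketch; the full problem is multichannel). In terms of the $S$-matrix element $S_L$,
\begin{eqnarray}
\sigma^{(L)}_{\rm el} \, = \, \frac{\pi}{k^2}\,(2L+1)\,\left|1-S_L\right|^2\,, \qquad \sigma^{(L)}_{\rm inel} \, = \, \frac{\pi}{k^2}\,(2L+1)\,\left(1-\left|S_L\right|^2\right)\,, \nonumber
\end{eqnarray}
with $\left|S_L\right| \le 1$: complete absorption, $S_L=0$, maximizes the inelastic term, while $S_L=-1$ gives an elastic term four times as large.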
It is easy to verify that, if the inelastic cross section reaches its maximum value, the elastic and inelastic contributions to the cross section must be equal. This behavior is also seen in Figure \ref{crosses0}. The cross sections for incident channel C2 are quite different from those for incident channel C1. For C2, the cross sections are highly structured, exhibiting a large number of Feshbach resonances. Since the atom and the diatom are both in their lowest-energy states, there are plenty of higher-lying hyperfine states to resonate with. However, the pronounced minimum in the elastic cross section at $E \sim 10^{-4}$~K is the consequence of a near-zero s-wave phase shift. \vspace{1.2cm} \begin{figure}[ht] \setlength{\unitlength}{8mm} \begin{picture}(-8,15.0)(14,0) \put( 5, 9){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{crossM0.ps}} \put( 5, 17){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{crossM8.ps}} \end{picture} \caption{Elastic and total inelastic cross sections for the cases analyzed: Panel (a) corresponds to the OH$(\epsilon=f, f_d=2, m_{f_d}=2)$ + Rb$(f_a=2, m_{f_a}=2)$ incident channel (C1). Panel (b) corresponds to the OH$(\epsilon=e, f_d=1, m_{f_d}=1)$ + Rb$(f_a=1, m_{f_a}=-1)$ incident channel (C2). The points indicate the Langevin cross section. \label{crosses0}} \end{figure} \vspace{-.2cm} \subsection{Detailed picture: partial cross sections} Some additional insight into the collision process can be obtained by examining the partial cross sections to various final states. However, there are many of these. Starting in incident channel C1, there are 128 possible outcomes, counting the hyperfine states of both Rb and OH, plus the lambda doublet of OH. To simplify this information, we first break the cross sections into four classes: elastic scattering, scattering in which only the Rb state changes, scattering in which only the OH state changes, and scattering in which both change. These four possibilities are shown in Figure \ref{crosses2} for both incident channels, C1 and C2. In general, all four processes are likely to occur. This further attests to the complete disruption of the spin during a collision. Nevertheless, at the highest energies probed (where results are less sensitive to potential details) there is a definite propensity for the OH molecule to change its internal state more readily than the Rb atom, at least when only one of them changes. This is probably a consequence of the spherical symmetry of the Rb atom, whereby its electronic spin is indifferent to its orientation. By contrast, the electronic angular momentum of OH is strongly coupled to the molecular axis, and will follow its changes in orientation due to the anisotropies in the interaction. \begin{figure}[ht] \setlength{\unitlength}{8mm} \begin{picture}(-8,15.0)(14,0) \put( 5, 9){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{Mtot0change.ps}} \put( 5, 17){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{Mtot8change.ps}} \end{picture} \caption{ Total cross sections for the 4 types of process considered: those where only the OH partner changes its internal state, those where only Rb changes, those where both partners change, and those where neither changes. Panel (a) corresponds to the OH$(\epsilon=f, f_d=2, m_{f_d}=2)$ + Rb$(f_a=2, m_{f_a}=2)$ incident channel (C1). Panel (b) corresponds to the OH$(\epsilon=e, f_d=1, m_{f_d}=1)$ + Rb$(f_a=1, m_{f_a}=-1)$ incident channel (C2).
\label{crosses2}} \end{figure} A more detailed understanding can be obtained by considering Eq.\ \ref{elements}. For Rb the hyperfine projection is given by $m_{f_a} = m_{s_a} + m_{i_a}$, whereas for OH it is given by $m_{f_d} = m_{n}+m_{s_d}+m_{i_d}$. The nuclear spin projections $m_{i_a}$ and $m_{i_d}$ are untouched by potential energy couplings. The potential conserves $M_S=m_{s_a}+m_{s_d}$, so that if the Rb electronic spin $m_{s_a}$ changes, so will the OH electronic spin $m_{s_d}$. On the other hand, OH can also change the projection $m_n$ of the rotational angular momentum $n$, which is absent in Rb. Thus OH has more opportunities to change its internal state than does Rb, and this is reflected in the propensities in Figure \ref{crosses2}. \begin{figure}[ht] \setlength{\unitlength}{4mm} \centerline{\includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{mayores.ps}} \caption{Dominant partial cross sections for the OH$(\epsilon=f, f_d=2, m_{f_d}=2)$ + Rb$(f_a=2, m_{f_a}=2)$ incident channel (C1). \label{crosses}} \end{figure} We first consider incident channel C1, with both collision partners initially in their maximally stretched states. Some of the main outcomes are shown in Fig.\ \ref{crosses}. In the absence of an anisotropic interaction, and remembering that $M_{\mathcal{J}}=m_{f_a}+m_{f_d}+M_L$ is conserved, the sum of the spin projections $M_F=m_{f_a}+m_{f_d}$ would be conserved. Thus inelastic collisions would be impossible: $m_{f_d}$ could not be lowered without raising $m_{f_a}$, but $m_{f_a}$ could not be raised further. However, the anisotropic PES in Rb-OH allows such changes quite readily. There remains, however, a propensity for collisions with small values of $\Delta m_f$, as seen in Figure \ref{OHbaja}. Part (a) shows cross sections that change $m_{f_d}$ for OH without changing $m_{f_a}$ for Rb, while part (b) shows those that change $m_{f_a}$ without changing $m_{f_d}$. As noted above, Rb appears to be more reluctant to change its projection. In fact, since the potential is diagonal in the states of the nuclei, only consecutive values of $m_{f_a}$ are coupled in first order. On the other hand, first-order coupling exists between many different values of $m_{f_d}$. Finally, a decrease in the probability of processes with increasing $\Delta M_F$ can be related to the diminution of anisotropic terms in the potential with increasing angular momentum transfer ($k$ in Eq.\ \ref{elements}). \begin{figure}[ht] \setlength{\unitlength}{8mm} \begin{picture}(-8,17.0)(14,0) \put( 5, 9){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{bajoRb2.ps}} \put( 5, 17){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{bajoOHconcerrados2.ps}} \end{picture} \caption{Panel (a) ((b)): inelastic cross sections for $m_{f_d}$ ($m_{f_a}$) changing collisions. The larger the change, the smaller the cross section. They correspond to the OH$(\epsilon=f, f_d=2, m_{f_d}=2)$ + Rb$(f_a=2, m_{f_a}=2)$ incident channel (C1). \label{OHbaja}} \end{figure} We now consider incident channel C2, OH$(\epsilon=e, f_d=1, m_{f_d}=1)$ + Rb$(f_a=1, m_{f_a}=-1)$. This is the lowest threshold in the absence of a field. Only three channels, degenerate with the initial channel, are possible outcomes at very low energies: $ |\epsilon=e, 1,{+1} \rangle |1,{-1} \rangle $, $ |\epsilon=e, 1,0 \rangle |1,0 \rangle $, and $|\epsilon=e, 1,{-1} \rangle |1,{+1} \rangle $.
Since these states have the same value of $M_F=m_{f_d}+m_{f_a}$ as the initial channel, the processes can occur by ordinary spin exchange with no centrifugal barrier. Partial cross sections for these three channels are shown in Figure \ref{crosses3} (a) over the entire energy range. Intriguingly, inelastic (state-changing) collisions seem to be somewhat suppressed relative to elastic scattering. Suppressed spin-exchange rates would presumably require delicate cancellation between singlet and triplet phase shifts \cite{Burke98}. That such a cancellation occurs in a highly multichannel process is somewhat unexpected. However, examining the matrix elements of the potential reveals that there is no direct coupling between the initial state $ |\epsilon=e, 1,{+1} \rangle |1,{-1} \rangle $ and the final state $ |\epsilon=e, 1,{-1} \rangle |1,{+1} \rangle $. Transitions via potential couplings are therefore a second-order process requiring the mediation of other channels. Many other exit channels are possible, once energy and angular momentum considerations permit them. Cross sections for several such processes are shown in Figure \ref{crosses3} (b). For example, the channels OH$(\epsilon=e, 1, 0)$ + Rb$(1, +1)$ and OH$(\epsilon=e, 1, 0)$ + Rb$(1, -1)$ are not connected to the initial channel by spin exchange, since $M_F$ changes by $\pm 1$. In this case, angular momentum shunts from the molecule into the partial-wave degree of freedom, necessitating an $L=1$ partial wave in the exit channel. Therefore, this process is suppressed for energies below the $p$-wave centrifugal barrier, whose height is 1.6 mK. \begin{figure}[ht] \setlength{\unitlength}{8mm} \begin{picture}(-8,17.0)(14,0) \put( 5, 9){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{fourpr.ps}} \put( 5, 17){\ \includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{threpr.ps}} \end{picture} \caption{Some partial cross sections for the C2 incident channel: OH$(\epsilon=e, f_d=1, m_{f_d}=1)$ + Rb$(f_a=1, m_{f_a}=-1)$. Panel (a) shows the elastic cross section together with partial cross sections for two processes degenerate with the initial channel, which are open at zero collision energy. The processes shown in panel (b) are also degenerate, but are closed by the centrifugal barrier at zero collision energy.\label{crosses3}} \end{figure} We conclude this subsection by stressing the vital importance of including the hyperfine structure in these calculations. Figure \ref{pairs} shows partial cross sections for incident channel C1 scattering into pairs of channels which differ only in the small hyperfine splitting of OH ($\approx 3$~mK). In one example (black and red solid curves), the parity of OH changes from $f$ to $e$, leading to the final channels OH$(\epsilon=e, f_d=1,2 , m_{f_d}=0)$ + Rb$(f_a=2, m_{f_a}=0)$. These channels, distinguished only by their hyperfine quantum number $f_d$, are almost identical at high energies, but quite different at low energies. As a second example, consider the final channels OH$(\epsilon=f, f_d=1,2 , m_{f_d}=0)$ + Rb$(f_a=2, m_{f_a}=+2)$ (green and orange dashed), which preserve the initial parity. The process with $f_d=1$ is exothermic, while the one with $f_d=2$ requires the opening of the partial wave threshold. Only after both channels are open, at higher energies, do the cross sections become almost identical.
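As an aside on the barrier height quoted above: using only the leading long-range dispersion term, the $L$-wave barrier of $V_{\rm eff}(R)=B/R^2-C_6/R^6$, with $B=\hbar^2 L(L+1)/(2\mu)$, sits at $R_b^4=3C_6/B$ and has height
\begin{eqnarray}
E_b \, = \, \frac{2}{3}\,\frac{B}{R_b^2}\, = \, \frac{2}{3\sqrt{3}}\,\frac{B^{3/2}}{C_6^{1/2}}\,. \nonumber
\end{eqnarray}
For the Rb-OH reduced mass and a $C_6$ of a few hundred atomic units (an illustrative figure, not a value fitted to our surfaces), this gives $E_b$ in the 1--2 mK range for $L=1$, consistent with the 1.6 mK used above.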
\begin{figure}[ht] \setlength{\unitlength}{4mm} \centerline{\includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{F1F2.ps}} \caption{Partial cross sections for two pairs of processes corresponding to incident channel C1 (OH$(\epsilon=f, f_d=2, m_{f_d}=2)$ + Rb$(f_a=2, m_{f_a}=2)$) and final channels differing only in the $f_d$ quantum number. Their values converge at high kinetic energies but are very different in the cold regime. \label{pairs}} \end{figure} \subsection{The harpooning process}\label{harp} We have not yet fully incorporated the ion-pair channel in our calculations, but it is instructive to assess its influence. In the harpooning model it is usually understood that, if the electron transfer takes place at long range, large collision cross sections result \cite{Ber}. However, for Rb-OH the crossing point $R_0 \approx 6$~\AA\ gives a geometric cross section of only $\sigma=4 \pi R_0^2 \approx 4.5 \cdot 10^{-14}$ cm$^2$. This is substantially smaller than the quantum-mechanical cross sections we have estimated. Thus while harpooning may enhance cross sections substantially at thermal energies, it is likely to have less overall influence in cold collisions. The inelastic results above place cross sections near the semiclassical Langevin upper limit. The harpooning mechanism is unlikely to increase their magnitude, but could change their details. For example, it might be expected that the harpooning mechanism would distribute the total probability more evenly among the different spin orientations. We have explored the influence of the harpooning mechanism in a reduced calculation where the hyperfine structure is neglected. This makes the calculation tractable even with the ion-pair channel included. The resulting elastic and total inelastic cross sections, for incident channel OH$(\epsilon=f, j=3/2, m=3/2)$ + Rb$(s_a=1/2, m_{s_a}=1/2)$ (maximally stretched in the absence of nuclear degrees of freedom) are shown in Figure \ref{ionico}. They are compared to the results obtained without the ion-pair channels. In the semiclassical region, $E > 1$~mK, the general order of magnitude of the cross sections is preserved, although the detailed features are not. In the Wigner regime, by contrast, including the ion-pair channel makes a quite significant change, because completely different phase shifts are generated in low-lying partial waves. We stress again, however, that we do not expect this or any model to have predictive power for the values of low-energy cross sections until the ambiguities in absolute scattering lengths are resolved by experiments. \begin{figure}[ht] \setlength{\unitlength}{4mm} \centerline{\includegraphics[width=.85\linewidth,height=0.95\linewidth,angle=-90]{ionico.ps}} \caption{Effect of the inclusion of ion-pair channels in a reduced (neglecting hyperfine structure) calculation. Langevin cross section is also shown for comparison. \label{ionico}} \end{figure} \section{Conclusions and prospects} Based on the results above, we can draw several general conclusions concerning both the feasibility of observing Rb-OH collisions experimentally in a beam experiment and the possibility of sympathetic cooling of OH using ultracold Rb. We can assert with some confidence that Rb-OH cross sections at energies of tens of mK, typical of Stark decelerators, will be on the order of $10^{-13}\ \rm{cm}^{2}$. This is probably large enough to make the collisions observable. 
In round numbers, consider a Rb MOT with density $n_{\rm Rb} = 10^{10}$ cm$^{-3}$, and an OH packet emerging from a Stark decelerator at 10 m/s, with density $n_{\rm OH} = 10^{7}$ cm$^{-3}$. Further assume that the Rb is stored in an elongated MOT that allows an interaction distance of 1 cm as the OH passes through. In this case, a cross section $\sigma = 10^{-13}$ cm$^2$ implies that approximately $10^{-3}$ of the OH molecules are scattered out of the packet. This quantity, while small, should be observable in repeated shots of the experiment \cite{Yun}. As for sympathetic cooling of OH using cold Rb, it seems unlikely that this will be possible for species in their stretched states, as in channel C1. The inelastic rates are almost always comparable to, or greater than, the elastic rates. The situation becomes even worse as the temperature drops and cross sections for exoergic processes diverge. Every collision event {\it might} serve to thermalize the OH gas, but is equally likely to remove the molecule from the trap altogether, or contribute to heating the gas. For collision partners in the low-energy states of incident channel C2, the situation is not as bleak, at least in the low-energy limit, since inelastic cross sections do not diverge and may be fairly small. However, we have focused on the weak-magnetic-field-seeking state of OH, which will not be the lowest-energy state in a magnetic field. Thus exoergic processes again appear, and inelastic rates will again become unacceptably large. Under these circumstances, rather than ``sympathetic cooling,'' the gas would exhibit ``simply pathetic cooling.'' From these considerations, it seems that the only way to guarantee inelastic rates sufficiently low to afford sympathetic cooling would be to remove all exoergic inelastic channels altogether. Thus both species, atom and molecule, should be trapped in their absolute ground states, using optical or microwave dipole traps \cite{Mille} or possibly an alternating current trap \cite{actrap}. Sympathetic cooling would then be forced due to the absence of any possible outcome other than elastic and endoergic processes. The latter might produce a certain population of hyperfine-excited OH molecules, which could then give up their internal energy again and contribute towards heating. However, the molecules are far more likely to collide with atoms than with other molecules, so that this energy would eventually be carried away as the Rb is cooled. Several aspects of the Rb-OH collision problem require further work. Foremost among these is the possibility that inelastic rates could be reduced or controlled by applying electric and/or magnetic fields. This may result partly from the simple act of moving Feshbach resonances to different energies, or from altering the effective coupling between incident and final channels, as has been hinted at previously \cite{Tick, Krems}. In addition, the influence of the ion-pair channel may be significant. There is the possibility, for example, that the highly polar RbOH molecule might be produced by absorption followed by either spontaneous or stimulated emission \cite{9}. Finally, new phenomena are possible, such as analogues of the field-linked states that have been predicted in dipole-dipole collisions \cite{31a, 31b}. \acknowledgements ML and JLB gratefully acknowledge support from the NSF and the W. M. Keck Foundation, and PS from the M\v{S}MT \v{C}R (grant No.
LC06002) \label{sec4} \subsection*{Appendix 1} As described above, we encountered difficulties with non-degeneracies between the $^1A'$ and $^1A^{\prime\prime}$ components of the $^1\Pi$ state at linear geometries. These could have been avoided by carrying out the linear calculations in $C_{2v}$ rather than $C_s$ symmetry, but $C_s$ symmetry was essential to avoid discontinuities at $\theta=0$ and $180^\circ$. To understand the non-degeneracies it is essential to understand the procedure used to generate the basis set used in the CISD calculation. \begin{enumerate} \item The molecular orbital basis set is partitioned into internal and external sets, and the internal orbitals are in turn partitioned into closed and active sets. \item A reference space is generated, including all possible arrangements of the $N$ available electrons among the active orbitals that give states of the specified symmetry and spin multiplicity. The closed orbitals are doubly occupied in all reference configurations. \item A set of $(N$--2)-electron states is generated by all possible 2-electron annihilations from the reference states (with no symmetry constraints). Some of the closed orbitals may be designated as core orbitals, in which case annihilations from them are not included. \item A set of $(N$--1)-electron states is generated by all possible 1-electron additions to internal orbitals of the $(N$--2)-electron states (with no symmetry constraints). \item The final CI basis includes states of 3 different classes: \begin{itemize} \item {\it Internal} states are generated by all possible 1-electron additions to the $(N$--1)-electron states, with the extra electron in an internal orbital, that give states of the specified symmetry and spin multiplicity; \item {\it Singly external} states are generated by all possible 1-electron additions to the $(N$--1)-electron states, with the extra electron in an external orbital, that give states of the specified symmetry and spin multiplicity; \item Internally contracted {\it doubly external} states are generated by 2-electron excitations into the external space from a reference function obtained by solving a small CI problem in the reference space. \end{itemize} \end{enumerate} The non-degeneracies between the $^1A'$ and $^1A^{\prime\prime}$ components of the $^1\Pi$ state arise from two different sources. First, there is a part of the non-degeneracy due to the internal and singly external configurations. At linear geometries there are both $\Sigma$-type and $\Pi$-type reference configurations. Both types are included for $A'$ symmetry, but only the $\Pi$-type reference configurations are included for $A^{\prime\prime}$ symmetry. In the $A'$ calculation, therefore, there are additional $\Pi$-type internal and singly external configurations that arise from $\Sigma$-type reference configurations. This effect could in principle be suppressed in MOLPRO by including reference configurations of both symmetries in both calculations at all geometries, but this was prohibitively expensive in computer time. In addition, as described below, it is responsible for only about 20\% of the total non-degeneracy. Secondly, there is a part of the non-degeneracy due to the (contracted) doubly external configurations. It must be remembered that we wish to transform the $1A'$ and $2A'$ singlet states into a diabatic representation. To do this meaningfully, we need them to be calculated using identical basis sets. 
If we do the MRCI calculations separately, this is not true: even if the reference space is the same, each state produces a {\it different} set of internally contracted doubly external states. It is thus necessary to calculate the two $^1A'$ states in the {\it same} MRCI block, so that the basis set contains both sets of contracted functions. However, this results in a larger and more flexible basis set of contracted doubly external functions than is generated in the $^1A^{\prime\prime}$ case. It is helpful to document the magnitude of the non-degeneracies in various possible calculations. For RbOH at $\theta=0$ and $R=12$\,\AA: \begin{itemize} \item If calculations are carried out in $C_{2v}$ symmetry, the ion-pair and covalent states have different symmetry and the non-degeneracy is not present; \item For $C_s$ symmetry with the two $^1A'$ states calculated in a single MRCI block, both sources of non-degeneracy are present and the non-degeneracy is $22.78\,\mu E_h$; \item For $C_s$ symmetry with the two $^1A'$ states calculated in separate MRCI blocks, only the first source of non-degeneracy is present and the non-degeneracy is $4.08\,\mu E_h$; \item For $C_s$ symmetry with the two $^1A'$ states calculated in a single MRCI block, but reference configurations of both symmetries included, only the second source of non-degeneracy is present and the non-degeneracy is $18.70\,\mu E_h$; \item For $C_s$ symmetry with the two $^1A'$ states calculated in separate MRCI blocks, with reference configurations of both symmetries included, there is no non-degeneracy. \end{itemize} \subsection*{Appendix 2} The interaction potential does not involve the spins of the nuclei and is diagonal in the total electronic spin $S$. This makes it expedient to define two additional basis sets. Basis set B2 is based on Hund's case (b) quantum numbers for the molecule: the orbital angular momentum of the electron and the rotational angular momentum couple to form $n$, with projection $\lambda$ on the internuclear axis. $n$ coupled to the spin of the electron $s_d$ would give us $j$ for the diatomic fragment, as in basis set B1. Instead, in B2 we couple $s_d$ with $s_a$ to obtain the total spin of the electrons, described by kets $|S M_S \rangle$. However, we leave aside the states of the nuclei, which are represented by $ | i_a m_{i_{a}} \rangle | i_d m_{i_{d}} \rangle$, with $m_{i_{a}}$ and $ m_{i_{d}} $ being the projections of the nuclear spin of the atom and diatom in the laboratory frame. Basis set B2w is then given by \begin{eqnarray} | B2w \rangle = | (s_d s_a) S M_S \rangle | n m_n \lambda \rangle | i_d m_{i_{d}} \rangle | i_a m_{i_{a}} \rangle | L M_L \rangle, \end{eqnarray} where $ |n m_n \lambda \rangle$ represents $\sqrt{\frac{2n+1}{8 \pi^2}} D_{m_n \lambda }^{n*}(\alpha,\beta,\gamma)$. The parity operator $E^{*}$ acts on these states as follows: \begin{eqnarray} E^{*} | (s_d s_a) S M_S \rangle | n m_n \lambda \rangle | i_d m_{i_{d}} \rangle | i_a m_{i_{a}} \rangle | L M_L \rangle= \nonumber \\ (-1)^{n+L} | (s_d s_a) S M_S \rangle | n m_n -\lambda \rangle | i_d m_{i_{d}} \rangle | i_a m_{i_{a}} \rangle | L M_L \rangle,\nonumber \\ \end{eqnarray} which allows us to construct combinations of defined parity, \begin{eqnarray} | B2p \rangle = | (s_d s_a) S M_S \rangle | n m_n \bar{\lambda} \epsilon \rangle | i_d m_{i_{d}} \rangle | i_a m_{i_{a}} \rangle | L M_L \rangle. \end{eqnarray} The third basis set, B3, is intermediate between B1 and B2.
Here the molecule is again labeled by quantum numbers corresponding to a Hund's case (a) molecule, as in B1, \begin{eqnarray} | B3w \rangle = |s_d \sigma \rangle | \lambda \rangle|j m \omega \rangle | i_d m_{i_{d}} \rangle |s_a m_{s_a} \rangle | i_a m_{i_a} \rangle| L M_L\rangle. \end{eqnarray} However, the part describing the OH fragment is taken as $|j m \omega \rangle | \lambda \rangle |s_d \sigma \rangle$, that is, with signed values of $\lambda$, $\sigma$, and $\omega$. In this case, the parity operator $E^{*}$ acts as follows, \begin{align} &E^{*} |s_d \sigma \rangle | \lambda \rangle|j m \omega \rangle | i_d m_{i_{d}} \rangle |s_a m_{s_a} \rangle | i_a m_{i_{a}} \rangle | L M_L\rangle \nonumber \\ &= (-1)^{j-s_d+L} \nonumber \\ &\times |s_d -\sigma \rangle | -\lambda \rangle|j m -\omega \rangle | i_d m_{i_{d}} \rangle |s_a m_{s_a} \rangle | i_a m_{i_{a}} \rangle| L M_L\rangle, \end{align} so that we can again build combinations of defined parity, \begin{eqnarray} | B3p \rangle = |s_d \bar{\lambda} \bar{\omega} \epsilon j m \rangle | i_d m_{i_{d}} \rangle |s_a m_{s_a} \rangle | i_a m_{i_{a}} \rangle| L M_L\rangle. \end{eqnarray} The change from B2w to B3w, leaving aside the partial wave and nuclear spin states, is given by \begin{widetext} \begin{eqnarray} &|s_d \sigma \rangle& | \lambda \rangle| j m \omega \rangle |s_a m_{s_a} \rangle \nonumber \\ &=& \sum_{m_{s_d}} \sum_{n} \sum _{S} \sum _{M_S} \left(\begin{array}{ccc} s_d & s_a & S \\ m_{s_d} & m_{s_a} & -M_S\end{array}\right) \left(\begin{array}{ccc} s_d & n & j \\ m_{s_d} & m_n & -m \end{array} \right) \left(\begin{array}{ccc} s_d & n & j \\ - \sigma & -\lambda & \omega \end{array} \right) \\ &\times& (-1)^{(m- \omega + s_d - s_a +M_S)} | n m_n \lambda \rangle | (s_d s_a) S M_S \rangle \nonumber. \end{eqnarray} \end{widetext}
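As a practical aside (not part of the derivation above), the $3j$ coefficients entering such basis changes can be sanity-checked numerically through their orthogonality relation, $\sum_{m_1 m_2}(2j_3+1)\left(\begin{array}{ccc} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3\end{array}\right)\left(\begin{array}{ccc} j_1 & j_2 & j_3' \\ m_1 & m_2 & m_3'\end{array}\right)=\delta_{j_3 j_3'}\delta_{m_3 m_3'}$, which guarantees that transformations like the one above are unitary. A minimal sketch using the \texttt{sympy} computer-algebra library (assumed available):
\begin{verbatim}
from sympy import S, Rational
from sympy.physics.wigner import wigner_3j

# Orthogonality of 3j symbols, e.g. for j1 = 1/2 coupled to j2 = 3/2:
#   sum_{m1,m2} (2 j3 + 1) 3j(j1 j2 j3; m1 m2 m3) 3j(j1 j2 j3'; m1 m2 m3')
#     = delta(j3, j3') delta(m3, m3')
j1, j2 = Rational(1, 2), Rational(3, 2)

def overlap(j3, m3, j3p, m3p):
    total = S.Zero
    m1 = -j1
    while m1 <= j1:
        m2 = -j2
        while m2 <= j2:
            total += (2 * j3 + 1) \
                * wigner_3j(j1, j2, j3, m1, m2, m3) \
                * wigner_3j(j1, j2, j3p, m1, m2, m3p)
            m2 += 1
        m1 += 1
    return total

print(overlap(1, 0, 1, 0))   # prints 1
print(overlap(1, 0, 2, 0))   # prints 0
\end{verbatim}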
1,108,101,565,363
arxiv
\section{Introduction} \def\mathcal{N}{\mathcal{N}} The recent developments on the duality between $\mathcal{N}=6$ superconformal Chern-Simons theory in three dimensions and superstrings moving on ${\rm AdS}_4 \times {\mathbb P}^3$ \cite{Aharony:2008ug, Benna:2008zy,Schwarz:2004yj,Bagger:2007vi,Bagger:2006sk,Gustavsson:2007vu,Gustavsson:2008dy,Distler:2008mk,Lambert:2008et} have prompted the study of superstrings on ${\rm Osp}({\mathcal N}|4)$ backgrounds \cite{Arutyunov:2008if, Stefanski:2008ik,Fre:2008qc, Bonelli:2008us}. The main issue is of course the integrability of the system, and this has already been studied in a series of papers \cite{Minahan:2008hf,Gaiotto:2008cg,Bak:2008cp,Gromov:2008bz,Gromov:2008qe,Ahn:2008aa,Ahn:2008hj,Lee:2008ui,Astolfi:2008ji,Chen:2008qq,Ahn:2008gd,Grignani:2008te}. On the other hand, one would also like to consider the string theory in a framework where all symmetries are manifest and which takes the RR fields of the background properly into account. In \cite{Bonelli:2008us}, the limit of large RR fields is analyzed and the relation with a topological model on the Grassmannian ${\rm Osp}(6|4) / {\mathrm SO}(6) \times {\mathrm Sp}(4)$ is exhibited. The exactness of the background is also discussed in \cite{Bonelli:2008us}. The pure spinor formalism is well suited to the present situation and in a previous paper \cite{Fre:2008qc} two of the present authors provided the pure spinor version of the ${\rm AdS}_4 \times {\mathbb P}^3$ sigma model, described as the coset space ${\rm Osp}(6|4)/{\rm SO}(1,3) \times {\rm U}(3)$. Furthermore, the four authors published another paper \cite{D'Auria:2008ny} where a systematic study of the pure spinor superstring on type IIA backgrounds has been performed. This analysis is based on the previous studies by Berkovits and Howe \cite{Berkovits:2001ue}, by Oda and Tonin \cite{Oda:2001zm}, and on the geometric (a.k.a. rheonomic) formulation of supergravity \cite{Castellani:1991et}. There it has been shown how to derive from the geometrical formulation of supergravity (in the type IIA case) the pure spinor sigma model and the corresponding pure spinor constraints \cite{Psconstra}. It has been proved that the action is BRST invariant and, only in the case of type IIA, has a peculiar structure, since it can be written in terms of four pieces: the Green-Schwarz action, a $Q$-exact piece, a $\bar Q$-exact piece and a $Q\bar Q$-exact piece. This allows us to derive the complete expression of the sigma model where all superfields are made explicit. One of the advantages of the geometrical formulation of supergravity is that it provides a superspace framework where all bosonic fields are extended to superfields and the rheonomic conditions ensure the integrability of the extension, leading to the correct field content. The advantage lies in the fact that one can very easily read off the sigma model action in terms of the background solution. As an example, here we derive the pure spinor sigma model for the ${\rm AdS}_4 \times {\mathbb P}^3$ background. In this case we have to take into account the RR field strengths ${\bf G}^{[2]}$ and ${\bf G}^{[4]}$, which are respectively proportional to the K\"ahler form on ${\mathbb P}^3$ and to the Levi-Civita invariant tensor in ${\rm AdS}_4$. This background has 24 Killing spinors parametrized by the combinations $\chi_x \otimes \eta^A$, where $\chi_x$ are the Killing spinors of $\mathrm{AdS}_4$ and $\eta^A$ are the 6 Killing spinors of ${\mathbb P}^3$.
Therefore, it is convenient to use a superspace with 24 fermionic coordinates. Now, the problem is whether this superspace is sufficient to provide a complete description of the supergravity states and whether the vertex operators constructed in terms of this superspace describe on-shell ${\rm AdS}_4 \times {\mathbb P}^3$-supergravity fluctuations. It is established that all supergravity models with more than 16 supercharges are described by an on-shell superspace, since an auxiliary-field formulation does not exist, and therefore we expect that the 24-extended superspace is sufficient for the present formulation. There is also another aspect to be noticed: the formulation of GS superstrings on the same coset has been studied extensively in \cite{Arutyunov:2008if} and it has been argued that 24 fermions are indeed sufficient to formulate the model. Indeed, $\kappa$-symmetry removes exactly 8 fermions leading to a supersymmetric model. In our case, $\kappa$-symmetry is replaced by BRST symmetry plus pure spinor constraints, so that we have to check whether the pure spinors satisfying the new constraints \cite{Psconstra} cancel the central charge. In fact, we will see that by reducing the spinor space from 32 dimensions to the 24 dimensions adapted to the present background, there exists a solution of the pure spinor constraints with only 14 degrees of freedom, matching the bosonic and fermionic degrees of freedom. In addition, by means of the formalism constructed in \cite{D'Auria:2008ny}, we provide an explicit expression for the sigma model where all couplings are exhibited. We devote particular attention to the quartic part of the action for the ghosts.\par The paper is organized as follows. In Section \ref{s2} we review the description of Type IIA supergravity in terms of its Free Differential Algebra (FDA) in the string frame and the corresponding rheonomic parametrization. In Section \ref{s3} we describe the compactification of type IIA on $\mathrm{AdS}_4 \, \times \, {\mathbb{P}}^3$. Finally, in Section \ref{s4} we give the complete pure spinor superstring action on $\mathrm{AdS}_4 \, \times \, {\mathbb{P}}^3$. The reader is referred to the appendices for a definition of the $D=4$ and $D=6$ spinor conventions and for some useful formulae. \section{Summary of Type IIA Supergravity and of its FDA} \label{s2} In order to pursue our programme we have to consider the structure of the Free Differential Algebra of type IIA supergravity, the rheonomic parametrization of its curvatures and the corresponding field equations, which are the integrability conditions of such rheonomic parametrizations. All these necessary ingredients were recently determined in \cite{D'Auria:2008ny}. In this section, we summarize those results, collecting all the items needed for our subsequent discussion.
\subsection{Definition of the curvatures} The p-forms entering the FDA of the type IIA theory are listed below: \par \vskip 0.2cm \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline Form & degree p & f(ermion)/b(oson) & Name & String Sector & Curvature \\ \hline $\omega^{\underline{ab}}$ & 1 & b & spin connection & NS-NS & $R^{\underline{ab}}$ \\ $V^{\underline{a}}$ & 1 & b & Vielbein & NS-NS & $T^{\underline{a}}$ \\ $\psi_{L/R}$ & 1 & f & gravitino & NS-R & $\rho_{L/R}$ \\ $\mathbf{C}^{[1]}$ & 1 & b & RR 1-form & R-R & $\mathbf{G}^{[2]}$ \\ $\varphi$ & 0 & b & dilaton & NS-NS & $\mathbf{f}^{[1]}$ \\ $\chi_{L/R}$ & 0 & f & dilatino & NS-R & $\nabla\chi_{L/R}$ \\ $\mathbf{B}^{[2]}$ & 2 & b & Kalb-Ramond field & NS-NS & $\mathbf{H}^{[3]}$ \\ $\mathbf{C}^{[3]}$ & 3 & b & RR 3-form & R-R & $\mathbf{G}^{[4]}$ \\ \hline \end{tabular} \end{centering} \par \vskip 0.2cm The explicit definition of the FDA curvatures, constructed with the above fields, is displayed below: \begin{eqnarray} R^{\underline{ab}} & \equiv & d\omega^{\underline{ab}} \, - \, \omega^{\underline{ac}}\, \wedge \, \omega^{\underline{cb}}\label{2acurva}\\ T^{\underline{a}} & \equiv & \mathcal{D} \, V^{\underline{a}} \, - \, {\rm i} \, \ft 12 \left(\overline{\psi}_L \, \wedge \,\Gamma^{\underline{a}} \, \psi_L \, + \, \overline{\psi}_R \, \wedge \,\Gamma^{\underline{a}} \, \psi_R \right) \label{2atorsio}\\ \rho_{L,R} & \equiv & \mathcal{D}\psi_{L,R} \, \equiv \, d\psi_{L,R} \, - \, \ft 14 \omega^{\underline{ab}} \, \wedge \, \Gamma_{\underline{ab}} \,\psi_{L,R} \label{a2curlgrav}\\ \mathbf{G}^{[2]} & \equiv & d\mathbf{C}^{[1]} \, + \, \exp\left[ - \, \varphi \right] \, \overline{\psi}_R \, \wedge \, \psi_L \label{F2defi}\\ \mathbf{f}^{[1]} & \equiv & d\varphi \label{dilacurva}\\ \nabla \chi_{L/R} &\equiv& \, d\chi_{L,R} \, - \, \ft 14 \omega^{\underline{ab}} \, \wedge \, \Gamma_{\underline{ab}} \,\chi_{L,R} \label{a2curldil} \end{eqnarray} \begin{eqnarray} \mathbf{H}^{[3]} & = & d\mathbf{B}^{[2]} \, + \, {\rm i} \, \left(\overline{\psi}_L \, \wedge \, \Gamma_{\underline{a}} \, \psi_L \, - \, \overline{\psi}_R \, \wedge \,\Gamma_{\underline{a}} \, \psi_R \right) \, \wedge \, V^{\underline{a}} \label{KRcurva}\\ \mathbf{G}^{[4]} & = & d\mathbf{C}^{[3]} \, + \, \mathbf{B}^{[2]} \, \wedge \, d\mathbf{C}^{[1]}\, \nonumber\\ && - \, \ft 12 \, \exp\left [- \, \varphi \right]\,\left(\overline{\psi}_L \, \wedge \, \Gamma_{\underline{ab}} \, \psi_R \, + \, \overline{\psi}_R \, \wedge \,\Gamma_{\underline{ab}} \, \psi_L \right)\, \wedge \, V^{\underline{a}} \, \wedge \, V^{\underline{b}} \label{Gcurva} \end{eqnarray} The $0$--form dilaton $\varphi$ appearing in eq. (\ref{F2defi}) introduces a dynamic coupling constant. Furthermore, as mentioned in the table, $V^{\underline{a}}$ and $\omega^{\underline{ab}}$ respectively denote the vielbein and the spin connection, which together with the gravitino $\psi_{L/R}$ complete the multiplet of $1$-forms gauging the type IIA super Poincar\'e algebra in $D=10$. The two fermionic $1$-forms $\psi_{L/R}$ are Majorana-Weyl spinors of opposite chirality: \begin{equation} \Gamma_{11} \, \psi_{L/R} \, = \, \pm \, \psi_{L/R}\,. \label{psiLR} \end{equation} The flat metric $\eta_{\underline{ab}}\, = \, \mbox{diag}(+,-,\dots,-)$ is the mostly minus one and $\Gamma_{11}$ is hermitian and squares to the identity $\Gamma_{11}^2=\mathbf{1}$.
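For orientation, setting the fermionic $1$-forms to zero in the above definitions reduces the curvatures of the higher forms to the familiar purely bosonic field strengths,
\begin{eqnarray}
\mathbf{G}^{[2]}\Big|_{\psi=0} \, = \, d\mathbf{C}^{[1]}\,,\qquad
\mathbf{H}^{[3]}\Big|_{\psi=0} \, = \, d\mathbf{B}^{[2]}\,,\qquad
\mathbf{G}^{[4]}\Big|_{\psi=0} \, = \, d\mathbf{C}^{[3]} \, + \, \mathbf{B}^{[2]} \, \wedge \, d\mathbf{C}^{[1]}\,,\nonumber
\end{eqnarray}
so that the gravitino bilinears in eqs. (\ref{F2defi}), (\ref{KRcurva}) and (\ref{Gcurva}) can be viewed as the fermionic completion of these field strengths required by supersymmetry.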
\subsection{Rheonomic parametrizations of the curvatures in the string frame} \label{stringframerheo} As explained in \cite{D'Auria:2008ny} the form of the rheonomic parametrization required in order to construct the pure spinor action of superstrings is that corresponding to the string frame and not that corresponding to the Einstein frame. This parametrization was derived in \cite{D'Auria:2008ny} and it is formulated in terms of a certain set of tensors, which involve both the supercovariant field strengths $\mathcal{G}_{\underline{ab}},\mathcal{G}_{\underline{abcd}}$ of the Ramond-Ramond $p$-forms and also bilinear currents in the dilatino field $\chi_{L/R}$. The needed tensors are those listed below: \begin{eqnarray} {\mathcal{M}}_{\underline{ab}} & = & \Big( \ft 18 \, \exp[ \varphi] \, \mathcal{G}_{\underline{ab}} \, + \, \ft {9}{64} \, \overline{\chi}_R \, \Gamma_{\underline{ab}} \, \chi_L \Big) \nonumber\\ {\mathcal{M}}_{\underline{abcd}} & = & - \, \ft 1{16} \, \exp[ \varphi] \, \mathcal{G}_{\underline{abcd}} - \, \ft{3}{256} \, \overline{\chi}_L \, \Gamma_{\underline{abcd}} \, \chi_R \nonumber\\ \mathcal{N}_0 \, & = & \ft 34 \, \overline{\chi}_L\, \chi_R \nonumber\\ \mathcal{N}_{\underline{ab}}&=& \ft 14 \, \exp[ \varphi] \, \mathcal{G}_{\underline{ab}} \, + \, \ft{9}{32} \, \overline{\chi}_R \, \Gamma_{\underline{ab}} \, \chi_L=2\,{\mathcal{M}}_{\underline{ab}} \nonumber\\ \mathcal{N}_{\underline{abcd}} \, &= & \ft 1{24} \, \exp[ \varphi] \, \mathcal{G}_{\underline{abcd}} + \, \ft{1}{128} \, \overline{\chi}_R \,\Gamma_{\underline{abcd}} \, \chi_L = - \ft 23 {\mathcal{M}}_{\underline{abcd}}\,. \label{Mntensors} \end{eqnarray} The above tensors are conveniently assembled into the following spinor matrices \begin{eqnarray} \mathcal{M}_\pm &=&{\rm i} \, \left(\mp \mathcal{M}_{\underline{ab}} \, \Gamma^{\underline{ab}} \, + \, \mathcal{M}_{\underline{abcd}} \, \Gamma^{\underline{abcd}}\right) \label{Mmatrapm}\\ \mathcal{N}^{(even)}_{\pm} & = & \mp \,\mathcal{N}_0 \, \mathbf{1} \, + \, \mathcal{N}_{\underline{ab}} \, \Gamma^{\underline{ab}} \, \mp \, \mathcal{N}_{\underline{abcd}} \, \Gamma^{\underline{abcd}} \label{pongo}\\ \mathcal{N}^{(odd)}_{\pm} & = &\pm \ft i3\,f_{\underline{a}}\,\Gamma^{\underline{a}}\pm\ft{1}{64}\,\overline{\chi}_{R/L} \, \Gamma_{\underline{abc}}\, \chi_{R/L}\,\Gamma^{\underline{abc}}-\ft i{12}\,\mathcal{H}_{\underline{abc}}\,\Gamma^{\underline{abc}}\label{pongo2}\\ \mathcal{L}^{(odd)}_{a\,\pm}&=& \mathcal{M}_{\mp}\,\Gamma_{\underline{a}}\,\,;\,\,\,\mathcal{L}^{(even)}_{a\,\pm}=\mp\ft 38\,\mathcal{H}_{\underline{abc}}\,\Gamma^{\underline{bc}}\,. 
\end{eqnarray} \par In terms of these objects the rheonomic parametrizations of the curvatures, solving the Bianchi identities can be written as follows: \paragraph{Bosonic curvatures} \begin{eqnarray} T^{\underline{a}} & = & 0 \label{nullatorsioSF}\\ R^{\underline{ab}} & = & R^{\underline{ab}}{}_{\underline{mn}} \, V^{\underline{m}} \,\wedge \, V^{\underline{n}}\, + \, \overline{\psi}_R\,{\Theta}^{\underline{ab}}_{\underline{m}|L}\, \wedge \, V^{\underline{m}}\, + \,\overline{\psi}_L\, {\Theta}^{\underline{ab}}_{\underline{m}|R} \, \wedge \, V^{\underline{m}}\, \nonumber\\ && + \,{\rm i} \, \ft 34 \, \left( \overline{\psi}_L \, \wedge \, \Gamma_{\underline{c}} \, \psi_L \, - \, \overline{\psi}_R \, \wedge \, \Gamma_{\underline{c}} \, \psi_R \right) \, \mathcal{H}^{\underline{abc}}\nonumber\\ && \, + 2i\, \overline{\psi}_L \, \wedge \, \Gamma^{[\underline{a}} \, \mathcal{M}_+ \, \Gamma^{\underline{b}]} \, \psi_R \label{rheoRiemannSF}\\ \mathbf{H}^{[3]} & = & \mathcal{H}_{\underline{abc}} V^{\underline{a}} \, \wedge \, V^{\underline{b}} \, \wedge \, V^{\underline{c}} \label{rheoHSF}\\ \mathbf{G}^{[2]} & = & \mathcal{G}_{\underline{ab}} V^{\underline{a}} \, \wedge \, V^{\underline{b}} \, \, + \, {\rm i} \, \ft 32 \exp\left[ - \, \varphi \right] \, \left(\overline{\chi}_L \, \Gamma_{\underline{a}} \, \psi_L \, +\, \overline{\chi}_R \,\Gamma_{\underline{a}} \, \psi_R \right)\, \wedge \, V^{\underline{a}} \label{rheoFSF}\\ \mathbf{f}^{[1]} & = & f_{\underline{a}} V^{\underline{a}} \, + \, \ft 32 \, \left(\overline{\chi}_R \, \psi_L \, - \, \overline{\chi}_L \, \psi_R \right)\label{rheodilatonFSF}\\ \mathbf{G}^{[4]} & = & \mathcal{G}_{\underline{abcd}} V^{\underline{a}} \, \wedge \, V^{\underline{b}} \, \wedge \, V^{\underline{c}} \, \wedge \, V^{\underline{d}}\label{rheoGSF} \nonumber\\ &&\, - \, {\rm i} \, \ft 12 \, \exp[-\varphi] \, \left(\overline{\chi}_L \, \Gamma_{\underline{abc}} \, \psi_L \, - \, \overline{\chi}_R \, \Gamma_{\underline{abc}} \, \psi_R \right) \, \wedge \, V^{\underline{a}} \, \wedge \, V^{\underline{b}} \, \wedge \, V^{\underline{c}} \end{eqnarray} \paragraph{Fermionic curvatures} \begin{eqnarray} \rho_{L/R} & = &\rho^{L/R}_{\underline{ab}} \, V^{\underline{a}} \, \wedge \, V^{\underline{b}} \, +\mathcal{L}^{(even)}_{\underline{a}\,\pm}\,\psi_{L/R}\wedge V^{\underline{a}}+\mathcal{L}^{(odd)}_{\underline{a}\,\mp}\,\psi_{R/L}\wedge V^{\underline{a}} \, + \, \rho_{L/R}^{(0,2)} \label{rhoparaSF}\\ \nabla\, \chi_{L/R} & = & \mathcal{D}_{\underline{a}} \, \chi_{L/R} \, V^{\underline{a}} +\mathcal{N}^{(even)}_{\pm}\,\psi_{L/R}+\mathcal{N}^{(odd)}_{\mp}\,\psi_{R/L}\,. \label{dechiparaSF} \end{eqnarray} Note that the components of the generalized curvatures along the bosonic vielbeins do not coincide with their spacetime components, but rather with their supercovariant extension. 
Indeed expanding for example the four-form along the spacetime differentials one finds that \begin{eqnarray} \widetilde G_{\mu\nu\rho\sigma} &\equiv&\mathcal{G}_{\underline{abcd}} V^{\underline{a}}_{\mu} \, \wedge \, V^{\underline{b}}_{\nu} \, \wedge \, V^{\underline{c}}_{\rho} \, \wedge \, V^{\underline{d}}_{\sigma} = \partial_{[\mu}C_{\nu\rho\sigma]}^{[4]} + B_{[\mu\nu}^{[2]}\,\partial_\rho C^{[1]}_{\sigma]}-\nonumber\\ &&-\frac{1}{2}\,e^{-\varphi}\,\left(\overline{\psi}_{L[\mu}\,\Gamma_{\nu\rho}\,\psi_{R\sigma]}+\overline{\psi}_{R[\mu}\,\Gamma_{\nu\rho}\,\psi_{L\sigma]}\right) \nonumber\\ &&+ \, {\rm i} \, \ft 12 \, \exp[-\varphi] \, \left(\overline{\chi}_L \, \Gamma_{[\mu\nu\rho} \, \psi_{L\sigma]} \, - \, \overline{\chi}_R \, \Gamma_{[\mu\nu\rho} \, \psi_{R\sigma]} \right)\nonumber\end{eqnarray} where $\widetilde G$ is the supercovariant field strength. \par In the parametrization (\ref{rheoRiemannSF}) of the Riemann tensor we have used the following definition: \begin{eqnarray} \Theta_{\underline{ab|c}L/R} &=& -i \Big( \Gamma_{\underline{a}} \rho_{\underline{bc} R/L} + \Gamma_{\underline{b}} \rho_{\underline{ca} R/L} - \Gamma_{\underline{c}} \rho_{\underline{ab} R/L} \Big)\,. \end{eqnarray} Finally by $\rho^{(0,2)}_{L/R}$ we have denoted the fermion-fermion part of the gravitino curvature whose explicit expression can be written in two different forms, equivalent by Fierz rearrangement: \begin{eqnarray} \rho_{L/R}^{(0,2)}&=& \, \pm \, \ft{21}{32} \, \Gamma_{\underline{a}} \, \chi_{R/L} \, {\bar \psi}_{L/R} \, \wedge \, \Gamma^{\underline{a}} \, \psi_{L/R} \nonumber\\ && \mp \, \ft{1}{2560} \, \Gamma_{\underline{a_1a_2a_3a_4a_5}} \, \chi_{R/L} \, \left ( \overline{\psi}_{L/R} \, \Gamma^{\underline{a_1a_2a_3a_4a_5}} \, \psi_{L/R} \right ) \label{rhoparaSF2}\\ && \mbox{or} \nonumber\\ \rho_{L/R}^{(0,2)}&=& \, \pm \, \ft{3}{8} \, {\rm i}\, \psi_{L/R} \, \wedge \, {\bar \chi}_{R/L} \, \, \psi_{L/R} \, \pm \, \ft{3}{16} \, {\rm i}\, \Gamma_{\underline{ab}} \,\psi_{L/R} \, \wedge \, {\bar \chi}_{R/L} \, \, \Gamma^{\underline{ab}} \, \psi_{L/R}\,. \label{rhoparaSF3} \end{eqnarray} \subsection{Field equations of type IIA supergravity in the string frame} \label{equefilde} The rheonomic parametrizations of the supercurvatures displayed above imply, via Bianchi identities, a certain number of constraints on the inner components of the same curvatures which can be recognized as the field equations of type IIA supergravity in the string frame. These are the equations that have to be solved in constructing any specific supergravity background and read as follows. 
\par We have an Einstein equation of the following form: \begin{eqnarray} \mbox{\emph{$\mathcal{R}$}}_{\underline{ab}} & = & \widehat T_{\underline{ab}}\left( f\right) \, + \, \widehat T_{\underline{ab}}\left( \mathcal{G}_2\right) \,+ \, \widehat T_{\underline{ab}}\left( \mathcal{H}\right)\, + \, \widehat T_{\underline{ab}} \left( \mathcal{G}_4\right) \label{Einsteinus} \end{eqnarray} where the stress-energy tensors on the right-hand side are defined as \begin{eqnarray} \widehat T_{\underline{ab}}\left( f\right) & = &\, - \, \mathcal{D}_{\underline{a}} \, \mathcal{D}_{\underline{b}}\varphi \, + \, \ft 89 \, \mathcal{D}_{\underline{a}} \, \varphi \, \mathcal{D}_{\underline{b}} \, \varphi \, - \, \eta_{\underline{ab}} \left( \ft 16 \Box \, \varphi \, + \, \ft 59 \, \mathcal{D}^{\underline{m}} \, \varphi \, \mathcal{D}_{\underline{m}} \, \varphi \right)\label{dialtostress}\\ \widehat T_{\underline{ab}}\left( \mathcal{G}_2\right) & = & \exp\left[2 \, \varphi \right] \, \mathcal{G}_{\underline{ax}} \, \mathcal{G}_{\underline{by}} \, \eta^{\underline{xy}} \label{Fstresso}\\ \widehat T_{\underline{ab}}\left( \mathcal{H}\right)\, & = & \, - \,\exp\left[\ft 13 \, \varphi \right] \, \left( \ft 98 \, \mathcal{H}_{\underline{axy}} \, \mathcal{H}_{\underline{bwt}} \, \eta^{\underline{xw}} \, \eta^{\underline{yt}} \, - \, \ft 18 \, \eta_{\underline{ab}} \, \mathcal{H}_{\underline{xyz}} \, \mathcal{H}^{\underline{xyz}}\right) \label{Hstresso}\\ \widehat T_{\underline{ab}}\left( \mathcal{G}_4\right) & = & \exp\left[2 \, \varphi \right] \,\left( 6 \,\mathcal{G}_{\underline{ax_1x_2x_3}} \, \mathcal{G}_{\underline{by_1y_2y_3}} \, \eta^{\underline{x_1y_1}} \, \eta^{\underline{x_2y_2}}\, \eta^{\underline{x_3y_3}}\, - \, \ft 12 \, \eta_{\underline{ab}} \, \mathcal{G}_{\underline{x_1\dots x_4}} \, \mathcal{G}^{\underline{x_1\dots x_4}}\right)\,.
\end{eqnarray} Next we have the equations for the dilaton and the Ramond $1$-form: \begin{eqnarray} 0 & = &\Box \, \varphi \, - \, 2 \, f_{\underline{a}} \, f^{\underline{a}} \, + \, \ft 32 \, \exp \left[2 \, \varphi \right] \, \mathcal{G}^{\underline{x_1x_2}} \, \mathcal{G}_{\underline{x_1x_2}}\nonumber\\ & & + \, \ft 32 \, \exp \left[2 \, \varphi \right] \, \mathcal{G}^{\underline{x_1x_2x_3x_4}} \, \mathcal{G}_{\underline{x_1x_2x_3x_4}} \, + \, \ft 34 \, \exp \left[\ft 43\, \varphi \right] \,\mathcal{H}^{\underline{x_1x_2x_3}} \, \mathcal{H}_{\underline{x_1x_2x_3}} \label{Rreq01}\\ 0 & = & \mathcal{D}_{\underline{m}} \,\mathcal{G}^{\underline{ma}}\, - \, \ft 53 \,f^{\underline{m}} \, \mathcal{G}_{\underline{ma}} \, + \, 3 \, \mathcal{G}^{\underline{ax_1x_2 x_3}} \, \mathcal{H}_{\underline{x_1 x_2 x_3}} \label{G2equation} \end{eqnarray} and the equations for the NS $2$-form and for the RR $3$-form: \begin{eqnarray} 0 & = & \mathcal{D}_{\underline{m}} \,\mathcal{H}^{\underline{mab}} \, - \, \ft 23 \, f^{\underline{m}} \, \mathcal{H}_{\underline{mab}}\nonumber\\ && \, - \, \exp\left[ \ft 43 \, \varphi \right]\, \left( 4 \, \, \mathcal{G}^{\underline{x_1x_2 ab}} \, \mathcal{G}_{\underline{x_1 x_2}} \, - \, \ft {1}{24} \, \epsilon^{\underline{abx_1 \dots x_8}} \, \mathcal{G}_{\underline{x_1x_2x_3x_4}} \,\mathcal{G}_{\underline{x_5x_6x_7x_8}}\right) \label{H3equa}\\ 0 & = & \mathcal{D}_{\underline{m}}\,\mathcal{G}^{\underline{ma_1 a_2a_3}} \, + \, \ft 13 \, f_{\underline{m}} \, \mathcal{G}^{\underline{ma_1 a_2a_3}}\nonumber\\ && + \, \exp\left[ \ft 23 \, \varphi \right]\, \left( \ft 32 \, \mathcal{G}^{\underline{m}[\underline{a_1}} \, \mathcal{H}^{\underline{a_2a_3}]\underline{n}} \, \eta_{\underline{mn}} \, \, + \,\ft {1}{48} \, \epsilon^{\underline{a_1a_2a_3x_1 \dots x_7}} \mathcal{G}_{\underline{x_1x_2x_3x_4}} \, \mathcal{H}_{\underline{x_5x_6x_7}}\right)\,. \label{23formeque} \end{eqnarray} Any solution of this bosonic set of equations can be uniquely extended to a full superspace solution involving $32$ theta variables by means of the rheonomic conditions. The implementation of such a fermionic integration is the \textit{supergauge completion}. \section{Compactifications of type IIA on $\mathrm{AdS}_4 \, \times \, {\mathbb{P}}^3$ }\label{s3} In this section we construct a compactification of type IIA supergravity on the following direct product manifold: \begin{equation} \mathcal{M}_{10} \, = \, \mathrm{AdS}_4 \, \times \, {\mathbb{P}}^3 \label{productspace} \end{equation} The local symmetries of the effective theory on this background are encoded in the supergroup $\mathrm{OSp(6|4)}$. The supergauge completion of the $\mathrm{AdS}_4 \, \times \, {\mathbb{P}}^3$ space consists in expressing the ten--dimensional superfields satisfying the rheonomic parametrizations, in terms of the coordinates of the \emph{mini-superspace} associated with this background, namely of the 10 space-time coordinates $x^{\underline{\mu}}$ and the 24 fermionic ones $\theta$, parametrizing the preserved supersymmetries only. This procedure relies on the representation of the \emph{mini-superspace} in terms of the following super--coset manifold \begin{eqnarray} \mathcal{M}^{10|24}&=&\frac{\mathrm{OSp(6|4)}}{\mathop{\rm SO}(1,3)\times\mathop{\rm {}U}(3)}\,.\label{supermanifoldamente} \end{eqnarray} The bosonic subgroup of $\mathrm{OSp(6|4)}$ is ${\rm Sp}(4,\mathbb{R})\times \mathop{\rm SO}(6)$.
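As a simple consistency check of (\ref{supermanifoldamente}), one can count dimensions:
\begin{eqnarray}
{\rm dim}\left[\mathrm{Sp}(4,\mathbb{R})\times\mathrm{SO}(6)\right] \, - \, {\rm dim}\left[\mathrm{SO}(1,3)\times\mathrm{U}(3)\right] \, = \, (10+15)-(6+9) \, = \, 10\,,\nonumber
\end{eqnarray}
matching the ten bosonic coordinates $x^{\underline{\mu}}$, while the $4\times 6=24$ fermionic generators of $\mathrm{OSp(6|4)}$ match the 24 fermionic coordinates $\theta$.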
The Maurer-Cartan 1--forms of $\mathfrak{sp}(4,\mathbb{R})$ are denoted by $\Delta^{xy}$ ($x,y=1,\dots, 4$), the $\mathfrak{so}(6)$ 1--forms are denoted by $\mathcal{A}_{AB}$ ($A,B=1,\dots, 6$) while the (real) fermionic 1-forms are denoted by $\Phi^x_A$ and transform in the fundamental representation of $\mathop{\rm {}Sp}(4,\mathbb{R})$ and in the fundamental representation of $\mathop{\rm SO}(6)$. These forms satisfy the $\mathrm{OSp(6|4)}$ Maurer-Cartan equations: \begin{eqnarray} d \Delta^{xy} + \Delta^{xz} \, \wedge \,\Delta^{ty} \, \epsilon_{zt} &=& -\, 4 \, {\rm i}\, e \, {\Phi}_A^x \, \wedge \, {\Phi}_A^y, \nonumber \\ d {\mathcal{A}}_{AB} - e {\mathcal{A}}_{AC}\, \wedge \, {\mathcal{A}}_{CB} &=& 4 \, {\rm i} {\Phi}_A^x \, \wedge \, {\Phi}_B^y \, \epsilon_{xy}\nonumber\\ d \Phi^x_A \, + \, \Delta^{xy} \, \wedge \, \epsilon_{yz} \, \Phi^z_A \, - \, e \, {\mathcal{A}}_{AB} \, \wedge \,\Phi^x_B &=& 0 \label{orfan26} \end{eqnarray} where \begin{equation} \epsilon_{xy}= - \epsilon_{yx} \, = \, \left(\matrix{ 0 & 0 & 0 & 1 \cr 0 & 0 & -1 & 0 \cr 0 & 1 & 0 & 0 \cr -1 & 0 & 0 & 0 \cr } \right) \label{epsilon} \end{equation} The Maurer-Cartan equations are solved in terms of the super-coset representative of (\ref{supermanifoldamente}). We rely for this analysis on the general discussion in \cite{Fre:2008qc}. It is convenient to express this solution in terms of the 1-forms defined on the bosonic submanifolds $\mathrm{AdS}_4\equiv \frac{\mathop{\rm {}Sp}(4,\mathbb{R})}{\mathop{\rm SO}(1,3)}$, $\mathbb{P}^3\equiv \frac{\mathop{\rm SO}(6)}{\mathop{\rm {}U}(3)}$ of (\ref{supermanifoldamente}) and of the 1--forms on the fermionic subspace of (\ref{supermanifoldamente}). Let us denote by $B^{ab},\,B^{a}$ and by $\mathcal{B}^{\alpha\beta},\,\mathcal{B}^{\alpha}$ the connections and vielbeins on the two bosonic subspaces, respectively. The supergauge completion is finally accomplished by expressing the $p$-forms satisfying the rheonomic parametrization of the FDA in the mini-superspace. This amounts to expressing them in terms of the 1--forms on (\ref{supermanifoldamente}). The final expression of the $D=10$ fields will involve not only the bosonic 1--forms $B^{ab},\,B^{a},\,\mathcal{B}^{\alpha\beta},\,\mathcal{B}^{\alpha}$, but also the Killing spinors on the background. The latter indeed play a special role in this analysis since they can be identified with the fundamental harmonics of the cosets $\mathrm{SO(2,3)/SO(1,3)}$ and $\mathrm{SO(6)/U(3)}$, respectively, \cite{Fre':2006es}. Before writing the explicit solution we need to discuss the Killing spinors on the $\mathrm{AdS}_4\times \mathbb{P}^3$ background. \subsection{Killing spinors of the $\mathrm{AdS_4}$ manifold} As anticipated, one of the main items for the construction of the supergauge completion is given by the Killing spinors of anti de Sitter space. They can be constructed in terms of the coset representative $\mathrm{L_B}$, namely in terms of the fundamental harmonic of the coset $\mathrm{SO(2,3)/SO(1,3)}$. \par The defining equation is given by: \begin{equation} \nabla^{\mathrm{Sp(4)}}\, \chi_x \, \equiv \, \left( d \, - \, \ft 14 \, B^{ab} \, \gamma_{ab} \, - \,2\, e \, \gamma_a \, \gamma_5 \, B^a \,\right) \, \chi_x \, = \, 0 \label{d4Killing} \end{equation} and states that the Killing spinor is a covariantly constant section of the $\mathfrak{sp} (4,\mathbb{R})$ bundle defined over $\mathrm{AdS_4}$.
This bundle is flat, since the vanishing of the $\mathfrak{sp} (4,\mathbb{R})$ curvature is nothing but the Maurer-Cartan equation of $\mathfrak{sp} (4,\mathbb{R})$ and hence corresponds to the structural equations of the $\mathrm{AdS_4}$ manifold. We are therefore guaranteed that there exists a basis of four linearly independent sections of such a bundle, namely four linearly independent solutions of eq.(\ref{d4Killing}) which we can normalize as follows: \begin{equation} \overline{\chi}_x \, \gamma_5 \, \chi_y \, = \, \epsilon_{xy}\,. \label{normakillo4} \end{equation} The 1--forms on $\mathrm{AdS_4}$ are defined in terms of $\mathrm{L_B}$ as follows: \begin{equation} - \, \ft 14 \, B^{ab} \, \gamma_{ab} \, - \,2\, e \, \gamma_a \, \gamma_5 \, B^a \, = \, \Delta_B \, = \, \mathrm{L^{-1}_B} \, d \mathrm{L_B}\,. \label{salamecanino} \end{equation} It follows that the inverse matrix $\mathrm{L^{-1}_B}$ satisfies the equation: \begin{equation} \left( d \, + \, \Delta_B \right) \, \mathrm{L^{-1}_B} \, = \, 0 \label{euchessina} \end{equation} Regarding the first index $y$ of the matrix $\left( \mathrm{L^{-1}_B}\right)^y{}_x$ as the spinor index acted on by the connection $\Delta_B$ and the second index $x$ as the label enumerating the Killing spinors, eq.(\ref{euchessina}) is identical to eq.(\ref{d4Killing}) and hence we have explicitly constructed its four independent solutions. In order to achieve the desired normalization (\ref{normakillo4}) it suffices to multiply by the phase factor $\exp \left[-{\rm i} \, \ft 14 \pi \right]$, namely to set: \begin{equation} \chi^y_{(x)} \, = \, \exp \left[-{\rm i} \, \ft 14 \pi \right] \, \left( \mathrm{L^{-1}_B}\right)^y{}_x \label{killini} \end{equation} In this way the four Killing spinors fulfill the Majorana condition, having chosen a representation of the $D=4$ Clifford algebra in which $\mathcal{C}=i\,\gamma_0$ (see Appendix \ref{d4spinorbasis} for conventions on spinors). Furthermore, since $\mathrm{L^{-1}_B}$ is symplectic it satisfies the defining relation \begin{equation} \mathrm{L^{-1}_B} \, \mathcal{C} \, \gamma_5 \, \mathrm{L_B} \, = \, \mathcal{C} \, \gamma_5 \, \label{defirelazia} \end{equation} which implies (\ref{normakillo4}). \subsection{Explicit construction of $\mathbb{P}^3$ geometry} The complex three-fold $\mathbb{P}^3$ is K\"ahler. Indeed the existence of the K\"ahler $2$-form is one of the essential items in constructing the solution ansatz. \par Let us begin by discussing all the relevant geometric structures of $\mathbb{P}^3$. We now need to construct the explicit form of the internal manifold geometry, in particular the spin connection, the vielbein and the K\"ahler $2$-form. This is fairly easy, since $\mathbb{P}^3$ is a coset manifold: \begin{equation} \mathbb{P}^3 \, = \, \frac{\mathrm{SU(4)}}{\mathrm{SU(3) \times U(1)}} \label{Ptre} \end{equation} so that everything is defined in terms of structure constants of the $\mathfrak{su}(4)$ Lie algebra. The quickest way to introduce these structure constants and their chosen normalization is by writing the Maurer--Cartan equations. We do this by introducing from the start the splitting: \begin{equation} \mathfrak{su}(4) \, = \, \mathbb{H} \, \oplus \, \mathbb{K} \label{splittato} \end{equation} between the subalgebra $\mathbb{H} \, \equiv \, \mathfrak{su}(3) \times \mathfrak{u}(1)$ and the complementary orthogonal subspace $\mathbb{K}$ which is tangent to the coset manifold.
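For orientation, note the simple dimension count behind this splitting (a bookkeeping remark added for clarity):
\begin{equation}
\dim \mathbb{K} \, = \, \dim \mathfrak{su}(4) \, - \, \dim \left( \mathfrak{su}(3) \oplus \mathfrak{u}(1)\right) \, = \, 15 \, - \, 9 \, = \, 6
\end{equation}
which is indeed the real dimension of $\mathbb{P}^3$ and fixes the index ranges used below.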
Hence we name $H^i \, (i=1,\dots, 9)$ a basis of $1$-form generators of $\mathbb{H}$ and $K^\alpha \, (\alpha =1,\dots, 6)$ a basis of $1$-form generators of $\mathbb{K}$. With these notations the Maurer--Cartan equations defining the structure constants of $\mathfrak{su}(4)$ have the following form: \begin{eqnarray} dK^\alpha + \mathcal{B}^{\alpha\beta} \, \wedge \, K^\gamma \, \delta_{\beta \gamma} & = & 0 \nonumber\\ d\mathcal{B}^{\alpha \beta } \, + \, \mathcal{B}^{\alpha \gamma } \, \wedge \, \mathcal{B}^{\delta \beta } \, \delta_{\gamma\delta} \, - \,\mathcal{X}^{\alpha \beta }_{\phantom{\alpha \beta}\gamma\delta} \, K^\gamma \, \wedge \, K^\delta & = & 0 \label{maureCP3} \end{eqnarray} where: \begin{enumerate} \item { the antisymmetric $1$-form valued matrix $\mathcal{B}^{\alpha \beta }$ is parametrized by the $9$ generators of the $\mathfrak{u}(3)$ subalgebra of $\mathfrak{so}(6)$ in the following way: \begin{equation} \mathcal{B}^{\alpha \beta } \, = \, \left( \begin{array}{llllll} 0 & H^9 & -H^8 & H^1+H^2 & H^6 & -H^5 \\ -H^9 & 0 & H^7 & H^6 & H^1+H^3 & H^4 \\ H^8 & -H^7 & 0 & -H^5 & H^4 & H^2+H^3 \\ -H^1-H^2 & -H^6 & H^5 & 0 & H^9 & -H^8 \\ -H^6 & -H^1-H^3 & -H^4 & -H^9 & 0 & H^7 \\ H^5 & -H^4 & -H^2-H^3 & H^8 & -H^7 & 0 \end{array} \right) \label{spinconP3} \end{equation}} \item {the symbol $ \mathcal{X}^{\alpha \beta }_{\phantom{\alpha \beta}\gamma\delta} $ denotes the following constant 4-index tensor: \begin{equation} \mathcal{X}^{\alpha \beta }_{\phantom{\alpha \beta}\gamma\delta} \, \equiv \, \left( \delta^{\alpha \beta }_{\gamma \delta } \, + \, \mathcal{K}^{\alpha \beta } \, \mathcal{K}^{\gamma \delta } \, + \, \mathcal{K}^{\alpha}_{\phantom{\alpha } \gamma} \, \mathcal{K}^{\beta}_{\phantom{\beta } \delta} \,\right) \label{Qtensor} \end{equation}} \item{ the symbol $\mathcal{K}^{\alpha \beta }$ denotes the entries of the following antisymmetric matrix: \begin{equation} \mathcal{{K}} \, = \, \left( \begin{array}{llllll} 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{array} \right) \label{Khat} \end{equation}} \end{enumerate} The Maurer-Cartan equations (\ref{maureCP3}) can be reinterpreted as the structural equations of the $\mathbb{P}^3$ 6-dimensional manifold. It suffices to identify the antisymmetric $1$-form valued matrix $\mathcal{B}^{\alpha \beta }$ with the spin connection and to identify the vielbein $\mathcal{B}^\alpha$ with the coset generators $K^\alpha$, modulo a scale factor $\lambda$: \begin{equation} \mathcal{B}^\alpha \, = \, \frac{1}{\lambda} \, K^\alpha \label{intevielb} \end{equation} With these identifications the first of eq.s (\ref{maureCP3}) becomes the vanishing torsion equation, while the second singles out the Riemann tensor as proportional to the tensor $\mathcal{X}^{\alpha \beta }_{\phantom{\alpha \beta}\gamma\delta}$ of eq.(\ref{Qtensor}).
Indeed we can write: \begin{eqnarray} \mathcal{R}^{\alpha \beta } & = & d\mathcal{B}^{\alpha \beta } \, + \, \mathcal{B}^{\alpha \gamma } \, \wedge \, \mathcal{B}^{\delta \beta } \, \delta_{\gamma\delta} \nonumber\\ & = & \mathcal{R}^{\alpha \beta }_{\phantom{\alpha \beta}\gamma\delta} \mathcal{B}^{\gamma} \, \wedge \, \mathcal{B}^{\delta} \label{2curvaP3} \end{eqnarray} where: \begin{equation} \mathcal{R}^{\alpha \beta }_{\phantom{\alpha \beta}\gamma\delta} \, = \, \lambda^2 \, \mathcal{X}^{\alpha \beta }_{\phantom{\alpha \beta}\gamma\delta} \label{rimanone} \end{equation} \par Using the above Riemann tensor we immediately retrieve the explicit form of the Ricci tensor: \begin{equation} \mathrm{Ric}_{\alpha\beta} \, = \, 4 \, \lambda^2 \,\eta_{\alpha\beta} \label{riccilambda} \end{equation} In view of the compactification ansatz discussed below, it is convenient to rename the scale factor as follows: \begin{equation} \lambda \, = \, 2\, e \label{valuelambda} \end{equation} In this way we obtain: \begin{equation} \mathrm{Ric}_{\alpha\beta} \, = \, 16 \, e^2 \,\eta_{\alpha\beta} \label{riccilambda2} \end{equation} which will be recognized as one of the field equations of type IIA supergravity. \par Let us now come to the interpretation of the matrix $\mathcal{K}$. This matrix is immediately identified as encoding the intrinsic components of the K\"ahler $2$-form. Indeed $ \mathcal{{K}}$ is the unique antisymmetric matrix which, within the fundamental $6$-dimensional representation of the $\mathfrak{so}(6) \sim \mathfrak{su}(4)$ Lie algebra, commutes with the entire subalgebra $\mathfrak{u}(3) \, \subset \, \mathfrak{su}(4)$. Hence $\mathcal{K} $ generates the $\mathrm{U(1)}$ subgroup of $\mathrm{U(3)}$ and this guarantees that the K\"ahler $2$-form will be closed and coclosed as it should be. Indeed it is sufficient to set: \begin{equation} \widehat{\mathcal{K}}\, = \, \mathcal{K}_{\alpha \beta } \, \mathcal{B}^\alpha \, \wedge \, \mathcal{B}^\beta \label{idekahler} \end{equation} namely: \begin{equation} \widehat{\mathcal{K}} \, = \,- \, 2 \, \left( \mathcal{B}^1 \, \wedge \, \mathcal{B}^4 \, + \, \mathcal{B}^2 \, \wedge \, \mathcal{B}^5 \, + \, \mathcal{B}^3 \, \wedge \, \mathcal{B}^6 \right) \label{Kappoform} \end{equation} and we obtain that the $2$-form $\widehat{\mathcal{K}}$ is closed and coclosed: \begin{equation} d\, \widehat{\mathcal{K}} \, = \, 0 \quad , \quad d^\star \widehat{\mathcal{K}} \, = \, 0 \label{chiuca&cochiusa} \end{equation} Let us also note that the antisymmetric matrix $\mathcal{K}$ satisfies the following identities: \begin{eqnarray} \mathcal{K}^2 & = & - \,{1}_{6 \times 6} \nonumber\\ 8 \, \mathcal{K}_{\alpha \beta } & = & \epsilon _{\alpha \beta \gamma \delta \tau \sigma } \mathcal{K}^{\gamma \delta } \, \mathcal{K}^{\tau \sigma} \label{preperK} \end{eqnarray} Using the $\mathfrak{so}(6)$ Clifford algebra defined in Appendix \ref{d7spinorbasis} we define the following spinorial operators: \begin{equation} \mathcal{W} \, = \, {\mathcal{K}}_{\alpha \beta } \, \tau^{\alpha \beta } \quad ; \quad \mathcal{P} \, = \, \mathcal{W}\, \tau_7 \label{operatorini} \end{equation} and we can verify that the matrix $\mathcal{P}$ satisfies the following algebraic equation: \begin{equation} \mathcal{P}^2 +4\, \mathcal{P} -12 \, \times \, \mathbf{1} \, = \, 0 \label{agequadiP} \end{equation} whose roots are $2$ and $-6$.
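The algebraic statements just quoted are elementary to check by brute force. The following short script (an illustrative verification in Python with numpy, added here for convenience and not part of the original derivation) confirms both identities of eq.(\ref{preperK}), as well as the factorization $\mathcal{P}^2+4\,\mathcal{P}-12 \, = \, (\mathcal{P}-2)(\mathcal{P}+6)$ that underlies the quoted roots:
\begin{verbatim}
import itertools
import numpy as np

# The matrix K displayed above, in block form: ((0, -1), (1, 0)) x 1_3
K = np.block([[np.zeros((3, 3)), -np.eye(3)],
              [np.eye(3), np.zeros((3, 3))]])

# First identity: K^2 = -1_{6x6}
assert np.allclose(K @ K, -np.eye(6))

def eps(indices):
    # Totally antisymmetric symbol: sign of a permutation of (0,...,5),
    # zero if any index repeats
    idx, sign = list(indices), 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] == idx[j]:
                return 0
            if idx[i] > idx[j]:
                sign = -sign
    return sign

# Second identity: 8 K_{ab} = eps_{abcdef} K^{cd} K^{ef}
for a in range(6):
    for b in range(6):
        rhs = sum(eps((a, b, c, d, e, f)) * K[c, d] * K[e, f]
                  for c, d, e, f in itertools.product(range(6), repeat=4))
        assert np.isclose(8 * K[a, b], rhs)

# Minimal polynomial of P: P^2 + 4P - 12 = (P - 2)(P + 6), roots 2 and -6
assert np.allclose(sorted(np.roots([1, 4, -12])), [-6, 2])
print("all identities verified")
\end{verbatim}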
Indeed in the chosen $\tau$-matrix basis the matrix $\mathcal{P}$ is diagonal with the following explicit form: \begin{equation} \mathcal{P} \, = \, \left( \begin{array}{llllllll} 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -6 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 \end{array} \right) \label{ExpformadiP} \end{equation} Let us also introduce the following matrix-valued $1$-form: \begin{equation} \mathcal{Q} \, \equiv \, \left(\ft 32 \, \mathbf{1} \, + \, \ft 14 \, \mathcal{P} \right)\, \tau_\alpha \, \mathcal{B}^\alpha \label{Qforma} \end{equation} whose explicit form in the chosen basis is the following one: \begin{equation} \mathcal{Q} \, = \, \left( \begin{array}{llllllll} 0 & 2 \mathcal{B}^3 & -2 \mathcal{B}^2 & 0 & -2 \mathcal{B}^6 & 2 \mathcal{B}^5 & -2 \mathcal{B}^4 & 2 \mathcal{B}^1 \\ -2 \mathcal{B}^3 & 0 & 2 \mathcal{B}^1 & 2 \mathcal{B}^6 & 0 & -2 \mathcal{B}^4 & -2 \mathcal{B}^5 & 2 \mathcal{B}^2 \\ 2 \mathcal{B}^2 & -2 \mathcal{B}^1 & 0 & -2 \mathcal{B}^5 & 2 \mathcal{B}^4 & 0 & -2 \mathcal{B}^6 & 2 \mathcal{B}^3 \\ 0 & -2 \mathcal{B}^6 & 2 \mathcal{B}^5 & 0 & -2 \mathcal{B}^3 & 2 \mathcal{B}^2 & 2 \mathcal{B}^1 & 2 \mathcal{B}^4 \\ 2 \mathcal{B}^6 & 0 & -2 \mathcal{B}^4 & 2 \mathcal{B}^3 & 0 & -2 \mathcal{B}^1 & 2 \mathcal{B}^2 & 2 \mathcal{B}^5 \\ -2 \mathcal{B}^5 & 2 \mathcal{B}^4 & 0 & -2 \mathcal{B}^2 & 2 \mathcal{B}^1 & 0 & 2 \mathcal{B}^3 & 2 \mathcal{B}^6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) \label{Qexplicit} \end{equation} and let us consider the following Killing spinor equation: \begin{equation} \mathcal{D} \, \eta \, + \, e \, \mathcal{Q} \, \eta \, = \, 0 \label{killospinoequa} \end{equation} where, by definition: \begin{equation} \mathcal{D} \, = \, d \, - \, \ft 14 \, \mathcal{B}^{\alpha\beta } \, \tau_{\alpha \beta } \label{so6covderi} \end{equation} denotes the $\mathfrak{so}(6)$ covariant differential of spinors defined over the $\mathbb{P}^3$ manifold. The connection $\mathcal{Q}$ is closed with respect to the spin connection \begin{equation} \Omega \, = \, - \, \ft 14 \, \mathcal{B}^{\alpha\beta } \, \tau_{\alpha \beta } \label{spinaconnaU3} \end{equation} since we have: \begin{equation} \mathcal{D} \, \mathcal{Q} \, \equiv \, d\mathcal{Q} \, + \, \Omega \, \wedge \, \mathcal{Q} \, + \, \mathcal{Q} \, \wedge \, \Omega \, = \, 0 \label{closureQ} \end{equation} as can be explicitly checked. The above result follows because the matrix $\mathcal{K}_{\alpha \beta }$ commutes with all the generators of $\mathfrak{u}(3)$. In view of eq.(\ref{closureQ}) the integrability condition of the Killing spinor equation (\ref{killospinoequa}) becomes: \begin{equation} \mathrm{ Hol} \, \eta \, = \, 0 \label{integracondo} \end{equation} where we have defined the holonomy $2$-form: \begin{equation} \mathrm{ Hol} \,\equiv \, \left( \mathcal{D}^2 \, + \,e^2 \, \mathcal{Q} \, \wedge \, \mathcal{Q}\right) \, = \, \left( - \, \ft 14 \, \mathcal{R}^{\alpha \beta } \, \tau_{\alpha \beta } \, + \, e^2 \, \mathcal{Q} \, \wedge \, \mathcal{Q}\right) \label{Holodefi} \end{equation} and $\mathcal{R}^{\alpha \beta }$ denotes the curvature $2$-form (\ref{2curvaP3}). Explicit evaluation of the holonomy $2$-form yields the following result.
\begin{equation} \mathrm{Hol} \, = \, e^2 \, \left( \begin{array}{llllllll} 0 & 0 & 0 & 0 & 0 & 0 & 8 [\mathcal{B}^2 \wedge \mathcal{B}^6 -\mathcal{B}^3 \wedge \mathcal{B}^5 ] & 8 \mathcal{B}^5 \wedge \mathcal{B}^6 -8 \mathcal{B}^2 \wedge \mathcal{B}^3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 8 \mathcal{B}^3 \wedge \mathcal{B}^4 -8 \mathcal{B}^1 \wedge \mathcal{B}^6 & 8 [\mathcal{B}^1 \wedge \mathcal{B}^3 -\mathcal{B}^4 \wedge \mathcal{B}^6 ] \\ 0 & 0 & 0 & 0 & 0 & 0 & 8 [\mathcal{B}^1 \wedge \mathcal{B}^5 -\mathcal{B}^2 \wedge \mathcal{B}^4 ] & 8 \mathcal{B}^4 \wedge \mathcal{B}^5 -8 \mathcal{B}^1 \wedge \mathcal{B}^2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 8 [\mathcal{B}^2 \wedge \mathcal{B}^3 -\mathcal{B}^5 \wedge \mathcal{B}^6 ] & 8 [\mathcal{B}^2 \wedge \mathcal{B}^6 -\mathcal{B}^3 \wedge \mathcal{B}^5 ] \\ 0 & 0 & 0 & 0 & 0 & 0 & 8 \mathcal{B}^4 \wedge \mathcal{B}^6 -8 \mathcal{B}^1 \wedge \mathcal{B}^3 & 8 \mathcal{B}^3 \wedge \mathcal{B}^4 -8 \mathcal{B}^1 \wedge \mathcal{B}^6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 8 [\mathcal{B}^1 \wedge \mathcal{B}^2 -\mathcal{B}^4 \wedge \mathcal{B}^5 ] & 8 [\mathcal{B}^1 \wedge \mathcal{B}^5 -\mathcal{B}^2 \wedge \mathcal{B}^4 ] \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -8 \, \widehat{\mathcal{K}} \\ 0 & 0 & 0 & 0 & 0 & 0 & 8 \, \widehat{\mathcal{K}} & 0 \end{array} \right ) \label{Holoper} \end{equation} It is evident by inspection that the holonomy $2$-form vanishes on the subspace of spinors that belong to the eigenspace of eigenvalue $2$ of the operator $\mathcal{P}$. In the chosen basis this eigenspace is spanned by all those spinors whose last two components are zero, and on such spinors the operator $\mathrm{Hol}$ vanishes. \par Let us now connect these geometric structures to the compactification ansatz. \subsection{The compactification ansatz} As usual we denote with Latin indices those in the direction of $4$-space and with Greek indices those in the direction of the internal $6$-space. Let us also adopt the notation $B^a$ for the $\mathrm{AdS}_4$ vielbein, just as $\mathcal{B}^{\alpha}$ is the vielbein of the K\"ahler three-fold described in the previous section\footnote{This formulation is analogous to the one used in the case of M-theory compactifications \cite{Fre:2007xy}}. With these notations the Kaluza-Klein ansatz is the following one: \begin{eqnarray} \mathcal{G}_{\underline{ab}} & = & \left \{\begin{array}{cl} 2 \, e \, \exp\left[-\varphi_0 \right] \, \mathcal{K}_{\alpha \beta } & \mbox{for internal indices } \underline{ab} = \alpha\beta \\ 0 & \mbox{otherwise} \end{array} \right. \nonumber\\ \mathcal{G}_{\underline{a_1a_2a_3a_4}} & = & \left \{\begin{array}{cl} - \, e \, \exp\left[-\varphi_0 \right] \, \epsilon_{a_1a_2a_3a_4} & \mbox{for } \mathrm{AdS_4} \mbox{ indices} \\ 0 & \mbox{otherwise} \end{array} \right.
\nonumber\\ \mathcal{H}_{\underline{a_1a_2a_3}} & = & 0 \nonumber\\ \varphi & = & \varphi_0 \, = \, \mbox{constant}\, \nonumber\\ V^a & = & B^{a} \nonumber\\ V^\alpha & = & \mathcal{B}^\alpha\nonumber\\ \omega^{ab} & = & B^{ab} \nonumber\\ \omega^{\alpha \beta } & = & \mathcal{B}^{\alpha \beta } \label{Kkansatz} \end{eqnarray} where $B^a\, , \, B^{ab}$ respectively denote the vielbein and the spin connection of $\mathrm{AdS_4}$, satisfying the following structural equations: \begin{eqnarray} 0 & = & dB^a \, - \, B^{ab} \, \wedge \, B^c \, \eta_{bc} \nonumber\\ dB^{ab} \, - \, B^{ac} \, \wedge \, B^{db} \, \eta_{cd} & = & - 16 \, e^2 \, \, B^a \, \wedge \, B^b \nonumber\\ & \Downarrow & \nonumber\\ \mbox{Ric}_{ab} & = & \, - \, 24\, e^2 \, \eta_{ab} \label{ads4geo} \end{eqnarray} while $\mathcal{B}^\alpha$ and $\mathcal{B}^{\alpha \beta }$ are the analogous data for the internal $\mathbb{P}^3$ manifold: \begin{eqnarray} 0 & = & d\mathcal{B}^\alpha \, - \, \mathcal{B}^{\alpha \beta } \, \wedge \, \mathcal{B}^\gamma \, \eta_{ \beta\gamma } \nonumber\\ d\mathcal{B}^{\alpha \beta } \, - \, \mathcal{B}^{\alpha \gamma } \, \wedge \, \mathcal{B}^{\delta \beta } \, \eta_{\gamma \delta } & = & - R^{\alpha \beta }_{\,\,\,\ \gamma \delta }\, \mathcal{B}^\gamma \, \wedge \, \mathcal{B}^\delta \nonumber\\ & \Downarrow & \nonumber\\ \mbox{Ric}_{\alpha \beta } & = & 16\, e^2 \, \eta_{\alpha \beta } \label{HKgeo} \end{eqnarray} whose geometry we described in the previous section. \par With these normalizations we can check that the dilaton equation (\ref{Rreq01}) and the Einstein equation (\ref{Einsteinus}) are satisfied upon insertion of the above Kaluza-Klein ansatz. All the other equations are satisfied thanks to the fact that the K\"ahler form $\widehat{\mathcal{K}}$ is closed and coclosed, see eq.(\ref{chiuca&cochiusa}). \subsection{Killing spinors on $\mathbb{P}^3$} The next task we are faced with is to determine the equation for the Killing spinors on the chosen background, which by construction is a solution of the supergravity equations. \par Following a standard procedure we recall that the vacuum has been defined by choosing certain values for the bosonic fields and setting all the fermionic ones equal to zero: \begin{eqnarray} \psi_{L/R|\underline{\mu}} & = & 0 \nonumber\\ \chi_{L/R} & = & 0\nonumber\\ \rho_{L/R|\underline{ab}} & = & 0 \label{zerofermioni} \end{eqnarray} The equation for the Killing spinors will be obtained by imposing that the parameter of supersymmetry preserves the vanishing values of the fermionic fields once the specific values of the bosonic ones are substituted into the expressions for the susy rules, namely into the rheonomic parametrizations. \par To implement these conditions we begin by choosing a well-adapted basis for the $d=11$ gamma matrices. This is done by setting: \begin{equation} \Gamma^{\underline{a}} \, = \, \left \{\begin{array}{ccc} \Gamma^a & = & \, \gamma^a \, \otimes \, \mathbf{1} \\ \Gamma^\alpha & = & \gamma^5 \, \otimes \, \tau^\alpha \\ \Gamma^{11} & = & {\rm i} \, \gamma^5 \, \otimes \, \tau^7 \ \end{array} \, \right. \label{productgamma} \end{equation} Next we consider the tensors and the matrices introduced in eq.s (\ref{Mntensors},\ref{Mmatrapm},\ref{pongo},\ref{pongo2}).
In the chosen background we find: \begin{eqnarray} \mathcal{M}_{\alpha \beta } &=& \ft 14 \, e \, \mathcal{K}_{\alpha \beta }\,\,\,;\,\,\,\, \mathcal{M}_{abcd} = \, \ft 1{16} \, e \, \epsilon_{abcd}\nonumber\\ \mathcal{N}_0 & = & 0 \,\,\,;\,\,\,\, \mathcal{N}_{\alpha \beta } =\ft 12 \, e \, \mathcal{K}_{\alpha \beta }\,\,\,;\,\,\,\, \mathcal{N}_{abcd} = \, - \,\ft 1{24} \, e \, \epsilon_{abcd}\,, \label{tensorisuvuoto} \end{eqnarray} all the other components of the above matrices being zero. Hence in terms of the operators introduced in the previous section we find: \begin{eqnarray} \mathcal{M}_\pm & = & {\rm i} \, e \, \left(\mp \ft 14 \, \mathbf{1} \, \otimes \, \mathcal{W} \, - \, \ft 32 \,{\rm i} \gamma_5 \, \otimes \, \mathbf{1}\right)\nonumber\\ \mathcal{N}^{(even)}_\pm & = & e \, \left(\ft 12 \, \mathbf{1} \, \otimes \, \mathcal{W} \, \mp \, {\rm i} \gamma_5 \, \otimes \, \mathbf{1}\right)\nonumber\\ \mathcal{N}^{(odd)}_\pm & = & 0 \label{valoritensori} \end{eqnarray} It is now convenient to rewrite the Killing spinor condition in a non-chiral basis, introducing a supersymmetry parameter of the following form: \begin{equation} \epsilon \, = \, \epsilon _L \, + \, \epsilon _R \label{nonchiral} \end{equation} In this basis the matrices $\mathcal{M}$ and $\mathcal{N}^{(even)}$ read \begin{eqnarray} \mathcal{M}&=&\mathcal{M}_+\,\frac{1}{2}\,(\relax{\rm 1\kern-.35em 1}+\Gamma^{11})+\mathcal{M}_-\,\frac{1}{2}\,(\relax{\rm 1\kern-.35em 1}-\Gamma^{11}) =-\frac{i}{8}\,e^{\varphi}\,G_{\underline{ab}}\,\Gamma^{\underline{ab}}\,\Gamma^{11}- \frac{i}{16}\,e^{\varphi}\,G_{\underline{abcd}}\,\Gamma^{\underline{abcd}}=\nonumber\\&=& \frac{e}{4}\,\gamma_5\otimes (\mathcal{W}\tau_7+6\,\relax{\rm 1\kern-.35em 1})\,,\label{calm}\\ \mathcal{N}^{(even)}&=&\mathcal{N}^{(even)}_+\,\frac{1}{2}\,(\relax{\rm 1\kern-.35em 1}+\Gamma^{11})+\mathcal{N}^{(even)}_-\,\frac{1}{2}\,(\relax{\rm 1\kern-.35em 1}-\Gamma^{11})=\frac{1}{4}\,e^{\varphi}\,G_{\underline{ab}}\,\Gamma^{\underline{ab}}+ \frac{1}{24}\,e^{\varphi}\,G_{\underline{abcd}}\,\Gamma^{\underline{abcd}}=\nonumber\\&=& \frac{e}{2}\,\relax{\rm 1\kern-.35em 1}\otimes (\mathcal{W}+2\tau_7)\,.\label{caln} \end{eqnarray} Upon use of this parameter the Killing spinor equation coming from the gravitino rheonomic parametrization (\ref{rhoparaSF}) takes the following form: \begin{equation} \mathcal{D} \, \epsilon \, = - \mathcal{M} \, \Gamma_{\underline{a}} \, V^{\underline{a}} \,\epsilon \,, \label{gravino} \end{equation} while the Killing spinor equation coming from the dilatino rheonomic parametrization is as follows: \begin{equation} 0 \, =\mathcal{N}^{(even)} \, \epsilon\,. \label{ditalino} \end{equation} Let us now insert these results into the Killing spinor equations and take a tensor product representation for the Killing spinor: \begin{equation} \epsilon \, = \, \varepsilon \, \otimes \, \eta \label{tensorerepre} \end{equation} where $\varepsilon$ is a $4$-component $d=4$ spinor and $\eta$ is an $8$-component $d=6$ spinor.
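As a brief bookkeeping remark (added here for clarity), the tensor product (\ref{tensorerepre}) reproduces the correct spinor counting:
\begin{equation}
\underbrace{4}_{\varepsilon} \, \times \, \underbrace{8}_{\eta} \, = \, 32
\end{equation}
which is the number of components of a $D=10$ Dirac spinor; the projection $\mathcal{P}\,\eta \, = \, 2\,\eta$ imposed below reduces this to $4 \times 6 = 24$, i.e. precisely the number of supersymmetries preserved by the background.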
\par With these inputs equation (\ref{gravino}) becomes: \begin{eqnarray} 0 & = & \mathcal{D}_{[4]}\varepsilon \, \otimes \, \eta \, - \, e \, \gamma_a\,\gamma_5 \, B^a \varepsilon \otimes \, \left(\ft 32 \, + \, \ft 14 \, \mathcal{P} \right)\, \eta \nonumber\\ & \null & \, + \, \varepsilon \, \otimes \left[\mathcal{D}_{[6]} \, + \, e \, \left(\ft 32 \, + \, \ft 14 \, \mathcal{P} \right) \, \tau_\alpha \, \mathcal{B}^\alpha \right] \,\eta \label{gravino2} \end{eqnarray} while eq.(\ref{ditalino}) takes the form: \begin{equation} 0 \, = \, \varepsilon \, \otimes \, \left( \ft 12 \, \mathcal{W} \, + \tau_7 \right) \, \eta \label{ditalino2} \end{equation} Let us now recall that equation (\ref{killospinoequa}) is integrable on the eigenspace of eigenvalue $2$ of the $\mathcal{P}$-operator. Then equation (\ref{gravino2}) is satisfied if: \begin{eqnarray} \left( \mathcal{D}_{[4]} \, - 2\, e \, \gamma_a\,\gamma_5 \, B^a \right) \varepsilon & = & 0\nonumber\\ \mathcal{P}\, \eta & = & 2 \, \eta \nonumber\\ \left( \mathcal{D}_{[6]} \, + \, e \, \mathcal{Q} \right) \,\eta & = & 0 \label{croma} \end{eqnarray} The first of the above equations is the correct equation for Killing spinors in $\mathrm{AdS_4}$. It emerges because on this eigenspace the operator $\left(\ft 32 \, + \, \ft 14 \, \mathcal{P}\right)$ reduces to the eigenvalue $2$. The second and the third are the already studied integrable equations for six Killing spinors out of eight. It remains to check that the dilatino equation (\ref{ditalino2}) is satisfied on the eigenspace of eigenvalue $2$, which is indeed the case: \begin{equation} \mathcal{P}\, \eta \, = \, 2 \,\eta \, \Rightarrow \, \left( \ft 12 \, \mathcal{W} \, + \tau_7 \right) \, \eta \, = \, 0 \label{rimasuglio} \end{equation} \subsection{Gauge completion in mini superspace} As a necessary ingredient of our construction let $\eta_A$ ($A=1,\dots,6$) denote a complete and orthonormal basis of solutions of the internal Killing spinor equation, namely: \begin{eqnarray} \mathcal{P}\, \eta_A & = & 2 \, \eta_A \nonumber\\ \left( \mathcal{D}_{[6]} \, + \, e \, \mathcal{Q} \right) \,\eta_A & = & 0 \nonumber\\ \eta^T_A \, \eta_B & = & \delta_{AB} \quad ; \quad A,B \, = \, 1,\dots,6 \label{basispinotti} \end{eqnarray} On the other hand let $\chi_x$ denote a basis of solutions of the Killing spinor equation on $AdS_4$-space, namely eq.(\ref{d4Killing}), normalized as in eq.(\ref{normakillo4}). Furthermore let us recall the matrix $\mathcal{K}$ defining the intrinsic components of the K\"ahler $2$-form. \par In terms of these objects we can satisfy the rheonomic parametrizations of the $1$-forms spanning the $d=10$ superPoincar\'e subalgebra of the FDA with the following ansatz:\footnote{With respect to the results obtained in \cite{Fre:2007xy} for the mini superspace extension of the M-theory configuration, everything is identical in eq.s (\ref{graviansaz}-\ref{omabAnsaz}), except for the obvious reduction of the index range of ($\alpha,\beta,\dots$) from $7$ to $6$ values.
The only difference is in eq.(\ref{omalbetAnsaz}) where the last contribution proportional to the K\"ahler form is an essential novelty of this new type of compactification.} \begin{eqnarray} \Psi & = & \chi_x \, \otimes \, \eta_A \, \Phi^{x|A} \label{graviansaz}\\ V^a & = & B^a \, - \, \ft {1}{8e} \, \overline{\chi}_x \, \gamma^a \, \chi_y \, \Delta^{xy} \label{VaAnsaz}\\ V^\alpha & = & \mathcal{B}^\alpha \, - \, \ft {1}{8} \, \eta_A^T \, \tau^\alpha \, \eta_B \, \mathcal{A}^{AB} \label{ValpAnsaz}\\ \omega ^{ab} & = & B^{ab} \, + \, \ft {1}{2} \, \overline{\chi}_x \, \gamma^{ab} \,\gamma_5 \, \chi_y \, \Delta^{xy} \label{omabAnsaz}\\ \omega^{\alpha \beta} & = & \mathcal{B}^{\alpha \beta } \, + \, \ft {e}{4} \, \eta_A^T \, \tau^{\alpha\beta } \, \eta_B \, \mathcal{A}^{AB} \, - \, \ft e4 \mathcal{K}^{\alpha \beta } \, \mathcal{K}_{AB} \, \mathcal{A}^{AB}\label{omalbetAnsaz} \end{eqnarray} \par The proof that the above ansatz satisfies the rheonomic parametrizations is by direct evaluation upon use of the following crucial spinor identities. \par Let us define \begin{equation} \mathcal{U} \, = \, \left( \ft 32 \, \mathbf{1} \, + \, \ft 14 \, \mathcal{P} \right)\,. \label{prillo1} \end{equation} We can verify that: \begin{equation} \left( \eta^T_A \, \tau^\alpha \, \mathcal{U} \, \tau^\beta \, \eta_B \, - \, \eta^T_A \, \tau^{\alpha\beta} \, \eta_B \right) \, \mathcal{A}^{AB} \, = \, \mathcal{K}^{\alpha \beta } \, \mathcal{K}_{AB} \, \mathcal{A}^{AB}\,. \label{fortedeimarmi} \end{equation} Furthermore, defining: \begin{eqnarray} \Delta \mathcal{B}^\alpha & = & - \, \ft {1}{8} \, \eta_A^T \, \tau^\alpha \, \eta_B \, \mathcal{A}^{AB} \label{DalpAnsaz}\\ \Delta\omega^{\alpha \beta} & = & \ft {e}{4} \, \eta_A^T \, \tau^{\alpha\beta } \, \eta_B \, \mathcal{A}^{AB} \, - \, \ft e4 \mathcal{K}^{\alpha \beta } \, \mathcal{K}_{AB} \, \mathcal{A}^{AB}\label{DomalbetAnsaz} \end{eqnarray} we obtain: \begin{equation} - \,\Delta\omega^{\alpha \beta} \, \wedge \, \Delta \mathcal{B}^\beta \, = \, \ft {e}{8} \, \eta_A^T \, \tau^\alpha \, \eta_B \, \mathcal{A}^{AC} \, \wedge \, \mathcal{A}^{CB} \label{ferdison} \end{equation} These identities together with the $d=4$ spinor identities (\ref{firzusd4x1},\ref{firzusd4x2}) suffice to verify that the above ansatz satisfies the required equations. \subsection{Gauge completion of the $\mathbf{B}^{[2]}$ form} The next task in the construction of the pure spinor sigma-model is the derivation of an explicit expression for the $\mathbf{B}^{[2]}$ form. When this is done we will be able to write the complete Green-Schwarz action in explicit form.
\par The ansatz for $\mathbf{B}^{[2]}$ is the following: \begin{equation} \mathbf{B}^{[2]} \, = \, \alpha \, \, \overline{\chi}_x \, \chi_y \, \overline{\eta}_A \, \tau_7 \, \eta_B \, \Phi^x_A \, \wedge \, \Phi^y_B \label{ronron} \end{equation} By explicit evaluation we verify that, with \begin{equation} \alpha \, = \, \frac{1}{4 \, e} \label{ruppo} \end{equation} the rheonomic parametrization of the $H$-field strength is satisfied, namely: \begin{equation} d \mathbf{B}^{[2]} \, = \, - {\rm i} \, \overline{\psi} \,\wedge \, \Gamma_{\underline{a}} \, \Gamma_{11} \, \psi \, \wedge \, V^{\underline{a}} \label{b2eque} \end{equation} \subsection{Rewriting the mini-superspace gauge completion as MC forms on the complete supercoset} Next, following the procedure introduced in \cite{Fre':2006es}, we rewrite the mini-superspace extension of the bosonic solution solely in terms of Maurer-Cartan forms on the supercoset (\ref{supermanifoldamente}). Let the graded matrix $\mathbb{L} \, \in \, \mathrm{Osp(6|4)}$ be the coset representative of the coset $\mathcal{M}^{10|24}$, such that the Maurer-Cartan form $\Sigma$ can be identified as: \begin{equation} \Sigma = \mathbb{L}^{-1} \, d \mathbb{L} \label{cosettusrepre2} \end{equation} Let us now factorize $\mathbb{L}$ as in \cite{Fre':2006es}: \begin{equation} \mathbb{L} = \mathbb{L}_F \, \mathbb{L}_B \label{factorL2} \end{equation} where $\mathbb{L}_F$ is a coset representative for the coset: \begin{equation} \frac{\mathrm{Osp(6 | \, 4 })}{\mathrm{SO(6)} \times \mathrm{Sp(4,\mathbb{R})}} \, \ni \, \mathbb{L}_F \label{LF2} \end{equation} as in eq.(\ref{LF2}), while $\mathbb{L}_B$, rather than being the $\mathrm{Osp(6|4)}$ embedding of a coset representative of just $\mathrm{AdS_4}$, is the embedding of a coset representative of $\mathrm{AdS_4} \times \mathbb{P}^3$, namely: \begin{equation} \mathbb{L}_B \, = \, \left(\begin{array}{c|c} \mathrm{L_{\mathrm{AdS_4}}} & 0 \\ \hline 0 & \mathrm{L_{{\mathbb{P}}^3}} \ \end{array} \right) \quad ; \quad \frac{\mathrm{Sp(4,\mathbb{R})}}{\mathrm{SO(1,3)}}\, \ni \, \mathrm{L_{\mathrm{AdS_4}}} \quad ; \quad \frac{\mathrm{SO(6)}}{\mathrm{U(3)}}\, \ni \, \mathrm{L_{{\mathbb{P}}^3}} \label{salamefelino2} \end{equation} In this way we find: \begin{equation} \Sigma = \mathbb{L}_B^{-1} \, \Sigma_F \, \mathbb{L}_B \, + \, \mathbb{L}_B^{-1} \, d \, \mathbb{L}_B \label{ferrone2} \end{equation} Let us now write the explicit form of $\Sigma_F $, as in \cite{Fre':2006es}: \begin{equation} \Sigma_F =\left(\begin{array}{c|c} \Delta_F & \Phi_A \\ \hline \, 4 \, {\rm i} \, e \, \overline{\Phi}_A \, \gamma_5 & - \, e \, \widetilde{\mathcal{A}}_{AB} \ \end{array} \right) \label{fermionform2} \end{equation} where $\Phi_A$ is a Majorana-spinor valued fermionic $1$-form and where $\Delta_F$ is an $\mathfrak{sp}(4,\mathbb{R})$ Lie algebra valued $1$-form presented as a $4 \times 4$ matrix. $\Phi_A$, $\Delta_F$ and $\widetilde{\mathcal{A}}_{AB}$ all depend only on the fermionic $\theta$ coordinates and differentials. \par On the other hand we have: \begin{equation} \mathbb{L}_B^{-1} \, d \, \mathbb{L}_B \, = \, \left(\begin{array}{c|c} \Delta_{\mathrm{AdS_4}} & 0 \\ \hline 0 & \mathcal{A}_{\mathbb{P}^3} \ \end{array} \right) \label{boseforma2} \end{equation} where the $\Delta_{\mathrm{AdS_4}}$ is also an $\mathfrak{sp}(4,\mathbb{R})$ Lie algebra valued $1$-form presented as a $4 \times 4$ matrix, but it depends only on the bosonic coordinates $x^\mu$ of the anti de Sitter space $\mathrm{AdS_4}$.
In the same way $\mathcal{A}_{\mathbb{P}^3}$ is an $\mathfrak{su}(4)$ Lie algebra element presented as an $\mathfrak{so}(6)$ antisymmetric matrix in $6$ dimensions. It depends only on the bosonic coordinates $y^\alpha$ of the internal $\mathbb{P}^3$ manifold. According to eq.(\ref{lambamatra}) we can write: \begin{equation} \Delta_{\mathrm{AdS_4}} \, = \, - \ft 14 \, B^{ab} \, \gamma_{ab} \, - \,2\, e \, \gamma_a \, \gamma_5 \, B^a \label{Bbwriting2} \end{equation} where $\left\{ B^{ab}\, ,\, B^a\right\} $ are respectively the spin-connection and the vielbein of $\mathrm{AdS_4}$. \par Similarly, using the inversion formula (\ref{gonzalo}) presented in the appendix we can write: \begin{equation} \mathcal{A}_{\mathbb{P}^3} \, = \left( - \, 2 \, \mathcal{B}^{\alpha } \, {\bar \tau}_\alpha \, + \, \ft{1} {4 \, e} \, \mathcal{B}^{\alpha\beta } \, {\bar \tau}_{\alpha \beta } - \, \ft{1} {4 \, e} \, \mathcal{B}^{\alpha\beta } \, {\mathcal{K}}_{\alpha \beta } \, K\right) \label{cuzco} \end{equation} where $\left\{ \mathcal{B}^{\alpha\beta}\, ,\, \mathcal{B}^\alpha\right\}$ are the connection and vielbein of the internal coset manifold $\mathbb{P}^3$. \par Relying once again on the inversion formulae discussed in the appendix we conclude that we can rewrite eq.s (\ref{graviansaz} - \ref{omalbetAnsaz}) as follows: \begin{eqnarray} \Psi^{x|A} & = & \Phi^{x|A} \label{graviansaz2}\\ V^a & = & E^a \label{VaAnsaz2}\\ V^\alpha & = & E^\alpha \label{ValpAnsaz2}\\ \omega ^{ab} & = & E^{ab} \label{omabAnsaz2}\\ \omega^{\alpha \beta} & = & E^{\alpha \beta } \label{omalbetAnsaz2} \end{eqnarray} where the objects introduced above are the MC forms on the supercoset (\ref{cosettone1024}) according to: \begin{equation} \Sigma \, =\, \mathbb{L}^{-1} \, d \mathbb{L} \, = \, \left(\begin{array}{c|c} - \ft 14 \, E^{ab} \, \gamma_{ab} \, - \,2\, e \, \gamma_a \, \gamma_5 \, E^a & \Phi \\ \hline \, 4 \, {\rm i} \, e \, \overline{\Phi} \, \gamma_5 & \, 2 \, e E^{\alpha } \, {\bar \tau}_\alpha \, - \, \ft{1} {4 } \, E^{\alpha\beta } \, {\bar \tau}_{\alpha \beta } + \, \ft{1} {4 } \, E^{\alpha\beta } \, {\mathcal{K}}_{\alpha \beta } \, K \ \end{array} \right) \label{pattolina} \end{equation} Consequently the gauge completion of the $\mathbf{B}^{[2]}$ form becomes: \begin{equation} \mathbf{B}^{[2]} \, = \, \frac{1}{4 \, e} \, \overline{\Phi}\, \left( 1 \, \otimes \, \overline{\tau}_7 \right) \wedge \, \Phi \label{ronron2} \end{equation} \section{Pure Spinors for ${\rm Osp}(6|4)$} In the present section, we show that the number of independent pure spinor components obtained by solving the pure spinor constraint in the present background correctly matches the number of anticommuting $\theta$'s. This implies that, at least formally (a detailed proof is still required), the numbers of bosonic and fermionic fields match, leading to a conformally invariant theory. However, as is known, this is not sufficient for having a conformally invariant theory, since all loop contributions to the Weyl anomaly should cancel. This can be guaranteed only by symmetry arguments and by the vanishing of the one-loop contribution. \par Nevertheless, we study the pure spinor equations adapted to the present background and we will see that the number of independent components of the pure spinors equals 14 (since we have an interacting theory with RR fields we cannot distinguish between left- and right-movers).
We recall the form of the pure spinor constraints for type IIA theory \begin{eqnarray} \label{psA} & \bar\lambda \Gamma_{\underline a} \lambda =0\,, ~~~~~~~~ \bar\lambda \Gamma_{\underline a} \Gamma^{11} \lambda \, V^{\underline a} =0\,, \\ & \bar\lambda \Gamma_{[\underline{ab}]} \lambda \, V^{\underline a} V^{\underline b} =0\,, ~~~~~~ \bar\lambda \Gamma^{11} \lambda =0 \,. \end{eqnarray} where we have combined the 16-component spinors $\lambda_1$ and $\lambda_2$ into a 32-component Dirac spinor $\lambda$. These equations are valid for any background, and we have shown in \cite{Psconstra} that the number of independent components of the pure spinors matches the number of pure spinor components in Berkovits' ``background-independent'' constraints. However, in the present setting we can adapt the constraints to the specific background; in particular we choose to express the vielbein $V^{\underline a}$, using its equation of motion, in terms of the momenta $\Pi^{\underline a}_{\pm} e^{\pm}$, thus simplifying the constraints as follows \begin{eqnarray} \label{psB} & \bar\lambda \Gamma_{a} \lambda =0\,, ~~~~~~~~a=1,\dots,4\,, ~~~~~~ \bar\lambda \Gamma_{\alpha} \lambda =0\,, ~~~~~~~~ \alpha =1, \dots,6\,, \\ & \bar\lambda \Gamma_{\pm} \Gamma^{11} \lambda =0\,, ~~~~~~ \bar\lambda \Gamma_{+-} \lambda =0\,, ~~~~~~ \bar\lambda \Gamma^{11} \lambda = 0 \,. \end{eqnarray} For $\Gamma_{\pm}$ we use the combination $\Gamma_1 \pm \Gamma_3$. \par Now, we can insert the decomposition of $\lambda$ on the basis of Killing spinors \begin{eqnarray} \lambda &=& \chi_x \otimes \eta_A \, \Lambda^{x|A}\label{pureans} \end{eqnarray} where, as usual, $\chi_x$ are the $AdS_4$ Killing spinors and $\eta_A$ are the $\mathbb{CP}^3$ Killing spinors. The free parameters $\Lambda^{x|A}$ are the components of the pure spinors. Notice that the index $x$ runs over the four independent $AdS$ Killing spinors and the index $A$ runs over the six values of the vector representation of $SO(6)$. Therefore, we have in total 24 independent degrees of freedom to solve (\ref{psB}). The number of equations is independent of the background, but the number of independent degrees of freedom is reduced from 32 to 24; therefore, we need to explore the existence of the solution. \par Using the decomposition of the Gamma matrices provided in (\ref{productgamma}) and the normalizations of the Killing spinors $\chi_x C \gamma_5 \chi_y = \epsilon_{xy}$ and $\eta_A \eta_B = \delta_{AB}$, equations (\ref{psB}) read \begin{eqnarray} \label{psC} & (\chi_x C \gamma_{a} \chi_y) \, \delta_{AB} \, \Lambda^{x|A} \Lambda^{y|B} =0\,, ~~~~~~ (\chi_x C \gamma_5 \chi_y) \, \eta_{A} \tau^\alpha \eta_B \, \Lambda^{x|A} \Lambda^{y|B} =0\,, \\ & (\chi_x C \gamma_5 \chi_y)\, \eta_{A} \tau^7 \eta_B \, \Lambda^{x|A} \Lambda^{y|B} =0\,, \\ & (\chi_x C \gamma_5 \gamma_{\pm} \chi_y)\, \eta_{A} \tau^7 \eta_B \, \Lambda^{x|A} \Lambda^{y|B} =0\,, ~~~~~~ (\chi_x C \gamma_{+-} \chi_y) \, \delta_{AB} \, \Lambda^{x|A} \Lambda^{y|B} =0\,. \end{eqnarray} where $C$ is the charge conjugation matrix. \par To solve these equations it is convenient to adopt a new basis.
We already know the solution in the basis where the spinors $\lambda_{1,2}$ are decomposed as follows: \begin{equation} \lambda_1 \, = \, \phi_+ \, \otimes \, \zeta_1^+ \, + \, \phi_- \, \otimes \, \zeta^-_1 \,, ~~~~ \lambda_2 \, = \, \phi_+ \, \otimes \, \zeta_2^- \, + \, \phi_- \, \otimes \, \zeta^+_2 \label{tensoproducto} \end{equation} where: \begin{equation} \begin{array}{ccccccc} \phi_+ & = & \left( \begin{array}{c} 1 \\ 0 \\ \end{array}\right) & ; & \phi_- & = & \left(\begin{array}{c} 0 \\ 1 \\ \end{array} \right) \\ \zeta_A^+ & = & \left(\begin{array}{c} 0 \\ \omega^+_A \ \end{array} \right) & ; & \zeta_A^- & = & \left( \begin{array}{c} \omega_A^- \\ 0 \ \end{array}\right) \ \end{array} \label{blocchini} \end{equation} and $\omega^{\pm}_A$ are $8$-dimensional vectors. In writing eq.s~(\ref{blocchini}) we have observed that the unique non-vanishing component of $\phi_\pm$ can always be reabsorbed in the normalization of $\omega_A^\pm$ and hence set to one. Thus, we have to express the entries of the rectangular matrix $\Lambda^{x|A}$ in terms of $\omega^{\pm}_A$ ($A=1,2$); this can be done by combining $\lambda_{1}$ and $\lambda_2$ into a single 32-dimensional pure spinor and projecting it onto the basis formed by $\chi_x \otimes \eta_A$ (where we let $A$ run over 8 values), obtaining the relation \begin{equation}\label{psE} \Lambda^{x|A} = \left( \begin{array}{ccc} \omega^-_{2,1} & \dots & \omega^-_{2,8} \\ - {\mathrm i}\, \omega^+_{1,1} & \dots & - {\mathrm i} \omega^+_{1,8} \\ - {\mathrm i}\, \omega^-_{1,1} & \dots & - {\mathrm i} \omega^-_{1,8} \\ \omega^+_{2,1} & \dots & \omega^+_{2,8} \\ \end{array} \right) \end{equation} In order to reduce the number of components to the necessary 24, we will set the last components $\omega^{\pm}_{A,7}$ and $\omega^{\pm}_{A,8}$ to zero. To check whether this is possible it is convenient first to exploit all gauge symmetries. \par We recall that $\lambda_A$ are solutions of the constraints if the components $\omega^{\pm}_A$ are decomposed in the following way \begin{eqnarray} \omega^+_1 & = & \left(\varpi^\alpha \, , \, 0 \right) \nonumber\\ \omega^-_2 & = & \left(\pi^\alpha \, , \, 0 \right) \nonumber\\ \omega^-_1 & = & \left(a^{\alpha \beta \gamma }\, \chi_\beta \, \varpi_\gamma \, , \, \chi\, \cdot \, \varpi \right)\nonumber\\ \omega^+_2 & = & \left(a^{\alpha \beta \gamma }\, \xi_\beta \, \pi_\gamma \, , \, \xi\, \cdot \, \pi \right) \label{soluzia} \end{eqnarray} in terms of $7$-component fields $\varpi^\alpha\,, \pi^\alpha\,, \xi^\alpha\,, \chi^\alpha$ satisfying the constraints \begin{eqnarray} \varpi \, \cdot \, \varpi & = & 0 \label{purga1}\\ \pi \, \cdot \, \pi & = & 0 \label{purga2}\\ a^{\alpha \beta \gamma } \, \chi_\alpha \, \pi_\beta \, \varpi_\gamma & = & 0 \label{purga3}\\ a^{\alpha \beta \gamma } \, \xi_\alpha \, \pi_\beta \, \varpi_\gamma & = & 0\,. \label{purga4} \end{eqnarray} Here $a^{\alpha\beta\gamma}$ is the totally antisymmetric invariant tensor of the $\mathrm{G_2}$ group. Notice that constraints (\ref{purga1})-(\ref{purga4}) are invariant under the gauge symmetry \begin{equation}\label{psF} \chi_\alpha \rightarrow \chi_\alpha + x_1 \pi_\alpha + x_2 \varpi_\alpha\,, ~~~~~~~ \xi_\alpha \rightarrow \xi_\alpha + x_3 \pi_\alpha + x_4 \varpi_\alpha\,. \end{equation} On the other hand, the decomposition (\ref{soluzia}) is not invariant under the symmetries parameterized by $x_1$ and $x_4$.
So, there are only two gauge symmetries generated by $x_2$ and $x_3$ which can be used to set some components of $\chi_\alpha$ and $\xi_\alpha$ to zero. \par In order to reduce the number of independent degrees of freedom from 32 to 24, we set $\varpi^7$ and $\pi^7$ to zero. This condition, together with (\ref{purga1}) and (\ref{purga2}), implies that $\omega^+_1$ and $\omega^-_2$ have 5 independent degrees of freedom each. In addition, we impose the equations \begin{eqnarray}\label{psG} &\chi\, \cdot \, \varpi =0\,, ~~~~~ a^{7 \, \beta \gamma }\, \chi_\beta \, \varpi_\gamma =0\,, \\ & \xi\, \cdot \, \pi =0 \,, ~~~~~ a^{7\, \beta \gamma }\, \xi_\beta \, \pi_\gamma =0\,. \end{eqnarray} such that the 7$^{\rm th}$ and the 8$^{\rm th}$ components of $\Lambda^{x|A}$ are zero. Together with constraints (\ref{purga3}) and (\ref{purga4}), they can be solved in terms of 3 components of $\chi_\alpha$ and 3 components of $\xi_\alpha$. This reduces the number of unfixed components from 14 to 8. Using the gauge symmetries (\ref{psF}), we can lower them to 6 unfixed components. Finally, observe that there are two additional gauge symmetries generated by the constraints $\pi^7=0$ and $\varpi^7=0$ which reduce the number of unfixed parameters for $\chi_\alpha$ and $\xi_\alpha$ to 4. The total counting of the pure spinor degrees of freedom, in the space of the 24 components of the matrix $\Lambda^{x|A}$, is exactly 14 (5 for $\varpi$, 5 for $\pi$, 2 for $\chi$ and 2 for $\xi$), which is the correct number of degrees of freedom in order to cancel the total central charge. Indeed, the contributions are $+10$ from the bosons $x^{\underline a}$, $-24$ from the $\theta$'s and $+14$ from the pure spinor bosons $\Lambda$, so that the total central charge $10 - 24 + 14$ vanishes. \par In addition, one can compute the number of conjugate fields for $\theta$ and for $w$; using the constraints and the gauge symmetry, it is easy to perform the same computations as in \cite{Psconstra} and see that the numbers match again. \section{Action}\label{s4} Following the notations of \cite{D'Auria:2008ny} the complete action of Pure Spinor superstrings on Type IIA backgrounds is the sum of two parts, the Green-Schwarz action plus the gauge-fixing action containing the pure spinor sector: \begin{equation} \mathcal{A}^{IIA}_{PS} \, = \, \int \mathcal{L}_{GS} \, + \, \int \mathcal{L}^{IIA}_{gf}\,. \label{lulla1} \end{equation} The GS action is written as follows \begin{eqnarray} \mathcal{L}_{GS} = \left( \Pi^{\underline{a}}_+ \, V^{\underline{b}} \, \eta_{\underline{ab}} \, \wedge \, e^+ \, - \, \Pi^{\underline{a}}_- \, V^{\underline{b}} \, \eta_{\underline{ab}} \, \wedge \, e^- + \, \ft 12 \Pi^{\underline{a}}_i\, \Pi^{\underline{b}}_j \, \eta^{ij}\, \eta_{\underline{ab}} \, e^+ \, \wedge \, e^- \right )\, + \ft 12 \, \mathbf{B}^{[2]}\,. \label{2akinact} \end{eqnarray} where $\Pi^{\underline a}_\pm$ are auxiliary fields whose field equations identify them with the pull-back of the target-space vielbein $V^{\underline a}$ on the worldsheet along the zweibein $e^+$ and $e^-$, respectively. $\eta_{ij}$ and $\eta_{\underline{ab}}$ are the Minkowskian flat metrics on the worldsheet and on the 10d target space, respectively. Variation with respect to the zweibein yields the Virasoro constraints.
The background geometry of the worldsheet encoded in the reference frame $e^\pm$ is treated classically \cite{Berkovits:2007wz,Hoogeveen:2007tu}.\par The gauge-fixing part of the string action is written in \cite{D'Auria:2008ny} as: \begin{eqnarray}\label{expA} \mathcal{L}_{gf}^{\mathrm{IIA}} & = & \overline{\mathbf d}_+ \, \psi_R \, \wedge \, e^+ + \overline{\mathbf d}_- \, \psi_L \, \wedge \, e^- + \frac{\rm i}{2} \overline{\mathbf d}_+ \, \mathcal{M}_- \, {\mathbf d}_- \nonumber \\ &+& \overline{w}_+ \mathcal{D}\lambda_R \, \wedge \, e^+ + \overline{w}_- \, \mathcal{D}\lambda_L \, \wedge \, e^- \nonumber \\ &-& \frac{\rm i}{2} \, \overline{w}_+ \left( \mathcal{S}_{R} \mathcal{M}_- \right) {\mathbf d}_- + \frac{\rm i}{2} \, \overline{\mathbf d}_+ \left( \mathcal{S}_{L} \mathcal{M}_- \right) {w}_- \nonumber\\ &-& \frac{\rm i}{2} \, \overline{w}_+ \left( \mathcal{S}_{R} \mathcal{S}_{L} \mathcal{M}_- \right) {w}_- + \frac{\rm i}{2} \overline{w}_+ {\mathcal {M}}_- \{{\mathcal S}_L, {\mathcal S}_R\} w_- \,. \end{eqnarray} The operators $\mathcal{S}_{L/R}$ represent the components of the BRST operator $\mathcal{S}$ which are parametrized by the left/right components of the pure spinor $\lambda$. The subscripts $\pm$ on the spinor matrices refer to their action on fermions of left/right chirality, respectively. The last term is generated by the non-vanishing ${\mathcal S}_L {\mathcal S}_R$-piece of the action in \cite{D'Auria:2008ny}. With reference to \cite{D'Auria:2008ny}, we note that on the considered background the operators $\widehat{\mathcal S}_{L/R}$ coincide with ${\mathcal S}_{L/R}$ since the ${\mathcal H}^{abc}$ field strength vanishes in this case. The bosonic background corresponding to the $\mathrm{AdS_4}\times \mathbb{P}^3$ solution of Type IIA theory is characterized by the values of the background fields displayed in eq.(\ref{Kkansatz}). The spinor matrices $\mathcal{M}$ and $\mathcal{N}^{(even)}$, encoding the RR field-strengths, are given in eqs. (\ref{calm}), (\ref{caln}) respectively. The matrix ${\mathcal M}$ in the present background is constant and, therefore, we can eliminate the auxiliary fields ${\mathbf d}_\pm$ and write the complete quadratic part of the action in terms of the MC forms. We start from the first two lines of (\ref{expA}) \begin{equation}\label{acA} {\cal L}^{IIA}_{gf, 2} = \overline{\mathbf d}_+ \, \psi_R \, \wedge \, e^+ + \overline{\mathbf d}_- \, \psi_L \, \wedge \, e^- + \frac{\rm i}{2} \overline{\mathbf d}_+ \, \mathcal{M}_- \, {\mathbf d}_- \, e^+ \wedge e^- \,. \end{equation} We use the decomposition of the gravitinos $$\Psi = \psi_+ e^+ + \psi_- e^- = \chi_x \otimes \eta_A (\Phi^{x A}_+ \, e^+ +\Phi^{x A}_- \, e^-)\,,$$ where the 1-forms are pulled back onto the worldsheet; then (\ref{acA}) yields \begin{equation}\label{acB} {\cal L}^{IIA}_{gf, 2} = \Big( - {\mathbf d}^T_+ \frac{C (1 - \Gamma_{11})}{2}\, \psi_- + {\mathbf d}^T_- \, \frac{C (1 + \Gamma_{11})}{2}\, \psi_+ + \frac{\rm i}{2} {\mathbf d}^T_+ \, C\, \mathcal{ M}_- \, {\mathbf d}_- \Big) \, e^+ \wedge e^- \,. \end{equation} By eliminating the $d$'s, we have \begin{equation}\label{acC0} {\cal L}^{IIA}_{gf, 2} = - 2 {\rm i}\, \psi^T_+ \, \frac{C (1 - \Gamma_{11})}{2} \, {\mathcal M}_-^{-1}\, \frac{(1 - \Gamma_{11})}{2} \psi_-\,. \end{equation} and after some simple algebra, one gets \begin{equation}\label{acC} {\cal L}^{IIA}_{gf, 2} = - \frac{1}{2 \, e}\, \Phi^T_+ \Big(C_4 \otimes \bar\tau^7 + {\rm i}\, C_4\, \gamma^5 \otimes \relax{\rm 1\kern-.35em 1}_6 \Big) \Phi_-\,.
\end{equation} Finally, summing the $\mathbf{B}^{[2]}$ part and the contribution of the ghost fields, we obtain the quadratic part of the fermionic action \begin{eqnarray}\label{compACT} {\cal L}^{IIA}_{gf, 2} &=& - \frac{1}{\, e}\, \Phi^T_+ \Big(\frac{1}{4} C_4 \otimes \bar\tau^7 - \frac{\rm i}{2} \, C_4\, \gamma^5 \otimes \relax{\rm 1\kern-.35em 1}_6 \Big) \Phi_- \, e^+ \wedge e^- \\ &+& \left(\frac{1}{2} w^T_- \Big(C_4 \otimes \relax{\rm 1\kern-.35em 1}_6 - \gamma^5 \otimes \bar\tau^7\Big) \nabla_+ \lambda - \frac{1}{2} w^T_+ \Big(C_4 \otimes \relax{\rm 1\kern-.35em 1}_6 + \gamma^5 \otimes \bar\tau^7\Big) \nabla_- \lambda \right) e^+\wedge e^- \,. \nonumber \end{eqnarray} Notice that the matrices $(C_4 \otimes \relax{\rm 1\kern-.35em 1}_6 \pm \gamma^5 \otimes \bar\tau^7)$ are projectors and, by using the result of the appendix (\ref{inversion}), $\bar\tau^7_{AB} = \overline\eta_A \tau^7 \eta_B = K_{AB}$, we see that the projectors couple the 4-d chirality to the eigenspaces of $K_{AB}$. The third line of eq.~(\ref{expA}) vanishes on our background, as can be seen by showing that $$\mathcal{S}_{L/R}\mathcal{M}=\mathcal{S}_{R}\mathcal{S}_{L}\mathcal{M}=0\,.$$ Using the formulae in \cite{D'Auria:2008ny} one can easily verify that $\mathcal{S}\mathcal{M}=0$ since the BRST transformation of the RR field strengths $G_{\underline{ab}},\,G_{\underline{abcd}}$ vanishes as a consequence of the fact that, on our background, $\chi=\mathcal{D}_{\underline{a}} \chi=\rho_{\underline{ab}}=0$. The vanishing of $\mathcal{S}_{R}\mathcal{S}_{L}\mathcal{M}$, on the other hand, follows from the properties $\mathcal{S}\chi=\mathcal{S}\mathcal{D}_{\underline{a}}\chi=\mathcal{S}\rho_{\underline{ab}}=0$, which must hold for consistency and which can be recast, on our background, in the following way: \begin{eqnarray} \mathcal{S}\chi &=& \mathcal{N}\,\lambda=0\,\,,\,\,\,\mathcal{S}\mathcal{D}_{\underline{a}}\chi=-\mathcal{N}\,\mathcal{M}\, \Gamma_{\underline{a}}\,\lambda=0\,\,,\,\,\, \mathcal{S}\rho_{\underline{ab}}=\left(\mathcal{M}\,\Gamma_{[\underline{a}}\,\mathcal{M}\,\Gamma_{\underline{b}]}- \frac{1}{4}\,R_{\underline{ab},\underline{cd}} \,\Gamma^{\underline{cd}}\right)\,\lambda=0\,.\nonumber \end{eqnarray} The above equations are satisfied by virtue of the ansatz (\ref{pureans}) and the Killing spinor equations (\ref{gravino}), (\ref{ditalino}). The last line can be computed and we get \begin{eqnarray}\label{4term} {\mathcal L}^{\rm IIA}_{gf, 4} &=& \frac{1}{4} \overline{w}_+ {\mathcal M}_- \Gamma_{\underline{ab}} w_- \overline\lambda_L \Gamma^{[\underline{a}} {\mathcal M}_+ \Gamma^{\underline{b}]} \lambda_R\,. \end{eqnarray} By simple algebra, (\ref{4term}) can be decomposed in terms of the eigenspaces of ${\mathcal K}_{AB}$ and of given chiralities so as to get the expected form of the action \begin{equation}\label{4termB} {\mathcal L}^{\rm IIA}_{gf, 4} = R^{ab,cd} \, N_{ab,+}\, N_{cd,-} + R^{I~~J~~}_{~K~~L} N_{I,+}^{~K} N_{J,-}^{~L} \end{equation} where $R^{ab,cd}$ is the $\mathrm{AdS}_4$ Riemann tensor and $ R^{I~~J~~}_{~K~~L} $ is the Riemann tensor for ${\mathbb P}^3$. The bilinears $ N_{ab}, N_{I,+}^{~K} $ are the generators of the $\mathrm{SO}(1,3)$ and $\mathrm{U}(3)$ factors of the isotropy subgroup of the coset $\mathrm{Osp}(6|4)/\left(\mathrm{SO}(1,3) \times \mathrm{U}(3)\right)$.
They can be written compactly in $4\oplus6$ notation as follows \begin{eqnarray} N_{\underline{ab}, +} \equiv \bar{w}_+ \Gamma_{\underline{ab}} \lambda _R &=& -\frac{i}{8} \left( \overline{w}_{I,+} \left({\bf 1}+\gamma _5\right) \gamma_{ab} \lambda^I + \overline{w}^I_- \left({\bf 1}-\gamma _5\right) \gamma _{{ab}} \lambda_I \right) \nonumber \\ N_{\underline{ab}, -} \equiv \bar{w}_- \Gamma_{\underline{ab}} \lambda _L &=& -\frac{i}{8} \left( \overline{w}^I_- \left({\bf 1}+\gamma _5\right) \gamma_{ab} \lambda_I + \overline{w}_{I,-} \left({\bf 1}-\gamma _5\right) \gamma _{{ab}} \lambda^I \right) \end{eqnarray} Notice that the specific form of the action is dictated by the invariance under the gauge symmetry of the subgroup $\mathrm{SO}(1,3) \times \mathrm{U}(3)$ and by the pure spinor conditions. By using the decomposition as in \cite{Fre:2008qc} it is easy to work out the Fierz identities. Even if the result is written in a different notation, the equivalence with \cite{Bonelli:2008us} can be easily checked. \section{Conclusion} We have shown how to derive the pure spinor sigma model for the background $AdS_4 \times {\mathbb P}^3$. Using the formulation provided in \cite{D'Auria:2008ny}, we have specified all tensors appearing in the general action and we have compared with the formulation derived in \cite{Fre:2008qc}. The action is the classical starting point from which to compute higher-order corrections in $\alpha'$. Of course, one can repeat the work done in the case of $AdS_5 \times S^5$ and check the conformal invariance. We leave this to future work. \newpage
\section{Introduction} \label{sec:intro} A class of scalar-tensor theories of gravity --- Horndeski theories~\cite{Horndeski} and their extensions~\cite{Zuma,GLVP,KobaRev} --- has proved itself a promising candidate for supporting various cosmological scenarios, including those without the initial singularity. What makes (beyond) Horndeski theories and more general DHOST theories~\cite{DHOST} suitable for constructing non-singular cosmological solutions is their ability to violate the Null Energy Condition (NEC)/Null Convergence Condition (NCC) while leaving the stability of the background intact (for a review see, e.g., Ref.~\cite{RubakovNEC}). Even though the NEC/NCC can be safely violated in unextended Horndeski theories, the latter do not enable one to construct non-singular spatially flat cosmological solutions which are stable during the entire evolution~\cite{LMR,Koba_nogo}. On the contrary, beyond Horndeski and DHOST theories admit completely stable cosmologies with a bouncing or Genesis stage, see Refs.~\cite{Cai,CreminelliBH,RomaBounce,CaiBounce,genesisGR,chineseBounce2} for specific examples and Refs.~\cite{KobaRev,Khalat} for topical reviews. Another characteristic feature of modified gravities is the potential appearance of superluminal perturbations. The issue of superluminality in Horndeski theories has been addressed from different viewpoints, see Refs.~\cite{BabVikMukh,gen_original,subl_gen,MatMat,Unbraiding} and references therein. One of the most striking findings is that, at least in the pure Horndeski Genesis model of Ref.~\cite{subl_gen}, the addition of even a tiny amount of external matter (ideal fluid) inevitably induces superluminality in some otherwise healthy region of phase space~\cite{MatMat}. The latter fact is troublesome (provided one would like to avoid superluminality altogether in view of the arguments of Ref.~\cite{superlum1}), since nothing appears to prevent adding extra fluid to Horndeski theory. Likewise, superluminality has been shown to occur in other stable non-singular cosmological backgrounds: in Cuscuton gravity~\cite{Quintin:2019orx} and in DHOST theory~\cite{Ilyas:2020qja}. A step forward has been recently made in Ref.~\cite{sublum}, where a beyond Horndeski model admitting a completely stable bouncing solution has been analyzed from the viewpoint of potential superluminality. As opposed to the Genesis-supporting unextended Horndeski model with external matter~\cite{MatMat}, it has been shown that a specifically designed beyond Horndeski Lagrangian, which on its own admits a stable and subluminal bouncing solution, remains free of superluminalities upon adding extra matter in the form of a perfect fluid with equation of state parameter $w \leq 1/3$ (or even somewhat larger). On the other hand, by analysing the general expressions for the sound speeds of scalar modes in the system ``beyond Horndeski + perfect fluid'', it has been found that for $w$ equal to or close to 1, one of the scalar propagation speeds inevitably becomes superluminal. The latter statement holds irrespective of the cosmological scenario one considers, and is true for the most general beyond Horndeski theory~\cite{sublum}. This has to do with the fact, already noticed in Refs.~\cite{GLVP,Gleyzes}, that due to the specific structure of the beyond Horndeski Lagrangian, there is kinetic mixing between matter and Galileon perturbations, and hence the sound speeds of both scalar modes get modified (the superluminal mode is predominantly a sound wave in matter).
The results of Ref.~\cite{sublum} imply that in beyond Horndeski theory with an additional minimally coupled conventional scalar field, whose flat-space propagation speed is that of light, one of the scalar modes is superluminal when this extra field has small but non-zero background kinetic energy. The main purpose of this note is to derive this property explicitly. We emphasize that superluminality is generic for beyond Horndeski theory (whose action is given by eq.~\eqref{eq:action_setup}) in the presence of an additional minimally coupled conventional scalar field; this property holds for any choice of Lagrangian functions provided that at least one of the beyond Horndeski terms does not vanish. This result applies to a completely arbitrary stable cosmological background with rolling scalar (except for configurations of measure zero in the phase space), irrespective of whether the NEC/NCC is violated or not. In Sec.~\ref{sec:1} we adopt the covariant formulation and notations of Refs.~\cite{sublum,KobaRev} and derive the quadratic action for perturbations about a cosmological background in beyond Horndeski theory in the presence of an additional minimally coupled scalar field of the most general type\footnote{This generalizes the formulas given in Ref.~\cite{KobaRev}; similar results have been obtained in ADM formalism in Refs.~\cite{GLVP,Gleyzes}.}. In this way we obtain stability conditions and prepare for the calculation of the propagation speeds of perturbations in Sec.~\ref{sec:speeds}. Our expressions for the speeds show explicitly that once the flat-space speed of the scalar is equal to 1, one of the modes is superluminal in the ``beyond Horndeski + scalar field'' system provided the scalar field background is rolling, even slowly. We discuss the results in Sec.~4. \section{Beyond Horndeski theory with additional scalar field} \label{sec:1} \subsection{Setup} In this section we specify our setup and give the background equations in spatially flat FLRW geometry (our signature convention is mostly negative). We consider beyond Horndeski theory of the most general form: \begin{subequations} \label{eq:action_setup} \begin{align} S_{\pi}&=\int\mathrm{d}^4x\sqrt{-g}\left(\mathcal{L}_2 + \mathcal{L}_3 + \mathcal{L}_4 + \mathcal{L}_5 \right),\\ \mathcal{L}_2&=F(\pi,X),\\ \mathcal{L}_3&=K(\pi,X)\Box\pi,\\ \mathcal{L}_4&=-G_4(\pi,X)R+2G_{4X}(\pi,X)\left[\left(\Box\pi\right)^2-\pi_{;\mu\nu}\pi^{;\mu\nu}\right] \nonumber \\ &+ F_4(\pi,X)\epsilon^{\mu\nu\rho}_{\quad\;\sigma}\epsilon^{\mu'\nu'\rho'\sigma}\pi_{,\mu}\pi_{,\mu'}\pi_{;\nu\nu'}\pi_{;\rho\rho'},\\ \mathcal{L}_5&=G_5(\pi,X)G^{\mu\nu}\pi_{;\mu\nu}+\frac{1}{3}G_{5X}\left[\left(\Box\pi\right)^3-3\Box\pi\pi_{;\mu\nu}\pi^{;\mu\nu}+2\pi_{;\mu\nu}\pi^{;\mu\rho}\pi_{;\rho}^{\;\;\nu}\right] \nonumber \\ & +F_5(\pi,X)\epsilon^{\mu\nu\rho\sigma}\epsilon^{\mu'\nu'\rho'\sigma'}\pi_{,\mu}\pi_{,\mu'}\pi_{;\nu\nu'}\pi_{;\rho\rho'}\pi_{;\sigma\sigma'}, \end{align} \end{subequations} where $\pi$ is a scalar field sometimes dubbed the Galileon, $X=g^{\mu\nu}\pi_{,\mu}\pi_{,\nu}$, $\pi_{,\mu}=\partial_\mu\pi$, $\pi_{;\mu\nu}=\nabla_\nu\nabla_\mu\pi$, $\Box\pi = g^{\mu\nu}\nabla_\nu\nabla_\mu\pi$, $G_{4X}=\partial G_4/\partial X$, etc. The functions $F$, $K$, $G_4$ and $G_5$ are characteristic of unextended Horndeski theories, while non-vanishing $F_4$ and $F_5$ extend the theory to beyond Horndeski type.
Along with the scalar field of beyond Horndeski type we consider another scalar field $\chi$ in the form of k-essence \begin{equation} \label{eq:action_setup_kess} S_{\chi} = \int\mathrm{d}^4x\sqrt{-g} \,P(\chi,Y), \quad Y = g^{\mu\nu}\chi_{,\mu}\chi_{,\nu}\,. \end{equation} The Lagrangian in eq.~\eqref{eq:action_setup_kess} describes a minimally coupled scalar field $\chi$ of the most general type (assuming the absence of second derivatives in the Lagrangian). In flat space-time and for a spatially homogeneous background (possibly rolling, $Y=\dot{\chi}^2 \neq 0$), the stability conditions for the scalar field $\chi$ have the standard form \begin{equation} P_Y > 0 \; , \;\;\;\;\; R \equiv P_Y +2 Y P_{YY} >0 \; , \label{apr24-20-1} \end{equation} while the flat-space propagation speed of perturbations is \begin{equation} \label{eq:cm} c_m^2 = \dfrac{P_Y}{R} \; . \end{equation} Our main result on superluminality in Sec.~\ref{sec:speeds} applies most straightforwardly to the conventional scalar field with \begin{equation} P= \dfrac{1}{2} Y - V(\chi) \; , \label{apr25-20-2} \end{equation} for which $P_Y=1/2$ and $P_{YY}=0$, so that $R=1/2$ and $c_m^2=1$ identically; in this Section, however, we proceed in full generality and do not make any assumptions on the form of the function $P(\chi, Y)$. In what follows we consider a cosmological setting with spatially flat FLRW metric and homogeneous background scalar fields $\pi=\pi(t)$ and $\chi=\chi(t)$ ($t$ is cosmic time). Then the background gravitational equations following from the action $S_{\pi}+S_{\chi}$ read \begin{subequations} \label{eq:Einstein_kess} \begin{align} \nonumber \delta g^{00}: \;\; &F-2F_XX-6HK_XX\dot{\pi}+K_{\pi}X+6H^2G_4 +6HG_{4\pi}\dot{\pi}-24H^2X(G_{4X}+G_{4XX}X) \\\nonumber&+12HG_{4\pi X}X\dot{\pi} -2H^3X\dot{\pi}(5G_{5X}+2G_{5XX}X)+3H^2X(3G_{5\pi}+2G_{5\pi X}X) \\&+6H^2X^2(5F_4+2F_{4X}X) +6H^3X^2\dot{\pi}(7F_5+2F_{5X}X) + P - 2 P_Y Y = 0, \label{eq:dg00_kess}\\ \nonumber \delta g^{ii}: \;\; &F-X(2K_X\ddot{\pi}+K_\pi)+2(3H^2+2\dot{H})G_4-12H^2G_{4X}X -8\dot{H}G_{4X}X-8HG_{4X}\ddot{\pi}\dot{\pi} \\\nonumber&-16HG_{4XX}X\ddot{\pi}\dot{\pi} +2(\ddot{\pi}+2H\dot{\pi})G_{4\pi}+4XG_{4\pi X}(\ddot{\pi}-2H\dot{\pi})+2XG_{4\pi\pi} \\\nonumber&-2XG_{5X}(2H^3\dot{\pi}+2H\dot{H}\dot{\pi}+3H^2\ddot{\pi})+G_{5\pi}(3H^2X+2\dot{H}X+4H\ddot{\pi}\dot{\pi})-4H^2G_{5XX}X^2\ddot{\pi} \\\nonumber&+2HG_{5\pi X}X(2\ddot{\pi}\dot{\pi}-HX) +2HG_{5\pi\pi}X\dot{\pi}+2F_4X(3H^2X+2\dot{H}X+8H\ddot{\pi}\dot{\pi}) \\\nonumber&+8HF_{4X}X^2\ddot{\pi}\dot{\pi}+4HF_{4\pi}X^2\dot{\pi}+6HF_5X^2(2H^2\dot{\pi}+2\dot{H}\dot{\pi}+5H\ddot{\pi}) +12H^2F_{5X}X^3\ddot{\pi} \\&+6H^2F_{5\pi}X^3 + P= 0, \label{eq:dgii_kess} \end{align} \end{subequations} where $P_Y \equiv \partial P/\partial Y$, and $H=\dot{a}/a$ is the Hubble parameter. The field equation for the additional scalar field $\chi$ is: \begin{equation} \label{eq:background_kess} \ddot{\chi} + 3 c_m^2 H \dot{\chi} - \dfrac{P_{\chi} - 2 Y P_{\chi Y}}{2\,R} = 0 \; . \end{equation} The field equation for the Galileon $\pi$ follows from the gravitational equations~\eqref{eq:Einstein_kess}, their derivatives and eq.~\eqref{eq:background_kess}, so we do not give it here for brevity. \subsection{Quadratic action and stability conditions} \label{sec:stability} To address stability and superluminality issues, we calculate the quadratic action for perturbations about the homogeneous background in terms of propagating degrees of freedom (DOFs).
We make use of the standard ADM parametrization of the metric perturbations, \begin{equation} \label{eq:FLRW_perturbed} \mathrm{d}s^2 = N^2 \mathrm{d}t^2 - \gamma_{ij}(\mathrm{d}x^i+ N^i \mathrm{d}t)(\mathrm{d}x^j+N^j \mathrm{d}t), \end{equation} where \begin{equation} \label{eq:ADM} N = 1+\alpha, \qquad N_i = \partial_i\beta, \qquad \gamma_{ij}= a^2(t) e^{2\zeta} \left(\delta_{ij} + h_{ij}^T + \dfrac12 h_{ik}^T {h^{k\:T}_j}\right), \end{equation} and we have already used part of the gauge freedom by setting the longitudinal part of $\delta \gamma_{ij}$ equal to zero, $\partial_i\partial_j E = 0$. Here the scalar sector consists of $\alpha$, $\beta$, $\zeta$ from eq.~\eqref{eq:ADM} and the scalar field perturbations $\delta\pi$ and \[ \delta\chi \equiv \omega\; , \] while $h_{ij}^T$ denote tensor modes ($h_{ii}^T = 0, \partial_i h_{ij}^T = 0$). Like in Ref.~\cite{sublum} we adopt the unitary gauge, where $\delta\pi = 0$. Then the quadratic action for beyond Horndeski theory~\eqref{eq:action_setup} reads~\cite{RomaBounce}: \begin{equation} \label{eq:pert_action_setup} \begin{aligned} S^{(2)}_{\pi}=\int\mathrm{d}t\,\mathrm{d}^3x \,a^3\Bigg[\left(\dfrac{\mathcal{G_T}}{8}\left(\dot{h}^T_{ik}\right)^2-\dfrac{\mathcal{F_T}}{8a^2}\left(\partial_i h_{kl}^T\right)^2\right)+ \left(-3\mathcal{G_T}\dot{\zeta}^2+\mathcal{F_T}\dfrac{(\nabla\zeta)^2}{a^2}+\Sigma\alpha^2 \right.\\\left. -2(\mathcal{G_T}+\mathcal{D}\dot{\pi})\alpha\dfrac{\nabla^2\zeta}{a^2}+6\Theta\alpha\dot{\zeta}-2\Theta\alpha\dfrac{\nabla^2\beta}{a^2} +2\mathcal{G_T}\dot{\zeta}\dfrac{\nabla^2\beta}{a^2}\right)\Bigg], \end{aligned} \end{equation} with $(\nabla\zeta)^2 = \delta^{ij} \partial_i \zeta \partial_j \zeta$, $\nabla^2 = \delta^{ij} \partial_i \partial_j$ and \begin{subequations} \begin{align} \label{eq:GT_coeff_setup} &\mathcal{G_T}=2G_4-4G_{4X}X+G_{5\pi}X-2HG_{5X}X\dot{\pi} + 2F_4X^2+6HF_5X^2\dot{\pi}, \\ &\mathcal{F_T}=2G_4-2G_{5X}X\ddot{\pi}-G_{5\pi}X,\\ \label{eq:D_coeff_setup} &\mathcal{D}=-2F_4X\dot{\pi}-6HF_5X^2,\\ &\Theta=-K_XX\dot{\pi}+2G_4H-8HG_{4X}X-8HG_{4XX}X^2+G_{4\pi}\dot{\pi}+2G_{4\pi X}X\dot{\pi}-5H^2G_{5X}X\dot{\pi}\nonumber\\ &-2H^2G_{5XX}X^2\dot{\pi}+3HG_{5\pi}X+2HG_{5\pi X}X^2 +10HF_4X^2+4HF_{4X}X^3+21H^2F_5X^2\dot{\pi} \nonumber\\ & +6H^2F_{5X}X^3\dot{\pi}, \label{eq:Theta_coeff_setup} \\ &\Sigma=F_XX+2F_{XX}X^2+12HK_XX\dot{\pi}+6HK_{XX}X^2\dot{\pi}-K_{\pi}X-K_{\pi X}X^2-6H^2G_4 \nonumber\\ &+42H^2G_{4X}X+96H^2G_{4XX}X^2+24H^2G_{4XXX}X^3-6HG_{4\pi}\dot{\pi}-30HG_{4\pi X}X\dot{\pi} \nonumber\\ \nonumber &-12HG_{4\pi XX}X^2\dot{\pi}+30H^3G_{5X}X\dot{\pi}+26H^3G_{5XX}X^2\dot{\pi}+4H^3G_{5XXX}X^3\dot{\pi}-18H^2G_{5\pi}X\\ \nonumber &-27H^2G_{5\pi X}X^2-6H^2G_{5\pi XX}X^3-90H^2F_4X^2-78H^2F_{4X}X^3-12H^2F_{4XX}X^4\\ &-168H^3F_5X^2\dot{\pi}-102H^3F_{5X}X^3\dot{\pi}-12H^3F_{5XX}X^4\dot{\pi}. \label{eq:Sigma_coeff_setup} \end{align} \end{subequations} The first round brackets in eq.~\eqref{eq:pert_action_setup} describe the tensor sector, while the second ones refer to the scalar modes. The quadratic action for k-essence~\eqref{eq:action_setup_kess} is as follows: \begin{equation} \begin{aligned} \label{eq:quadr_action_kess} S^{(2)}_{\chi} = \int \mathrm{d}t\,\mathrm{d}^3x \,a^3 \left[ Y R \,\alpha^2 - 2 \dot{\chi} R \, \alpha\dot{\omega} + 2\dot{\chi} P_Y \, \omega\dfrac{\nabla^2\beta}{a^2} + R\, \dot{\omega}^2 - P_Y\, \dfrac{(\nabla\omega)^2}{a^2} \right.\\ \left.
- 6 \dot{\chi} P_Y\, \dot{\zeta}\omega + (P_{\chi}-2 Y P_{\chi Y})\,\alpha\omega +\Omega \, \omega^2 \right], \end{aligned} \end{equation} where $\Omega = P_{\chi\chi}/2 - 3 H \dot{\chi} P_{\chi Y} - Y P_{\chi\chi Y} - \ddot{\chi} (P_{\chi Y} + 2 Y P_{\chi YY})$. When deriving the actions~\eqref{eq:pert_action_setup} and~\eqref{eq:quadr_action_kess} we used the background equations~\eqref{eq:Einstein_kess}, which made the terms with $\alpha\zeta$, $\zeta^2$ and $\zeta\omega$ vanish. Let us for a moment concentrate on the scalar sector. According to the form of the actions~\eqref{eq:pert_action_setup} and~\eqref{eq:quadr_action_kess}, $\alpha$ and $\beta$ are non-dynamical variables, so varying $S^{(2)}_{\pi} +S^{(2)}_{\chi}$ with respect to $\alpha$ and $\beta$ gives the following constraint equations, respectively: \begin{subequations} \label{eq:constraints} \begin{align} \label{eq:beta} & \Sigma \alpha - \left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right) \dfrac{(\nabla^2\zeta)}{a^2} +3 \Theta \dot{\zeta} - \Theta \dfrac{(\nabla^2\beta)}{a^2} + Y R\,\alpha - \dot{\chi} R\, \dot{\omega} + \frac12 (P_{\chi} - 2 Y P_{\chi Y})\,\omega = 0,\\ \label{eq:alpha} & \hspace{7cm}\Theta \alpha -\mathcal{G_T} \dot{\zeta} - \dot{\chi}P_Y \,\omega =0. \end{align} \end{subequations} By solving eqs.~\eqref{eq:beta} and~\eqref{eq:alpha} for $(\nabla^2\beta)/{a^2}$ and $\alpha$ and substituting the result back into the actions~\eqref{eq:pert_action_setup} and~\eqref{eq:quadr_action_kess}, one arrives at the quadratic action for the scalar DOFs in terms of the dynamical curvature perturbation $\zeta$ and the scalar field perturbation $\omega$: \begin{equation} \label{eq:quadratic_action_final} S^{(2)}_{\pi+\chi} = \int \mathrm{d}t\,\mathrm{d}^3x \,a^3 \left[G_{AB} \dot{v}^A \dot{v}^B - \dfrac{1}{a^2} F_{AB} \nabla_i\,{v^A} \nabla^i\,{v^B}+ \Psi_1\dot{\zeta}\omega + \Psi_2 \omega^2\right], \end{equation} where $A,B = 1,2$ and $v^1 = \zeta$, $v^2 = \omega$. Even though the coefficients $\Psi_1$ and $\Psi_2$ are irrelevant for kinetic stability (absence of ghosts and gradient instabilities) as well as for the propagation speeds of $\zeta$ and $\omega$, they are given in the Appendix for completeness. The kinetic matrices $G_{AB}$ and $F_{AB}$ have the following form: \begin{equation} G_{AB} = \begin{pmatrix} \mathcal{G_S} + \dfrac{\mathcal{G_T}^2}{\Theta^2} YR & -\dfrac{\mathcal{G_T}}{\Theta} \dot{\chi}R \\ -\dfrac{\mathcal{G_T}}{\Theta} \dot{\chi}R & R \end{pmatrix}, \;\; F_{AB} = \begin{pmatrix} \mathcal{F_S} & -\dfrac{\left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right)}{\Theta} \dot{\chi} P_Y\\ -\dfrac{\left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right)}{\Theta} \dot{\chi} P_Y & P_Y \end{pmatrix}\; , \end{equation} where \begin{subequations} \begin{align} \mathcal{G_S} & = \dfrac{\Sigma\mathcal{G_T}^2}{\Theta^2}+3\mathcal{G_T}, \\ \mathcal{F_S} &= \dfrac{1}{a}\dfrac{\mathrm{d}}{\mathrm{d}t} \left[ \dfrac{a \;\mathcal{G_T}\left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right)}{\Theta}\right] -\mathcal{F_T}\; . \end{align} \label{may21-20-1} \end{subequations} It is worth noting that both $\mathcal{G_S}$ and $\mathcal{F_S}$ are generally singular at $\Theta =0$ ($\Theta$-crossing, or $\gamma$-crossing in the terminology of Refs.~\cite{Ijjas:2016tpn,Ijjas:2017pei}). However, no singularity exists at $\Theta =0$ in the Newtonian gauge~\cite{Ijjas:2017pei}, and the perturbations are non-singular in the unitary gauge as well~\cite{Mironov:2018oec}. Thus, the system is well behaved at the moment of time when $\Theta=0$.
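For later use, note that on the background one has $Y=\dot{\chi}^2$, so the determinants of the kinetic matrices collapse to $\det G_{AB} = \mathcal{G_S}\, R$ and $\det F_{AB} = P_Y\,\bigl[\mathcal{F_S} - Y P_Y \left(\mathcal{G_T}+\mathcal{D}\dot{\pi}\right)^2/\Theta^2\bigr]$; these identities underlie the stability conditions formulated below. A quick symbolic check of this algebra (a Python/SymPy sketch of our own, not part of the original derivation):
\begin{verbatim}
import sympy as sp

# generic positive symbols standing for the background quantities
GS, FS, GT, D, Th, R, PY, chidot, pidot = sp.symbols(
    'G_S F_S G_T D Theta R P_Y chidot pidot', positive=True)
Y = chidot**2  # on the homogeneous background, Y = \dot\chi^2

G = sp.Matrix([[GS + GT**2/Th**2 * Y * R, -GT/Th * chidot * R],
               [-GT/Th * chidot * R,      R]])
F = sp.Matrix([[FS, -(GT + D*pidot)/Th * chidot * PY],
               [-(GT + D*pidot)/Th * chidot * PY, PY]])

# both expressions simplify to zero, confirming the determinant identities
print(sp.simplify(G.det() - GS*R))
print(sp.simplify(F.det() - PY*(FS - Y*PY*(GT + D*pidot)**2/Th**2)))
\end{verbatim}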
Now we can formulate the stability conditions for beyond Horndeski theories with an additional scalar field in the cosmological setting. Recalling the tensor part of the quadratic action in eq.~\eqref{eq:pert_action_setup}, we see that the tensor sector is free of ghosts and gradient instabilities provided that \begin{equation} \label{eq:stability_conditions_tensor} \mathcal{G_T}>0, \quad \mathcal{F_T}>0. \end{equation} Let us note here that the stability conditions~\eqref{eq:stability_conditions_tensor} have retained their form as compared to the case of pure beyond Horndeski, see e.g. Ref.~\cite{Khalat}. However, since the coefficient $\mathcal{G_T}$~\eqref{eq:GT_coeff_setup} generally involves the Hubble parameter, the stability of gravitational waves gets affected by the additional k-essence through the Friedmann equation~\eqref{eq:dg00_kess}. As for the scalar modes, it follows from the action~\eqref{eq:quadratic_action_final} that the scalar sector is free of ghosts and gradient instabilities iff both kinetic matrices are positive definite ($G_{11}, G_{22}>0$, $\det G > 0$ and $F_{11}, F_{22}>0$, $\det F > 0 $): \begin{equation} \label{eq:stability_conditions_scalar} \mathcal{G_S} > 0 \; , \quad \mathcal{F_S}>0, \quad R > 0\; , \quad P_Y>0 \;, \quad \mathcal{F_S} - Y P_Y\dfrac{\left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right)^2}{\Theta^2} > 0. \end{equation} The first four conditions are formally the same as the stability conditions in pure beyond Horndeski theory and pure k-essence theory (the extra scalar field affects $\mathcal{G_S}$ and $\mathcal{F_S}$ through the Hubble parameter only), while the last condition is specific to the interacting theory. \section{Superluminality due to conventional scalar field} \label{sec:speeds} Let us now turn to the propagation speeds of the perturbations. The sound speed squared for tensor perturbations follows immediately from the action~\eqref{eq:pert_action_setup}: \begin{equation} \label{eq:tensor_speed} c_{\mathcal{T}}^2 = \dfrac{\mathcal{F_T}}{\mathcal{G_T}}. \end{equation} Again, $c_{\mathcal{T}}^2$ has a standard form, but in fact the tensor sound speed changes upon introducing additional k-essence due to the new contributions in eq.~\eqref{eq:dg00_kess} and, hence, the modified Hubble parameter. In the scalar sector, the propagation speeds of $\zeta$ and $\omega$ are given by the eigenvalues of the matrix $G_{AB}^{-1}F_{AB}$: \begin{equation} \label{eq:matrix_final} G_{AB}^{-1} F_{AB} = \begin{pmatrix} \dfrac{\mathcal{F_S}}{\mathcal{G_S}} - \dfrac{\left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right)\mathcal{G_T}}{\Theta^2} \dfrac{Y P_Y}{\mathcal{G_S}} & -\dfrac{\dot{\chi} P_Y}{\mathcal{G_S}} \dfrac{\mathcal{D}\dot{\pi}}{\Theta} \\ \dfrac{\mathcal{G_T}}{\Theta}\dot{\chi}\left[ \dfrac{\mathcal{F_S}}{\mathcal{G_S}} - \dfrac{\left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right)\mathcal{G_T}}{\Theta^2} \dfrac{Y P_Y}{\mathcal{G_S}} \right] - \dfrac{\left(\mathcal{G_T} + \mathcal{D} \dot{\pi}\right)}{\Theta}\dfrac{\dot{\chi} P_Y}{R} \;\;\; & c_m^2- \dfrac{Y P_Y}{\mathcal{G_S}} \dfrac{\mathcal{G_T}\left(\mathcal{D}\dot{\pi}\right)}{\Theta^2} \end{pmatrix} \; .
\end{equation} Explicitly, the speeds are (recall that $c_m^2 = P_Y/R$): \begin{eqnarray} \label{eq:speeds_eigenvalues} {c_{\mathcal{S}\, \pm}^2} &=& \dfrac12 c_m^2+ \dfrac{1}{2} \left[ \dfrac{\mathcal{F_S}}{\mathcal{G_S}} - \dfrac{Y P_Y}{\mathcal{G_S}} \dfrac{\mathcal{G_T}(\mathcal{G_T}+2\mathcal{D}\dot{\pi})}{\Theta^2} \right.\\\nonumber &&\left.\pm \sqrt{ \left(\dfrac{\mathcal{F_S}}{\mathcal{G_S}} - \dfrac{Y P_Y}{\mathcal{G_S}} \dfrac{\mathcal{G_T}(\mathcal{G_T}+2\mathcal{D}\dot{\pi})}{\Theta^2} + c_m^2 \right)^2 - 4 \, c_m^2 \left(\dfrac{\mathcal{F_S}}{\mathcal{G_S}} - \dfrac{Y P_Y}{\mathcal{G_S}} \dfrac{\left(\mathcal{G_T}+\mathcal{D}\dot{\pi}\right)^2}{\Theta^2} \right) } \,\, \right]. \end{eqnarray} In accordance with the above remark, there is no singularity in the sound speeds at $\Theta=0$. Indeed, the speeds are finite as $\Theta \to 0$: one finds from eq.~\eqref{may21-20-1} that both ${\mathcal{F_S}}/{\mathcal{G_S}}$ and $\Theta^2 \,{\mathcal{G_S}}$ are finite in this limit. On the other hand, depending on the model, one of the sound speeds may become arbitrarily large in some region of parameter space, say, where ${\mathcal{G_S}} \to 0$ and ${\mathcal{F_S}}$ remains finite, cf. Ref.~\cite{MatMat}. Now we see a considerable difference between the unextended Horndeski and beyond Horndeski theories. In the unextended Horndeski case, the coefficient $\mathcal{D}$ vanishes (see eq.~\eqref{eq:D_coeff_setup}), so the matrix~\eqref{eq:matrix_final} is triangular and the speed of perturbations in k-essence recovers its standard value $c_m^2$, while the propagation speed of Galileon perturbations is modified. Indeed, for $\mathcal{D}=0$, eqs.~\eqref{eq:speeds_eigenvalues} reduce to \begin{equation} \label{eq:speed_Horndeski} {c_{\mathcal{S} \, -}^2}|_{\mathcal{D}=0} = \dfrac{\mathcal{F_S}}{\mathcal{G_S}} - \dfrac{Y P_Y}{\mathcal{G_S}} \dfrac{\mathcal{G_T}^2}{\Theta^2}, \quad {c_{\mathcal{S}\, +}^2}|_{\mathcal{D}=0} = c_m^2, \end{equation} and we recover the results for Horndeski theory with k-essence $P(Y)$ given in Ref.~\cite{KobaRev}. On the contrary, with $\mathcal{D} \neq 0$, there is kinetic mixing between the scalars $\zeta$ and $\omega$, so both scalar speeds get modified, in general agreement with Refs.~\cite{GLVP,Gleyzes}. The key observation is that eq.~\eqref{eq:speeds_eigenvalues} has the following form (cf. Ref.~\cite{sublum}): \begin{equation} c_{\mathcal{S}\, \pm}^2 = \dfrac12 (c_m^2 + \mathcal{A}) \pm \dfrac12 \sqrt{(c_m^2 - \mathcal{A})^2 + \mathcal{B}}, \label{apr25-20-5} \end{equation} where \begin{equation} \mathcal{A}= \frac{\mathcal{F_S}}{\mathcal{G_S}} - \frac{YP_Y}{\mathcal{G_S}} \, \frac{\mathcal{G_T}(\mathcal{G_T} + 2 \mathcal{D}\dot{\pi})}{ \Theta^2} \; , \;\;\;\;\;\; \mathcal{B}= 4c_m^2\frac{YP_Y}{\mathcal{G_S}} \frac{(\mathcal{D} \dot{\pi})^2}{ \Theta^2} \; . \nonumber \end{equation} In a stable and rolling background ($\mathcal{G_S}, P_Y >0$, $Y>0$), the coefficient $\mathcal{B}$ is positive ($\mathcal{D} \neq 0$ unless the value of $Y$ and, hence, the Hubble parameter is fine-tuned, see eq.~\eqref{eq:D_coeff_setup}). This gives immediately \begin{equation} c_{\mathcal{S}\, +}^2 > c_m^2 \;\;\;\;\mbox{for}\;\; Y\neq 0 \; . \label{apr25-20-1} \end{equation} So, if the flat-space propagation of the scalar perturbation $\omega$ is luminal, $c_m=1$, then it becomes superluminal in the ``beyond Horndeski + scalar field'' system.
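The inequality \eqref{apr25-20-1} also follows elementarily from \eqref{apr25-20-5}: $c_{\mathcal{S}\,+}^2 - c_m^2 = \frac12(\mathcal{A}-c_m^2) + \frac12\sqrt{(c_m^2-\mathcal{A})^2+\mathcal{B}}$, which is strictly positive for any $\mathcal{A}$ once $\mathcal{B}>0$. As a sanity check (a Python sketch of our own, not part of the original analysis), one can scan the representation \eqref{apr25-20-5} numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
cm2 = 1.0                               # luminal flat-space speed, c_m^2 = 1
A = rng.uniform(-5.0, 5.0, 10**6)       # A may have either sign
B = rng.uniform(1e-12, 5.0, 10**6)      # B > 0 on a stable rolling background

c_plus2 = 0.5*(cm2 + A) + 0.5*np.sqrt((cm2 - A)**2 + B)
print((c_plus2 > cm2).all())            # True: c_+^2 > c_m^2 whenever B > 0
\end{verbatim}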
Equations \eqref{eq:speeds_eigenvalues}, \eqref{apr25-20-5} and \eqref{apr25-20-1} are our main results. \section{Discussion} \label{sec:conclusion-bis} The interpretation of the result~\eqref{apr25-20-1} is most straightforward in the case of the conventional scalar field $\chi$ with the Lagrangian~\eqref{apr25-20-2}. In that case one has $c_m = 1$ for any $Y$, and even a tiny kinetic energy of the rolling scalar background $\chi(t)$ immediately yields superluminal propagation of one of the modes. It is suggested (see, e.g., Ref.~\cite{deRham:2019ctd}) that a covariant theory which is fundamentally Lorentz invariant should recover a sound speed equal to unity in the far UV limit ($k \rightarrow \infty$, where $k$ is the spatial momentum), even though for smaller $k$ the perturbation modes could be superluminal. Our result is independent of $k$, so this is not the case in the theories we consider: superluminality would occur even as $k$ tends to infinity. Therefore, if we decide to insist on Lorentz invariance of an underlying theory, and hence to avoid superluminality for good, we have to conclude that in scalar-tensor theories with multiple scalar fields, none of these fields can be conventional and minimally coupled, as long as at least one of the scalar fields is of beyond Horndeski type. More generally, if we insist on the absence of superluminality, the results \eqref{eq:speeds_eigenvalues}, \eqref{apr25-20-5} imply a non-trivial constraint on the structure of ``beyond Horndeski + minimal quintessence'' systems: it is required that $c_{\mathcal{S}\, +} \leq 1$ everywhere in the part of the phase space $(\pi, \dot{\pi}, \chi, \dot{\chi})$ where the stability conditions \eqref{eq:stability_conditions_scalar} are satisfied. In particular, this constraint forbids luminal flat-space propagation, $c_m=1$ (and, by continuity, $c_m$ close to 1), in any rolling background $Y\neq 0$, unless such a background is unstable for any $\pi$ and $\dot{\pi}$. Viewed differently, the constraint that $c_{\mathcal{S}\, +} \leq 1$ in the entire ``stable'' part of phase space suggests intricate properties of the UV completion of the scalar-tensor theories considered in this note, if such a UV completion exists and is Lorentz-invariant. We conclude by adding that it is certainly of interest to study the superluminality issue in more general DHOST theories coupled to conventional or $k$-essence scalar field(s), and also to address the phenomenological implications of our result, especially in models of dark energy in the late-time Universe. \section*{Acknowledgements} We are indebted to an anonymous referee for valuable comments. This work has been supported by Russian Science Foundation grant 19-12-00393. \section*{Appendix} In this Appendix we give explicit expressions for the coefficients $\Psi_1$ and $\Psi_2$ involved in the quadratic action~\eqref{eq:quadratic_action_final} for the ``beyond Horndeski + k-essence $P(\chi,Y)$'' theory: \begin{eqnarray} &\Psi_1 = \dfrac{\mathcal{G_T}}{\Theta^2}\left[ 2 \dot{\chi} P_Y (\Sigma + Y R) + \Theta(P_{\chi} - 2 Y P_{\chi Y}) \right] , \\ &\Psi_2 = \Omega + \dfrac{\dot{\chi} P_Y}{\Theta} (P_{\chi} - 2 Y P_{\chi Y}) + \dfrac{Y P_Y^2}{\Theta^2}(\Sigma + Y R) + \dfrac{d}{dt}\Big[2Y P_Y\, R\Big], \end{eqnarray} where $$ \Omega = P_{\chi\chi}/2 - 3 H \dot{\chi} P_{\chi Y} - Y P_{\chi\chi Y} - \ddot{\chi} (P_{\chi Y} + 2 Y P_{\chi YY}). $$
\section{Introduction} In the present note we focus on the sub-Riemannian geodesics of the sub-Riemannian structures associated with the complex Hopf fibration $$\mathbb S^1 \hookrightarrow \mathbb S^{2n+1}\hookrightarrow \mathbb {CP}^n$$ over the complex projective space $\mathbb {CP}^n$. The motivation for the work is two-fold. On one hand, it has a natural quantum physics background. The case $n=1$ is of course the classical Hopf fibration, which is well studied and for which explicit formulas for geodesics were obtained. However, the calculations for the higher dimensional case ($n\geq 2$) become quite complicated and only partial results were obtained. See \cite{cmvhopf} and the references therein for details. On the other hand, using the tools developed in the study of the geometry of curves in Lagrange Grassmannians, we constructed in \cite{cijacobi} the curvature maps and expressed them in terms of the Riemannian curvature tensor of the base manifold and the curvature form of the principal connection of the principal bundle. However, the disadvantage there is the lack of examples complementing the theory, and the sub-Riemannian structures associated with the complex Hopf fibrations can play exactly such a role. More precisely, instead of making efforts to obtain an explicit parametric expression for the sub-Riemannian geodesics, we study the curvature maps of the sub-Riemannian structures associated with the complex Hopf fibration in order to obtain the intrinsic Jacobi equation along a sub-Riemannian extremal, so that we can establish comparison theorems estimating the number of conjugate points along a sub-Riemannian geodesic. We organize the note as follows. First of all, we formulate the sub-Riemannian geodesic problems for the sub-Riemannian structures associated with complex Hopf fibrations. Secondly, we explain the construction of the curvature maps for a contact sub-Riemannian structure and then show the expressions of the curvature maps when there are additional transverse symmetries. Finally, we apply the results to the sub-Riemannian structures associated with complex Hopf fibrations and obtain comparison theorems estimating the number of conjugate points along a sub-Riemannian geodesic. \section{Sub-Riemannian geodesic problem associated with complex Hopf fibrations} We will start with a description of a sub-Riemannian structure associated with a principal $G$-connection on a principal $G$-bundle over a Riemannian manifold and then specialize to the case of the complex Hopf fibrations. We use the standard terminology from the theory of principal $G$-bundles (see e.g. \cite{knfoundations1}). Let $\pi:P\rightarrow M$ be a principal $G$-bundle over a smooth manifold $(M,g)$. For any $p\in P$ we can define in $T_pP$ the vertical subspace $$\mathcal V_p:=\{v\in T_pP|\pi_*v=0\}$$ and $\mathcal V=\{\mathcal V_p:p\in P\}$ is usually called the \emph{vertical distribution}. A principal $G$-connection on $P$ is a differential 1-form (connection form) on $P$ with values in the Lie algebra $\mathfrak g$ of $G$ which is $G$-equivariant and reproduces the Lie algebra generators of the fundamental vector fields on $P$.
In other words, it is an element $\omega\in\Omega^1(P,\mathfrak g)$ such that \begin{itemize} \item $\hbox{Ad}(g)(R_g^*\omega)=\omega$, where $R_g$ denotes right multiplication by $g$; \item if $\xi\in\mathfrak g$ and $X_\xi$ is the fundamental vector field on $P$ associated to $\xi$, then $\omega(X_\xi)=\xi.$ \end{itemize} A principal $G$-connection is equivalent to a $G$-equivariant Ehresmann connection $\mathcal H$, i.e., a smooth vector distribution $\mathcal H$ on $P$ satisfying $$T_pP=\mathcal H_p+\mathcal V_p,\quad\quad\mathcal H_{pg}=d(R_g)_p(\mathcal H_p),\quad \forall p\in P, g\in G.$$ Such a distribution $\mathcal H$ is usually called a \emph{horizontal distribution}. We are concerned with the case where the manifold $M$ is equipped with a Riemannian metric $\langle\cdot ,\cdot\rangle$, because a sub-Riemannian structure is then naturally associated with it. Namely, the pull back $\pi^*(\langle\cdot,\cdot\rangle)$ defines an inner product on the distribution $\mathcal H$, as $\pi_*$ is an isomorphism between $\mathcal H_p$ and $T_{\pi(p)}M.$ The triple $(P,\mathcal H,\langle\cdot,\cdot\rangle)$ is called \emph{a sub-Riemannian structure associated with the principal $G$-bundle $\pi:P\rightarrow M$.} As a special case, the complex Hopf fibration $$\mathbb S^1 \hookrightarrow \mathbb S^{2n+1}\stackrel{\pi}{\hookrightarrow} \mathbb {CP}^n$$ is a principal $G$-bundle, where $G=U(1)\cong \mathbb S^1$ acts by the circle action and $\mathbb {CP}^n$ is equipped with the K\"{a}hlerian Fubini-Study metric $\langle\cdot,\cdot\rangle$. The action of $e^{2\pi it} \in U(1)$ on $\mathbb S^{2n+1}\subset\mathbb C^{n+1}$ is defined by $$e^{2\pi it}z = e^{2\pi it}(z_1, \ldots , z_{n+1}) = (e^{2\pi it}z_1, \ldots , e^{2\pi it}z_{n+1}).$$ We will use both real coordinates $(x_1, y_1, \ldots , x_{n+1}, y_{n+1})$ and complex coordinates $z_k = x_k + iy_k$, $k = 1, \ldots , n+1$. The horizontal tangent space $\mathcal H_z$ at $z\in \mathbb S^{2n+1}$ is the maximal complex subspace of the real tangent space $T_z\mathbb S^{2n+1}$. The unit normal real vector field $N(z)$ at $z\in\mathbb S^{2n+1}$ is given by $$N(z) =\sum_{k=1}^{n+1}\left( x_k\partial_{x_k}+y_k\partial_{y_k}\right)=2Re\sum_{k=1}^{n+1}z_k\partial_{z_k}.$$ The vertical real vector field \begin{equation}\label{vertical} V(z)=iN(z)=\sum_{k=1}^{n+1}\left( -y_k\partial_{x_k}+x_k\partial_{y_k}\right)=2Re \sum_{k=1}^{n+1} iz_k\partial_{z_k} \end{equation} is globally defined and non-vanishing and spans the vertical distribution $\mathcal V$ of the complex Hopf fibration $\mathbb S^1 \hookrightarrow \mathbb S^{2n+1}\stackrel{\pi}{\hookrightarrow} \mathbb {CP}^n$. A natural choice of principal $G$-connection $\mathcal H$ is such that $\mathcal H_z$ is the orthogonal complement of $\mathcal V_z$ in $T_z\mathbb S^{2n+1}$ w.r.t. the round metric for all $z\in\mathbb S^{2n+1}$. The restriction of the round metric to the horizontal and vertical subspaces is denoted by $d_\mathcal H$ and $d_\mathcal V$, respectively. To conclude the above construction, we will work with the sub-Riemannian manifold given by the triple $(\mathbb S^{2n+1},\mathcal H, d_\mathcal H)$. Note that it is a special case of a sub-Riemannian structure associated with a principal bundle. Indeed, $\pi:\mathbb S^{2n+1}\rightarrow \mathbb{CP}^n$ is a Riemannian submersion, where $\mathbb S^{2n+1}$ carries the round metric and $\mathbb{CP}^n$ the Fubini-Study metric (see e.g. \cite{priemannian}); therefore $d_\mathcal H=\pi^*(\langle\cdot,\cdot\rangle)$. We finally remark that by definition the distribution $\mathcal H$ is nonholonomic or bracket-generating.
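As a quick numerical consistency check of eq.~\eqref{vertical} (a Python sketch of our own, not part of the original text), one can verify at random points of $\mathbb S^{2n+1}$ that $V=iN$ is orthogonal to $N$ in the real inner product ${\rm Re}\langle\cdot,\cdot\rangle$, hence tangent to the sphere, and of unit length:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 2                                      # S^{2n+1} = S^5 inside C^{n+1} = C^3
z = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
z /= np.linalg.norm(z)                     # a point of S^{2n+1}

N = z                                      # unit normal, N(z) corresponds to z
V = 1j * z                                 # vertical field, V(z) = iN(z)

# V is tangent to the sphere (orthogonal to N) and of unit length:
print(np.isclose(np.real(np.vdot(V, N)), 0.0))   # True
print(np.isclose(np.real(np.vdot(V, V)), 1.0))   # True
\end{verbatim}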
\section{Construction of curvature maps for a contact sub-Riemannian structure} The following construction can actually be carried out for a general sub-Riemannian structure. However, as the sub-Riemannian structure $(\mathbb S^{2n+1},\mathcal H,d_\mathcal H)$ is of contact type, it suffices to focus on a contact sub-Riemannian structure, which also allows a more concrete exposition than the general one. For this, let $(M, \mathcal D, \left\langle\cdot,\cdot\right\rangle)$ be a sub-Riemannian structure on $M$, where $\mathcal D$ is a contact distribution. Assume that $M$ is connected and that $\mathcal{D}$ is nonholonomic or bracket-generating. A Lipschitzian curve $\gamma: [0, T]\longrightarrow M$ is called \emph{admissible} if $\dot\gamma(t)\in \mathcal{D}_{\gamma(t)}$ for a.e. $t$. It follows from the Rashevskii-Chow theorem that any two points in $M$ can be connected by an admissible curve. One can define the length of an admissible curve $\gamma: [0, T]\longrightarrow M$ by $\int_0^T \|\dot\gamma(t)\|dt,$ where $\|\dot\gamma(t)\|=\left\langle\dot\gamma(t),\dot\gamma(t)\right\rangle^{\frac{1}{2}}.$ \subsection{Sub-Riemannian geodesics} The length minimizing problem is to find the shortest admissible curve connecting two given points on $M$. As in Riemannian geometry, it is equivalent to the problem of minimizing the kinetic energy $\frac{1}{2}\int_0^T \|\dot\gamma(t)\|^2dt$. The problem can be regarded as an optimal control problem and its extremals can be described by the Pontryagin Maximum Principle of Optimal Control Theory (\cite{pbgmthe}). There are two different types of extremals, abnormal and normal, depending on whether the Lagrange multiplier in front of the functional vanishes or not. For the case of a contact sub-Riemannian structure, all sub-Riemannian energy (length) minimizers are the projections of normal extremals (see e.g. \cite{matour}). Therefore we shall focus on normal extremals only. To describe them let us introduce some notation. Let $T^*M$ be the cotangent bundle of $M$ and $\sigma$ be the canonical symplectic form on $T^*M$, i.e., $\sigma=-d\varsigma$, where $\varsigma$ is the tautological (Liouville) 1-form on $T^*M$. For each function $h:T^*M\to \mathbb R$, the Hamiltonian vector field $\vec h$ is defined by $i_{\vec h}\sigma=dh.$ Given a vector $u\in T_qM$ and a covector $p\in T_q^*M$ we denote by $p\cdot u$ the value of $p$ at $u$. Let \begin{equation}\label{h} h(\lambda)\stackrel{\Delta}{=}\max_{u\in\mathcal{D}_q}(p\cdot u-\frac{1}{2}\|u\|^2)=\frac{1}{2}\|p|_{\mathcal{D}_q}\|^2,\quad\lambda=(p,q)\in T^*M,\ q\in M,\ p\in T^*_qM, \end{equation} where $p|_{\mathcal{D}_q}$ is the restriction of the linear functional $p$ to $\mathcal{D}_q$ and the norm $\|p|_{\mathcal{D}_q}\|$ is defined w.r.t. the Euclidean structure on $\mathcal D_q.$ The normal extremals are exactly the trajectories of $\dot\lambda(t)=\vec h(\lambda(t))$. \subsection{Jacobi curve and conjugate points along normal extremals} Let us fix a level set of the Hamiltonian function $h$: $$\mathcal{H}_{c}\stackrel{\Delta}{=}\{\lambda\in T^*M\,|\, h(\lambda)=c\},\quad c>0.$$ Let $\Pi_{\lambda}$ be the vertical subspace of $T_{\lambda}\mathcal{H}_{c}$, i.e. $$ \Pi_{\lambda}=\{\xi\in T_{\lambda}\mathcal{H}_c: \pi_*(\xi)=0\}, $$ where $\pi: T^*M\longrightarrow M$ is the canonical projection. With any normal extremal $\lambda(\cdot)$ on $\mathcal H_{c}$ one can associate a curve in a Lagrange Grassmannian which describes the dynamics of the vertical subspaces $\Pi_\lambda$ along this extremal w.r.t.
the flow $e^{t\vec h}$ generated by $\vec h$. For this let \begin{equation}\label{Jacobi} t\longmapsto \mathfrak J_{\lambda}(t)\stackrel{\Delta} {=}e_*^{-t\vec h}(\Pi_{e^{t\vec h}\lambda})/\{\mathbb{R}\vec h(\lambda)\}. \end{equation} The curve $\mathfrak J_\lambda(t)$ is a curve in the Lagrange Grassmannian of the linear symplectic space $W_\lambda = T_\lambda\mathcal H_{c}/{\mathbb R\vec h(\lambda)}$ (endowed with the symplectic form induced in the obvious way by the canonical symplectic form $\sigma$ of $T^*M$). It is called the \emph{Jacobi curve} of the extremal $e^{t\vec h}\lambda$ (attached at the point $\lambda$). The reason to introduce Jacobi curves is two-fold. On one hand, they can be used to construct differential invariants of sub-Riemannian structures: any symplectic invariant of the Jacobi curve, i.e., any invariant of the action of the linear symplectic group $Sp(W_\lambda)$ on the Lagrange Grassmannian $L(W_\lambda)$, produces an invariant of the original sub-Riemannian structure. On the other hand, the Jacobi curve contains all information about conjugate points along the extremals. Recall that a time $t_0$ is called conjugate to $0$ if \begin{equation}\label{conju} e^{t_0\vec h}_*\Pi_\lambda\cap\Pi_{e^{t_0\vec h}\lambda}\neq 0, \end{equation} and the dimension of this intersection is called the multiplicity of $t_0$. The curve $\pi(\lambda(\cdot))|_{[0, t]}$ is $W^1_\infty$-optimal (and even $C$-optimal) if there is no conjugate point in $(0, t)$, and is not optimal otherwise. Note that (\ref{conju}) can be rewritten as $e^{-t_0\vec h}_*\Pi_{e^{t_0\vec h}\lambda}\cap \Pi_\lambda\neq 0$, which is equivalent to $$\mathfrak J_\lambda(t_0)\cap \mathfrak J_\lambda(0)\neq 0.$$ \subsection{Curvature maps and structural equations} For a curve $\Lambda(\cdot)$ in the Lagrange Grassmannian of a linear symplectic space $W$, satisfying a very mild condition, one can construct a complete system of symplectic invariants (\cite{icdifferential}) and a \emph{normal moving frame} satisfying a canonical structural equation. In particular, for the Jacobi curve $\mathfrak J_\lambda(\cdot)$, where $\lambda\in\mathcal H_{\frac{1}{2}}$, associated with a sub-Riemannian extremal of a contact sub-Riemannian structure, this result takes a simpler form. Fix $\dim M=n.$ \begin{defin} \label{normframe} The moving Darboux frame $(E^\lambda_a(t),E^\lambda_b(t),E^\lambda_c(t),F^\lambda_a(t),F^\lambda_b(t),F^\lambda_{c}(t))$, where $$E^\lambda_a(t), E^\lambda_b(t),F^\lambda_a(t), F^\lambda_b(t)\ \hbox{are vectors and}\ E^\lambda_c(t)=(E^\lambda_{c_1}(t),\cdots,E^\lambda_{c_{n-3}}(t)), F^\lambda_c(t)=(F^\lambda_{c_1}(t),\cdots,F^\lambda_{c_{n-3}}(t)),$$ is called the normal moving frame of $\mathfrak J_\lambda(t)$ if, for any $t$, $$\mathfrak J_\lambda(t)={\rm span}\{E^\lambda_a(t),E^\lambda_b(t),E^\lambda_c(t)\}$$ and there exists a one-parameter family of normal mappings $(R_t(a,a),R_t(a,c),R_t(b,b),R_t(b,c),R_t(c,c))$, where $R_t(a,a),R_t(b,b)\in \mathbb R$, $R_t(a,c),R_t(b,c)\in \mathbb R^{(n-3)\times 1}$, and $R_t(c,c)\in \mathbb R^{(n-3)\times(n-3)}$ is symmetric for any $t$, such that the moving frame $(E^\lambda_a(t),E^\lambda_b(t),E^\lambda_c(t),F^\lambda_a(t),F^\lambda_b(t),F^\lambda_{c}(t))$ satisfies the following structural equation: \begin{equation} \label{structeq} \begin{cases} E_a'(t)=E_{b}(t)\\ E_b'(t)=E_c(t)\\ E_c'(t)=F_c(t)\\ F_a'(t)=-E_a(t)R_t(a,a)-E_c(t)R_t(a,c)\\ F_b'(t)=-F_a(t)-E_b(t)R_t(b,b)-E_c(t)R_t(b,c)\\ F_c'(t)=-E_a(t)(R_t(a,c))^T-E_b(t)(R_t(b,c))^T-E_c(t)R_t(c,c).
\end{cases} \end{equation} \end{defin} \begin{theor} There exists a normal moving frame $(E^\lambda_a(t),E^\lambda_b(t),E^\lambda_c(t),F^\lambda_a(t),F^\lambda_b(t),F^\lambda_{c}(t))$ of $\mathfrak J_\lambda(t)$. Moreover, if $(\tilde E^\lambda_a(t),\tilde E^\lambda_b(t),\tilde E^\lambda_c(t),\tilde F^\lambda_a(t),\tilde F^\lambda_b(t),\tilde F^\lambda_{c}(t))$ is another normal moving frame of $\mathfrak J_\lambda(t)$, then it must hold that \begin{eqnarray*} (\tilde E^\lambda_a(t),\tilde E^\lambda_b(t))&=&\pm (E^\lambda_a(t),E^\lambda_b(t)),\\ \tilde E^\lambda_c(t)&=&E^\lambda_c(t)O, \end{eqnarray*} where $O$ is a constant orthonormal matrix. \begin{remark} If $n=3$, then $E^\lambda_c(t),F^\lambda_c(t)$ do not appear in the above construction, and this convention is understood in the remainder of the text. \end{remark} \end{theor} It follows from the last theorem that there is a canonical splitting of the subspace $\mathfrak J_\lambda(t)$, i.e., $$\mathfrak J_\lambda(t)=V_a(t)\oplus V_b(t)\oplus V_c(t),$$ where $V_a(t)=\mathbb RE_a(t),\ V_b(t)=\mathbb RE_b(t),\ V_c(t)={\rm span}\{E_c(t)\}$. Each space is endowed with a canonical Euclidean structure, in which the vector or tuple of vectors $E_a(t)$, $E_b(t)$, $E_c(t)$ from some (and therefore any) normal moving frame constitutes an orthonormal frame. For any $s_1,s_2\in\{a,b,c\}$, the linear map from $V_{s_1}(t)$ to $V_{s_2}(t)$ with the matrix $R_t(s_1, s_2)$ from (\ref{structeq}) in the bases $\{E_{s_1}(t)\}$ and $\{E_{s_2}(t)\}$ of $V_{s_1}(t)$ and $V_{s_2}(t)$, respectively, is independent of the choice of normal moving frames. It will be denoted by $\mathfrak R_t(s_1, s_2)$ and is called the \emph{$(s_1,s_2)$-curvature map of the curve $\Lambda(\cdot)$ at time $t$}. \subsection{Expressions of the curvature maps and the comparison theorems} The construction above helps to find very fruitful additional structures in the cotangent bundle $T^*M$. The structural equation (\ref{structeq}) for the Jacobi curve $\mathfrak J_\lambda(t)$ can be seen as the intrinsic Jacobi equation along the extremal $e^{t\vec h}\lambda$, and the curvature maps are the coefficients of this Jacobi equation. Since there is a canonical splitting of $\mathfrak J_\lambda(t)$, and taking into account that $\mathfrak J_\lambda(0)$ and $\Pi_\lambda$ can be naturally identified, we have the canonical splitting of $\Pi_\lambda$: $$\Pi_\lambda=\mathcal V_a(\lambda)\oplus \mathcal V_b(\lambda)\oplus \mathcal V_c(\lambda),\quad\dim \mathcal V_a(\lambda)=\dim \mathcal V_b(\lambda)=1,\ \dim\mathcal V_c(\lambda)=n-3,$$ where $\mathcal V_s(\lambda)=V_s(0), s=a,b,c$. Moreover, let $\mathfrak R_\lambda(s_1,s_2): \mathcal V_{s_1}(\lambda)\rightarrow \mathcal V_{s_2}(\lambda)$ be the $(s_1,s_2)$-curvature map at $t=0$ and $\mathfrak R_\lambda:\Pi_\lambda\rightarrow\Pi_\lambda$ the big curvature map assembled from these blocks. These maps are intrinsically related to the sub-Riemannian structure; they are called the \emph{$(s_1,s_2)$-curvatures}. In the Riemannian case, the curvature map is expressed in terms of the Riemannian curvature tensor and the structural equations are actually the Jacobi equations of Riemannian geometry. For a sub-Riemannian structure $(P,\mathcal H,\langle\cdot,\cdot\rangle)$ associated with a principal connection $\mathcal H$ on a $G$-bundle $\pi:P\rightarrow M$ with one-dimensional fibers, it turns out that the big curvature map is a combination of the Riemannian curvature tensor of $M$ and the curvature form.
To be more precise, let $\omega$ be the connection 1-form of $\mathcal H$; then $d\omega$ is the curvature form and it induces a $(1,1)$-tensor $J$ on $M$ via $$g(JX,Y)=d\omega(X,Y).$$ A general formula for the curvature maps in terms of the Riemannian curvature tensor on $M$ and the tensor $J$, together with their covariant derivatives, can be found in \cite{cijacobi}. Now let us specialize to the case of the sub-Riemannian structure $(\mathbb S^{2n+1},\mathcal H, d_\mathcal H)$. It can be shown that $(J,g)$ defines a K\"{a}hlerian structure on $\mathbb{CP}^n$; see e.g. \cite[Chapter 3]{priemannian}. In this case the curvature maps take a very simple form. For this, let us first of all give a more explicit description of the subspaces $\mathcal V_s(\lambda)$. As the tangent spaces of the fibers of $T^*\mathbb S^{2n+1}$ can be naturally identified with the fibers themselves (the fibers are linear spaces), one can show that $$\mathcal V_a(\lambda)=\mathcal{\mathcal H}_{z}^\bot,$$ where $\mathcal{\mathcal H}_{z}^\bot$ is the annihilator of $\mathcal H_z$, namely, $$\mathcal{H}_{z}^\bot=\{p\in T^*_z\mathbb S^{2n+1}:p\cdot v=0,\ \forall v\in \mathcal H_z\}.$$ Moreover, we have that \begin{equation}\label{ident3} \mathcal V_b(\lambda)\oplus \mathcal V_c(\lambda) \sim\mathcal {H}_z^*\sim \mathcal{H}_z. \end{equation} Since $\pi_*:\mathcal H_{z}\rightarrow T_{\pi(z)} \mathbb C\mathbb P^{n}$ is an isometry for all $z\in \mathbb S^{2n+1}$, we may also regard $J_z$ as an antisymmetric operator on $\mathcal H_z$. So, under the above identifications, one can show that \begin{equation}\label{ident4} \mathcal V_b(\lambda)=\mathbb{R}J_zp,\quad \mathcal V_c(\lambda)=(\mathbb R J_z p)^\bot. \end{equation} Actually, $d\omega$ can be seen as a magnetic field and $J$ as a Lorentz force on $\mathbb C\mathbb P^n$. The projection by $\pi$ of all sub-Riemannian geodesics describes all possible motions of a charged particle (with any possible charge) in the magnetic field $d\omega$ on the Riemannian manifold $\mathbb {CP}^n$ (see e.g. \cite[Chapter 12]{matour} and the references therein). Define $u_0: T^*\mathbb S^{2n+1}\to \mathbb R$ by $u_0(p,z):=p\cdot V(z),\ z\in \mathbb S^{2n+1},p\in T^*_z\mathbb S^{2n+1},$ where $V$ is the vertical vector field defined in \eqref{vertical}. As before, let $\lambda=(p,z)\in\mathcal H_{\frac{1}{2}}$, $z\in \mathbb S^{2n+1}$, $p\in T_z^*\mathbb S^{2n+1}$. For any $v\in T_\lambda T^*_z \mathbb S^{2n+1} (\sim T^*_z\mathbb S^{2n+1}\sim T_z \mathbb S^{2n+1})$ we have the vector $v^h:=\pi_*v\in T_{\pi(z)}\mathbb {CP}^n$; conversely, given any $X\in T_{\pi (z)}\mathbb{CP}^n$, there is a unique $X^v\in \mathcal H_z\subset T_z\mathbb{S}^{2n+1} (\sim T_{(p,z)} T_z \mathbb{S}^{2n+1}), \ p\in T_z^*\mathbb{S}^{2n+1}$, with $\pi_*X^v=X$. \begin{theor} \label{Kahler} Let $(\langle\cdot,\cdot\rangle,J)$ be the K\"{a}hlerian structure on $\mathbb {CP}^n$ and $R^\nabla$ the Riemannian curvature tensor of $\langle\cdot,\cdot\rangle$.
Then for $\forall v\in \mathcal V_c(\lambda)$, \begin{eqnarray*} g((\mathfrak{R}_\lambda(c,c)(v))^h, v^h)&=&g(R^\nabla(p^h, v^h)p^h,v^h)+\frac{u_0^2}{4}\|v\|^2,\\ \mathfrak R_\lambda(b,c)(v)&=&g(R^\nabla(p^h, Jp^h)p^h,v^h)\mathcal E_b(\lambda),\\ \rho_\lambda(b,b)&=&g(R^\nabla(p^h, Jp^h)p^h,Jp^h)+u_0^2,\\ \mathfrak R_\lambda(c,a)&=&0\quad \hbox{and}\quad \mathfrak R_\lambda(a,a)=0, \end{eqnarray*} where $\mathcal E_b(\lambda)$ and $\rho_\lambda(b,b)$ are defined by $$\mathcal E_b(\lambda)=(Jp^h)^v,\quad \mathfrak R_\lambda(b,b)v_b=\rho_\lambda(b,b)v_b,\ \forall v_b\in \mathcal V_b(\lambda).$$ \end{theor} Now let us recall an estimate of the bounds of the sectional curvature of the Fubini-Study metric on $\mathbb{CP}^n$ from the theory of Riemannian submersions (see e.g. \cite[chapter 3]{priemannian}). \begin{theor}\label{Oneill} Let $\sec(g)$ be the Riemannian sectional curvature of $g$ on $\mathbb{CP}^n$. Then $\sec(g)\in[1,4].$ Moreover, the estimate of the bounds is sharp, namely, the values 1 and 4 are achieved. \end{theor} Using the Generalized Sturm Theorem for curves in Lagrangian Grassmannians, we obtained comparison theorems estimating the number of conjugate points along a sub-Riemannian extremal of a contact sub-Riemannian structure with symmetries satisfying some compatibility condition (see \cite{cijacobi} and the references therein). \begin{theor} The number of conjugate points $\sharp_T\bigl(\lambda(\cdot)\bigr)$ to 0 on $(0,T]$ along a normal extremal $\lambda(\cdot)$ lying on $\mathcal H_{\frac{1}{2}}\cap\{u_0=\bar u_0\}$ satisfies the following inequality \begin{equation} \label{conjest} Z_T(1+\bar u_0^2, 1+\frac{1}{4}\bar u_0^2)\leq \sharp_T(\lambda(\cdot))\leq Z_T(4+\bar u_0^2, 4+\frac{1}{4}\bar u_0^2), \end{equation} where $$Z_T(\omega_b,\omega_c)=(n-3)[\frac{T\sqrt{\omega_c}}{\pi}]+[\frac{T\sqrt{\omega_b}}{2\pi}]+ \sharp_T\{\tan(\frac{\sqrt{\omega_b}}{2}x)-\frac{\sqrt{\omega_b}}{2}x=0\},$$ $[\,\cdot\,]$ denotes the integer part, and $\sharp_T\{\ldots\}$ is the number of solutions $x\in(0,T]$ of the indicated equation. \end{theor} \begin{cor}\label{estimation} Under the same assumptions the following statements hold for a normal sub-Riemannian extremal on $\mathcal H_{\frac{1}{2}}\cap\{u_0=\bar u_0\}$: \begin{enumerate} \item There are no conjugate points to $0$ in the interval $\bigl(0, \frac{\pi}{\sqrt{4+\frac{1}{4}\bar u_0^2}}\bigr)$; \item There are at least $n-3$ conjugate points to $0$ in the interval $\bigl(0,\frac{2\pi}{\sqrt{4+\bar u_0^2}} \bigr]$ and at least $n-2$ in the interval $\bigl(0,\frac{2\pi}{\sqrt{1+\bar u_0^2}} \bigr]$. \end{enumerate} \end{cor} \setlength\parindent{0pt}$\bullet$ {\bf Relation to quantum systems} In the sub-Riemannian minimization problem for $(\mathbb S^{2n+1},\mathcal H, d_\mathcal H)$, the initial and end points represent the initial and target states of the system, and finding a minimizer is equivalent to finding a path which transfers the minimal energy from the initial to the target state (see \cite{cmvhopf}). So, we have shown that a sub-Riemannian geodesic $\gamma(\cdot)=\pi(\lambda(\cdot))$ always transfers the minimum energy from the state $\gamma(0)$ to the state $\gamma(T)$ for all $T<\frac{\pi}{\sqrt{4+\frac{1}{4}\bar u_0^2}}$, but fails to do so from the state $\gamma(0)$ to the state $\gamma(T)$ for all $T\geq \frac{2\pi}{\sqrt{4+\bar u_0^2}}$.
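To make the bounds in \eqref{conjest} and the thresholds above concrete, the following Python sketch (our own illustration, assuming SciPy is available; not part of the original text) evaluates $Z_T(\omega_b,\omega_c)$ numerically, counting the roots of $\tan u = u$ with $u=\frac{\sqrt{\omega_b}}{2}x$ branch by branch:
\begin{verbatim}
import math
from scipy.optimize import brentq

def count_tan_roots(U):
    """Number of solutions of tan(u) = u with 0 < u <= U (one per branch k >= 1)."""
    count, k = 0, 1
    while k * math.pi < U:
        # the k-th positive root lies in (k*pi, (k + 1/2)*pi)
        root = brentq(lambda u: math.tan(u) - u,
                      k * math.pi + 1e-9, (k + 0.5) * math.pi - 1e-9)
        if root > U:
            break
        count, k = count + 1, k + 1
    return count

def Z(T, wb, wc, dim_M):
    # Z_T(omega_b, omega_c) with n = dim M in the contact construction
    return ((dim_M - 3) * math.floor(T * math.sqrt(wc) / math.pi)
            + math.floor(T * math.sqrt(wb) / (2 * math.pi))
            + count_tan_roots(0.5 * math.sqrt(wb) * T))

# lower and upper bounds of (conjest) for sample values dim M = 7, u0 = 1, T = 3:
dim_M, u0, T = 7, 1.0, 3.0
print(Z(T, 1 + u0**2, 1 + u0**2 / 4, dim_M), Z(T, 4 + u0**2, 4 + u0**2 / 4, dim_M))
\end{verbatim}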
\section{Introduction} \vspace{-3pt} The ever-increasing employment of machine learning (ML) technology, especially deep neural networks (DNNs), in various applications has tremendously improved the efficiency of these application domains, such as computer vision \cite{CV1,he2016deep,simonyan2014very}, speech recognition \cite{speech1,speech3}, and natural language processing \cite{nlp1,nlp2}. These ML and DNN models possess unique value, which is reflected in the expensive training efforts, privacy-sensitive training data, and proprietary network architectures and parameters. Thus, these models are usually deemed confidential as protected IPs. Consequently, such confidential models are appealing targets for adversaries who intend to steal them for profit \cite{chen2020stealing}. The adversary can observe the model execution and infer the non-disclosed model architecture and parameters through extraction attacks. Such attacks are known as \textbf{\textit{model extraction attacks}}\cite{batina2019csi, duddu2018stealing, hu2020deepsniffer, hua2018reverse, naghibijouybari2018rendered, yan2020cache}, which not only destroy the confidentiality of a model and damage the owner's IP but also enable further attacks \cite{liu2016delving, oh2019towards}. ML and DNN models are mainly deployed in the cloud with publicly accessible query interfaces/APIs, known as ML-as-a-service (MLaaS), allowing users to obtain service (e.g., predictive analytics) without accessing the black-box models. Thus, the adversary can duplicate the model functionality by exploiting such attack surfaces as the query APIs \cite{tramer2016stealing}. Recently, with the proliferation of ML techniques in edge devices, such as autonomous driving \cite{av1,av2}, model extraction attacks have gained effective approaches such as hardware side-channel attacks (SCAs) to pry into a model's internal architecture. The prevalence of edge-deployed ML models and the pursuit of extraction accuracy drive adversaries to explore new attack surfaces beyond the superficial querying mode. Prior works \cite{hua2018reverse, naghibijouybari2018rendered, hu2020deepsniffer} demonstrate that SCAs can capture certain architectural events or hardware behaviors (e.g., bus traffic through bus snooping) during model execution. These timing-sensitive architectural events or hardware behaviors can be leveraged by the adversary to infer the DNN layer architectures and perform accurate DNN model extraction attacks. We argue that such \textit{effective} architectural events, dubbed \textbf{\textit{Architecture hints (Arch-hints)}} in this paper, present a new \textbf{\textit{Hardware/Architecture-level attack surface}} for model extraction in edge/local deployments. Though existing work sheds some light on utilizing architecture-level events and behaviors in GPU-based model extraction\cite{naghibijouybari2018rendered,wei2020leaky,hu2020deepsniffer}, these events are used in an ad-hoc manner. There is still no systematic exploration and formal definition of such architectural behaviors, which conceals the universality of this threat across different platforms. In this paper, we set out to investigate previously identified Arch-hints, uncover their root cause, and clearly define them. The key insight is that these Arch-hints essentially result from \textit{\textbf{data movement in hardware platforms during model execution}}.
Nevertheless, simply being caused by tractable data movement during model execution does not qualify an architectural event as an Arch-hint. The data movement during model execution must also exhibit distinguishable and stable patterns across the DNN model layers for the event to be a qualified Arch-hint. Nowadays, the Graphics Processing Unit (GPU) has become the dominant hardware to deploy DNN applications, both in cloud and edge scenarios \cite{dnncloud2,dnncloud3,dnncloud4, li2018learning, yazici2018edge}. Also, the considerable memory footprint of DNN-based workloads and the ever-increasing requirements of programming flexibility have pushed GPU memory management to the verge of a major shift from the traditional copy-then-execute (CoE) model to the unified memory (UM) model \cite{li2015evaluation, landaverde2014investigation, otterness2017evaluation, umbeginners, bateni2020co, wang2020enabling, 2020hotedge}, especially on memory-limited edge platforms \cite{bateni2020co, wang2020enabling, dashti2017analyzing}. Based on the principles for defining Arch-hints, we identify three Primary Arch-hints that are specifically caused by the data movement patterns of UM, namely \textit{page fault latency (PFLat), page migration latency (MigLat)}, and \textit{page migration size (MigSize)}, which exhibit distinguishable patterns reflecting layer features and model architecture during model execution (Sec. \ref{UMbrief}). We propose a metric, \textit{effectiveness\_score}, to validate the effectiveness of these Primary Arch-hints (Sec. \ref{UM-archhints}) by evaluating their distributions across the DNN model layers. Then, we propose a new model extraction attack, \textbf{UMProbe}, which thoroughly explores the new Arch-hints in the UM system to perform model extraction accurately (Sec. \ref{newatk}). We also evaluate how existing Arch-hints and their combinations with the Primary Arch-hints affect the model extraction accuracy. Lastly, we substantially modify the Darknet framework and develop the UM implementation for a portfolio of representative DNN benchmarks. To the best of our knowledge, no UM implementations of DNN models have been published. We evaluate UMProbe's performance on these benchmarks and demonstrate that UMProbe can extract the victim model layer sequence with an accuracy of 95\% for almost all victim models (Sec. \ref{evaluation}). In summary, this paper makes the following contributions: \squishlist{} \item We investigate previously identified Arch-hints, uncover their root cause, and formally define Arch-hints. Based on the definition, we identify three Primary Arch-hints caused by the unique data movement patterns of the GPU UM system. \item We characterize multiple Arch-hint candidates in the UM system and propose a metric to quantify their effectiveness. \item The newly explored Arch-hints expose a new attack surface in UM which has not been explored before. We develop the extraction attack UMProbe based on it. \item We create the first DNN benchmark suite under UM to facilitate related research in the community. \item We evaluate UMProbe's performance using the benchmark suite and demonstrate UMProbe's high accuracy, calling for attention to DNN security in UM systems.
\squishend{} \vspace{-3pt} \section{Extracting Models Using Hardware Architectural Hints} \vspace{-3pt} \subsection{Model Extraction Essentials}\label{attback} Model extraction attacks originally targeted ML models deployed in the cloud with publicly accessible query interfaces/APIs, which is known as ML-as-a-Service (MLaaS). The adversary tries to build a functionally equivalent duplicate by frequently querying the APIs of cloud-based models. Recently, model extraction attacks have also extended to the ML models served on edge and local devices with the proliferation of ML techniques in modern applications such as autonomous driving\cite{mlavs1,yang2019re,mlavs3,mlavs4,li2018learning, yazici2018edge, verhelst2020machine}. In this paper, we focus on this emerging trend and set out to explore model extraction attacks targeting ML deployment in edge scenarios, which pose higher threats to the models. \noindent\textbf{Attack Target: What to Steal?} As a DNN model consists of a network architecture, parameters, and hyper-parameters, the adversary can target the architecture \cite{oh2019towards, hu2020deepsniffer}, the parameters \cite{tramer2016stealing}, or the hyper-parameters \cite{wang2018stealing}. Specifically, the architecture indicates layer types and connections. Parameters are the weights and biases that are learned during the training process. Hyper-parameters are the configuration variables used to control the training process, such as the learning rate, batch size, etc. Among all these targets, the \textit{network architecture is the most fundamental and valuable target for DNN model extraction}, as both model parameters and hyper-parameters can be inferred with the knowledge of the model architecture \cite{tramer2016stealing, wang2018stealing}. The adversary can even launch adversarial attacks based on the extracted network architecture \cite{liu2016delving, hu2020deepsniffer}. The desired network architecture usually consists of the layer number, layer types/dimensions, and layer connections. The layer connections can be sequential (e.g., VGG \cite{simonyan2014very}) or non-sequential (e.g., shortcuts in ResNet \cite{he2016deep}). \vspace{-3pt} \subsection{Attack Surface: How to Steal?}\label{attsurface} \vspace{-3pt} \noindent\textbf{Application/API-Level Attack Surface:} Conventionally, the adversary performs extraction attacks at the \ul{\textit{application/API level}}. On this attack surface, the adversary queries the victim model through the API and receives the replies. It then leverages the input-output pairs of the victim model to detect the decision boundary (i.e., the classification boundary between different classes) of the model and subsequently duplicates the model \cite{oh2019towards, tramer2016stealing}. However, such an attack needs a huge number of queries and consumes significant computation resources and time \cite{oh2019towards}. Moreover, the attack can only duplicate the functionality of the model instead of being able to probe the accurate internal architecture of the model \cite{oh2019towards, jagielski2020high}, due to its limited access to the cloud-deployed black-box model, which can hardly satisfy the adversary's appetite. Thus, a new attack surface revealing accurate information on the model's internal architecture, beyond its functionality, is needed.
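To illustrate the application/API-level surface concretely, the following toy Python sketch (our own illustration; the victim model and data are stand-ins, not from this paper) approximates a black-box classifier purely from query responses:
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in victim: a black box the adversary can only query for labels.
X_priv = rng.normal(size=(2000, 8))
y_priv = (X_priv[:, 0] + X_priv[:, 1] ** 2 > 1).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X_priv, y_priv)

# Adversary: issue many queries, record input/label pairs, fit a surrogate.
X_query = rng.normal(size=(20000, 8))   # a huge number of queries is required
y_query = victim.predict(X_query)       # only the API output is observed
surrogate = DecisionTreeClassifier(max_depth=12).fit(X_query, y_query)

X_test = rng.normal(size=(5000, 8))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"functional agreement with victim: {agreement:.2%}")
\end{verbatim}
Note that the surrogate only reproduces input-output behavior; its architecture need not resemble the victim's at all, which is precisely the limitation that motivates architecture-level attack surfaces.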
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{hintsandattack3.pdf} \vspace{-6mm} \caption{Extracting DNN models by exploring the attack surface provided by Architectural-hints.}\label{hintsandattack} \vspace{-4mm} \end{figure} \noindent\textbf{Hardware/Architecture-Level Attack Surface:} The pursuit of extraction accuracy and the prevalence of edge-deployed ML models drive adversaries to explore new attack surfaces beyond the superficial querying mode \cite{hong2018security, hong20200wn, yan2020cache}. Hardware side-channel attacks (SCAs) have recently drawn attention since they provide effective approaches to pry into the model's internal architecture. For example, \cite{hua2018reverse, naghibijouybari2018rendered, hu2020deepsniffer} demonstrate that SCAs can obtain information closely correlated with the model's internal architecture by capturing certain architectural events and hardware behaviors during model execution. With further data analysis, the \ul{internal model architecture can be accurately inferred from these critical hardware architectural behaviors}. We observe that such \ul{\textit{hardware/architecture-level behavior leakage}} provides a new attack surface for model extraction attacks. We name this hardware/architecture-level visible information {\textit{Architecture-hints (Arch-hints)}}. \subsection{Arch-hints for Model Extraction}\label{archhints} In this work, we take the first step toward an in-depth exploration of how Arch-hints can contribute to extraction attacks on edge-deployed DNN models. Specifically, we illustrate a threat model with a GPU-based DNN inference setup. We investigate previously identified Arch-hints, analyze their root cause, and formally define Arch-hints. Then, we use this definition to identify three critical Arch-hints in existing unified memory management systems, which can lead to new model extraction attacks against edge-deployed DNN models. Fig.\ref{hintsandattack} shows an abstract view of how DNN model information translates to Arch-hints during model execution. When the DNN application is executed in the DL framework (e.g., PyTorch), the framework formalizes the DNN model into a framework-level computational graph and then transforms the computational graph into the runtime layer \textit{execution sequence}, which is then issued to the runtime/hardware driver (e.g., CUDA, GPU driver). The runtime/hardware driver launches a corresponding series of operational \textit{kernel sequences}. These kernel sequences can be revealed by carefully chosen Arch-hints while executing on the hardware platform (e.g., CPU-GPU heterogeneous platforms). The adversary typically has physical access to the victim platform and can co-locate its spy application on the same platform. Thus, the adversary can capture these Arch-hints by leveraging hardware SCAs. Prior works leverage architectural behaviors in model extraction attacks on different platforms. Though these architectural behaviors serve a similar function to Arch-hints, they are used in an ad-hoc manner; a systematic exploration and formal definition of such architectural behaviors is still lacking. We summarize these Arch-hints and explore the hidden principles behind them. We categorize these Arch-hints into three types: a) cache-based, b) DRAM-based, and c) GPU kernel-based, as shown in Table \ref{tb1:AvaiArchhints}. We illustrate the Arch-hints captured on GPU platforms.
For instance, \cite{naghibijouybari2018rendered} collects the GPU memory write transactions and GPU unified cache throughput as Arch-hints. \cite{wei2020leaky} utilizes Arch-hints including the number of GPU DRAM read/write requests and the number of GPU texture cache requests. \cite{hu2020deepsniffer} leverages Arch-hints such as memory bus traffic and kernel execution latency. \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table}[t] \scriptsize \centering \setlength\tabcolsep{1pt} \caption{Commonly-used Arch-hints in prior works.} \vspace{-1pt} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Attack} & \multicolumn{3}{c|}{Arch-hints Used} & \multirow{2}{*}{\tabincell{c}{Platform \& \\ Mem. Model}} &\multirow{2}{*}{Scenario} &\multirow{2}{*}{Approach} \\ \cline{2-4} & Cache & Memory & Kernel &\multicolumn{1}{c|}{} &\\ \midrule \tabincell{c}{RenderedInsecure \cite{naghibijouybari2018rendered}} &\tabincell{l}{\checkmark} &\tabincell{l}{\checkmark} &\tabincell{l}{}& \tabincell{l}{GPU, CoE} & \tabincell{l}{Cloud/Edge} & \tabincell{l}{Predict}\\\hline \tabincell{c}{LeakyDNN \cite{wei2020leaky}} &\tabincell{l}{\checkmark} &\tabincell{l}{\checkmark} &\tabincell{l}{}& \tabincell{l}{GPU, CoE} & \tabincell{l}{Cloud} & \tabincell{l}{Predict}\\\hline \tabincell{c}{DeepSniffer \cite{hu2020deepsniffer}} &\tabincell{l}{} &\tabincell{l}{\checkmark} &\tabincell{l}{\checkmark}& \tabincell{l}{GPU, CoE} & \tabincell{l}{Edge} & \tabincell{l}{Predict} \\\hline \tabincell{c}{DeepRecon \cite{hong2018security}} &\tabincell{l}{\checkmark} &\tabincell{l}{} &\tabincell{l}{}& \tabincell{l}{CPU, N/A} & \tabincell{l}{Cloud} & \tabincell{l}{Predict} \\\hline \tabincell{c}{0wnNAS \cite{hong20200wn}} &\tabincell{l}{\checkmark} &\tabincell{l}{} &\tabincell{l}{}& \tabincell{l}{CPU, N/A} & \tabincell{l}{Cloud} & \tabincell{l}{Infer}\\\hline \tabincell{c}{StealNN \cite{duddu2018stealing}} &\tabincell{l}{} &\tabincell{l}{} &\tabincell{l}{\checkmark}& \tabincell{l}{CPU, N/A} &\tabincell{l}{Cloud} & \tabincell{l}{Predict} \\\hline \tabincell{c}{Cachetelepathy \cite{yan2020cache}} &\tabincell{l}{\checkmark} &\tabincell{l}{} &\tabincell{l}{}& \tabincell{l}{CPU, N/A} & \tabincell{l}{Cloud} & \tabincell{l}{Search}\\\hline \tabincell{c}{ReverseCNN \cite{hua2018reverse}} &\tabincell{l}{} &\tabincell{l}{\checkmark} &\tabincell{l}{\checkmark}& \tabincell{l}{FPGA, N/A} & \tabincell{l}{Cloud} & \tabincell{l}{Search}\\\bottomrule \end{tabular} \vspace{-5mm} \label{tb1:AvaiArchhints} \end{table} We delve into these Arch-hints and observe that they essentially result from the \textit{data movement that occurs on the hardware platform} during model execution. For example, it is the data movement between GPU memory and GPU cache that causes the Arch-hint of memory bus traffic, and it is the data movement between the GPU memory hierarchy and the GPU SMs that significantly contributes to kernel execution latency. While data movement during model execution is what gives rise to Arch-hints, we further identify that not all architectural behaviors caused by data movement can serve as effective Arch-hints to be leveraged in an attack. In fact, valid Arch-hints should be architectural events and hardware behaviors that present certain recognizable information to the adversary.
Based on the analysis above, Arch-hints are defined as effective architectural events and hardware behaviors that \textit{1) are caused by tractable data movement during model execution, and 2) exhibit recognizable information for an extraction attack.} We will utilize this definition to explore new Arch-hints in the GPU unified memory system. \vspace{-3pt} \section{Demystifying Arch-hints in Unified Memory}\label{archhintcharacter} \vspace{-3pt} Unified memory (UM) has gained wide adoption today due to its efficient memory footprint and programmability. In this section, we identify three unique Arch-hints in the UM management system based on the definition proposed in Sec.\ref{archhints} and validate their effectiveness. \vspace{-3pt} \subsection{GPU Execution and Unified Memory} \label{UMbrief} \vspace{-3pt} We first introduce the background of the GPU execution model and unified memory management. As a representative example, an NVIDIA GPU consists of several Streaming Multiprocessors (SMs), an on-chip L2 cache, and high-bandwidth DRAM. All SMs share the unified L2 cache and the device memory through an on-chip interconnection network. In a typical discrete GPU setup, the GPU is connected to the host CPU through a PCIe interconnect. Note that the discrete GPU has its own on-board physical memory, which is physically separate from the CPU host memory. Since GPU memory usually has less capacity than CPU memory (e.g., 32 GB in NVIDIA V100~\cite{v100} vs. hundreds of GB of host CPU memory), the conventional GPU workload execution pattern is to copy the data from CPU memory to GPU memory when needed, and copy the data back to CPU memory after the computation finishes. This execution model is referred to as \ul{\textit{copy-then-execute (CoE)}}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{umbrief5.pdf} \caption{Far page fault and page migration in UM system.} \vspace{-4mm} \label{umprin} \end{figure} \noindent\textbf{Unified Memory Model:} However, the CoE execution mechanism suffers from i) frequent data copies between CPU and GPU~\cite{bateni2020co} and ii) out-of-memory problems due to limited GPU memory capacity~\cite{ganguly2019interplay}. To address these disadvantages, the \ul{\textit{Unified Memory (UM)}} model was introduced to ease GPU programming by removing the need for explicit data copies between the CPU and GPU \cite{umbeginners,umamd}. Specifically, UM provides the illusion of a unified virtual memory space for both CPU and GPU, and allows applications to access data in both CPU memory and GPU memory through a single shared pointer in the program. In CUDA programming, the API $cudaMallocManaged()$ is used to allocate UM space. Unlike the CoE model, which transfers data in chunks, UM employs an on-demand paging method and transfers/migrates data at page-level granularity. Fig.\ref{umprin} shows the data processing flow under the UM model. At the system level, the GPU's page table walk fails if SMs try to access a physical memory page that is not currently available in GPU local memory (i.e., the address translation lookup misses in both the TLB and the page table, steps \ding{172} $\sim$ \ding{173}), and a \ul{\textit{far page fault}} exception is raised (step \ding{174}) by the GPU memory management unit (MMU)~\cite{zheng2016towards}. These exceptions are sent to the host CPU and handled by the driver (step \ding{175}).
In particular, the driver first interrupts the CPU to handle the page faults, and then {\it migrates} the requested pages to GPU memory (step \ding{176}) \cite{ganguly2019interplay}. We refer to a system with the CoE management model as a CoE system; accordingly, a UM system is a system with the UM management model. \subsection{Arch-hints for UM System} \label{UM-archhints} \begin{table*}[t] \centering \small \caption{The collected and characterized candidate Arch-hints during DNN execution in UM system.} \vspace{-10pt} \begin{tabular}{c|c|c|c} \hline \tabincell{l}{Platform} &\tabincell{l}{Memory Hierarchy} &\tabincell{l}{Network Model} &\tabincell{l}{Collected candidate Arch-hints (8)}\\\hline \tabincell{c}{Titan RTX \\(GPU)} &\tabincell{c}{Unified Memory \\(UM)}&\tabincell{c}{Darknet \\Reference \cite{darknetclassification}}&\tabincell{l}{L2 write transaction, L2 read transaction, DRAM read transaction, DRAM write transaction, \\ kernel latency, far fault latency, data migration latency, migration size}\\\hline \end{tabular} \vspace{-10pt} \label{collectarchhints} \end{table*} \begin{figure*}[t] \subfloat[L2 write trans. (Byte).]{ \includegraphics[width=4.2cm, height = 3.2cm]{l2write1.pdf} \label{l2write} } \subfloat[DRAM write trans. (Byte).]{ \includegraphics[width=4.2cm, height = 3.2cm]{dramwrite1.pdf} \label{dramwrite} } \subfloat[L2 read trans. (Byte).]{ \includegraphics[width=4.2cm, height = 3.2cm]{l2read1.pdf} \label{l2read} } \subfloat[DRAM read trans. (Byte).]{ \includegraphics[width=4.2cm, height = 3.2cm]{dramread1.pdf} \label{dramread} } \subfloat[Kernel latency (ms).]{ \includegraphics[width=4.2cm, height = 3.2cm]{kernellatency1.pdf} \label{exelatencyofum} } \subfloat[Far fault latency (ms).]{ \includegraphics[width=4.2cm, height = 3.2cm]{pflatency1.pdf} \label{PFLatency} } \subfloat[Migration latency (ms).]{ \includegraphics[width=4.2cm, height = 3.2cm]{h2dlatency1.pdf} \label{h2d latency} } \subfloat[Migration size (KB).]{ \includegraphics[width=4.2cm, height = 3.2cm]{h2dsize1.pdf} \label{h2dsize} } \caption{Comprehensive characterization of distributions of different Arch-hints in UM system.} \label{comcharater} \vspace{-4mm} \end{figure*} As discussed in Sec. \ref{archhints}, the previously explored Arch-hints for GPU platforms target the copy-then-execute (CoE) system. Since memory management and data movement in a UM system differ from those in CoE, we explore how this difference impacts the patterns and effectiveness of Arch-hints in model extraction.
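To make this difference concrete, the sketch below contrasts the two memory models for a single buffer. It is a minimal illustration rather than Darknet code: the kernel $scale$ and the buffer size are placeholders of our own.
\begin{verbatim}
__global__ void scale(float *x, int n) {       // toy kernel standing in for a layer
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

void run_coe(float *host, int n) {             // copy-then-execute (CoE)
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));       // explicit device allocation
    cudaMemcpy(dev, host, n * sizeof(float),
               cudaMemcpyHostToDevice);        // bulk copy before execution
    scale<<<(n + 255) / 256, 256>>>(dev, n);   // kernel touches local GPU memory only
    cudaMemcpy(host, dev, n * sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

void run_um(int n) {                           // unified memory (UM)
    float *um;
    cudaMallocManaged(&um, n * sizeof(float)); // one pointer valid on CPU and GPU
    for (int i = 0; i < n; i++) um[i] = 1.0f;  // pages populate in host memory first
    scale<<<(n + 255) / 256, 256>>>(um, n);    // first GPU touch: far fault plus
    cudaDeviceSynchronize();                   // on-demand page migration
    cudaFree(um);
}
\end{verbatim}
Under CoE, the kernel's observable latency reflects only its computation; under UM, the same launch also absorbs fault handling and migration on first touch, which is precisely what makes UM-specific Arch-hints observable.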
\noindent\textbf{What Should be Effective Arch-hints:} An extraction attack essentially explores the relation between the observed Arch-hints and the internal architecture of the victim model, and typically utilizes a training-based approach to learn the exhibited patterns and leaked information from the Arch-hints to predict the architecture, as shown in Table \ref{tb1:AvaiArchhints}. For example, \cite{naghibijouybari2018rendered} utilizes the Arch-hints of memory write transactions and unified cache throughput as inputs to train classification models (e.g., KNN) to predict the victim model's neuron number. \cite{wei2020leaky} utilizes Arch-hints, including the number of DRAM read/write requests and the number of texture cache requests, as inputs to train an LSTM model to predict different DNN layer types. Since the Arch-hints serve as the input feature vector of the adversary's learning model, the distribution of the Arch-hints across different layers significantly impacts the model extraction performance (e.g., extraction accuracy, Sec. \ref{evalacc}). It is expected that the Arch-hint distribution across different layers during model execution exhibits clear and accurate patterns. Unfortunately, some Arch-hint distributions can become blurred and inaccurate in a UM system. \noindent\textbf{Existing Arch-hints May be Ineffective in UM:} In fact, we observe that \textit{the CoE platform-targeted Arch-hints can get blurred in a UM system during model execution}, which potentially undermines the extraction attack. Typical cases are Arch-hints based on kernel activity or memory traffic. For example, a common Arch-hint such as kernel latency is closely associated with kernel size. In a CoE platform, it consists only of the kernel execution latency: since the data has already been copied into GPU memory, the kernel accesses data in local memory and the execution latency is stable. During model execution, each layer shows a stable latency; due to their different sizes, different layers show different but stable latencies. The Arch-hint therefore shows a clear and accurate distribution across layers. In comparison, in UM the kernel latency includes far fault latency and migration latency in addition to the execution latency \cite{umbeginners}, and the execution latency can overlap with the other two. Thus, each layer shows an unstable and variable latency during model execution, and the Arch-hint shows a blurred and inaccurate distribution across layers. Consequently, the Arch-hint can become ineffective for distinguishing different layers. Regarding memory bus traffic, in a CoE platform, as the data is already in GPU memory, the memory read transactions can reveal the input size of a kernel \cite{hu2020deepsniffer}; the Arch-hint of memory transactions is thus accurate. However, in UM, besides reading data from DRAM, a kernel can migrate a large amount of data from CPU memory on demand, which is loaded directly into the GPU cache. Thus, the Arch-hint of memory transactions is inaccurate for revealing the kernel size. Here, we utilize the Darknet reference model as an example to illustrate our observation. We choose the Darknet framework in this paper for three reasons: 1) it is open source; 2) it is written in C and CUDA, and is well supported by the CUDA UM APIs (e.g., $cudaMallocManaged()$, $cudaFree()$); 3) it provides a variety of standard pre-trained models for object classification and detection applications. We execute the model and collect the Arch-hints on a GPU platform with a UM system.
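Returning to the kernel-latency example above: a hedged sketch of how per-kernel latency can be observed is to bracket a launch with CUDA events (the kernel $scale$ from the earlier sketch is reused as a placeholder). Under UM, the first measurement of a kernel that touches fresh pages also includes fault and migration time; a repeated launch on already-resident pages does not.
\begin{verbatim}
float time_kernel(float *buf, int n) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    scale<<<(n + 255) / 256, 256>>>(buf, n);   // under UM, a first touch adds
    cudaEventRecord(stop);                     // PFLat and MigLat to this span
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;                                 // stable under CoE, variable under UM
}
\end{verbatim}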
Besides the commonly-used Arch-hints, we explore three candidate Arch-hints specific to the UM model: \textit{far fault latency, data migration latency, and data migration size}, as shown in Table \ref{collectarchhints}. For each Arch-hint, we execute the model 10 times, collect 10 samples, and then draw the box-plot, as shown in Fig.\ref{comcharater}. \noindent\textbf{Quantifying the Effectiveness of Arch-hints:} To evaluate how the effectiveness of Arch-hints is undermined and how the extraction attack performance is impacted, we propose metrics to quantify the effectiveness of each Arch-hint. Note that all Arch-hints can theoretically be used in an extraction attack; however, an effective Arch-hint faithfully mirrors the patterns of the model execution (e.g., layer features). If an Arch-hint is ineffective, it is more difficult for the adversary to extract the victim model; for example, the adversary has to spend more observations to obtain accurate results. We mainly evaluate an Arch-hint's effectiveness in terms of its \ul{distribution across the NN layers}. The distribution can be measured using the \textit{coefficient of variation (CoV)} from two aspects: 1) $distinguishability$, and 2) $consistency$. $CoV$ is a statistical measure of the dispersion of a series of data, independent of the measurement unit used for the data. As different Arch-hints have different measurement units, $CoV$ is useful for comparing the distributions of different Arch-hints. $CoV$ is calculated as the ratio of the standard deviation ($\sigma$) to the mean ($\mu$), as shown in Equation \ref{cov}. Fig.\ref{comcharater} shows the box-plot of each Arch-hint, where the x-axis indicates the layers of the Reference model and the y-axis indicates the 10-sample distribution of the Arch-hint on each layer. We detail below how the $CoV$ is used to measure the $distinguishability$ and $consistency$ of each Arch-hint. Note that we only show the early layers of the model to save space; however, the calculation is applied to all layers. \begin{equation} \footnotesize CoV = \frac{\sigma}{\mu} \label{cov} \end{equation} \textit{a) Distinguishability (dis)} indicates the variability of the Arch-hint value among different layers during model execution. As different layers of a model (e.g., Conv, BN, Pool, etc.) have different computational complexity and dimension sizes, one Arch-hint is expected to behave differently on different layers. $Distinguishability$ is defined as the variability of the Arch-hint among all layers of a model and is calculated as the $CoV$ of the Arch-hint values of these layers, as shown in Equation \ref{dis}, where $n$ is the total number of layers. Intuitively, the larger the $CoV_{dis}$, the better the $distinguishability$ (i.e., $distinguishability$ positively correlates with $CoV_{dis}$). The larger the difference of an Arch-hint across layers, the more easily the adversary can distinguish different layers and explore the model's internal architecture, and thus the more effective the Arch-hint is. \begin{equation} \footnotesize dis = variability_{\text{Arch-hint}}(layer_{1}, layer_{2}, ..., layer_{n}) \label{dis} \end{equation} \textit{b) Consistency (con)} indicates the variability of each layer's Arch-hint values among the multiple samples/executions.
It is expected that an Arch-hint shows consistent behavior among the multiple samples (i.e., low variability) to provide accurate information about the model architecture; otherwise, the Arch-hint values contain significant noise, increasing the difficulty for the adversary to accurately extract the model architecture. As Equation \ref{con} shows, the $consistency$ is calculated as the $CoV$ of each layer's Arch-hint values over the multiple samples, where $i \le n$. Accordingly, the lower the $CoV_{con}$, the larger the Arch-hint's $consistency$ (i.e., $consistency$ negatively correlates with $CoV_{con}$). That is, the more accurate and less noisy the information about the model architecture that an Arch-hint can provide, the more effective the Arch-hint is. \begin{equation} \footnotesize con = variability^{layer_i}_{\text{Arch-hint}}(sample_{1}, sample_{2}, ..., sample_{m}) \label{con} \end{equation} \noindent\textbf{Integratively Evaluating the Effectiveness of Arch-hints:} We analyzed above that an Arch-hint's distribution across different layers during model execution matters to its effectiveness, and that this distribution can be measured in terms of both $distinguishability$ and $consistency$. Here, we integrate the $distinguishability$ and $consistency$ of each Arch-hint and define the \ul{\textit{Arch-hints Effectiveness Score (ArchES)}} to evaluate the overall effectiveness of an Arch-hint. The $ArchES$ is defined as \textit{the ratio of distinguishability to consistency}, that is, $CoV_{dis}$/$CoV_{con}$ (Sec. \ref{effofarchs}). On one hand, the $CoV_{dis}$ is expected to be large, such that the Arch-hint behaves significantly differently on different layers, providing recognizable information on the model architecture. On the other hand, the $CoV_{con}$ is expected to be low, such that the Arch-hint behaves consistently across multiple model executions, providing accurate and low-noise information about the model architecture. The higher the $ArchES$, the more effective the Arch-hint. By utilizing the $ArchES$, we identify that the UM system exhibits several unique and effective Arch-hints, which provide a new attack surface for the adversary to extract DNN models (Sec. \ref{newatk}). In fact, when we discuss the effectiveness of Arch-hints, we essentially explore whether the data movement during model execution exhibits distinguishable and accurate patterns, which can be regarded as leakage of the model architecture information to be learned by the adversary in a given system. In a conventional copy-then-execute (CoE) system, commonly-used Arch-hints such as memory bus traffic and kernel latency accurately represent the input/output data size and computational complexity of a layer and distinguish different layers. However, they become blurred in a UM system. Instead, some new Arch-hints can reveal the data movement pattern clearly and accurately during model execution.
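A minimal host-side sketch of these metrics follows, assuming the measurements of one Arch-hint are arranged as a layers-by-samples matrix; the data layout, names, and the choice to average the per-layer $CoV_{con}$ over all layers are our illustration rather than the exact tooling used in this paper.
\begin{verbatim}
#include <cmath>
#include <vector>

// CoV of a series: standard deviation over mean (Equation CoV).
double cov(const std::vector<double> &x) {
    double mu = 0.0;
    for (double v : x) mu += v;
    mu /= x.size();
    double var = 0.0;
    for (double v : x) var += (v - mu) * (v - mu);
    return std::sqrt(var / x.size()) / mu;
}

// data[i][j]: Arch-hint value of layer i in sample j (n layers, m samples).
double arch_es(const std::vector<std::vector<double>> &data) {
    size_t n = data.size(), m = data[0].size();
    std::vector<double> layer_mean(n, 0.0);
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++) layer_mean[i] += data[i][j];
        layer_mean[i] /= m;
    }
    double cov_dis = cov(layer_mean);  // distinguishability: CoV across layers
    double cov_con = 0.0;              // consistency: per-layer CoV across samples,
    for (size_t i = 0; i < n; i++)     // averaged over layers (our aggregation)
        cov_con += cov(data[i]);
    cov_con /= n;
    return cov_dis / cov_con;          // ArchES = CoV_dis / CoV_con
}
\end{verbatim}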
\vspace{-3pt} \section{Extracting Models with Arch-hints in UM} \label{newatk} \vspace{-3pt} In this section, we show how the identified Arch-hints, based on page fault handling and on-demand data migration in the UM system, exhibit patterned information that reveals layer features and DNN characteristics. We then leverage these Arch-hints to launch an extraction attack, termed \textbf{UMProbe}. To the best of our knowledge, this is the first model extraction attack targeting a unified memory system. \vspace{-3pt} \subsection{Threat Model and UM Arch-hints} \label{UniqueArchhints} \vspace{-3pt} The threat model focuses on edge security, where the adversary is able to physically access the victim platform. Also, with GPU multi-instance technology \cite{MIGPU} and GPU support for concurrent multi-tenant inference applications in edge scenarios \cite{liang2020ai}, the adversary can share the physical GPU platform with the victim and co-locate its application with the victim model on the GPU. First, the adversary can utilize PCIe bus snooping to obtain the GPU kernel and data migration activities, which has been proven to reach an accuracy of $\sim$98\% in practice \cite{zhu2020hermes}. Specifically, GPU activities are initiated and terminated by host commands, which are transferred through the PCIe connection between the GPU and the host. Accordingly, the far-fault handling requests and on-demand page migrations both cause PCIe traffic in a UM system, as shown in Fig.\ref{umprin}. By capturing this critical traffic and the events related to far-fault requests and data migration \cite{PCIAnalyzer}, the adversary can obtain the Arch-hints of \textit{Page Fault Latency (PFLat), Page Migration Latency (MigLat) and Migration Size (MigSize)} for each kernel and layer. We consider these Arch-hints to be the \textbf{Primary Arch-hints ($PriArchs$)} in the UM system.
We show that the Primary Arch-hints exhibit strong patterns and are effective in leaking model information to the adversary, and that UMProbe is able to extract the victim DNN architecture accurately by merely leveraging these $PriArchs$. Additionally, since the adversary and the victim share the same GPU platform (e.g., the hardware cache/memory and the open deep learning library (e.g., Darknet)), as demonstrated in \cite{naghibijouybari2017constructing, naghibijouybari2018rendered}, the adversary can co-locate its spy application with the victim model to obtain the victim's cache properties (e.g., L2 transactions) by leveraging cache side channels, which can achieve $\sim$90\% accuracy. UMProbe can collect such \textit{common Arch-hints ($ComArchs$), e.g., L2 read and write transactions}, to further enhance its extraction performance. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{dataflowofumprobe2.pdf} \caption{Overview of UMProbe.} \vspace{-2mm} \label{dataflowofumprobe} \end{figure} \noindent\textbf{UMProbe Overview:} Having obtained the Arch-hints, we overview how the $PriArchs$ exhibit patterned information that reveals different layer features and manifests the model architecture. As shown in Fig.\ref{dataflowofumprobe}, a DNN application issues its computation graph to the Darknet framework, and Darknet forms the graph as the connected layer sequence of the DNN architecture. Then, Darknet transforms the layer sequence into the GPU commands of the runtime kernel execution sequence corresponding to the layer sequence. Finally, the kernel sequence is executed on the GPU platform, exhibiting the Arch-hints that can be learned by the adversary. The Darknet framework mainly loads the network configuration file and the parameters file (i.e., weights) after receiving a DNN execution request. Essentially, Darknet constructs the network architecture and allocates memory space for each layer (i.e., the IFM, OFM, and Filter) in the UM system by calling the API $cudaMallocManaged()$. In a UM system, allocation is lazy, meaning that after allocation completes, the physical pages of the data populate in host memory while the virtual pages are invalid on the GPU side (i.e., the virtual-to-physical mapping does not exist or the page valid flag is not set). Thus, when the GPU SMs execute the layers and kernels sequentially, the SMs encounter a far-page fault exception if they access a data region for the first time, which may cause on-demand page migration. As different layer types utilize different GPU kernels, the $PriArchs$ exhibit patterned information leaking the characteristics of the different kernels and the features of the layers. Considering that the kernel sequences of different layers have a static execution order related to the original computational graph of a DNN model \cite{hu2020deepsniffer}, the kernel characteristics and layer features revealed by the $PriArchs$ can be learned by the adversary to accurately predict the model layer sequence. \vspace{-3pt} \subsection{New Attack Surface in UM}\label{atksurface} \vspace{-3pt} Essentially, UMProbe utilizes a learning-based model to explore the relationship between the extracted Arch-hints and the victim model's internal architecture; the input Arch-hints containing the victim architecture information can reveal the victim's layer features.
In this section, we show how the Primary Arch-hints under UM represent the characteristics of different kernels and reveal different layer features, which thus exposes a new attack surface in the unified memory system for the adversary to infer the victim DNN architecture. \subsubsection{Primary Arch-hints Reveal Layer Features}\label{unireveal} \noindent\textbf{Primary Arch-hints Vary with Layer OFM/Filter Characteristics:} As a DNN layer can be specified by its feature maps (i.e., IFM, OFM) and its parameters (e.g., the Filter of a Conv layer), we observe that the Primary Arch-hints of \textit{PFLat, MigLat and MigSize} are closely associated with the feature map and parameter characteristics of a runtime layer. Table \ref{tb2:kerlayfeature} shows the most commonly used layer types in a DNN model. By analyzing the Darknet code, we identify the associated kernels of each layer. We analyze below how these kernels behave during DNN execution and how the Arch-hints eventually reveal the kernel characteristics and layer features. \begin{table}[t] \centering \scriptsize \caption{Associated kernels and Arch-hints of the typical layers in Darknet.} \vspace{-6pt} \begin{tabular}{l|l|c|c|c} \hline \tabincell{l}{Layer} &\tabincell{l}{Kernels} &\tabincell{l}{PFLat} &\tabincell{l}{MigLat} &\tabincell{l}{MigSize}\\\hline \tabincell{l}{Conv} &\tabincell{l}{fill\_kernel, im2col\_kernel, \\gemmSN\_kernel\_nn, sgemm\_xx\_nn}&\tabincell{c}{\checkmark} &\tabincell{c}{\checkmark} &\tabincell{c}{\checkmark}\\\hline \tabincell{l}{FC} &\tabincell{l}{fill\_kernel, gemmSN\_kernel\_tn, \\sgemm\_xx\_tn, axpy\_kernel,} &\tabincell{c}{\checkmark}&\tabincell{c}{\checkmark} &\tabincell{c}{\checkmark}\\\hline \tabincell{l}{BN} &\tabincell{l}{normalize\_kernel, scale\_bias\_kernel,\\ add\_bias\_kernel} &\tabincell{l}{}&\tabincell{l}{}&\tabincell{l}{}\\\hline \tabincell{l}{ACT} &\tabincell{l}{activate\_kernel (ReLu)}&\tabincell{l}{} &\tabincell{l}{} &\tabincell{l}{}\\\hline \tabincell{l}{Pool} &\tabincell{l}{forward\_maxpool\_kernel, \\ forward\_avgpool\_kernel} &\tabincell{c}{\checkmark} &\tabincell{l}{} &\tabincell{l}{}\\\hline \tabincell{l}{Shortcut} &\tabincell{c}{copy\_kernel, shortcut\_kernel} &\tabincell{l}{\checkmark} &\tabincell{l}{} &\tabincell{l}{}\\\hline \end{tabular} \vspace{-6mm} \label{tb2:kerlayfeature} \end{table} \textit{a) Conv and FC layers.} The execution of a Conv layer involves multiple kernels, such as $fill\_kernel$ and $gemm\_kernel$. Here, the kernel $fill\_kernel$ initializes the OFM region of the layer (i.e., filling the region with the value $1$), which is allocated in GPU memory before the convolution operation (i.e., the kernel $gemm\_kernel$) begins. To initialize the region, the GPU SMs access it for the first time after the memory is allocated, causing far-page fault handling and PFLat. After the OFM region is initialized, the kernels $im2col\_kernel$ and $gemm\_kernel$ are launched to execute the convolution operation. During $gemm\_kernel$ execution, the GPU SMs have to access the Filter region (i.e., storing the weights) for the first time, causing far-page fault handling and PFLat as well. Moreover, since the physical pages of the Filter data populate in the remote system memory at this moment, data migration is required after the far-page fault is processed, which results in MigLat and MigSize. The FC layer follows almost the same pattern as the Conv layer, as they are both linear operation layers in a model.
Specifically, an FC layer starts with the kernel $fill\_kernel$, which initializes the OFM region and can cause a far-page fault; it then executes the computation kernel $gemmSN\_kernel\_tn$, which accesses the Filter region and causes both a far-page fault and data migration; and it ends with the kernel $axpy\_kernel$, which accesses the OFM region again and causes neither a far-page fault nor migration. Besides, although the layer implementations may vary at runtime, e.g., a Conv layer can be implemented with a $gemmSN\_kernel\_nn$ or a $sgemm\_xx\_nn$ kernel (xx indicates different dimensions, such as $32\times32$, $64\times32$), this makes no difference to our analysis above: these kernels always access the OFM and Filter regions and result in both far-page fault handling and data migration. \textit{b) BN and ACT layers.} A Conv layer is typically followed by a BN layer and then an ACT layer. As Table \ref{tb2:kerlayfeature} shows, a BN layer consists of three kernels, $normalize\_kernel$, $scale\_bias\_kernel$, and $add\_bias\_kernel$, while an ACT layer consists of $activate\_kernel$ (e.g., the most commonly used ReLU). When the SMs execute a BN layer, the layer takes the OFM of the previous layer as its own IFM, meaning the region has been accessed before and its data pages have already populated in local GPU memory. Thus, the SMs' access to the region causes neither a far-page fault nor data migration. Analogously, ACT layer execution causes no far-page fault or migration. Besides, some earlier DNN models, such as AlexNet, do not include a BN layer; there the ACT layer directly takes the previous Conv/FC layer's OFM as its IFM. Also, modern models fold BN and ACT layers into the Conv layer, known as the Conv-BN-ReLU block \cite{ioffe2015batch}. Our analysis above also applies to these model variants: both BN and ACT layers cause neither far-page faults nor data migration. \textit{c) Pooling and Shortcut layers.} A Pooling layer mainly involves the kernel $maxpool\_kernel$ or $avgpool\_kernel$, which outputs the down-sampled result of the previous layer. When the SMs execute the kernel, they access the OFM region of the Pooling layer for the first time, causing far-page fault handling and PFLat. During the down-sampling operation, the SMs do not need other parameters \cite{cnnparas} and do not cause data migration. Thus, the Pooling layer is characterized by far-page fault handling but no data migration. Modern DNN models can be configured with more complex non-sequential architectures, such as the popular ResNet using shortcut connections. At runtime, the shortcut and the main branch are actually executed in sequence on the GPU platform \cite{hu2020deepsniffer}. In Darknet, the shortcut layer is composed of three kernels: $copy\_kernel$, $shortcut\_kernel$, and $activate\_kernel$. The kernel $copy\_kernel$ is first executed to copy the identity of the IFM at the divergence point to the OFM region of the shortcut layer, then the kernel $shortcut\_kernel$ performs the addition operation in the OFM region, and finally the kernel $activate\_kernel$ is executed. Like the Pooling layer, the shortcut layer accesses its OFM region for the first time during $copy\_kernel$ execution and causes far-fault handling; however, it does not require additional parameters during layer execution and thus causes no data migration.
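The first-touch pattern underlying all three cases can be sketched as follows; the code is our simplified stand-in for Darknet's $fill\_kernel$ and is not taken from its source.
\begin{verbatim}
__global__ void fill_kernel(float *ofm, int n, float v) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) ofm[i] = v;   // first GPU touch of each page raises a far fault
}

void run_layer_like(int n) {
    float *ofm;
    cudaMallocManaged(&ofm, n * sizeof(float));           // lazy: no resident pages
    fill_kernel<<<(n + 255) / 256, 256>>>(ofm, n, 1.0f);  // pays PFLat (fresh OFM)
    cudaDeviceSynchronize();
    fill_kernel<<<(n + 255) / 256, 256>>>(ofm, n, 2.0f);  // pages now GPU-resident:
    cudaDeviceSynchronize();                              // no far fault (absent eviction)
    cudaFree(ofm);
}
\end{verbatim}
Filter regions behave differently: their pages are first populated on the CPU when the weights are loaded, so the first GPU access additionally triggers on-demand migration (MigLat, MigSize).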
\textbf{Observation-1:} During DNN model execution, different types of layers perform differently and cause different patterns of page fault handling and on-demand data migration, as shown in Table \ref{tb2:kerlayfeature}. The Conv/FC layer is characterized by all of \textit{PFLat, MigLat and MigSize}, while the BN and ACT layers cause neither far-page faults nor data migration. Both the Pooling and Shortcut layers only cause far-page faults and \textit{PFLat}. Fig.\ref{layerfeaturedata} shows examples of one Reference block and one ResNet18 residual block; we observe that their Primary Arch-hints behave as discussed above. \begin{figure}[t] \subfloat { \centering \includegraphics[width=3.6cm, height = 2.5cm]{referlayer2.pdf} \label{reflay} } \subfloat { \includegraphics[width=4.3cm, height = 2.5cm]{res18layer2.pdf} \label{reslay} } \vspace{-2mm} \caption{Layer features from one block of a) Reference, b) ResNet18. For the Conv layer, the migration yields a MigSize of 4608B for Reference and 2304B, 2304B for ResNet18, respectively.} \label{layerfeaturedata} \vspace{-4mm} \end{figure} \noindent\textbf{Primary Arch-hints Reveal Filter Size:} As the Conv/FC layer is characterized by the Arch-hints of far-page faults and data migration, and the data migration is mainly caused by the Filter data of the layer, we explore how the Primary Arch-hints of \textit{PFLat, MigLat, MigSize} can reveal the Filter size. As the Conv layer is the dominant layer in DNN architectures, we characterize all the Conv layers of the Reference model for this analysis. The Filter size of a Conv layer can be calculated as $Channel_{IFM}$ $\times$ $Width_{Filter}$ $\times$ $Height_{Filter}$ $\times$ $Channel_{OFM}$ $\times$ 4 bytes (i.e., each datum is a $float$ in memory). As shown in Fig.\ref{paraamountforarchs}, the x-axis indicates the different Conv layers as the network goes deeper, the left-hand y-axis indicates the migration and Filter data size, and the right-hand y-axis indicates the page fault and migration latency. We observe that the MigSize is almost equal to the Filter size, indicating that the migration is mainly caused by the Filter data. Also, as the network goes deeper, the Filter size increases, and MigSize increases accordingly. Meanwhile, MigLat and PFLat increase as well, following the trend of Filter size and MigSize. Intuitively, an increasing MigSize causes an increasing MigLat. Also, as the Filter size increases, the SMs have to access a larger Filter data region in memory during layer execution, which can trigger a larger amount of far page fault latency. \begin{figure}[t] \centering \includegraphics[width=8.3cm, height = 3.2cm]{archfilter0.pdf} \vspace{-2mm} \caption{Arch-hints reveal Filter size in Reference model.} \label{paraamountforarchs} \vspace{-5mm} \end{figure} \textbf{Observation-2:} During Conv/FC layer execution, the data migration mainly results from the Filter data. Thus, the migration data size well reveals the Filter data size, and the far fault latency and migration latency both positively correlate with the Filter data size.
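As a worked instance of this formula (with illustrative numbers of our own choosing, not taken from the Reference model):
\begin{verbatim}
// Filter footprint of a Conv layer:
// C_ifm x W_f x H_f x C_ofm x 4 bytes (one float per weight).
size_t filter_bytes(int c_ifm, int w_f, int h_f, int c_ofm) {
    return (size_t)c_ifm * w_f * h_f * c_ofm * sizeof(float);
}
// e.g., filter_bytes(3, 3, 3, 16) = 3*3*3*16*4 = 1728 bytes;
// the observed MigSize of such a layer should sit close to this value.
\end{verbatim}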
\noindent\textbf{Primary Arch-hints Reveal Layer Features:} As the Primary Arch-hints of PFLat, MigLat, and MigSize can reveal the OFM size and Filter size, we now show how they leak layer features and manifest the model architecture during model execution. We characterize the Arch-hints \textit{PFLat, MigLat, MigSize} of each layer (e.g., Conv, Pool, FC) during Reference model execution, as shown in Fig.\ref{difftypesforrefer}. The x-axis in Fig.\ref{difftypesforrefer} indicates the different blocks as the network goes deeper; the left-hand y-axis indicates latency while the right-hand one indicates data size. First, we observe that as the network goes deeper, the scale of \textit{PFLat, MigLat, MigSize} increases significantly, especially for the Conv layer; these different scales can identify the different blocks. Second, the \textit{PFLat, MigLat, MigSize} of a Conv layer are usually much larger than those of other layer types (e.g., Pool, the last layer) within the same block. This is because the large Filter size and feature map size of a Conv layer cause large amounts of far page faults and data migration. Third, the BN layer and the ACT layer (i.e., ReLU) exhibit quite similar execution features, as neither causes far page faults or migration, making it difficult for the adversary to accurately distinguish them. \textbf{Observation-3:} The Primary Arch-hints of \textit{PFLat, MigLat, MigSize} can reveal different types of layers and blocks during model execution by identifying the layer's feature map and Filter characteristics, and they leak information about the model's internal architecture. Thus, these Arch-hints expose a new attack surface in the UM system for extraction attacks, one that has not been explored before. \begin{figure}[t] \includegraphics[width=8.3cm, height = 3.5cm]{referblock2.pdf} \vspace{-2mm} \caption{The Arch-hints of different blocks in Reference.} \label{difftypesforrefer} \vspace{-5mm} \end{figure} \noindent\textbf{Common Arch-hints Further Help:}\label{l2cache} We analyzed above that the Primary Arch-hints of \textit{PFLat, MigLat, MigSize} can leak layer information during model execution. However, some adjacent layers, like BN and ACT, do not cause \textit{PFLat, MigLat, MigSize} and thus exhibit similar execution features, causing difficulties for UMProbe. Although UMProbe could utilize DNN model design philosophy (i.e., the common practice of following a Conv layer with BN and ACT layers) to infer these layers, we let UMProbe explore other Arch-hints in the UM system to overcome this difficulty. As analyzed in Sec. \ref{archhintcharacter}, the L2 read/write transactions obtain a high $ArchES$ and are considered effective in extraction attacks. Thus, UMProbe utilizes the common Arch-hints of L2 read/write transactions in the attack besides the Primary Arch-hints of \textit{PFLat, MigLat, MigSize}. Figs.\ref{l2write} and \ref{l2read} show that the L2 write/read transactions exhibit noticeable differences on the BN and ACT layers, indicating that UMProbe can utilize these Arch-hints to further improve its extraction accuracy (Sec. \ref{evalacc}). \subsubsection{Learning-based Extraction Attack} \label{atk} \vspace{-3pt} Having shown that the Arch-hints in the UM system, especially the Primary Arch-hints, can reveal layer features and leak model information during model execution, we now show how UMProbe can extract and identify the victim model by learning these Arch-hints.
Since the model architecture, especially the model layer sequence, is the most fundamental of a DNN model's properties and can be used to infer the other parameters \cite{tramer2016stealing, wang2018stealing, liu2016delving, oh2019towards, hu2020deepsniffer, hu2021systematic}, UMProbe is designed to identify the model architecture as the first step of extracting the model. \noindent\textbf{Attack Methodology:} UMProbe adopts the Connectionist Temporal Classification (CTC) model \cite{graves2006connectionist} to predict the victim layer sequence, including the layer number, types, and connections, which has been proven effective in \cite{hu2020deepsniffer}. CTC is a sequence-to-sequence model; it is trained by minimizing the difference between the ground-truth layer sequence $L^{\ast}$ and the predicted layer sequence $L$, and outputs a layer sequence as close to the ground truth as possible. Fig.\ref{design} shows the attack methodology, which includes 5 steps. \ding{172} The kernel sequence is composed of multiple kernels, each characterized by its own Arch-hints vector $X_{i}$ (i.e., <\textit{PFLat, MigLat, MigSize, L2 read, L2 write}>). \ding{173} For the $i_{th}$ kernel, its Arch-hints $X_{i}$ reveal the characteristics of the kernel. UMProbe then classifies the $i_{th}$ kernel based on $X_{i}$ using an LSTM classification model \cite{hu2020deepsniffer} and \ding{174} outputs a probability distribution $K_{i}$ over which type of layer (i.e., Conv, ReLU, BN, Pool, etc.) the kernel belongs to. \ding{175} UMProbe utilizes the CTC model to estimate the conditional probability given the distributions of the prior kernels (i.e., $K_{1}$, $K_{2}$, ... $K_{i}$), and outputs all kernel sequence candidates, such as (CV-CV-BN-Re), (CV-BN-Re-PL), etc. \ding{176} The CTC decoder eventually recognizes the kernel sequence with the largest conditional probability as the output $L$ by utilizing greedy search and de-duplication techniques \cite{zenkel2017comparison}. Table \ref{tb2:kerlayfeature} shows that each layer is associated with its own specific kernels, and Fig.\ref{dataflowofumprobe} shows that a DNN layer sequence maps to the kernel sequence at runtime; thus, UMProbe can successfully predict the layer sequence by extracting and identifying the kernel sequence. \begin{figure}[t] \centering \includegraphics[width=8cm, height = 5.5cm]{ctcmodel2.pdf} \vspace{-3mm} \caption{Scheme of layer sequence prediction in UMProbe.}\label{design} \vspace{-14pt} \end{figure} With the layer sequence predicted, we then show how the layer dimensions are estimated, though \cite{hu2020deepsniffer} demonstrates that the layer dimensions are less important than the layer sequence in an extraction attack. Specifically, \cite{hu2020deepsniffer} provides a method utilizing the DRAM read transactions to estimate the input and output size of the ReLU layer and other layers. Similarly, UMProbe utilizes L2 read transactions to estimate the input and output sizes of different layers by following the same method. Regarding the GPU memory hierarchy, L2 cache read transactions can provide more accurate information for estimating the input and output sizes during kernel execution, as the L2 cache cannot be bypassed in kernel transactions. Moreover, as analyzed in Sec. \ref{atksurface}, the MigSize can reveal the Filter size of a layer (i.e., Conv/FC). That is, in the UM system, the new attack surface, especially the Arch-hint of MigSize, has an advantage in estimating the Filter size of a layer.
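To make the decoding in step \ding{176} concrete, the standard CTC greedy decoder takes the arg-max class per kernel step, collapses repeats, and drops blanks. The sketch below is a generic illustration of this technique, not UMProbe's exact decoder.
\begin{verbatim}
#include <vector>

// probs[t][k]: per-step probability over layer classes; class 0 is the CTC blank.
std::vector<int> ctc_greedy_decode(const std::vector<std::vector<float>> &probs) {
    std::vector<int> out;
    int prev = 0;                                // 0 = blank
    for (const auto &step : probs) {
        int best = 0;
        for (size_t k = 1; k < step.size(); k++) // arg-max class at this step
            if (step[k] > step[best]) best = (int)k;
        if (best != 0 && best != prev)           // drop blanks, de-duplicate repeats
            out.push_back(best);
        prev = best;
    }
    return out;
}
\end{verbatim}
For example, feeding per-step predictions corresponding to (CV, CV, blank, BN, BN, Re) yields the de-duplicated sequence CV-BN-Re.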
\vspace{-3pt} \section{Evaluation}\label{evaluation} \vspace{-3pt} \subsection{Experimental Setup} \vspace{-3pt} \noindent\textbf{Platform:} All sample collection, model training and validation, and attack evaluation are conducted on an NVIDIA Titan RTX GPU platform. The DNN models are implemented in the Darknet framework with CUDA 10.0. We use the GPU performance counters \cite{CUPTI} to emulate bus snooping for collecting the page fault latency, page migration latency, page migration size, and L2 cache read/write transaction information. \noindent\textbf{Benchmarks:}\label{bench} We use multiple pre-trained DNNs from the Darknet framework \cite{darknetclassification}. The benchmark suite includes sequential models (AlexNet, VGG-16, Reference, Tiny Darknet, and Extraction \cite{darknetclassification}) and non-sequential models (ResNet18, ResNet50, and ResNet101 \cite{he2016deep}). It is important to emphasize that none of the aforementioned models has a corresponding UM implementation in the public domain. We substantially modify the Darknet framework to support execution in the UM system using the CUDA APIs $cudaMallocManaged()$ and $cudaFree()$. \noindent\textbf{Model training and deployment:} Essentially, UMProbe contains an LSTM+CTC learning model to extract the victim DNN architecture. To train UMProbe, we randomly generate a sufficient number of DNN models (i.e., with random layer numbers, types, connections, and dimensions) with both sequential and non-sequential connections, and utilize them as white-box models. We then execute these DNN models and collect the kernel execution samples (i.e., the Arch-hints of the DNN kernel sequences) as inputs to the model. After the model is trained, we test UMProbe on the representative DNN benchmarks. These DNNs are black-box models to UMProbe, and UMProbe predicts their model architectures by analyzing their exposed Arch-hints. We collect five types of samples to train/test UMProbe, as shown in Table \ref{archhintsample}: samples using the Arch-hints of 1) PFLat; 2) MigSize; 3) PFLat, MigLat, MigSize (PriArchs); 4) L2 read transactions, L2 write transactions (ComArchs); and 5) PFLat, MigLat, MigSize, L2 read transactions, L2 write transactions (AllArchs). As different Arch-hints represent different DNN model characteristics, we evaluate UMProbe's performance using different Arch-hints (Sec. \ref{evalacc}).
\begin{table}[t] \small \centering \setlength \tabcolsep{1pt} \caption{Effectiveness Evaluation of Arch-hints in UM.}\label{evaarchs} \vspace{0pt} \begin{tabular}{l|ccc} \toprule Arch-hints & Distinguishability/$CoV_{dis}$ & Consistency/$CoV_{con}$ & ArchES \\ \midrule \tabincell{l}{L2 write trans.} & \tabincell{l}{1.55} & \tabincell{l}{0.0017} & \tabincell{l}{873.41}\\ \hline \tabincell{l}{DRAM write trans.} & \tabincell{l}{1.56} & \tabincell{l}{0.22} &\tabincell{l}{6.84} \\ \hline \tabincell{l}{L2 read trans.} &\tabincell{l}{1.71} &\tabincell{l}{0.46} & \tabincell{l}{3.72}\\ \hline \tabincell{l}{DRAM read trans.} &\tabincell{l}{1.34} &\tabincell{l}{0.51} & \tabincell{l}{2.62}\\ \hline \tabincell{l}{Kernel latency} &\tabincell{l}{1.81} &\tabincell{l}{0.11}& \tabincell{l}{16.59}\\ \hline \tabincell{l}{Far fault latency} &\tabincell{l}{2.24} &\tabincell{l}{0.095}& \tabincell{l}{23.38}\\ \hline \tabincell{l}{Migration latency} &\tabincell{l}{3.58} &\tabincell{l}{0.012} & \tabincell{l}{293.84}\\ \hline \tabincell{l}{Migration size} &\tabincell{l}{3.55} &\tabincell{l}{0.0081} & \tabincell{l}{437.14}\\ \bottomrule \end{tabular} \vspace{-5mm} \label{AvES} \end{table} \subsection{Effectiveness of Different Arch-hints}\label{effofarchs} \noindent\textbf{Metric:} As characterized in Sec. \ref{archhintcharacter}, we define the \ul{\textit{Arch-hints Effectiveness Score (ArchES)}} to quantify the effectiveness of each Arch-hint in the UM system in terms of $distinguishability$ ($CoV_{dis}$) and $consistency$ ($CoV_{con}$), as shown in Equation \ref{score}. The higher the $distinguishability$ (i.e., higher $CoV_{dis}$) and the stronger the $consistency$ (i.e., lower $CoV_{con}$), the more effective the Arch-hint. \begin{equation} \footnotesize effectiveness\_score = \frac{CoV_{distinguishability}}{CoV_{consistency}} = \frac{CoV_{dis}}{CoV_{con}} \label{score} \end{equation} \noindent\textbf{Evaluation:} We calculate the $ArchES$ of each Arch-hint as well as its $dis$ and $con$ factors, as shown in Table \ref{AvES}. We observe that the Arch-hints of L2 write transactions and migration latency and size gain much higher $ArchES$ than the other Arch-hints due to their high $dis$ and strong $con$. Comparing the L2 transactions to the DRAM transactions, we find that their $dis$ is comparable; however, the $CoV_{con}$ of L2 write transactions is significantly lower than that of DRAM write transactions. Because of the limited capacity of the L2 cache, the data is eventually written back to DRAM; the total amount of L2 write transactions is typically capped by the L2 cache capacity and thus shows strong consistency. In comparison, the DRAM capacity is much larger and there is little limit to DRAM write transactions; thus, DRAM write transactions show much more inconsistency. Accordingly, L2 read transactions show more consistency than DRAM read transactions. Taking the remaining Arch-hints into consideration, migration latency and migration size obviously outperform the other two in terms of $ArchES$. Kernel latency shows a lower $ArchES$ than far fault latency due to its lower $CoV_{dis}$ and higher $CoV_{con}$. This is because, in the UM system, the kernel latency is composed of execution latency, far fault latency, and migration latency, and the execution latency can overlap with the far fault latency, making the overall kernel latency variable \cite{umbeginners}.
Thus, the kernel latency can get blurred across multiple executions, and the Arch-hint of far fault latency is more effective. To summarize, in the UM system the Arch-hints of L2 write transactions, L2 read transactions, far fault latency, and migration latency and size show greater effectiveness in terms of $ArchES$ than the other Arch-hints. Essentially, $ArchES$ measures the information leakage from an Arch-hint in the UM system by examining the relation between the Arch-hint's pattern (i.e., $distinguishability$, $consistency$) and the victim model's internal architecture. As analyzed in Sec. \ref{unireveal}, the far page fault latency is closely associated with the OFM size of almost all layers, while the migration latency and size can reveal the Filter size of a layer (i.e., Conv/FC). As the three Primary Arch-hints in UM provide a previously unexplored attack surface for the adversary, we show below that they exhibit sufficient information for UMProbe to extract the model architecture. The common Arch-hints of L2 read and write transactions can further enhance UMProbe's performance by providing additional information to identify hard-to-distinguish layers such as the BN and ACT layers. \begin{table}[t] \centering \small \caption{Samples using different Arch-hints.} \vspace{-6pt} \begin{tabular}{c|c|c|c|c|c} \hline \tabincell{l}{Sample} &\tabincell{l}{$s_{1}$} &\tabincell{l}{$s_{2}$} &\tabincell{l}{$s_{3}$} &\tabincell{l}{$s_{4}$} &\tabincell{l}{$s_{5}$}\\\hline \tabincell{c}{Arch-hints} &\tabincell{c}{PFLat}&\tabincell{c}{MigSize}&\tabincell{l}{PriArchs} &\tabincell{l}{ComArchs} &\tabincell{c}{AllArchs}\\\hline \end{tabular} \vspace{-2pt} \label{archhintsample} \end{table} \begin{figure}[t] \centering \includegraphics[width=10cm, height = 3.5cm, trim=120 0 0 0]{avglaccomp2.pdf} \vspace{-5mm} \caption{Avg LSA of UMProbe on different models.}\label{avglascom} \vspace{-16pt} \end{figure} \subsection{UMProbe Performance}\label{evalacc} \vspace{-3pt} \noindent\textbf{Metric:} As UMProbe targets extracting the victim DNN model's layer sequence (i.e., layer number, layer types, and layer connections), we measure UMProbe's DNN extraction ability by quantifying the accuracy of the extracted layer sequence. We define the extracted \ul{\textit{Layer Sequence Accuracy (LSA)}} as follows, \begin{equation} \footnotesize LSA = 1 - \frac{ED(L, L^{\ast})}{|L^{\ast}|} \label{lac} \end{equation} where $ED(L, L^{\ast})$ is the edit distance between the extracted layer sequence $L$ and the ground-truth layer sequence $L^{\ast}$ (i.e., the minimum number of insertions, substitutions, or deletions required to change $L$ into $L^{\ast}$) \cite{abu2015exact}, while $\left. ED(L, L^{\ast}) \middle/ |L^{\ast}| \right.$ indicates the extracted layer sequence error rate. $|L^{\ast}|$ is the length of $L^{\ast}$; thus, the larger the $LSA$, the smaller the difference between $L$ and $L^{\ast}$, and the more accurate UMProbe's extraction. \begin{figure}[t] \centering \includegraphics[width=10cm, height = 4.2cm, trim=90 0 0 0]{laccomp3.pdf} \vspace{-3mm} \caption{LSA of benchmarks using different Arch-hints.}\label{lascom} \vspace{-16pt} \end{figure} \noindent\textbf{Evaluation:} UMProbe works by leveraging different Arch-hint samples (see Table \ref{archhintsample}); different Arch-hints reveal different DNN layer features and model characteristics, and they make a great difference to UMProbe's performance in terms of LSA.
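For concreteness, the $LSA$ of Equation \ref{lac} can be computed with the standard dynamic-programming edit distance over layer-type sequences; the encoding and names below are our illustration.
\begin{verbatim}
#include <algorithm>
#include <vector>

// Minimum insertions/deletions/substitutions turning l into l_star.
int edit_distance(const std::vector<int> &l, const std::vector<int> &l_star) {
    size_t a = l.size(), b = l_star.size();
    std::vector<std::vector<int>> d(a + 1, std::vector<int>(b + 1));
    for (size_t i = 0; i <= a; i++) d[i][0] = (int)i;
    for (size_t j = 0; j <= b; j++) d[0][j] = (int)j;
    for (size_t i = 1; i <= a; i++)
        for (size_t j = 1; j <= b; j++)
            d[i][j] = std::min({d[i - 1][j] + 1,     // deletion
                                d[i][j - 1] + 1,     // insertion
                                d[i - 1][j - 1] +
                                    (l[i - 1] != l_star[j - 1])});  // substitution
    return d[a][b];
}

double lsa(const std::vector<int> &l, const std::vector<int> &l_star) {
    return 1.0 - (double)edit_distance(l, l_star) / l_star.size();  // LSA = 1 - ED/|L*|
}
\end{verbatim}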
We measure UMProbe performance on the DNN benchmarks, and further validate the importance and effectiveness of the Arch-hints in the UM system. First, we calculate the average LSA of UMProbe using different Arch-hints on three kinds of networks (i.e., Seq nets, Non-Seq nets and all nets), as shown in Fig.\ref{avglascom}. We observe that the LSA of UMProbe using either $s_{1}$ or $s_{2}$ is around 50\%, indicating that UMProbe can effectively extract a partial DNN layer sequence, though its performance is low. As we analyzed in Sec. \ref{unireveal}, all layers except BN/ACT can cause far page faults and exhibit PFLat, which is closely associated with the OFM size of the layer, while MigSize closely correlates with the Filter size of a layer (i.e., the dominant Conv/FC). Thus, both Arch-hints provide effective information for the adversary to infer the DNN layer sequence, but the amount of information is limited. Then, we learned that, by using $s_{3}$ of PriArchs (i.e., PFLat, MigLat, MigSize), the UMProbe performance is clearly improved. As we analyzed above, PFLat is closely associated with a layer's OFM size while MigLat/MigSize can reveal the Filter size of a layer, indicating that these Arch-hints provide complementary information about the DNN architecture. By using these three primary Arch-hints, UMProbe can effectively extract most of the layer sequence information. Meanwhile, by using $s_{4}$ of ComArchs, the UMProbe performance is also improved, and is comparable to UMProbe using $s_{3}$. Regarding the GPU memory hierarchy, the L2 cache is shared by all GPU SMs, a kernel can be dispatched to multiple SMs, and a kernel typically cannot bypass the L2 cache to read/write data from DRAM. Thus, the L2 transactions provide a relatively complete and highly distinguishable trace of the data activities of different layers (i.e., input and output), consistent with the high $ArchES$ of these Arch-hints in Table \ref{AvES}. Thus, UMProbe performance using ComArchs is high as well. Based on the analysis above, we can say that, as the new attack surface in the UM system, the PriArchs provide sufficient information for UMProbe to effectively extract most of the victim layer sequence, though its accuracy still has room for improvement. Finally, by using $s_{5}$, the average LSA of UMProbe on Seq, Non-Seq, and all networks reaches around 95\%, indicating that UMProbe can effectively extract almost the entire layer sequence. As we analyzed above, the PriArchs provide sufficient information for UMProbe to successfully extract the layer sequence. Given that the ComArchs can provide additional information to further identify blurring layers such as BN/ACT, which hardly cause page faults and data migration, UMProbe performance can be further improved with the help of the ComArchs. Besides, we calculate the UMProbe LSA on each DNN benchmark, as shown in Fig.\ref{lascom}. We observe that UMProbe performance on each DNN model follows the same track as the analysis above. Basically, PFLat reveals the OFM characteristics and MigSize reveals the Filter characteristics; either of them alone provides limited information for UMProbe ($\sim$50\% accuracy). Then, the three primary Arch-hints together can reveal a layer's features more completely, and UMProbe performance is significantly improved. In particular, for the small and neat networks (e.g., Alexnet, Reference, Tiny and VGG), UMProbe performance is high ($\geq$ 80\%), indicating that the primary Arch-hints are able to reveal such model architectures thoroughly.
Later, by using all Arch-hints, UMProbe achieves very high performance on all models ($\geq$ 90\%). To summarize, we conclude that the new attack surface provided by the Arch-hints based on far page faults and data migration offers sufficiently effective clues for the adversary to extract most of the victim model architecture information in the UM system, and the extraction attack can achieve high performance. Also, with further Arch-hints providing additional information, the attack surface can be extended and the attack performance can be enhanced. In fact, such an attack surface has never been explored before and deserves attention. \section{Related Work} \noindent\textbf{Unified Memory:} GPU Unified Memory (UM) arises as it effectively eliminates the need for manual data migration, reducing programmer effort and enabling GPU memory oversubscription compared to the Copy-then-Execute system. However, far fault handling and on-demand migration can significantly impact application performance, and many prior works focusing on performance optimization \cite{zheng2016towards, gandhi2014efficient, shin2018scheduling, hao2017supporting, kim2020batch, wang2020enabling, wang2020understanding, ganguly2019interplay} have been proposed. \cite{zheng2016towards} proposes a software page prefetcher to further utilize PCIe bus bandwidth and hide page migration overheads. \cite{kim2020batch} comprehensively characterizes the inefficiency of far fault handling under the UM model and proposes batch-aware UM management. \cite{ganguly2019interplay} investigates the prefetching and eviction policies under the UM model and proposes new locality-aware pre-eviction policies to reduce the performance overhead. In contrast, this paper is the first to explore the insecure communication pattern exposed by far fault handling and on-demand migration under the UM model, and it exploits this attack surface in the UM system for stealing DNN models. \noindent\textbf{Model Extraction Attack:} The extraction attack mainly targets ML models deployed in the cloud with publicly accessible query interfaces/APIs, and the adversary can duplicate the functionality of the model by frequently querying the APIs \cite{oh2019towards, tramer2016stealing}. Some works then consider utilizing side-channel information, such as cache side channels, to benefit the attacks \cite{hong2018security, hong20200wn, yan2020cache, hua2018reverse}. Recently, with ML models increasingly deployed on edge/local devices \cite{li2018learning, yazici2018edge, verhelst2020machine}, the adversary utilizes physical or local side channels to obtain architecture-level information leakage and accurately extract the model architecture. \cite{naghibijouybari2018rendered} utilizes hardware counters to predict the number of NN neurons. \cite{wei2020leaky} monitors the CUPTI events on GPU platforms to infer different DNN layer operations. \cite{hua2018reverse} observes the memory access patterns to search for the possible DNN structures in FPGAs. \cite{hu2020deepsniffer} collects the kernel latency, DRAM read and write volumes, etc., to extract DNN model architectures. However, none of them explores the meanings and patterns behind the architecture information or proposes new architecture hints and attack surfaces for extraction attacks in the UM system. \noindent\textbf{Mitigation Countermeasures:} As the new attack surface relies on the insecure communication pattern between GPU and CPU on the PCIe bus, one potential defense approach is to obfuscate the communication pattern on the PCIe bus.
As the GPU runtime processes the far page fault first and then migrates data on demand, the runtime can dynamically obfuscate the requests, e.g., by postponing or even reordering some far fault requests. Also, the runtime/system can support transmitting dummy data to cover the real traffic; for example, the migrated data can be split/padded into a fixed size and sent at a fixed rate \cite{hunt2019isolation}. This way, the PCIe transmission and the leaky communication pattern in the UM system can be obfuscated and interfered with. However, such approaches will unavoidably incur significant PCIe bandwidth overhead and performance degradation. Besides, GPU trusted execution environments (TEEs) can be considered to mitigate or eliminate the co-location side channel \cite{hunt2020telekine, volos2018graviton, 2020towards}. These TEEs disallow different tenants from sharing the underlying hardware or executing concurrently, which can prevent the adversary from observing the victim's activities through performance counters, etc. Similarly, this method can negatively impact GPU performance and is non-trivial to deploy in practice. \vspace{-1em} \section{Conclusion} Emerging extraction attacks can leverage architecture-level events (i.e., Arch-hints) in hardware platforms to accurately extract DNN model layer information. In this paper, we uncover the root cause of such Arch-hints and summarize the principles to identify them. We then apply these principles to the emerging Unified Memory (UM) management system, identify three new Arch-hints, and develop a new extraction attack, UMProbe. We also create the first DNN benchmark suite in UM and utilize the benchmark suite to evaluate UMProbe. The evaluation shows that UMProbe can extract the layer sequence with an accuracy of 95\% for almost all victim test models, calling for more attention to DNN security in UM systems.
\section{Methodology} Customers' experience in fashion e-commerce consists in selecting an article in a desired size, trying the article on, forming an opinion on its size and returning or keeping it. To simulate the customers' behavior towards sizing, we model the joint probability of a customer picking a size and the resulting return status. The return status is described by three possible events: the customer keeps the article, the customer returns the article because it is too small, or the customer returns the article because it is too big. We ignore the cases where the customer returns the article for any other reason. \subsection{Notation} Let $\mathcal{C}$ denote the set of customers and $\mathcal{A}$ the set of articles. The size $\mathcal{S}$ is a continuous random variable. The variable $\mathcal{R}$ indicates the return status described above. Orders $\mathcal{O}$ are defined by a customer, an article, the purchased size and the return status. Both approaches introduced in the paper model the joint probability $p(\mathcal{S}_o, \mathcal{R}_o \mid \mathcal{C}_o, \mathcal{A}_o)$ as detailed in the following. \subsection{Baseline Model} \label{sec:baseline} The baseline model makes the simplifying assumption that the size the customer chose and the return status are two independent events. Thus, the joint probability is defined as the product of the probability over sizes and the probability of the return status: \begin{equation} p(\mathcal{S}, \mathcal{R} \mid \mathcal{C}, \mathcal{A}) = p(\mathcal{S} \mid \mathcal{C}, \mathcal{A}) p(\mathcal{R} \mid\mathcal{C}, \mathcal{A}) \end{equation} \paragraph{Probability over sizes} We assume that multiple persons can use a single account. The probability distribution over sizes is obtained by Gaussian Kernel Density Estimation. To avoid getting degenerate distributions when customers always purchase the same size, we set a lower limit on the variances. Let $\mathcal{O}_j$ denote the set of $n_j$ orders and $S_j = \{s_i, i=1..n_j\}$ the set of $n_j$ sizes purchased by the customer $c_j$. The related probability density function is defined as: \begin{equation} p(s \mid c_j) = \dfrac{1}{n_jh_j} \displaystyle\sum_{i=1}^{n_j} \phi\left(\dfrac{s-s_i}{h_j}\right) \end{equation} where $\phi$ is the normal density function and $h_j$ is the bandwidth parameter for that specific customer $c_j$. The latter is obtained by minimizing the mean integrated squared error \cite{Silverman1986}. \paragraph{Probability over return status} Customers have different return behaviors. However, the impact of customers on returns is assumed negligible compared to potential sizing issues of the article. The probability of each return status is consequently marginalized over customers: $p(\mathcal{R} \mid\mathcal{C}, \mathcal{A}) = p(\mathcal{R} \mid \mathcal{A})$. The probability over the return status is the empirical distribution over the three possible events: article is kept, too big or too small. If one of the events is not observed in the training data, the empirical estimate assigns it a null probability in validation. To avoid that problem, we add one to the counts of each event. For an article $a_i$, sold $n_i$ times, the probability of a return event $r$, observed $n_{i,r}$ times in the data, is defined as: \begin{equation} p(r \mid a_i) = \dfrac{n_{i,r} + 1}{n_i + 3} \end{equation} Though this method seems inelegant, it has a Bayesian grounding. Indeed, it is equivalent to taking the maximum a posteriori of a categorical distribution with a Dirichlet conjugate prior whose concentration parameter is equal to one, i.e. the uniform distribution. In case of a cold start, i.e. if a customer (resp. an article) is new, the marginal distribution over the sizes of all customers (resp. over the return status of all articles) is used.
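The baseline can be sketched in a few lines of Python. This is a minimal illustration, not the production system: Silverman's rule stands in for the MISE-minimizing bandwidth, and the variance floor and data layout are assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def size_density(sizes, min_bw=0.5):
    # Gaussian KDE over one customer's purchased sizes; the bandwidth is
    # floored to avoid degenerate spikes when all purchases share one size.
    sizes = np.asarray(sizes, dtype=float)
    n = len(sizes)
    bw = max(1.06 * sizes.std() * n ** (-1 / 5), min_bw)  # Silverman stand-in
    return lambda s: norm.pdf((s - sizes[:, None]) / bw).sum(0) / (n * bw)

def return_probs(counts):
    # counts = [kept, too small, too big]; add-one (Laplace) smoothing.
    c = np.asarray(counts, dtype=float) + 1
    return c / c.sum()

pdf = size_density([38, 38, 39])
print(pdf(np.array([37.0, 38.0, 39.0])))
print(return_probs([14, 2, 0]))   # -> about [0.79, 0.16, 0.05]
\end{verbatim}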
\subsection{Hierarchical Bayesian Model} \label{sec:BHM} The baseline described above has a risk of specious parameter estimation and overfitting. Bayesian approaches conversely aim at providing a probability of the estimated parameters given a set of observed data, supporting the decision process when offering a recommendation to the customer. Therefore, using a Bayesian approach, we aim at modeling the joint probability of a size being purchased and a return status being observed, without the simplifying hypothesis of the baseline approach. For each pair of customer and article, orders $\mathcal{O}$ are drawn following a categorical distribution. \begin{equation} \mathcal{O} \sim Cat(p(\mathcal{S}, \mathcal{R} \mid \mathcal{C}, \mathcal{A})) \end{equation} Contrary to the baseline, $\mathcal{S}$ and $\mathcal{R}$ are not assumed independent; instead the joint probability is factorized as: \begin{equation} p(\mathcal{S}, \mathcal{R} \mid \mathcal{C}, \mathcal{A}) = p(\mathcal{S} \mid \mathcal{R}, \mathcal{C}, \mathcal{A}) \times p(\mathcal{R} \mid \mathcal{C}, \mathcal{A}) \label{eq:factorization} \end{equation} It is worth noting that the methods described in \cite{Sembium2018,abdulla2017} aim at modeling $p(\mathcal{R} \mid \mathcal{S}, \mathcal{C}, \mathcal{A})$. Doing so requires discretizing the continuous variable $\mathcal{S}$, leading to an increase in the number of parameters to be inferred and making the model more susceptible to the sparsity of the data. The factorization we chose in Equation \ref{eq:factorization} allows us to model $p(\mathcal{S} \mid \mathcal{R}, \mathcal{C}, \mathcal{A})$ as a continuous distribution. This enables us to learn a smaller set of parameters specifying the distribution over all sizes, which helps to alleviate part of the sparsity problem. \paragraph{Probability over return status} For the same reasons as explained in Section \ref{sec:baseline}, the probability of the return status is marginalized over customers: $p(\mathcal{R} \mid \mathcal{C}, \mathcal{A}) = p(\mathcal{R} \mid \mathcal{A})$. Returns are assumed independent from one another, allowing us to model them using a categorical distribution. As the number of purchases might be low for some articles, a Dirichlet prior is used. The concentration parameter of the prior is based on the counts at the brand level and at the category level (e.g. dresses, t-shirts, sneakers, etc.). Let $n_K$, $n_S$, and $n_B$ denote the counts of articles kept, returned for being too small and returned for being too big at the article level, respectively. In a similar way, $m_K$, $m_S$ and $m_B$ indicate the counts at the brand level and $m'_K$, $m'_S$ and $m'_B$ those at the category level. \begin{equation} p(\mathcal{R} \mid \mathcal{A}) \sim Dirichlet(\alpha) \end{equation} with $\alpha = w \cdot [m_K, m_S, m_B] + w' \cdot [m'_K, m'_S, m'_B]$. The weights $w$ and $w'$ are learned under the assumption that they follow a Beta distribution with a low first shape parameter and a second shape parameter equal to 1, in order to favor low weight values.
\begin{equation} w \sim Beta(0.5, 1) \mbox{ and } w' \sim Beta(0.1, 1) \end{equation} Since the Dirichlet prior is the conjugate of the categorical distribution, the posterior probability can be computed analytically, easing the inference of the parameters of the model. \begin{equation} P(\mathcal{R}=r \mid \mathcal{A}, \mathcal{O} ) = \dfrac{n_r + \alpha_r}{\displaystyle\sum_{\substack{i \in \\ \{K,S,B\}}} n_i + \displaystyle\sum_{\substack{i \in \\ \{K,S,B\}}}\alpha_i} \end{equation} \paragraph{Probability over sizes conditionally on the return status} For a given customer and article, the probability distribution of the customer buying a size is a mixture of Gaussians. Since the number of users of an account is unknown, we decide to use an infinite mixture model. However, we assume that the number of distinct persons using a single account is low. That is why we opt for a Dirichlet process with a truncation level fixed to four. In order to ease the inference, we use a truncated stick-breaking process \cite{sethuraman1994}. \begin{equation} \begin{split} &p(s \mid r, c, a) = \displaystyle\sum_{i=1}^{4} \pi_i \phi(s \mid \mu_i, \sigma_i^{2}) \\ &\pi_i = b_i \displaystyle\prod_{j=1}^{i-1} (1-b_j) \\ &b_i \sim Beta(1, \alpha) \mbox{ for } i=1..3 \mbox{ and } b_4 = 1 \end{split} \label{eq:mix_gaussian} \end{equation} The parameter $\pi$ is the mixing proportion, which can be interpreted as the probability of person $i$ using the account of customer $c$. The shape parameter $\alpha$ of the $Beta$ distribution acts as a concentration parameter of the Dirichlet process: the case $\alpha = 1$ is equivalent to the uniform distribution, thus favoring the scenario of multiple persons sharing a single account; conversely, the case $\alpha \rightarrow 0$ produces a high density around 1, which favors the scenario of a single person placing all the orders. In the context of size recommendation, $\alpha$ is fixed at $0.5$. The function $\phi$ in Equation \ref{eq:mix_gaussian} is the normal probability density function over sizes for the person $i$ using the account of the customer $c$, buying an article $a$, resulting in a return status $r$. The parameter $\mu$ is a combination of three parameters, $\mu = \mu_C + \mu_A + \eta_R$: \begin{itemize} \item the average size of a person $\mu_C$,\begin{equation} \mu_C \sim \mathcal{N}(\mu_0, \sigma_0^2) \end{equation} where the hyperparameters $\mu_0$ and $\sigma_0^2$ depend on the category, the gender of the article and the size system; \item the average offset of the article $\mu_A$, \begin{equation} \mu_A \sim \mathcal{N}(0, 1) \end{equation} where the assumption is made that most articles have an accurate size, i.e. an offset of zero; \item a shifting parameter $\eta = \{\eta_K, \eta_S, \eta_B\}$ for each return status (resp. article is kept, returned too small and returned too big), \begin{equation} \eta_K = 0 \mbox{ ; } \eta_S \sim \mathcal{N}(-1, 1) \mbox{ ; } \eta_B \sim \mathcal{N}(1, 1) \end{equation} The shifting parameter is fixed at 0 when the article is kept, while it is sampled from a Gaussian distribution centered at 1 (resp. $-1$) when the customer has returned the article because it is too big (resp. too small). \end{itemize} We assume the parameter $\sigma$ in Equation \ref{eq:mix_gaussian} depends on the customer only. It is sampled following an Inverse Gamma distribution, with shape parameter $\gamma_1$ and scale parameter $\gamma_2$. \begin{equation} \sigma_C^2 \sim \Gamma^{-1}(\gamma_1,\gamma_2) \end{equation} We fix the parameters of the distribution to $\gamma_1 = 1$ and $\gamma_2 = 2$, so that the mode of the Inverse Gamma distribution is equal to 1. Figure \ref{fig:bayesian_graphe} represents the graphical model of the approach.
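The truncated stick-breaking construction in Equation \ref{eq:mix_gaussian} is easy to simulate; the following sketch draws the mixing proportions and evaluates one mixture density. All numerical values other than the truncation level and $\alpha = 0.5$ are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import beta, norm

rng = np.random.default_rng(0)

def stick_breaking(alpha=0.5, truncation=4):
    # b_i ~ Beta(1, alpha) for i < truncation, b_T = 1, so the pi's sum to 1.
    b = np.append(beta(1, alpha).rvs(truncation - 1, random_state=rng), 1.0)
    remaining = np.concatenate(([1.0], np.cumprod(1 - b)[:-1]))
    return b * remaining

pi = stick_breaking()
print(pi, pi.sum())     # e.g. most mass on the first component; sums to 1.0

# Mixture density over sizes, one Gaussian per (hypothetical) account user.
mu, sigma = np.array([38.0, 41.0, 38.5, 40.0]), 1.0
density = lambda s: (pi * norm.pdf(s, mu, sigma)).sum()
print(density(38.2))
\end{verbatim}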
\paragraph{Inference} Markov Chain Monte Carlo methods are popular sampling approaches for Bayesian inference, but they are often slow to converge. Variational inference methods run faster than sampling-based methods but also introduce an approximation bias that may lead to a bad estimation of the parameters. However, those approaches have demonstrated reasonable performance on Dirichlet processes \cite{Blei2006} and are well suited for problems involving large amounts of data. The inference is consequently done using a mean-field approximation. \input{graphical_model.tex} \subsection{Providing a Size Recommendation} The set of sizes of an article is a finite set and, as a consequence, the probability density function needs to be discretized. For a customer $c$ and an article $a$ with a set of $k$ sizes $\mathcal{S} = \{s_i, i=1..k\}$, the probability over sizes is discretized as follows: \begin{equation} p(s, r) = p(r) \dfrac{\displaystyle\int_{s-\frac{1}{2}\epsilon}^{s+\frac{1}{2}\epsilon} f(x) dx}{\displaystyle\int_{s_1-\frac{1}{2}\epsilon}^{s_k+\frac{1}{2}\epsilon} f(x) dx} \end{equation} where $f$ is the probability density over sizes, marginal for the baseline and conditional on the return status for the Bayesian model; $\epsilon$ is equal to the step between two sizes. To provide a size recommendation, we choose the size having the highest probability of being kept by the customer.
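The discretization and the final recommendation rule can be sketched as follows; here \texttt{f} is any fitted density over sizes, and the size grid and probabilities are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def discretize(f, sizes, eps):
    # Integrate the density over a half-step window around each size,
    # then renormalize over the article's finite size set.
    mass = np.array([quad(f, s - eps / 2, s + eps / 2)[0] for s in sizes])
    return mass / mass.sum()

def recommend(sizes, p_size_given_kept, p_kept):
    # Pick the size with the highest joint probability of being kept.
    return sizes[np.argmax(p_size_given_kept * p_kept)]

f = lambda x: np.exp(-0.5 * (x - 38.3) ** 2) / np.sqrt(2 * np.pi)
sizes = np.array([37.0, 38.0, 39.0, 40.0])
p_s = discretize(f, sizes, eps=1.0)
print(recommend(sizes, p_s, p_kept=0.8))   # -> 38.0
\end{verbatim}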
\section{Conclusion} A hierarchical Bayesian approach was proposed to tackle the challenging problem of size recommendation in fashion e-commerce. The size purchased by a customer and the possible return events were jointly modeled thanks to a Bayesian approach. Experimental results were presented on real (anonymized) data from millions of customers, along with a detailed discussion and comparison with a baseline approach relying on simplifying hypotheses. It was shown that the Bayesian approach outperforms the baseline approach while providing a better theoretical framework for gaining deeper understanding of the predictions. Future work consists in exploring different approaches to learn the joint probability, making use of additional article-related data from the fashion industry, along with deeper dives into the segmentation of customers and articles. \section{Experiments} For this experiment, anonymized purchase data is collected for adult shoes. The data consists of 14.5 million purchases, 3 million distinct customers and 73,000 distinct articles. As the data has a strong temporal component, cross-validation is performed under the following conditions \cite{arlot2010}: 1. validation data occurs later than training data, 2. a period of three weeks - corresponding to the time to collect most returns from customers - is ignored between the train set and the validation set, and 3. validation sets must not overlap. Both models are trained and cross-validated on the same data. We report the average logarithm of the joint probability $p(s,r)$ over all observations in the validation set in Table \ref{tab:likelihood}. Higher numbers show better performance of the model. \begin{table}[!h] \caption{Average log joint probability} \vspace{-0.3cm} \begin{tabular}{ | l | l | l |} \hline & Baseline & Bayesian \\ \hline incl. unknown customers & -2.85 (± 0.15) & -2.35 (± 0.11)\\ \hline excl. unknown customers & -3.32 (± 0.26) & -1.83 (± 0.19)\\ \hline \end{tabular} \label{tab:likelihood} \end{table} Table \ref{tab:likelihood} compares the likelihood of both approaches, including and excluding customers not observed in the training data. The Bayesian approach shows better results in both cases, with the best results obtained when excluding unknown customers. Conversely, the baseline performs better when unknown customers are included than when they are excluded. This is mainly due to the fact that the baseline overfits by putting high probability density on the events observed in the training set, and very low density on unseen events. In the context of size recommendation, two indicators play a key role in the decision process: coverage and accuracy. The coverage is the percentage of purchases for which the algorithm is confident enough to make a decision. The accuracy is the number of correctly predicted sizes and return statuses over all predictions. Figure \ref{fig:cov_vs_acc} (Top) shows the accuracy versus the coverage for several values of a sliding threshold on the joint probability for both models. In Figure \ref{fig:cov_vs_acc} (Bottom), the results are presented for the Bayesian model where we also include a threshold on the posterior probability of the parameters. \begin{figure}[!ht] \centering \subfloat{\includegraphics[width=0.47\textwidth]{coverage_vs_accuracy.pdf}} \quad \subfloat{\includegraphics[width=0.47\textwidth]{coverage_vs_accuracy_post.pdf}} \caption{Accuracy versus coverage. (Top) Baseline and Bayesian model. (Bottom) Bayesian model with and without threshold on the posterior probability of the parameters.} \label{fig:cov_vs_acc} \end{figure} Figure \ref{fig:cov_vs_acc} (Top) shows that accuracy decreases as coverage increases, when changing the threshold on the joint probability. The performances of the baseline and the Bayesian model are similar when the decision is based on the value of the joint probability. In Figure \ref{fig:cov_vs_acc} (Bottom), by putting a threshold on the posterior probability of the parameters, we prevent the model from recommending article sizes to customers for which the parameters are poorly estimated. Performance is slightly better; however, only 13\% of the purchases can be covered. It is worth mentioning that both approaches need to filter out a lot of purchases before starting to show reasonable accuracy levels. From the computational complexity point of view, inference of the Bayesian approach is more costly than for the baseline model. The results are encouraging and demonstrate the complexity of the size recommendation topic, motivating deeper research work in the field. \section{Introduction} Fashion is a way to express identity, moods and opinions. Customers also tend to use fashion to either emphasize certain parts of their body or hide others. In that context, size and fit have been shown to be among the factors most influencing overall satisfaction \cite{Pisut2017}. Online customers have to buy before trying their clothes on. The sensory feedback phase about how the article fits via touch and visual cues is thus delayed. Because of these uncertainties, a lot of consumers are still reluctant to engage in the purchase process. To make matters worse, fashion articles including shoes and apparel have important sizing variations primarily due to: 1. different definitions of respective sizes from brands: the size systems used for specific categories are limited (e.g.
S, M, L, etc.), however the sizes themselves represent different physical measurements from one brand to another; 2. different ways of converting a local size system to another: in Europe, garment sizes are not standardized and brands do not always use the same conversion logic. A way to circumvent the confusion created by these variations is to use size tables which map physical body measurements to the article size system, requiring customers to have accurate measurements of their body. However, size tables themselves might suffer from a large variance, up to one inch within a single size. These differences stem either from different datasets used for size tables (e.g. German vs. UK population) or from vanity sizing, i.e. deliberate size inconsistencies in brands targeting a specific focus group based on age, sportiness, etc., which represent major influences on the body measurements \cite{ujevic2005, shin2007,faust2014}. The combination of the above factors leaves customers alone to face the highly challenging problem of determining the right size and fit during their purchase journey. In recent years, there has been a lot of interest in building recommender systems in fashion e-commerce, with a major focus on modeling style preferences based on customers' past interactions, taste and affinities \cite{hu2015,arora2016,bracher2016}. However, little research has been conducted to tackle the size recommendation problem. The recommendation of size and fit has recently been studied in \cite{abdulla2017}, where sparsity in purchase data is mentioned as a major issue, especially considering that articles have a limited stock. To minimize that problem, the authors propose to represent articles as a combination of brand, usage, size, and fit. A neural network is then trained to learn a latent vector describing each article defined as the combination of the features mentioned before. The customer vector representation is obtained by aggregating over purchased articles and, finally, a gradient boosted classifier predicts the fit of an article to a customer. Following a different approach, the authors of \cite{sembium2017} propose a solution for determining whether an article of a certain size would be fit, large, or small for a certain customer, using the purchase history. This is achieved by iteratively deducing the true sizes of customers and products, fitting a linear function based on the difference in sizes, and performing ordinal regression on the output of the function to get the loss. Extra features are simply included by addition to the linear function. To handle multiple persons behind a single account, hierarchical clustering is performed on each customer account before doing the above. An extension of that work has very recently been published, proposing a Bayesian approach on a similar model \cite{Sembium2018}. Instead of learning the parameters in an iterative process, the updates are done with mean-field variational inference with Polya-Gamma augmentation. This method therefore naturally benefits from the advantages of Bayesian modeling: uncertainty outputs and the use of priors. In this paper, we present two approaches: a baseline algorithm, launched on shoes in 2016 and on garments in 2017, which consists in inferring the size that a customer intends to buy and, independently, the article's sizing characteristics; and a hierarchical Bayesian approach which aims at jointly modeling the purchases of one or multiple sizes of an article along with their possible return events: 1.
no return (article is kept), 2. returned because too small, 3. returned because too big. In the size recommendation context, data sparsity is severe since it affects both articles and customers. To tackle this, we propose two design choices: a) building a hierarchy on top of the parameters, exploiting prior knowledge on articles and customers; b) virtually treating the size as a continuous variable in the training phase.
\section{Nonlocal Gauge theories} A consistent gauge-invariant theory for spin one massless particles, regardless of the spacetime dimension, fits in the following general class of theories \cite{modestoLeslaw} \begin{eqnarray} && \mathcal{L}_{\rm gauge} = - \frac{1}{4 g^2} {\rm tr}\left[ \, {\bf F} \, e^{H({\cal D}^2_{\Lambda})} {\bf F} + {\bf V}_{g} \right] . \label{gauge} \end{eqnarray} The theory above consists of a weakly nonlocal kinetic operator and a local curvature potential ${\bf V}_{g}$, crucial to achieve finiteness of the theory, as we will show later. In (\ref{gauge}) the Lorentz indices and tensorial structures have been neglected. The notation on flat spacetime reads as follows: we use the gauge-covariant box operator defined via ${\cal D}^2={\cal D}_\mu{\cal D}^\mu$, where ${\cal D}_\mu$ is a gauge-covariant derivative (in the adjoint representation) acting on the gauge-covariant field strength ${\bf F}_{\rho\sigma} = F_{\rho\sigma}^a T^a$ of the gauge potential $A_{\mu}$ (where $T^a$ are the generators of the gauge group in the adjoint representation). The metric tensor $g_{\mu \nu}$ has signature $(- + \dots +)$. We employ the following definition, ${\cal D}^2_{\Lambda} \equiv {\cal D}^2/\Lambda^2$, where $\Lambda$ is an invariant mass scale in our fundamental theory. Finally, the entire function $V^{-1}(z) \equiv \exp H(z)$ ($z \equiv {\cal D}^2_\Lambda$) in (\ref{gauge}) satisfies the following general conditions \cite{kuzmin}, \cite{Tombo}: (i) $V^{-1}(z)$ is real and positive on the real axis and it has no zeros on the whole complex plane $|z| < + \infty$. This requirement implies that there are no gauge-invariant poles other than for the transverse and massless gluons. (ii) $|V^{-1}(z)|$ has the same asymptotic behaviour along the real axis at $\pm \infty$. (iii) There exists $\Theta\in(0,\pi/2)$ such that asymptotically $ |V^{-1}(z)| \rightarrow | z |^{\gamma + \frac{D}{2} - 2}$, when $|z|\rightarrow + \infty$ with $\gamma\geqslant D/2$ ($D$ is even and $\gamma$ natural) for complex values of $z$ in the conical regions $C$ defined by: $C = \{ z \, | \,\, - \Theta < {\rm arg} z < + \Theta \, , \,\, \pi - \Theta < {\rm arg} z < \pi + \Theta\}.$ This condition is necessary to achieve the maximum convergence of the theory in the UV regime. (iv) The difference $V^{-1}(z)-V^{-1}_{\infty}(z)$ is such that on the real axis \begin{eqnarray} \lim\limits_{|z|\rightarrow\infty}\frac{V^{-1}(z)-V^{-1}_{\infty}(z)}{V^{-1}_{\infty}(z)}z^m=0,\qquad {\rm for\,\, all}\quad m\in \mathbb{N}, \end{eqnarray} where $V^{-1}_{\infty}(z)$ is the asymptotic behaviour of the form factor $V^{-1}(z)$. Property (iv) is crucial for the locality of the counterterms. The entire function $H(z)$ must be chosen in such a way that $\exp H(z)$ tends to a polynomial $p(z)$ in the UV, hence leading to the same divergences as in higher-derivative theories.
An explicit example of a weakly nonlocal form factor $e^{H(z)}$ that has the properties (i)-(iv) can be easily constructed following \cite{Tombo}, \begin{eqnarray} \hspace{0.1cm} &&e^{H(z)} = e^{\frac{1}{2} \left[ \Gamma \left(0, e^{-\gamma_E}p(z)^2 \right)+ \log \left( p(z)^2 \right) \right] } \nonumber \\ && \underset{z\in\mathbb{R}}{=} \sqrt{ p(z)^2} \left( 1+ \frac{e^{-e^{-\gamma_E}p(z)^2}}{2 e^{-\gamma_E}p(z)^2} + \dots \right) \label{VlimitB} , \end{eqnarray} where $\gamma_E \approx 0.577216$ is the Euler-Mascheroni constant and $ \Gamma(0,x) = \int_x^{+ \infty} d t \, e^{-t} /t $ is the incomplete gamma function with its first argument vanishing. The polynomial $p(z)$ of degree $\gamma + (D-4)/2$ is such that $p(0)=0$, which gives the correct low energy limit of our theory, coinciding with the standard two-derivative Yang-Mills theory. In this case the $\Theta$-angle defining the cones $C$ turns out to be $ \pi/ (4\gamma + 2(D-4))$.
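As a quick numerical check of \eqref{VlimitB}, the sketch below evaluates $e^{H(z)}$ on the real axis for a monomial $p(z)=z^3$ and verifies that it tends to $1$ as $z\to 0$ and to $|p(z)|$ asymptotically. The degree and the sample points are chosen for illustration only; for $x>0$, $\Gamma(0,x)$ equals the exponential integral $E_1(x)$.
\begin{verbatim}
import numpy as np
from scipy.special import exp1   # E_1(x) = Gamma(0, x) for x > 0

gamma_E = np.euler_gamma

def form_factor(z, p=lambda z: z**3):
    # exp H(z) = exp( [Gamma(0, e^{-gamma_E} p^2) + log p^2] / 2 )
    p2 = p(z) ** 2
    return np.exp(0.5 * (exp1(np.exp(-gamma_E) * p2) + np.log(p2)))

for z in [0.5, 1.0, 2.0, 5.0]:
    print(z, form_factor(z), abs(z**3))   # ratio tends to 1 as |z| grows
\end{verbatim}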
The theories described by the action (\ref{gauge}) are unitary and perturbatively renormalizable at the quantum level in any dimension, as we are going to show explicitly in the following subsections. Moreover, at the classical level much evidence indicates that we are dealing with ``{\em gauge theories possessing singularity-free exact solutions}". The discussion here is closely analogous to the gravitational case \cite{BiswasSiegel,ModestoMoffatNico,BambiMalaModesto2, BambiMalaModesto,calcagnimodesto, koshe1, ModestoGhosts}. In particular, the static gauge potential for the exponential form factor $\exp (- \Box/\Lambda^2)$ is, for weak fields, given approximately by: \begin{eqnarray} \Phi_{\rm gauge} ( r ) = A_0(r) = g \frac{{\rm Erf}( \frac{\Lambda r}{2} )}{r} . \end{eqnarray} We used the form factor $\exp (- \Box/\Lambda^2)$ and $D=4$ to end up with a simple analytic solution. However, the result is qualitatively the same for the asymptotically polynomial form factor (\ref{VlimitB}), and $\Phi_{\rm gauge} ( r )$ tends to a constant as $r \to 0$. \subsection{Propagator, unitarity and divergences } \label{gravitonpropagator} Splitting the gauge field into a background field (with flat gauge connection) plus a fluctuation, fixing the gauge freedom and computing the quadratic action for the fluctuations, we can invert the kinetic operator to finally get the two-point function. This quantity, also known as the propagator, reads in Fourier space, up to gauge dependent components, \begin{eqnarray} && \mathcal{O}^{-1}_{\mu\nu}(k) \!=\! \frac{-iV( k^2/\Lambda^2 ) } {k^2+i\epsilon} \left( \eta_{\mu\nu} - \frac{k_\mu k_\nu}{k^2} \right) \, , \label{propagator} \end{eqnarray} where we used the Feynman prescription (for dealing with the poles). The tensorial structure in (\ref{propagator}) is the same as in the local Yang-Mills theory, but we see the presence of a new element -- the multiplicative form factor $V(z)$. If the function $V^{-1}(z)$ does not have any zeros on the whole complex plane, then the structure of poles in the spectrum is the same as in the original two-derivative theory. This can be easily proved in the Coulomb gauge, which is manifestly unitary. Therefore, in the spectrum we have exactly the same modes as in two-derivative theories. In this way we have achieved unitarity, but the dynamics is modified from the simple two-derivative one to a super-renormalizable one with higher derivatives. Although in the UV regime we recover a polynomial higher-derivative theory, the analysis of the tree-level spectrum still gives us a unitary theory without ghosts, because renormalizability is due to the behaviour of the theory in the very UV limit, while unitarity is influenced by the behaviour at all energy scales. In the high energy regime (UV), the propagator in momentum space schematically scales as \begin{eqnarray} \mathcal{O}^{-1}(k) \sim k^{-(2 \gamma + D-2) } \,. \label{OV} \end{eqnarray} The vertices of the theory can be collected in different sets that may or may not involve the entire function $\exp H(z)$. However, to find a bound on the quantum divergences it is sufficient to concentrate on the polynomial operators with the leading high-energy behaviour in the momenta $k$ \cite{kuzmin, Tombo}. These operators scale like the propagator; they cannot have a higher power of momentum $k$ in the scaling, in order not to break the renormalizability of the theory. Considering them gives the following upper bound on the superficial degree of divergence of any graph \cite{Tombo,modesto,Anselmi0}, \begin{eqnarray} \omega(G)\leq DL+(V-I)(2 \gamma +D)-E\,. \label{omegaG1} \end{eqnarray} This bound holds in any spacetime, of even or odd dimensionality. In (\ref{omegaG1}) $V$ is the number of vertices, $I$ the number of internal lines, $L$ the number of loops, and $E$ the number of external legs of the graph $G$. After plugging the topological relation $I -V= L -1$ into (\ref{omegaG1}) we get the following simplification: \begin{eqnarray} && \omega(G) \leq D - 2 \gamma (L - 1)-E \, . \label{even} \end{eqnarray} We comment on the situation in odd dimensions in the next section. Thus, if in even dimensions $\gamma > (D-E)/2$, only one-loop divergences survive in the theory. Therefore, the theory is one-loop super-renormalizable \cite{Krasnikov, Tombo, Efimov, Moffat3,corni1} and only a finite number of operators of energy dimension up to $M^D$ has to be included in the action to absorb all perturbative divergences. In a $D$-dimensional spacetime the renormalizable gauge theory includes all the operators up to energy dimension $M^D$, and schematically reads \begin{eqnarray} \mathcal{L}_D = - \frac{1}{4 g^2} {\rm tr} \left[ {\bf F}^2 + {\bf F}^3 + {\bf F} \, {\cal D}^2 \, {\bf F} + \dots + {\bf F}^{D/2} \right] . \end{eqnarray} In gauge theory the scaling of the vertices originating from kinetic terms of the type ${\bf F}({\cal D}^2)^{\gamma+ (D-4)/2}{\bf F}$ is lower than the one seen in the inverse propagator $k^{2 \gamma +D-2}$. This is because when computing variational derivatives with respect to the dimensionful gauge potentials (to get higher point functions) we decrease the energy dimension of the result. Hence the number of remaining partial derivatives, when we put the variational derivative on the flat connection background, must necessarily be smaller. This means that we have a smaller power of momentum when the 3-leg (or higher leg) vertex is written in momentum space. We get the maximal scaling for the gluons' 3-vertex, with exponent $2\gamma+D-3$. In this way we can put an upper bound on the degree of divergence for higher-derivative gauge theories even with some margin. Again, for higher-derivative gauge theories and $\gamma>(D-E)/2$ we have one-loop super-renormalizability. For the minimal choice $E=2$ (because the tadpole diagram vanishes) we have $\gamma>(D-2)/2$.
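The power counting in \eqref{even} is simple enough to tabulate; the following small sketch (our own illustration, not from the original papers) evaluates the bound and flags which loop orders can still diverge.
\begin{verbatim}
def omega_bound(D, gamma, L, E):
    # Superficial degree of divergence bound: omega <= D - 2*gamma*(L-1) - E.
    return D - 2 * gamma * (L - 1) - E

# D = 4, gamma = 3, two external legs: only L = 1 can diverge.
for L in range(1, 5):
    w = omega_bound(D=4, gamma=3, L=L, E=2)
    print(f"L={L}: omega <= {w}  {'possibly divergent' if w >= 0 else 'finite'}")
\end{verbatim}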
\subsection{Finite gauge theories in odd and even dimensions} In an {\em odd number of dimensions} we can easily show that the theory is finite without the need of the gauge potential ${\bf V}_{ g}$, because in the dimensional regularization (DIMREG) scheme {\em there are no divergences at one loop and the theory is automatically finite}. The reason is of dimensional nature. In odd dimensions the energy dimension of the possible one-loop counterterms needed to absorb logarithmic divergences can only be odd. However, at one loop such counterterms cannot be constructed in the DIMREG scheme, having at our disposal only Lorentz-invariant (and gauge-covariant) building blocks that always have energy dimension two. By elementary building blocks we mean here field strengths or gauge-covariant box operators, or an even number of covariant derivatives (an even number is necessary here to be able to contract all indices). For details we refer the reader to the original papers \cite{modesto}. In {\em even dimensions} we for simplicity consider the polynomial $p(z)$ to be a monomial, $p_\gamma(z)= \omega \, z^{\gamma + \frac{D}{2} - 2}$ ($\omega$ is a positive real parameter). In this minimal setup the monomial in the UV gives precisely the highest derivative term of the form ${\rm tr}\left({\bf F}({\cal D}^2_{\Lambda})^{\gamma} {\bf F}\right)$ (in $D=4$). There is only one possible way to take the trace over group indices here, and terms with derivatives can be reduced to those with gauge-covariant boxes only by exploiting the Bianchi identities of gauge theory. These latter terms take the explicit form $F_{\mu\nu}^a ({\cal D}^2_{\Lambda})^{\gamma} F^{\mu\nu}_a$. In four dimensions there is an RG running of only one coupling constant. The contribution to the beta function of the YM coupling constant from this quadratic term is actually a dimensionless constant (independent of the front coefficient of the highest derivative term), which has been computed in \cite{TesiPiva} using Feynman diagrams. This number can be cancelled by a contribution coming from a quartic (in field strengths) gauge killer of the form \begin{eqnarray} -\frac{s_g}{4g^2}{\rm tr}\left( {\bf F}^2({\cal D}^2_{\Lambda})^{\gamma-2} {\bf F}^2 \right) \end{eqnarray} (here there are several possibilities of taking traces). The contribution to the beta function is linear in the parameter $s_g$ and hence the latter can be adjusted to make the total beta function vanish. The action of the finite quantum theory may take the following compact form (for the choice $\gamma=3$ the general derivative structure is explicit in $D=4$): \begin{eqnarray} \label{fingaugeth} && \mathcal{L}_{\rm fin,\, gauge} = -\frac{1}{4g^2} {\rm tr}\Big[ \underbrace{{\bf F}e^{H({\cal D}_\Lambda^2)} {\bf F} + s_g {\bf F}^2({\cal D}^2_{\Lambda}) {\bf F}^2}_{\mbox{minimal finite theory}} \nonumber \\ && + \sum_i \sum_{j=3}^{5} \sum_{k=0}^{{5}-j} c^{(j,k)}_i \left(({\cal D}^2_{\Lambda})^k {\bf F}^j\right)_i\Big]\,, \end{eqnarray} where $c^{(j,k)}_i$ are constant coefficients. The beta function can be successfully killed by the last operator in the first line above. The last terms in formula \eqref{fingaugeth} have been written in a compact index-less notation and the index $i$ counts all possible contractions of Lorentz and group indices. \section{The finite theory in $D=4$} As extensively motivated in the previous section, the minimal nonlocal gauge theory in $D=4$ that is a candidate to be scale-invariant (finite) at the quantum level is: \begin{eqnarray} \!\!
\mathcal{L}_{\rm fin,\, gauge } = -\frac{\alpha}{4}{\rm tr}\Big[ {\bf F}e^{H({\cal D}_\Lambda^2)} {\bf F} + s_g {\bf F}^2({\cal D}^2_{\Lambda})^{\gamma-2} {\bf F}^2 \Big] \, , \label{minimalGT} \end{eqnarray} where the function $H(z)$ is given in \eqref{VlimitB}. We here evaluate the contribution to the beta function $\beta_{\alpha}^{(s_g)}$ from the two following independent killer operators quartic in the field strength\footnote{It is worth noting that if we choose the gauge group $G=SU(N)$ in the adjoint representation, it holds that \begin{eqnarray} \mathrm{tr}(T^aT^bT^cT^d)=\delta^{ab}\delta^{cd}+\delta^{ad}\delta^{bc} \, . \label{Muta} \end{eqnarray} Therefore, the killers we have considered exhaust all the possible operators we can construct, as regards the structure in the internal indices. On top of this we have the freedom of using different contractions of Lorentz indices and covariant derivatives in the expressions for the quartic killers. Indeed, if we plug the formula above (\ref{Muta}) into the following general Lagrangian \begin{eqnarray} {\cal L}_{\rm killer}=-\frac{s_g}{4g^2}\mathrm{tr}\Big[{\bf F}_{\mu\nu} {\bf F}^{\mu\nu} ({\cal D}_{\Lambda}^2)^{\gamma-2} {\bf F}_{\rho \sigma} {\bf F}^{\rho\sigma}\Big] , \end{eqnarray} we get the sum of the two killers (\ref{killer1}) and (\ref{killer2}) with the same front coefficient. } \begin{eqnarray} && 1. \,\,\, -\frac{s_g}{4g^2} F^a_{\mu\nu}F^{\mu\nu}_a \Box_{\Lambda}^{\gamma-2} F^{b}_{\rho\sigma}F^{\rho\sigma}_b \label{killer1} \,, \\ && 2. \,\,\, - \frac{s_g}{4g^2} F^{a}_{\mu\nu}F_b^{\mu\nu}({\cal D}^2_{\Lambda})^{\gamma-2}F^{b}_{\rho\sigma}F_a^{\rho\sigma} \, . \label{killer2} \end{eqnarray} The details of the computation are not included in this letter because they are very cumbersome, but the results are: \begin{eqnarray} && 1. \,\,\,\, \beta_{\alpha}^{(s_g)}=\frac{s_g}{2\pi^2\omega}, \\ && 2. \,\,\, \, \beta_{\alpha}^{(s_g)}=\frac{s_g}{4\pi^2\omega}(1+N_G), \end{eqnarray} where $N_G$ is the number of generators of the Lie group. These results have been checked using two different techniques: the method of Feynman diagrams and the Barvinsky-Vilkovisky trace technology \cite{GBV}. The computation has been done for the nonlocal theory with a general polynomial asymptotic behaviour $p_\gamma(z)$ of degree $\gamma$. By choosing the monomial $p_\gamma(z)= \omega \, z^\gamma$, the prototype kinetic term used to evaluate the beta function reads \begin{eqnarray} && \hspace{-1.4cm}\mathcal{L}_{\rm fin,\, \rm kin. gauge} = -\frac{1}{4g^2} F^a_{\mu\nu} \left( 1+ \omega \, ({\cal D}^2_{\Lambda})^{\gamma} \right) F_a^{\mu\nu} \, . \label{actioncomp} \end{eqnarray} As already explained, all the other contributions of the form factor fall off exponentially in the UV and do not contribute to the divergent part of the quantum action. To fix our conventions, we can read the beta function from the counterterm operator, namely \begin{eqnarray} \hspace{0.1cm} \mathcal{L}_{\rm ct} := - \frac{\alpha}{4}(Z_\alpha - 1) \, F^{\mu\nu}_a F_{\mu\nu}^a = - \mathcal{L}_{\rm div} = - \frac{1}{\epsilon} \beta_\alpha \, F_a^{\mu\nu}F^a_{\mu\nu} . \nonumber \end{eqnarray} By using the Batalin-Vilkovisky formalism \cite{bata} it is possible to prove that for the theory \eqref{minimalGT} there is no wave-function renormalization of the gauge field $A_{\mu}^a$. We have only the renormalization of the gauge coupling constant.
The contribution to the beta function $\beta_{\alpha}^{(\gamma)}$ due to the nonlocal kinetic term was obtained in \cite{TesiPiva}, namely \begin{eqnarray} \beta_{\alpha}^{(\gamma)}= -\frac{(5+3 \gamma +12 \gamma^2)}{192\pi^2}C_2(G) \, , \quad \gamma \geq 2 \, , \label{betas} \end{eqnarray} where $C_2(G)$ is the quadratic Casimir of the gauge group $G$. By imposing the following condition for scale invariance, \begin{eqnarray} \beta^{(\gamma)}_{\alpha}+\beta_{\alpha}^{(s_g)}=0 , \end{eqnarray} we can find the special value $s_g^*$ of the coefficient that kills the beta function. Using for example the first killer (\ref{killer1}) we get \begin{eqnarray} s_g^*=-2\pi^2 \omega \beta_{\alpha}^{(\gamma)} , \label{adjusting} \end{eqnarray} and the Lagrangian for a finite nonlocal gauge theory in four dimensions can be written explicitly as \begin{eqnarray} && \mathcal{L}_{\rm fin,\, gauge } = -\frac{\alpha}{4} \Big[ F_{\mu\nu}^ae^{H({\cal D}_\Lambda^2)} F^{\mu\nu}_a \label{finita} \\ && + \omega \frac{(5+3 \gamma+12 \gamma^2)}{96}C_2(G) F_{\mu\nu}^a F^{\mu\nu}_a({\cal D}_{\Lambda}^2)^{\gamma-2} F^b_{\rho \sigma} F_b^{\rho\sigma} \Big] \nonumber \end{eqnarray} where we assumed $\gamma \geq 2$. It is possible to kill the beta function also in nonlocal theories with Abelian symmetry groups. For concreteness we can study the one-loop beta function of QED, $\beta_e= e^3/12\pi^2$, for the electric charge $e$. In terms of the inverse coupling $\alpha$ this function is expressed as $\beta_\alpha= - 1/6\pi^2$, which is a constant and gives a logarithmic scaling of the coupling constant $\alpha$ with the energy. Since pure two-derivative QED is a free theory, the running comes entirely from quantum effects of charged matter. Here we assume one species of charged fermions coupled minimally to the photon field. If we extend QED to the nonlocal version \eqref{gauge} with the killer operator (\ref{killer1}) and we replace \begin{eqnarray} s_g^*=-2\pi^2 \omega \beta_{\alpha}^{(\gamma)} = \frac{ \omega }{3} \label{adjustingQED} \end{eqnarray} in \eqref{minimalGT}, then the theory is completely finite regardless of the parameter $\gamma$. It is important to notice that even in the Abelian case the killer operator has a crucial impact on the beta function because it contains photon self-interactions. In this way we solve the problem of the Landau pole for the running of the electric charge in the UV regime of QED. The same can be repeated for any gauge theory coupled to matter, provided that in the matter sector we do not have self-interactions and the coupling to gauge fields is minimal \cite{TesiPiva}. We want to comment on what we can achieve if we stick to one-loop super-renormalizable gauge theories without attempting to make them finite. The final result \eqref{betas} highlights a universal Landau pole issue in the UV regime for the running coupling constant $g(\mu)$ (where $\mu$ is the renormalization scale). This is true for any value of the integer $\gamma \geq2$, when we do not introduce any potential ${\bf V}_g$ with killer operators. The sign of the beta function is negative because the discriminant of the quadratic polynomial in $\gamma$ in \eqref{betas} satisfies $\Delta<0$. For the particular choice (\ref{adjusting}) the theory \eqref{minimalGT} is one-loop finite, but if the front coefficient $s_g$ has a larger value than in (\ref{adjusting}) then we enter the regime in which UV asymptotic freedom is achieved.
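A back-of-the-envelope check of \eqref{betas} and \eqref{adjusting} is straightforward; the sketch below evaluates $\beta_\alpha^{(\gamma)}$ and the killer coefficient $s_g^*$ for $SU(3)$ in the adjoint representation (where $C_2(G)=3$), with $\gamma$ and $\omega$ picked purely for illustration.
\begin{verbatim}
import math

def beta_alpha(gamma, C2):
    # Kinetic-term contribution: -(5 + 3*gamma + 12*gamma^2) * C2 / (192 pi^2)
    return -(5 + 3 * gamma + 12 * gamma**2) * C2 / (192 * math.pi**2)

def s_g_star(gamma, C2, omega=1.0):
    # Killer coefficient cancelling the running: s_g* = -2 pi^2 omega beta.
    return -2 * math.pi**2 * omega * beta_alpha(gamma, C2)

gamma, C2 = 3, 3                       # SU(3), gamma = 3
print(beta_alpha(gamma, C2))           # approx -0.193
print(s_g_star(gamma, C2))             # = omega*(5+9+108)*3/96 = 3.8125
\end{verbatim}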
We here summarize the three possible scenarios for the value of $s_g$: \begin{eqnarray} s_g \left\{ \begin{array}{lr} < \omega\frac{(5+3 \gamma+12 \gamma^2)}{96}C_2(G) \, , \quad \mbox{Landau pole} , \\ \\ = \omega\frac{(5+3 \gamma+12 \gamma^2)}{96}C_2(G) \equiv s_g^* \, , \quad \mbox{finiteness} , \\ \\ > \omega \frac{(5+3 \gamma +12 \gamma^2)}{96}C_2(G) \, , \quad \mbox{asymptotic freedom} . \end{array} \nonumber \right. \end{eqnarray} However, in weakly nonlocal higher-derivative theories we must read off the poles from the quantum effective action and not only from the beta functions of the couplings of the theory. In particular, in the case of the theory \eqref{gauge} the one-loop dressed propagator is devoid of any additional pole because its UV asymptotic behaviour is entirely due to the form factor $\exp H(z)$ \cite{Tombo}, namely, up to the tensorial structure, \begin{eqnarray} - i \frac{e^{- H(k^2) }}{k^2 \left( 1 + \beta_{\alpha} \, e^{- H( k^2) } \log ( k^2/\mu_0^2) \right) }. \end{eqnarray} Moreover, as a particular feature of the super-renormalizable theory, when $s_g=0$ or $s_g < s_g^*$, $\beta_{\alpha}$ is negative, signifying that at low energy the theory is weakly coupled. In consequence, we do not have any pole in the dressed propagator in the UV, nor do we have any problem in the IR, as opposed to the local theory. In local two-derivative theories we usually have a UV Landau pole or an IR singularity of the RG flow, so (as for example in QED) the theory is weakly coupled in the IR (without confinement), but it becomes non-perturbative in the UV. In QCD we have the reverse: the theory is asymptotically free in the UV, where it is perturbative, but a singularity of the RG flow manifests itself in the IR, indicating confinement. In the case of two-derivative local theories the singularities of the flow are directly realized as poles in the effective propagator read from the quantum action. This is not true anymore when higher derivatives are included. In the theory (\ref{minimalGT}) for $s_g < s_g^*$, the minus sign of the beta function, which usually gives rise to a UV Landau pole, is innocuous because the form factor washes away the $\log (k^2)$ contributions to the dressed propagator in the UV, and there is no possibility for the appearance of a new real pole in it. On the other hand, in the IR the analytic form factor does not play any role and there is no pole because the beta function is negative. The outcome is a theory perturbative in both the UV and the IR regimes. Therefore we are left with two possible options. We can choose completely UV-finite (no divergences) nonlocal theories or super-renormalizable nonlocal theories with negative beta functions ($\beta_\alpha$) and hence without any singularities in the asymptotic behaviour of the couplings. The second option seems very appealing in models that attempt to realize a unification of all coupling constants. \section{Conclusions} We have explicitly evaluated the one-loop exact beta function for the weakly nonlocal gauge theory recently proposed in \cite{modestoLeslaw}. The higher-derivative structure, or quasi-polynomiality, of the action implies that the theory is super-renormalizable, and in particular only one-loop divergences survive in any dimension. Once a potential, at least cubic in the field strengths, is switched on, it is always possible to make the theory finite.
We evaluated the beta function for the special case of $D=4$, but the result can be generalized to any dimension, where a careful selection of the killer operators should be done. In short, in this paper {\em we have explicitly shown how to construct a finite theory for gauge bosons in $D=4$} (\ref{finita}). We have considered both cases of Abelian and non-Abelian gauge symmetry groups. The super-renormalizable structure does not change if we add a general extra matter sector that does not exhibit self-interactions. The minimal nonlocal theory without any killer operator shows a Landau pole for the running coupling constant, regardless of the special asymptotic polynomial structure. This is a universal property shared at least by all the unitary and weakly nonlocal gauge theories with asymptotically polynomial behaviour in the UV regime. However, the one-loop dressed propagator does not show any Landau pole in the UV regime, because the propagator is dominated by the nonlocal form factor and it is the nonlocal operator that controls the high energy physics. Moreover, we do not have any pole even in the IR, as opposed to the local theory, precisely because of the universal negative sign of the beta function. The outcome is a theory well defined at the perturbative level in both the IR and the UV regimes. The same result is achieved in the presence of sufficiently weakly coupled killer operators. In this paper we mostly considered pure gauge theories, but here we can achieve asymptotic freedom regardless of the number of fermionic fields, because it is the interaction between gauge bosons, due to the killer operators, that makes the theory asymptotically free. The generalization to extra dimensions is straightforward. In particular, the theory is finite in odd dimensions without the need to introduce any killer operator, as a mere consequence of dimensional regularization. The results can also be reproduced in cut-off regularization making use of Pauli-Villars operators \cite{Anselmi2}. {\em Acknowledgements ---} We are grateful to D. Anselmi for very useful discussions in quantum field theory.
\section{Introduction} In this paper we shall deal with the problem of constructing {\it compactly supported radial} functions $\Phi$ on $\R^d$ such that the symmetric kernels $\Phi(\bx-\by)$ could serve as reproducing kernels for Sobolev spaces under appropriate inner products. Due to advantageous aspects in practical applications, the problem has become an important issue in various fields of Mathematics including the theory of interpolations, spatial statistics and machine learning. In their pioneering work \cite{AK}, N. Aronszajn and K. T. Smith introduced the Sobolev space $\,H^\delta(\R^d)\,$ of order $\,\delta>0\,$ as the space of Bessel potentials defined by convolutions $\,(G_{\delta/2}\ast u)(\bx),\,$ where $\,u\in L^2(\R^d)\,$ and \begin{equation}\label{G1} G_\delta(\bx) = \frac{1}{2^{\delta-1 + \frac d2}\,\pi^{\frac d2}\, \Gamma(\delta)}\,K_{\delta-\frac d2}(|\bx|) |\bx|^{\delta - \frac d2}. \end{equation} As usual, $\,|\bx| = \sqrt{\bx\cdot\bx}\,$ denotes the Euclidean norm for each $\,\bx\in\R^d\,$ and $K_{\delta- d/2}$ stands for the modified Bessel function of order $\,\delta-d/2.$ Often referred to as Mat\'ern functions (see e.g. \cite{G1}), the Bessel potential kernels $G_\delta$ are integrable with the Fourier transforms \begin{align}\label{G2} \widehat{G_\delta}(\xi) =\int_{\R^d} e^{-i \xi\cdot\bx}\,G_\delta(\bx) d\bx= \left( 1+|\xi|^2\right)^{-\delta}. \end{align} As a consequence, the Sobolev space of order $\delta$ may be identified with \begin{equation*} H^\delta(\R^d) =\left\{ u\in L^2(\R^d) : \left( 1+|\cdot|^2\right)^{\delta/2}\,\widehat{u}\in L^2(\R^d)\right\}, \end{equation*} which becomes a Hilbert space under the inner product \begin{align*} \bigl(u,\,v\bigr)_{H^\delta(\R^d)} = (2\pi)^{-d}\int_{\R^d} \left( 1+|\xi|^2\right)^{\delta}\widehat{u}(\mathbf{\xi}) \overline{\,\widehat{v}(\mathbf{\xi})} \,d\mathbf{\xi}. \end{align*} In the case $\,\delta> d/2,\,$ N. Aronszajn and K. T. Smith noticed further that $\,H^\delta(\R^d)\subset C(\R^d)\,$ continuously and $H^\delta(\R^d)$ is a reproducing kernel Hilbert space with kernel $G_{\delta}(\bx - \by)$, that is, for every $\,u\in H^\delta(\R^d),\,\bx\in\R^d,\,$ \begin{align*} &{\rm(i)}\quad G_{\delta}(\cdot-\bx)\in H^\delta(\R^d)\quad\text{and}\\ &{\rm(ii)}\quad u(\bx) = \big(u, \,G_\delta(\cdot-\bx)\big)_{H^\delta(\R^d)} \end{align*} (we also refer to A. P. Calder\'on \cite{C} and the appendix for a brief additional description of the Bessel potential kernels $G_\delta$). In connection with the problem of our consideration, there is a standard framework on reproducing kernel Hilbert spaces of functions on $\R^d$ which resembles the structure of Sobolev spaces and reads as follows. For a given real-valued {\it positive definite} function $\,\Phi\in C(\R^d)\cap L^1(\R^d),\,$ if we define \begin{align*} & \mathcal{F}_\Phi(\R^d) = \left\{ u\in C(\R^d)\cap L^2(\R^d) : \int_{\R^d} \big|\widehat u(\mathbf{\xi})\big|^2\frac{d\xi}{\,\widehat{\Phi}(\mathbf{\xi})\,}<\infty\right\},\\ &\qquad\bigl(u,\,v\bigr)_{\mathcal{F}_\Phi(\R^d)} = (2\pi)^{-d}\int_{\R^d} \widehat{u}(\mathbf{\xi})\, \overline{\widehat{v}(\xi)}\,\frac{d\xi}{\,\widehat{\Phi}(\mathbf{\xi})\,}, \end{align*} then $\mathcal{F}_\Phi(\R^d)$ becomes a Hilbert space with a reproducing kernel $\Phi(\mathbf{x}-\mathbf{y})$ (see \cite{S1}, \cite{We} and also \cite{Ar} for more general properties).
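Before proceeding, the Bessel potential kernel \eqref{G1} and its Fourier transform \eqref{G2} are easy to verify numerically; the sketch below does so in dimension $d=1$ with $\delta=1$ (our own illustration, using the classical closed form $G_1(x) = e^{-|x|}/2$ in one dimension only as a sanity check through quadrature).
\begin{verbatim}
import numpy as np
from scipy.special import kv, gamma
from scipy.integrate import quad

def G(x, delta=1.0, d=1):
    # Bessel potential (Matern) kernel, Eq. (G1).
    r = abs(x)
    c = 2 ** (delta - 1 + d / 2) * np.pi ** (d / 2) * gamma(delta)
    return kv(delta - d / 2, r) * r ** (delta - d / 2) / c

# For d = 1, delta = 1 one has G(x) = exp(-|x|)/2, whose Fourier
# transform is 1/(1 + xi^2), matching Eq. (G2).
print(G(0.7), 0.5 * np.exp(-0.7))
xi = 2.0
ft = quad(lambda x: 2 * G(x) * np.cos(xi * x), 0, 50)[0]  # even integrand
print(ft, 1 / (1 + xi**2))
\end{verbatim}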
On account of this framework, we shall focus on constructing compactly supported radial functions $\,\Phi\in C(\R^d) \cap L^1(\R^d)\,$ which are positive definite and subject to the Fourier transform estimates \begin{equation}\label{G0} C_1 (1+|\xi|^2)^{-\delta}\le \widehat\Phi(\xi)\le C_2 (1+|\xi|^2)^{-\delta} \end{equation} for some $\,\delta> d/2\,$ and for some positive constants $\,C_1, C_2.$ An initial construction was carried out by H. Wendland (\cite{We1}, \cite{We}), who introduced a family of polynomials on $[0, \infty)$ defined by \begin{align}\label{G3} \left\{\begin{aligned} {P_{d, m}(r)} &{\,\,= c_m\int_{r}^1 \left(t^2-r^2\right)^{m-1} t (1-t)^{\left[\frac d2\right] + m +1} dt,}\\ {P_{d}(r)} &{\,\,= (1- r)^{\left[\frac d2\right] +1}\,,}\end{aligned}\right. \end{align} for $\,0\le r\le 1\,$ and zero otherwise, where $m$ is a positive integer and $c_m$ is a constant, and proved that $P_{d, m}(|\bx-\by|)$ is a reproducing kernel for the Sobolev space $\,H^{\frac{d+1}{2} + m}(\R^d)\,$ and so is $P_{d}(|\bx-\by|)$ for $\,H^{\frac{d+1}{2}}(\R^d)\,$ if $\,d\ge 3.$ In an attempt to cover the missing cases, R. Schaback (\cite{S2}) introduced a family of non-polynomial functions defined by \begin{equation}\label{G4} R_{d, m}(r) = c_m\int_{r}^1 \left(t^2-r^2\right)^{m- \frac 12} t (1-t)^{\frac d2 + m +1} dt \end{equation} for $\,0\le r\le 1\,$ and zero otherwise, where $m$ is a nonnegative integer, and proved that $R_{d, m}(|\bx-\by|)$ is a reproducing kernel for $\,H^{\frac d2 + m + 1}(\R^d)\,$ if $d$ is even (see S. Hubbert \cite{H} for computational aspects). In order to deal with fractional orders, A. Chernih and S. Hubbert (\cite{CH}) further generalized Wendland's functions in the form \begin{equation}\label{G5} S_{d, \alpha}(r) = c_\alpha \int_{r}^1 \left(t^2-r^2\right)^{\alpha-1} t(1-t)^{\frac {d+1}{2} +\alpha} dt \end{equation} for $\,0\le r\le 1\,$ and zero otherwise, where $\,\alpha>0\,$ and $c_\alpha$ is a constant, and proved that $S_{d, \alpha}(|\bx-\by|)$ is a reproducing kernel for $\,H^{\frac {d+1}{2} +\alpha}(\R^d).\,$ Our primary aim in the present paper is to obtain a family of compactly supported radial functions which provide reproducing kernels for the Sobolev spaces $H^\delta(\R^d)$ of any order $\,\delta>d/2\,$ in a unified manner and thereby cover all of the missing cases left open in this subject. The method of our construction will be based on a new class of oscillatory integral transforms, to be called Hankel-Schoenberg transforms hereafter, which incorporate Fourier transforms of radial functions and classical Hankel transforms. As these transforms appear useful in any situation where Fourier transforms of radial functions are involved, our secondary purpose is to bring Hankel-Schoenberg transforms to attention and establish their basic properties. In consideration of Euler's binomial densities as possible candidates, we shall begin by evaluating their Hankel-Schoenberg transforms in terms of generalized hypergeometric functions, whose asymptotic behaviors are well investigated, e.g., by Y. L. Luke \cite{L}. We then select those binomial densities whose Hankel-Schoenberg transforms are strictly positive by the criteria of J. Fields and M. Ismail \cite{FI} and apply a continuous version of dimension walks to obtain the desired classes of functions.
As will be presented in detail, we shall exhibit three different classes of compactly supported functions which provide reproducing kernels for the Sobolev spaces $H^\delta(\R^d)$ of order $$\,\delta = \frac{d+1}{2}, \quad\delta>\max\,\left(1, \,\frac d2\right), \quad \delta> \frac d2\,,$$ separately. One of these classes includes the compactly supported functions of Wendland, Schaback, Chernih and Hubbert as special instances. A distinctive feature of our construction is that the Fourier transform is explicit, which enables us to specify the inner product with respect to which the reproducing property holds. As an illustration, it will be shown that the function $\,A_2(x) = (1 - |x|)_+^2\,, x\in\R,\,$ has the Fourier transform $$\widehat{A_2}(\xi) = \frac{4}{\xi^2}\left( 1- \frac{\sin \xi}{\xi}\right),$$ which is strictly positive and behaves like the Cauchy-Poisson kernel, and that $A_2(x-y)$ is a reproducing kernel for $H^1(\R)$ under the inner product $$\bigl(u,\,v\bigr)_{A_2(\R)} = \frac{1}{8\pi}\,\int_{-\infty}^\infty \widehat{u}(\mathbf{\xi})\, \overline{\widehat{v}(\xi)}\,\frac{\xi^3\,d\xi}{\,\xi - \sin \xi\,}.$$ \medskip \paragraph{Notation.} Throughout the paper we shall use the following notation. \begin{itemize} \item The Euler beta function will be denoted by $$B(a, \,b) = \int_0^1 t^{a-1} (1-t)^{b-1} dt\qquad(a>0, \,b>0).$$ \item The generalized hypergeometric functions will be denoted by \begin{equation*} {}_pF_q\left(a_1, \cdots, a_p;\,b_1, \cdots, b_q;\,r\right) =\sum_{k=0}^\infty\frac{\left(a_1\right)_k\cdots\left(a_p\right)_k}{k! \left(b_1\right)_k\cdots\left(b_q\right)_k}\,r^k \end{equation*} in which $\,(a)_k = a(a+1)\cdots (a+k-1)\,$ if $\,k\ge 1\,$ and $\,(a)_0 = 1\,$ for any real number $a$. \item The positive part of $\,x\in\R\,$ will be denoted by $\,x_+ = \max (x,\,0).$ \item We shall write $\,f(x) \approx g(x)\,$ for $\,x\in X\,$ for two real-valued functions $\,f, g\,$ defined on $X$ to indicate that there exist positive constants $\,c_1, c_2\,$ such that $\,c_1\, g(x) \le f(x) \le c_2\, g(x)\,$ for all $\,x\in X.$ \end{itemize} \bigskip \section{Positive definite functions} We recall that a function $\Phi$ on $\R^d$ is said to be {\it positive semi-definite} if \begin{align*} \sum_{j=1}^N\sum_{k=1}^N \,\Phi\left(\bx_j - \bx_k\right) z_j \overline{z_k}\ge 0 \end{align*} for any choice of $\,z_1, \cdots, z_N\in\mathbb{C}\,$ and $\, \bx_1, \cdots, \bx_N\in\R^d.\,$ If equality holds only when $\,z_1=\cdots = z_N=0,\,$ $\Phi$ is said to be {\it positive definite}. A well-known theorem of S. Bochner states that a continuous function $\Phi$ is positive semi-definite if and only if it is the Fourier transform of some finite nonnegative Borel measure $\mu$ on $\R^d$. If the carrier of $\mu$ contains an open set, then $\,\Phi = \widehat{\mu}\,$ is positive definite.
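To see this criterion at work on the function $A_2$ quoted in the introduction, we record the elementary computation behind its Fourier transform (a routine verification added for the reader's convenience). Since $A_2$ is even, two integrations by parts give $$\widehat{A_2}(\xi) = 2\int_0^1 (1-x)^2\cos(\xi x)\,dx = \frac{4}{\xi}\int_0^1 (1-x)\sin(\xi x)\,dx = \frac{4}{\xi^2}\left( 1- \frac{\sin \xi}{\xi}\right),$$ with the limiting value $\,\widehat{A_2}(0) = 2/3.\,$ The strict inequality $\,\sin\xi<\xi\,$ for $\,\xi>0\,$ then shows that $\widehat{A_2}$ is strictly positive on all of $\R$, so that the positive definiteness of $A_2$ follows from the criterion just stated.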
In particular, the Fourier transform of a nonnegative function $\,f\in L^1(\R^d)\,$ is positive definite if the essential support of $f$ contains an open set (see \cite{We}).\footnote{The carrier of a nonnegative Borel measure $\mu$ on $\R^d$ is defined to be $$\R^d\setminus \bigcup\left\{O\subset\R^d : O \,\,\text{is open and}\,\, \mu(O) = 0\right\}.$$ If $\mu$ is absolutely continuous with respect to Lebesgue measure, $\,d\mu(\bx) = f(\bx)\, d\bx\,$ with a nonnegative $\,f\in L^1(\R^d)\,,$ then the carrier of $\mu$ equals the essential support of $f$, the complement of the largest open subset of $\R^d$ on which $\,f = 0\,$ almost everywhere.} A univariate function $\phi$ on $[0, \infty)$ is said to be {\it positive semi-definite or positive definite on $\R^d$} if the radial extension $\,\bx\mapsto \phi(|\bx|), \,\bx\in\R^d,\,$ is positive semi-definite or positive definite in the above sense. To state sufficient or necessary conditions in terms of Fourier transforms, we shall introduce the following kernels, in greater generality than is strictly needed, which will serve as the kernels of the Hankel-Schoenberg transforms to be studied later. \smallskip \begin{definition}\label{def1} For $\,\lambda>-1,\,$ define $\,\Omega_\lambda : \R\to \R\,$ by \begin{align*} \Omega_\lambda(t) &= \Gamma(\lambda+1)\sum_{k=0}^\infty\frac{(-1)^k}{k!\,\Gamma(\lambda +k +1)}\,\left(\frac t 2\right)^{2k}\\ &=\Gamma(\lambda+1)\left(t/2\right)^{-\lambda} J_\lambda(t), \end{align*} where $J_\lambda$ denotes the Bessel function of the first kind of order $\lambda$. \end{definition} \smallskip In the special case $\,\lambda = (d-2)/2\,,$ with $\,d\ge 2\,$ a positive integer, $\Omega_\lambda$ arises on consideration of the Fourier transform of the area measure $\sigma$ on the unit sphere $\s^{d-1}$ of the Euclidean space $\R^d$ in the form \begin{equation*} \frac{1}{\left|\s^{d-1}\right|}\, \int_{\s^{d-1}} e^{-i \xi\cdot \bx}\,d\sigma(\bx) =\Omega_{\frac{d-2}{2}}(|\xi|). \end{equation*} An immediate consequence is that if $F$ is integrable and radial with $\,F(\bx) = f(|\bx|)\,$ for some univariate function $f$ on $[0, \infty)$, then its Fourier transform is easily evaluated as \begin{align*} \widehat{F}(\xi) &= \int_0^\infty \left(\int_{\s^{d-1}} e^{-i t\xi\cdot \bx}\,d\sigma(\bx) \right) f(t) t^{d-1} dt\\ &= |\s^{d-1}|\int_0^\infty \Omega_{\frac{d-2}{2}}(|\xi|t) f(t) t^{d-1} dt. \end{align*} Since it is simple to find $\,\Omega_{-1/2}(t) =\cos t\,$ by definition, this formula continues to hold true for $\,d=1\,$ if we interpret $\,|\s^{0}| =2.$ In summary, we have the following, which is substantially due to I. J. Schoenberg \cite{Sc} (see also \cite{GMO}, \cite{MS}, \cite{Stw}, \cite{We}). \smallskip \begin{proposition}\label{fourier} For $\,f\in L^1\left([0, \infty), \,t^{d-1} dt\right),\,$ put \begin{equation*} \phi(r) = \int_0^\infty\Omega_{\frac{d-2}{2}}\left(rt\right) f(t) t^{d-1} dt\qquad(r\ge 0). \end{equation*} \begin{itemize} \item[\rm(i)] If $\,F(\bx) = f(|\mathbf{x}|),\,\mathbf{x}\in\R^d,\,$ then the Fourier transform of $F$ is given by \begin{equation*}\label{F2} \widehat{F} (\xi) = \frac{2\pi^{d/2}}{\Gamma(d/2)}\,\phi(|\mathbf{\xi}|). \end{equation*} \item[\rm(ii)] If $f$ is nonnegative and the essential support of $f$ contains an open interval, then $\phi$ is positive definite on $\R^d$. \end{itemize} \end{proposition} \smallskip \begin{proof} Part (i) is what we have mentioned above.
Concerning part (ii), if the essential support of a nonnegative function $f$ contains an open interval, say, $\,I= (a, b),\,$ then the essential support of $\,F(\bx) = f(|\bx|)\,$ contains the open annulus $\,\{\bx\in\R^d: a<|\bx|<b\}\,$ and the assertion follows. \end{proof} \smallskip \begin{remark} The integral defined in the statement is often called the $d$-dimensional radial Fourier transform and formally denoted as \begin{equation}\label{P2} \F_d(f)(r) = \frac{2\pi^{d/2}}{\Gamma(d/2)}\, \int_0^\infty\Omega_{\frac{d-2}{2}}\left(rt\right) f(t) t^{d-1} dt\qquad(r\ge 0). \end{equation} As the kernel $\Omega_{\frac{d-2}{2}}$ will be shown to be uniformly bounded, the integral makes sense on the class of finite Borel measures on $[0, \infty)$. Indeed, Schoenberg's original theorem states that a continuous function $\phi$ on $[0, \infty)$ is positive semi-definite on $\R^d$ if and only if $$\phi(r) = \int_0^\infty\Omega_{\frac{d-2}{2}}\left(rt\right) d\nu(t)\qquad(r\ge 0)$$ for some finite nonnegative Borel measure $\nu$ on $[0, \infty)$. \end{remark} \bigskip \section{Hankel-Schoenberg transforms} As is classical (see \cite{Wa} for instance), the Hankel transforms of a function $\,f\in L^1\left([0, \infty),\,\sqrt t dt\right)\,$ refer to integrals of the type $$\int_0^\infty J_\lambda(rt) f(t) t dt\qquad(\lambda\ge -1/2).$$ As a generalization of both Fourier transforms of radial functions and Hankel transforms, we shall consider the following integral transforms. \smallskip \begin{definition} The Hankel-Schoenberg transform of order $\,\lambda\ge -1/2\,$ of a Lebesgue measurable function $f$ on $[0, \infty)$ is defined to be \begin{equation*} \phi(r) = \int_0^\infty\Omega_\lambda(rt) f(t) dt \qquad(r\ge 0) \end{equation*} whenever the integral on the right side converges. \end{definition} \smallskip The definition indeed makes sense under various conditions on $f$. For this matter, we shall begin by investigating the kernels. \medskip \subsection{Kernels $\Omega_\lambda$} In many aspects, each $\Omega_\lambda$ is similar in nature to the cardinal sine function $$\frac{\sin t}{t} = \prod_{k=1}^\infty \left( 1- \frac{t^2}{k^2\pi^2}\right)\,$$ which coincides with the special case $\,\lambda = 1/2.$ To be more specific, we list the following properties of the $\Omega_\lambda$'s, which are deducible from the theory of the Bessel functions $J_\lambda$ in a straightforward manner (see \cite{AS}, \cite{E}, \cite{Wa}). \begin{itemize} \item[(P1)] Each $\Omega_\lambda$ is of class $C^\infty(\R)$, even and uniformly bounded by $\,1=\Omega_\lambda(0).\,$ The kernels $\Omega_\lambda$ satisfy the Bessel-type differential equations \begin{equation*} {\Omega_\lambda}''(t) + \frac{2\lambda +1}{t}\,{\Omega_\lambda}'(t) + \Omega_\lambda(t) = 0 \end{equation*} as well as the Lommel-type recurrence relations \begin{align}\label{P1} &\qquad {\Omega_\lambda}'(t) = - \frac{t}{\,2(\lambda +1)\,}\,\Omega_{\lambda+1}(t),\nonumber\\ &\Omega_\lambda(t) -\Omega_{\lambda-1}(t) = \frac{t^2}{\,4\lambda(\lambda+1)\,}\,\Omega_{\lambda+2}(t). \end{align} \item[(P2)] An asymptotic formula due to Hankel states that as $\,t\to\infty,$ \begin{equation*} \Omega_\lambda(t) = \frac{\Gamma(\lambda+1)}{\sqrt{\pi}} \left(\frac t2\right)^{-\lambda -1/2} \left[\cos\left(t - \frac{(2\lambda+1)\pi}{4}\right) + O\left( t^{-1}\right)\right]. \end{equation*} \item[(P3)]$\Omega_\lambda$ is oscillatory with an infinity of simple zeros.
Arranging the positive zeros of $J_\lambda$ in the ascending order $\,0<j_{\lambda, 1} < j_{\lambda, 2} < \cdots\,,$ $\Omega_\lambda$ can be represented as the infinite product \begin{equation*} \Omega_\lambda(t) = \prod_{k=1}^\infty \left( 1- \frac{t^2}{j_{\lambda, k}^2}\right). \end{equation*} \item[(P4)] Due to Liouville, $\Omega_\lambda$ is expressible in finite terms by algebraic and trigonometric functions if and only if $2\lambda$ is an odd integer. Indeed, \begin{equation} \Omega_{-1/2}(t) = \cos t\,,\quad \Omega_{1/2}(t) = \frac{\sin t}{t} \end{equation} and the recurrence formula \eqref{P1} may be used to express $\Omega_{n + 1/2}$, with $n$ an integer, in finite terms by elementary functions. For example, \begin{align} \Omega_{3/2}(t) &= 3\left(\frac{\sin t - t\cos t}{t^3}\right),\nonumber\\ \Omega_{5/2}(t) &= 15\left[\frac{( 3- t^2)\sin t - 3t\cos t}{t^5}\right]. \end{align} \item[(P5)] For $\,\lambda>-1/2,\,$ Poisson's integral reads \begin{align*} \Omega_\lambda(t) = \frac{2}{B\left(\lambda + 1/2\,,\,1/2\right)}\, \int_0^1 \cos(t s)\, (1-s^2)^{\lambda -\frac 12}\,ds. \end{align*} \end{itemize} Owing to the boundedness and asymptotic behavior of $\Omega_\lambda$ described in (P1), (P2), it is evident that the Hankel-Schoenberg transform of order $\lambda$ is well defined on the class $\,L^1([0, \infty))\,$ or $\,L^1\left( [0, \infty),\,t^{-\lambda-1/2} dt\right).$ \medskip \subsection{Inversion formula} The Hankel-Watson inversion theorem (\cite{Wa}) states that if $\,\lambda\ge -1/2\,$ and $\,f(s)\sqrt s\,$ is integrable on $[0, \infty)$, then \begin{equation*} \int_0^\infty J_\lambda(rt)\left[\int_0^\infty J_\lambda(rs) f(s)s ds\right] rdr = \frac{\,f(t+0) + f(t-0)\,}{2} \end{equation*} at every $\,t>0\,$ such that $f$ is of bounded variation in a neighborhood of $t$. As it is straightforward to express Hankel-Schoenberg transforms in terms of Hankel transforms, an obvious modification yields the following. \smallskip \begin{proposition}\label{inversion} {\rm (Inversion)} For $\,\lambda\ge -1/2,\,$ assume that \begin{equation}\label{invc1} \int_0^\infty |f(t)| t^{-\lambda-1/2}\,dt <\infty. \end{equation} Then the following holds for every $\,t>0\,$ at which $f$ is continuous: \begin{align*} \left\{\aligned &{\phi(r) = \int_0^\infty \Omega_\lambda(rt) f(t) dt\quad(r>0)\quad \text{implies}}\\ &{f(t) = \frac{t^{2\lambda+1}}{4^\lambda\left[\Gamma(\lambda+1)\right]^2}\, \int_0^\infty \Omega_\lambda(rt)\, \phi(r) r^{2\lambda+1}dr.}\endaligned\right. \end{align*} \end{proposition} \smallskip \begin{remark} In the case $\,\lambda = (d-2)/2\,,$ this formula may be considered as an alternative to the Fourier inversion theorem for radial functions. Useful in the present circumstance is the inversion of $\,f\in L^1([0, 1])\cap C((0, 1])\,$ \begin{equation}\label{HW1} \left\{\aligned &{\phi(r) = \int_0^1 \Omega_{\frac{d-2}{2}}(rt) f(t) t^{d-1}\, dt\quad(r>0)}\quad \Rightarrow\\ &{f(t) = \frac{1}{2^{d-2}\left[\Gamma(d/2)\right]^2}\, \int_0^\infty \Omega_{\frac{d-2}{2}}(rt)\, \phi(r) r^{d-1}\,dr}\quad (t>0).\endaligned\right. \end{equation} \end{remark} \medskip \subsection{Order walks} As the radial Fourier transforms of different dimensions are known to be interrelated by certain {\it dimension walk} transforms, the Hankel-Schoenberg transforms of different orders turn out to be related to each other.
\smallskip \begin{lemma}\label{lemma4} For $\,\nu>-1\,$ and $\,\alpha>0,\,\beta>0,\,$ we have \begin{equation*} {}_1F_2\left(\beta; \alpha +\beta,\,\nu +1; -\frac{r^2}{4}\right) =\int_0^\infty\,\Omega_\nu(rt) f(t) dt \qquad(r\ge 0), \end{equation*} where $f$ is the probability density on $[0, \infty)$ defined by \begin{equation*} f(t) = \frac{2}{B(\alpha, \,\beta)}\,(1-t^2)_+^{\alpha - 1}\,t^{2\beta-1}\,. \end{equation*} \end{lemma} \smallskip \begin{proof} An elementary computation shows \begin{align*} \int_0^\infty t^{2k} f(t) dt = \frac{(\beta)_k}{(\alpha+\beta)_k}\,,\quad k=0, 1, 2, \cdots, \end{align*} and integrating termwise yields \begin{align*} \int_0^\infty\,\Omega_\nu(rt) f(t) dt &= \sum_{k=0}^\infty \frac{(-1)^k}{k!\, (\nu +1)_k}\left(\frac r2\right)^{2k}\,\int_0^\infty t^{2k} f(t) dt\\ &=\sum_{k=0}^\infty \frac{(\beta)_k}{k!\, (\alpha +\beta)_k(\nu+1)_k}\left(-\frac {r^2}{4}\right)^{k}. \end{align*} \end{proof} \smallskip Hankel-Schoenberg transforms of different orders are interrelated in the following way, which reveals how Hankel-Schoenberg transforms generalize the radial Fourier transforms defined in \eqref{P2}. \smallskip \begin{theorem}\label{orderwalk} Let $d$ be a positive integer and $\,\lambda>d/2 -1.$ \begin{itemize} \item[\rm(i)] For each $\,r\ge 0,\,$ we have \begin{equation*} \Omega_{\lambda}(r) = \frac{2}{B\left(\lambda +1 -\frac d2,\,\frac d2\right)} \int_0^\infty\Omega_{\frac{d-2}{2}}(rt)(1-t^2)_+^{\lambda -\frac d2}\, t^{d-1} dt. \end{equation*} \item[\rm(ii)] If $\, f\in L^1([0, \infty)),\,$ then for each $\,r\ge 0,$ \begin{align*} &\int_0^\infty \,\Omega_{\lambda}(rt) f(t) dt = \int_0^\infty \,\Omega_{\frac{d-2}{2}}(rt)\,I_\lambda(f)(t) t^{d-1} dt,\quad\text{where}\\ &\quad I_\lambda(f)(t) = \frac{2}{B\left(\lambda +1 -\frac d2,\,\frac d2\right)} \int_t^\infty\left( s^2 - t^2\right)^{\lambda - \frac d2}\,s^{-2\lambda} f(s) ds. \end{align*} Moreover, $\,I_\lambda(f) \in L^1\left([0, \infty), \,t^{d-1} dt\right)\,$ with $$\int_0^\infty \left|I_\lambda(f)(t)\right| t^{d-1} dt \le \int_0^\infty \left|f(t)\right| dt.$$ \end{itemize} \end{theorem} \smallskip \begin{proof} The special choices of $\,\nu = d/2 -1, \,\alpha = \lambda +1 - d/2, \,\beta = d/2\,$ in Lemma \ref{lemma4} give part (i) upon noticing \begin{equation*} \Omega_\lambda(r) = {}_0 F_1\left(\lambda +1;\,-\frac{r^2}{4}\right). \end{equation*} As for part (ii), we first notice \begin{align*} &\int_0^\infty \int_t^\infty \left( s^2 - t^2\right)^{\lambda - \frac d2}\,s^{-2\lambda} |f(s)| ds\, t^{d-1} dt\\ &\qquad =\int_0^\infty \left[\int_0^s \left( s^2 - t^2\right)^{\lambda - \frac d2}\,t^{d-1} dt\right] s^{-2\lambda} |f(s)| ds \\ &\qquad=\int_0^\infty \left[\int_0^1 \left( 1- u^2\right)^{\lambda - \frac d2}\,u^{d-1} du\right] |f(s)| ds \\ &\qquad=\frac{B\left(\lambda +1 -\frac d2,\,\frac d2\right)}{2}\,\int_0^\infty |f(s)| ds, \end{align*} whence $\,I_\lambda(f) \in L^1\left([0, \infty), \,t^{d-1} dt\right)\,$ and the last estimate follows. The stated formula is a simple consequence of part (i) on interchanging the order of integration, which is legitimate due to Fubini's theorem. \end{proof} \smallskip \begin{remark} If $\,d=1,\,$ part (i) reduces to Poisson's integral (P5). The so-called descending-dimension walks of radial Fourier transforms are special instances of this theorem.
In fact, if we take $\,\lambda = (d+k-2)/2,\,$ with $\,d, k\,$ positive integers, and write $\,I_\lambda = I_k\,$ for simplicity, then the formula of part (ii) applied to the function $\,f(t) t^{d+k-1}\,$ yields \begin{align} &\int_0^\infty \,\Omega_{\frac{d+k-2}{2}}(rt) f(t) t^{d+k-1} dt = \int_0^\infty \,\Omega_{\frac{d-2}{2}}(rt)\,I_k(f)(t) t^{d-1} dt,\nonumber\\ &\qquad I_k(f)(t) = \frac{2}{B\left(\frac k2,\,\frac d2\right)} \int_t^\infty\left( s^2 - t^2\right)^{\frac k2-1}\,s f(s) ds. \end{align} In the notation of \eqref{P2}, it reads \begin{equation} \F_{d+k}(f) (r) = \frac{\pi^{k/2}\,\Gamma\left(\frac d2\right)}{\Gamma\left(\frac{d+k}{2}\right)} \,\F_{d}\left(I_k(f)\right)(r), \end{equation} which expresses the $(d+k)$-dimensional radial Fourier transform of $f$ as the $d$-dimensional radial Fourier transform of $I_k(f)$. We refer to \cite{S2}, \cite{SW} and \cite{We} for more detailed results on dimension walks. \end{remark} \bigskip \section{Hankel-Schoenberg transforms of binomial densities and asymptotic properties} As the first step of our construction, we shall consider all possible binomial densities and evaluate their Hankel-Schoenberg transforms. \smallskip \begin{lemma}\label{lemma6} Let $\,\lambda>-1\,$ and $\,\alpha>0,\,\beta>0.$ For each $\,r\ge 0,$ \begin{equation*} \int_0^\infty\,\Omega_\lambda(rt) p(t) dt = {}_2F_3\left(\frac{\beta}{2},\, \frac{\beta+1}{2}; \frac{\alpha +\beta}{2},\, \frac{\alpha +\beta+1}{2},\,\lambda +1; -\frac{r^2}{4}\right), \end{equation*} where $p$ is the probability density on $[0, \infty)$ defined by \begin{equation*} p(t) = \frac{1}{B(\alpha, \beta)}\,(1-t)_+^{\alpha - 1}\,t^{\beta-1}\,. \end{equation*} \end{lemma} \smallskip \begin{proof} By applying Legendre's duplication formula for the gamma function repeatedly, it is elementary to compute \begin{align*} \int_0^\infty t^{2k} p(t) dt = \frac{B(\alpha, 2k+\beta)}{B(\alpha, \beta)} =\frac{\left(\frac{\beta}{2}\right)_k \left(\frac{\beta+1}{2}\right)_k} {\left(\frac{\alpha +\beta}{2}\right)_k \left(\frac{\alpha +\beta+1}{2}\right)_k} \end{align*} for $\,k=0, 1, 2, \cdots,\,$ and integrating termwise yields the stated result. \end{proof} \smallskip After reducing the generalized hypergeometric functions of Lemma \ref{lemma6} to the ones of type ${}_1F_2$, we shall investigate their asymptotic properties, for which our analysis will be based on the following lemma, which has been studied by many authors, including R. Askey and H. Pollard \cite{AP} and J. Steinig \cite{St}, and which culminated in the present form in the work of J. Fields and M. Ismail \cite{FI}. \smallskip \begin{lemma}\label{lemma7} For $\,\rho>0,\,\nu>0,\,$ put \begin{align*} U(\rho, \,\nu\,; x) = {}_1 F_2\left(\nu\,;\, \rho\nu, \,\rho\nu + \frac 12\,;\,-\frac{x^2}{4}\right) \qquad(x\in\R). \end{align*} \begin{itemize} \item[\rm(i)] If $\,\rho=1,\,$ it is identical to the function $\,\Omega_{\nu-1/2}.$ \item[\rm(ii)] If $\,\rho\ne 1,\,$ then as $\,|x|\to\infty,$ \begin{align*} &U(\rho, \,\nu\,; x) = \frac{\Gamma(2\rho\nu)}{\,\Gamma(2\rho\nu - 2\nu)\,}\, |x|^{-2\nu}\Big[1 + O\left(|x|^{-2}\right)\Big]\\ &\quad + \frac{\Gamma(2\rho\nu)}{\,2^{\nu-1}\Gamma(\nu)\,}\,|x|^{-2\nu\left(\rho-\frac 12\right)} \biggl[\cos\left(|x|-\rho\nu\pi + \frac{\nu\pi}{2}\right) + O\left(|x|^{-1}\right)\biggr].
\end{align*} \item[\rm(iii)] If either $\,\rho\ge \frac 32,\,\nu>1\,$ or $\,\rho\ge 2,\,\nu>0,\,$ then $$ U(\rho, \,\nu\,; x) \,\approx\,(1+ |x|)^{-2\nu}\quad\text{for}\quad x\in\R.$$ In particular, $\, U(\rho, \,\nu\,; x)>0\,$ for every $\,x\in\R.$ \end{itemize} \end{lemma} \bigskip \section{Askey's class for $H^{\,\frac{d+1}{2}}(\R^d)$} In the special case $\,\delta = (d+1)/2,\,$ the Bessel potential kernel $G_\delta$, which gives a reproducing kernel for $\,H^{\,\frac{d+1}{2}}(\R^d)\,$ under the usual inner product, coincides with the exponential of $-|\bx|$ and its Fourier transform is nothing but the Cauchy-Poisson kernel (see appendix). To be precise, we have \begin{equation*} G_{\frac{d+1}{2}}(\bx) = \frac{1}{2^d \pi^{\frac{d-1}{2}}\Gamma\left(\frac{d+1}{2}\right)} \,e^{-|\bx|}\,,\quad \widehat{G_{\frac{d+1}{2}}}(\xi) = (1+|\xi|^2)^{-\frac{d+1}{2}}. \end{equation*} A large class of compactly supported functions, often referred to as Askey's class (\cite{As}, \cite{G2}, \cite{We}), turns out to be available as reproducing kernels under suitable inner products in this case as well. \smallskip \begin{theorem}\label{askey} For a positive integer $d$, assume that $\alpha$ satisfies $\,\alpha\ge \frac{d+1}{2}\,$ if $\,d\ge 2\,$ and $\,\alpha\ge 2\,$ if $\,d=1.\,$ Define \begin{align*} \Lambda_{d, \alpha}(r) = {}_1 F_2\left(\frac{d+1}{2}\,; \frac{d+\alpha+1}{2}, \frac{d+\alpha +2}{2};\, -\frac{\,r^2}{4}\right)\qquad(r\ge 0). \end{align*} \begin{itemize} \item[\rm(i)] $\Lambda_{d, \alpha}$ is positive definite on $\R^d$ with \begin{align*} \Lambda_{d, \alpha}(r) = \frac{1}{B(\alpha+1, \,d)}\,\int_0^\infty \Omega_{\frac{d-2}{2}}(rt)\,(1-t)_+^{\alpha}\, t^{d-1} dt\qquad(r\ge 0). \end{align*} \item[\rm{(ii)}] $\,0< \Lambda_{d, \alpha}(r)\le 1\,$ for each $\,r\ge 0\,$ and \begin{align*} \Lambda_{d, \alpha}(r) &= \frac{\Gamma(d+\alpha+1)}{\,\Gamma(\alpha)\,}\, r^{-d-1}\Big[ 1 + O\left(r^{-2}\right)\Big]\\ &+ \frac{\Gamma(d+\alpha+1)}{\,2^{\frac{d-1}{2}}\Gamma\left(\frac{d+1}{2}\right)} \,r^{-\frac{(d+2\alpha+1)}{2}} \left[\cos\left(r-\frac{(d+2\alpha+1)\pi}{4}\right)+ O\left(r^{-1}\right)\right] \end{align*} as $\,r\to\infty.$ Moreover, $\,\Lambda_{d, \alpha}(r) \,\approx\, \left(1 + r\right)^{-d-1}\,$ for $\, r\ge 0.$ \item[\rm(iii)] $\,\Lambda_{d, \alpha}\in C([0, \infty))\cap L^1\left([0, \infty), \,r^{d-1} dr\right)\,$ and \begin{equation*} (1-t)_+^{\alpha} = \frac{2\Gamma(\alpha+1)\Gamma\left(\frac{d+1}{2}\right)} {\sqrt\pi\,\Gamma(\alpha + d+1)\Gamma\left(\frac d2\right)}\, \int_0^\infty \Omega_{\frac{d-2}{2}}(rt)\Lambda_{d, \alpha}(r) r^{d-1} dr \quad(t\ge 0). \end{equation*} As a consequence, the function $\,t\mapsto (1-t)^\alpha_+\,$ is positive definite on $\R^d$. \end{itemize} \end{theorem} \smallskip \begin{proof} The integral representation of part (i) corresponds to the special case of Lemma \ref{lemma6} with $\,\lambda = \frac {d-2}{2}\,,\,\beta=d\,$ for which we replace $\alpha$ by $\alpha+1$. The positive definiteness of $\Lambda_{d, \alpha}$ is an immediate consequence of Proposition \ref{fourier}. As to part (ii), while the uniform bound $\,\left|\Lambda_{d, \alpha}(r)\right|\le 1\,$ is a consequence of (P1), the rest follows from Lemma \ref{lemma7} upon expressing $$\Lambda_{d, \alpha}(r) = U\left(\frac{d+\alpha+1}{d+1}\,,\,\frac{d+1}{2}\,;\,r\right).$$ As to part (iii), the property $\,\Lambda_{d, \alpha}\in C([0, \infty))\cap L^1\left([0, \infty), \,r^{d-1} dr\right)\,$ is obvious.
For $\,t>0,\,$ the stated integral representation follows by inverting the formula of part (i) in accordance with Proposition \ref{inversion}, particularly with \eqref{HW1}. By continuity, it continues to hold true for $\,t=0.$ Finally, the positive definiteness follows again from Proposition \ref{fourier}. \end{proof} On consideration of radial extensions, we obtain the following, in which $$\gamma_{d, \alpha} = \frac{2^d \pi^{\frac{d-1}{2}}\Gamma(\alpha+1)\Gamma\left(\frac{d+1}{2}\right)}{\Gamma(\alpha+d+1)}.$$ \smallskip \begin{corollary}\label{askey1} For $\,\alpha\ge \frac{d+1}{2}\,$ if $\,d\ge 2\,$ and $\,\alpha\ge 2\,$ if $\,d=1,\,$ put \begin{equation*} A_\alpha(\bx) = (1-|\bx|)^\alpha_+\qquad(\bx\in\R^d). \end{equation*} Then each $A_\alpha$ is continuous and positive definite with \begin{align*} \widehat{A_\alpha}(\xi) = \gamma_{d, \alpha}\,\Lambda_{d, \alpha}(|\xi|)\qquad(\xi\in\R^d). \end{align*} As a consequence, $\,A_\alpha(\bx-\by)$ is a reproducing kernel for the Sobolev space $\,H^{\,\frac{d+1}{2}}(\R^d)\,$ with respect to the inner product defined by \begin{align*} \bigl(u,\,v\bigr)_{A_\alpha(\R^d)} = (2\pi)^{-d}\cdot\frac{1}{\gamma_{d, \alpha}}\int_{\R^d} \frac{\widehat{u}(\mathbf{\xi})\overline{\,\widehat{v}(\mathbf{\xi})} \,d\mathbf{\xi}}{ \Lambda_{d, \alpha}\left(|\mathbf{\xi}|\right)}\,. \end{align*} \end{corollary} \smallskip \begin{remark} If we put $\,\Lambda_{d, \alpha}(\bx) = \Lambda_{d, \alpha}(|\bx|),\,\bx\in\R^d,\,$ for simplicity, then the inversion formula of part (iii) in Theorem \ref{askey} shows $$\widehat{\Lambda_{d, \alpha}}(\xi) = \frac{\pi^{\frac{d+1}{2}}\Gamma(\alpha +d +1)}{\Gamma(\alpha+1)\Gamma\left(\frac{d+1}{2}\right)} \,A_\alpha(\xi)\qquad(\xi\in\R^d).$$ Thus $\Lambda_{d, \alpha}$ is an example of a band-limited function, that is, a member of the class of $L^2$ functions whose Fourier transforms are compactly supported. \end{remark} In the odd-dimensional case, $\Lambda_{d, \alpha}$ is expressible in terms of algebraic and trigonometric functions if $\alpha$ happens to be an integer. As illustrations, we present the following examples: \begin{itemize} \item[(a)] In the case $\,d=1,$ the formula of part (i) in Theorem \ref{askey} reduces to \begin{equation*} \Lambda_{1, \alpha}(r) = (\alpha+1)\,\int_0^1 \cos(rt)(1-t)^\alpha dt \quad(\alpha\ge 2). \end{equation*} With the minimal choice $\,\alpha =2\,$ and with $\,\alpha=3,\,$ we have \begin{align*} \Lambda_{1, 2}(r) &= \frac{6}{\,r^2\,}\,\left( 1- \frac{\sin r}{r}\right)\,,\\ \Lambda_{1, 3}(r) &= \frac{12}{\,r^2\,}\left\{ 1- \left[\frac{\sin (r/2)}{r/2}\right]^2\right\} \end{align*} in which each formula must be understood as the limiting value at $\,r=0\,$. \item[(b)] In the case $\,d=3,$ the formula of part (i) in Theorem \ref{askey} reduces to \begin{equation*} \Lambda_{3, \alpha}(r) = \frac{(\alpha+3)(\alpha+2)(\alpha+1)}{2r}\,\int_0^1 \sin(rt)(1-t)^\alpha t dt \quad(\alpha\ge 2). \end{equation*} With the minimal choice $\,\alpha = 2,\,$ we have \begin{align*} \Lambda_{3, 2}(r) = \frac{60}{\,r^4\,} \left( 2 + \cos r - 3 \,\frac{\sin r}{r}\right) \end{align*} with the same interpretation at $\,r=0\,$ as above.
\end{itemize} \bigskip \section{Compactly supported reproducing kernels for $H^{\,\delta}(\R^d)$ with $\,\delta>\max\, (1, \, d/2)$} Due to an obvious cancellation effect, the generalized hypergeometric function of Lemma \ref{lemma6} in the special case $\,\beta = 2\lambda +1\,$ reduces to \begin{align}\label{W0} &{}_1F_2\left(\frac{2\lambda +1}{2}; \frac{\alpha + 2\lambda +1}{2},\, \frac{\alpha + 2\lambda +2}{2}; -\frac{r^2}{4}\right)\nonumber\\ &\qquad\qquad = \frac{1}{B(\alpha, 2\lambda +1)}\int_0^\infty\,\Omega_\lambda(rt)(1-t)_+^{\alpha - 1}\,t^{2\lambda}dt. \end{align} Expressed in the form of the $U$-function defined in Lemma \ref{lemma7}, this function is easily seen to be strictly positive if $\,\lambda>1/2,\,\alpha\ge \lambda + 1/2.\,$ The choice of the minimal value $\,\alpha = \lambda + 1/2\,$ leads to \begin{align}\label{W1} &{}_1F_2\left(\lambda + \frac 12 ; \frac 32\left(\lambda + \frac 12\right),\, \frac 32\left(\lambda + \frac 12\right) + \frac 12; -\frac{r^2}{4}\right)\nonumber\\ &\qquad\qquad = \frac{1}{B\left(\lambda + \frac 12, \,2\lambda +1\right)}\int_0^\infty\,\Omega_\lambda(rt)(1-t)_+^{\lambda - \frac 12}\,t^{2\lambda}dt. \end{align} Setting $\,\lambda + 1/2 = \delta\,$ and representing the last Hankel-Schoenberg transform in terms of radial Fourier transforms, that is, those integrals with kernels $\Omega_{\frac{d-2}{2}}$, we are led to the following class of functions. \smallskip \begin{definition} For a positive integer $d$ and $\,\delta>d/2,\,$ define \begin{align*} \Phi_{d, \delta}(t) = \frac{1}{B\left(2\delta -d,\,\delta\right)} \int_t^1 (s^2- t^2)^{\delta -\frac {d+1}{2}}\, (1-s)^{\delta -1}\,ds \end{align*} for $\,0\le t\le 1\,$ and zero otherwise. \end{definition} \smallskip \begin{lemma}\label{lemma8} For a positive integer $d$ and $\,\delta>d/2,\,$ the integral in the definition of $\Phi_{d, \delta}$ converges and the following properties hold: \begin{itemize} \item[\rm(i)] $\Phi_{d, \delta}$ is continuous, strictly decreasing on $[0, 1]$ and $\,0\le \Phi_{d, \delta}\le 1.$ \item[\rm(ii)] $\,\Phi_{d, \delta}(t)\,\approx\, (1-t)^{2\delta - \frac{d+1}{2}}\,$ on $[0, 1].$ \item[\rm(iii)] If $\,\delta = \frac{d+1}{2}\,,\,$ then $\,\Phi_{d, \delta} (t) = (1- t)_+^{\frac{d+1}{2}}\,.$ \item[\rm(iv)] If $\,\delta>\frac{d+1}{2}\,,\,$ then for $\,0\le t\le 1,$ \begin{align*} \Phi_{d, \delta}(t) = \frac{1}{B\left(2\delta -d-1,\,\delta+1\right)} \int_t^1 (s^2- t^2)^{\delta -\frac {d+3}{2}}\, s(1-s)^{\delta}\,ds. \end{align*} \end{itemize} \end{lemma} \smallskip \begin{proof} For $\,\delta\ge \frac{d+1}{2}\,,\,$ as the function $\Phi_{d, \delta}$ is dominated by $$ \frac{1}{B\left(2\delta -d,\,\delta\right)} \int_0^1 s^{2\delta -d -1} (1-s)^{\delta -1}\,ds = 1, $$ the convergence of the defining integral is obvious. Under the transformation $\,s\mapsto \theta + (1-\theta) t,\,\,0 \le\theta\le 1, \,$ we may write \begin{align*} &\qquad\Phi_{d, \delta}(t) = (1-t)^{2\delta -\frac{d+1}{2}}\,V(t),\quad\text{where}\\ & V(t) = \frac{1}{B\left(2\delta -d, \,\delta\right)} \int_0^1 \theta^{\delta - \frac{d+1}{2}}(1-\theta)^{\delta-1}\big[2t + \theta(1-t)\big]^{\delta- \frac{d+1}{2}} d\theta\,.
\end{align*} In the case $\, \frac{d}{2}<\delta<\frac{d+1}{2}\,,$ if we observe $$ 2^{\delta- \frac{d+1}{2}}\le \big[2t + \theta(1-t)\big]^{\delta- \frac{d+1}{2}} \le \theta^{\delta- \frac{d+1}{2}}$$ for $\,0\le t\le 1\,$ and for each fixed $\,\theta>0,$ it is simple to infer that the integral defining $V(t)$ converges uniformly on $[0, 1]$ with $\,0\le V(t)\le 1\,$ and hence $\Phi_{d, \delta}$ is well defined. Bounding $V(t)$ in this way, we also deduce part (ii) readily. As the convergence is ensured, part (i) can be verified easily. Part (iii) is trivial and part (iv) is a simple consequence of integrating by parts. \end{proof} \smallskip \begin{remark} Noteworthy are the following special instances of part (iv). \begin{itemize} \item [(a)] In the case $\,\delta = d/2 + k + 1/2,\,k\in\mathbb{N},\,$ $\Phi_{d, \delta}$ coincides with Wendland's function $P_{d, k}$, defined in \eqref{G3}, when $d$ is odd. \item[(b)] In the case $\,\delta = d/2 + m + 1,\,$ with $m$ a nonnegative integer, $\Phi_{d, \delta}$ coincides with Schaback's function $R_{d, m}$, defined in \eqref{G4}, in every dimension. Likewise, if $\,\delta = (d+1)/2 + \alpha,\,\alpha>0,\,$ $\Phi_{d, \delta}$ coincides with the function $S_{d, \alpha}$ of Chernih and Hubbert, defined in \eqref{G5}, in every dimension. \end{itemize} \end{remark} \smallskip In the statement below, we shall denote \begin{equation} \omega_{d, \delta} =\frac{ 2^{1 -d}\,\Gamma\left(3\delta\right)\Gamma\left(\delta -\frac d2\right)} {\Gamma\left(\delta\right)\Gamma\left(3\delta-d\right)\Gamma\left(\frac d2\right)}\,. \end{equation} \smallskip \begin{theorem}\label{wend} For a positive integer $d$ and $\,\delta>\max\left(1, \,d/2\right),$ define \begin{align*} W_\delta(r) = {}_1 F_2\left(\delta\,; \frac {3\delta}{2}, \,\frac {3\delta +1}{2}\,;\, -\frac{\,r^2}{4}\right) \qquad(r\ge 0). \end{align*} \begin{itemize} \item[\rm(i)] $W_\delta$ is positive definite on $\R^d$ with \begin{align*} W_\delta(r) &= \frac{1}{B\left(\delta, \,2\delta\right)}\int_0^\infty \Omega_{\delta- \frac 12}(rt) (1-t)_+^{\delta-1}\, t^{2\delta -1}\,dt\\ &= \omega_{d, \delta} \int_0^\infty \Omega_{\frac{d-2}{2}}(rt)\,\Phi_{d, \delta}(t)\, t^{d-1} dt. \end{align*} \item[\rm{(ii)}] $\,0<W_\delta(r)\le 1\,$ for each $\,r\ge 0\,$ and as $\,r\to\infty,$ \begin{align*} W_\delta(r) &= \frac{\Gamma\left(3\delta\right)}{\Gamma\left(\delta\right)} \,r^{-2\delta}\biggl[ 1 + \frac{\cos\left(r -\delta\pi\right)}{2^{\delta -1}}\biggr] + O\left(r^{-2\delta -1}\right)\,. \end{align*} Moreover, $\,W_\delta(r) \,\approx\, \left(1 + r\right)^{-2\delta}\,$ for $\,r\ge 0.\,$ \item[\rm(iii)] $\,W_\delta\in C([0, \infty))\cap L^1\left([0, \infty), \,r^{d-1} dr\right)\,$ and \begin{equation*} \Phi_{d, \delta} (t) = \frac{1}{2^{d-2}\,\left[\Gamma(d/2)\right]^2\,\omega_{d, \delta}}\, \int_0^\infty \Omega_{\frac{d-2}{2}}(rt) W_\delta(r) r^{d-1} dr \qquad(t\ge 0). \end{equation*} As a consequence, $\Phi_{d, \delta}$ is positive definite on $\R^d$.
\end{itemize} \end{theorem} \smallskip \begin{proof} If we set $\,\lambda = \delta - 1/2\,$ in the representation \eqref{W1}, we obtain \begin{align*} W_\delta(r) &= \frac{1}{B\left(\delta, \,2\delta\right)}\int_0^\infty \Omega_{\delta- \frac 12}(rt) (1-t)_+^{\delta-1}\, t^{2\delta -1}\,dt\\ &= C(d, \delta)\int_0^\infty \Omega_{\frac{d-2}{2}}(rt)\,\Phi_{d, \delta}(t)\, t^{d-1} dt, \end{align*} where the latter follows by the order-walk transform of Theorem \ref{orderwalk} and $$C(d, \delta) = \frac{B\left(2\delta - d,\,\delta\right)} {B\left(\delta +\frac 12 - \frac d2, \,\frac d2\right) B(\delta, \,2\delta)}\,.$$ Simplifying with the aid of Legendre's duplication formula for the gamma function, it is elementary to see $\,C(d, \delta) = \omega_{d, \delta}.$ The positive definiteness of $W_\delta$ is an immediate consequence of Proposition \ref{fourier}, and part (i) is proved. In view of the identification $$W_\delta(r) = U\left(\frac 32,\,\delta\,;\,r\right),$$ part (ii) is a consequence of Lemma \ref{lemma7} and Lemma \ref{lemma8}. As to part (iii), that $\,W_\delta\in C([0, \infty))\cap L^1\left([0, \infty), \,r^{d-1} dr\right)\,$ is obvious. For $\,t>0,\,$ the stated representation follows by inverting the formula of part (i) in accordance with \eqref{HW1}. By continuity, it continues to hold true for $\,t=0.$ Finally, the positive definiteness follows again from Proposition \ref{fourier}. \end{proof} As an immediate corollary, we obtain what we aimed to accomplish. To simplify notation, we shall write \begin{equation} \zeta_{\,d, \delta} =\frac{2\pi^{d/2}}{\Gamma(d/2)}\cdot \frac{1}{\omega_{d, \delta}}. \end{equation} \smallskip \begin{corollary}\label{wend1} For $\,\delta>\max\left(1, \,d/2\right),$ let $\,\Phi_{d, \delta} (\bx) = \Phi_{d, \delta} (|\bx|),\,\bx\in\R^d.$ Then $\Phi_{d, \delta}$ is continuous and positive definite with \begin{align*} \widehat{\Phi_{d, \delta}}(\xi) = \zeta_{\,d, \delta}\,W_\delta(|\xi|)\qquad(\xi\in\R^d). \end{align*} As a consequence, $\,\Phi_{d, \delta}(\bx-\by)$ is a reproducing kernel for the Sobolev space $\,H^{\,\delta}(\R^d)\,$ with respect to the inner product defined by \begin{align*} \bigl(u,\,v\bigr)_{\Phi_{d, \delta}(\R^d)} = (2\pi)^{-d}\cdot\frac{1}{\zeta_{\,d, \delta}}\,\int_{\R^d} \frac{\widehat{u}(\mathbf{\xi})\overline{\,\widehat{v}(\mathbf{\xi})} \,d\mathbf{\xi}}{ W_\delta\left(|\mathbf{\xi}|\right)}\,. \end{align*} \end{corollary} \begin{remark} In the special case $\,\delta = (d+1)/2 +\alpha, \,\alpha>0,\,$ A. Chernih and S. Hubbert also obtained the Fourier transform $W_\delta$ (Theorem 2.1, \cite{CH}), but the authors gave neither the integral representation formula nor the inversion formula stated in the first equation of part (i) and in part (iii) of Theorem \ref{wend}, respectively. We supplement a few computational aspects as follows. \begin{itemize} \item[(a)] In some special instances, it is possible to evaluate $\Phi_{d, \delta}$ in closed forms, as the following list shows.
\bigskip \bigskip \begin{tabular}{|l|lc|} \hline & \multicolumn{2}{|c|}{\textbf{$\Phi_{d, \delta}(r)$ on the interval $\,[0, 1]\,$}}\\\hline $\,\,\delta = \frac {d+1}{2}\,\,$ & $d\ge 2$ & $ (1-r)^{\frac{d+1}{2}}$ \\ \hline & $d=1$ & $\,(1-r)^3 \,(1+3r)\,$ \\ \cline{2-3} $\,\,\delta = 2\,\,$ &$d=2$ & $\,\left( 1+ 2r^2\right)\sqrt{1-r^2} -3r^2 \log \left(\frac{ 1+\sqrt{1-r^2}}{r}\right)\,$\\ \cline{2-3} & $d=3$ &$ (1- r)^2\,$ \\ \hline & $ d=1$ &$\,(1-r)^5 ( 1-2r + 8r^2)$\\ \cline{2-3} $\,\,\delta = 3\,\,$ & $ d=2$ & $ \frac 14 \Big[(4- 28 r^2 - 81 r^4)\sqrt{1-r^2} $\\ & & $ \qquad\qquad \qquad\quad+\,\,15 r^4 (6 + r^2)\log \left(\frac{ 1+\sqrt{1-r^2}}{r}\right)\Big]\,$ \\ \cline{2-3} &$ d=3$ & $ (1-r)^4 (1+4r) $\\ \hline \end{tabular} \bigskip \bigskip \item[(b)] In the case when $\delta$ is an integer, one may use the representation formula of part (i), Theorem \ref{wend}, to express $W_\delta$ in a closed form involving algebraic and trigonometric functions. To illustrate, let us take $$W_{2}(r) = {}_1F_2\left(2; 3, \,\frac 72; -\frac{r^2}{4}\right)\qquad(r\ge 0).$$ We evaluate \begin{align*} W_{2}(r) &= 20\int_0^\infty\Omega_{3/2}(rt)(1-t)_+\,t^3\,dt\\ &=\frac{60}{r^3}\left\{\int_0^1\sin(rt) (1-t) dt - r\int_0^1\cos(rt) (1-t) tdt\right\}\\ &= \frac{120}{r^4}\,\left( 1+ \frac{\cos r}{2}\right) - \frac{180\, \sin r}{r^5}\,. \end{align*} We should point out that this closed form is consistent with the asymptotic formula stated in part (ii) of Theorem \ref{wend}, which reads $$W_{2}(r) =\frac{120}{r^4}\,\left( 1+ \frac{\cos r}{2}\right) + O\left(r^{-5}\right).$$ \end{itemize} \end{remark} \bigskip \section{A smoother family of compactly supported reproducing kernels} Due to the restriction $\,\delta>\max\,(1, \,d/2),\,$ there are missing cases in the preceding results, namely, the cases $\,1/2<\delta\le 1\,$ for the one-dimensional Sobolev spaces $H^\delta(\R).$ Although the particular instance $\,\delta =1\,$ is covered in Corollary \ref{askey1}, the case $\,1/2<\delta<1\,$ is still left out. The purpose of this section is to provide compactly supported reproducing kernels in such missing cases. As a matter of fact, we shall construct another class of compactly supported reproducing kernels which suit the Sobolev spaces $H^\delta(\R^d)$ of any order $\,\delta>d/2\,$ without any restriction. The key idea is to exploit the lemma of J. Fields and M.
Ismail, Lemma \ref{lemma7}, in such a way that the strict positivity of the generalized hypergeometric function of \eqref{W0} is assured in the range $\,\lambda + 1/2>0,\,\alpha\ge 2\lambda+1.\,$ Choosing the minimal $\,\alpha= 2\lambda+1\,$ and setting $\,\lambda + 1/2 = \delta,\,$ it reduces to \begin{align}\label{S0} &{}_1F_2\left(\delta; 2\delta, \,2\delta + \frac 12; -\frac{r^2}{4}\right)\nonumber\\ &\qquad = \frac{1}{B(2\delta, \,2\delta)}\int_0^\infty\,\Omega_{\delta- \frac 12}(rt)(1-t)_+^{2\delta -1}\,t^{2\delta-1}dt, \end{align} which is strictly positive for any $\,\delta>0.$ For $\,\delta>d/2,\,$ an application of the order-walk transformation yields \begin{align}\label{S1} \int_0^\infty\,\Omega_{\delta- \frac 12}(rt)(1-t)_+^{2\delta -1}\,t^{2\delta-1}dt = \int_0^\infty\,\Omega_{\frac{d-2}{2}}(rt) I_\delta(t) t^{d-1} dt \end{align} in which $I_\delta$ stands for the function supported in $[0, 1]$ and defined by \begin{align*} I_\delta(t) = \frac{2}{B\left(\delta + \frac 12 - \frac d2,\,\frac d2\right)}\,\int_t^1 (s^2 - t^2)^{\delta - \frac{d+1}{2}} (1-s)^{2\delta-1}\,ds \end{align*} for $\,0\le t\le 1.$ Normalizing the constant, we introduce \smallskip \begin{definition} For a positive integer $d$ and $\,\delta>d/2,\,$ define \begin{align*} \Psi_{d, \delta}(t) = \frac{1}{B\left(2\delta -d,\,2\delta\right)} \int_t^1 (s^2- t^2)^{\delta -\frac {d+1}{2}}\, (1-s)^{2\delta -1}\,ds \end{align*} for $\,0\le t\le 1\,$ and zero otherwise. \end{definition} \smallskip As $\Psi_{d, \delta}$ is of a similar nature to $\Phi_{d, \delta}$, we deduce its basic properties in the same way as stated and proved in Lemma \ref{lemma8}. \smallskip \begin{lemma}\label{lemma9} For a positive integer $d$ and $\,\delta>d/2,\,$ the integral in the definition of $\Psi_{d, \delta}$ converges and the following properties hold: \begin{itemize} \item[\rm(i)] $\Psi_{d, \delta}$ is continuous, strictly decreasing on $[0, 1]$ and $\,0\le \Psi_{d, \delta}\le 1.$ \item[\rm(ii)] $\,\Psi_{d, \delta}(t)\,\approx\, (1-t)^{3\delta - \frac{d+1}{2}}\,$ on $[0, 1].$ \item[\rm(iii)] If $\,\delta = \frac{d+1}{2}\,,\,$ then $\,\Psi_{d, \delta} (t) = (1- t)_+^{d+1}\,.$ \item[\rm(iv)] If $\,\delta>\frac{d+1}{2}\,,\,$ then for $\,0\le t\le 1,$ \begin{align*} \Psi_{d, \delta}(t) = \frac{1}{B\left(2\delta -d-1,\,2\delta+1\right)} \int_t^1 (s^2- t^2)^{\delta -\frac {d+3}{2}}\, s(1-s)^{2\delta}\,ds. \end{align*} \end{itemize} \end{lemma} \smallskip Combining \eqref{S0}, \eqref{S1} in terms of $\Psi_{d, \delta}$, we obtain the following analog of Theorem \ref{wend} without difficulty, in which we write \begin{equation} \tau_{d, \delta} =\frac{ 2^{1-d}\,\Gamma\left(4\delta\right)\Gamma\left(\delta -\frac d2\right)} {\Gamma\left(\delta\right)\Gamma\left(4\delta-d\right)\Gamma\left(\frac d2\right)}\,. \end{equation} \smallskip \begin{theorem}\label{smooth} For a positive integer $d$ and $\,\delta> d/2,\,$ define \begin{align*} Q_\delta(r) = {}_1 F_2\left(\delta\,; 2\delta, \,2\delta + \frac 12\,;\, -\frac{\,r^2}{4}\right) \qquad(r\ge 0). \end{align*} \begin{itemize} \item[\rm(i)] $Q_\delta$ is positive definite on $\R^d$ with \begin{align*} Q_\delta(r) &= \frac{1}{B\left(2\delta, \,2\delta\right)}\int_0^\infty \Omega_{\delta- \frac 12}(rt) (1-t)_+^{2\delta-1}\, t^{2\delta -1}\,dt\\ &= \tau_{d, \delta} \int_0^\infty \Omega_{\frac{d-2}{2}}(rt)\,\Psi_{d, \delta}(t)\, t^{d-1} dt.
\end{align*} \item[\rm{(ii)}] $\,0<Q_\delta(r)\le 1\,$ for each $\,r\ge 0\,$ and as $\,r\to\infty,$ \begin{align*} Q_\delta(r) &= \frac{\Gamma\left(4\delta\right)}{\Gamma\left(2\delta\right)} \,r^{-2\delta} + O\left(r^{-\,\min\,(2\delta +2,\,3\delta)}\right)\,. \end{align*} Moreover, $\,Q_\delta(r) \,\approx\, \left(1 + r\right)^{-2\delta}\,$ for $\,r\ge 0.\,$ \item[\rm(iii)] $\,Q_\delta\in C([0, \infty))\cap L^1\left([0, \infty), \,r^{d-1} dr\right)\,$ and \begin{equation*} \Psi_{d, \delta} (t) = \frac{1}{2^{d-2}\,\left[\Gamma(d/2)\right]^2\,\tau_{d, \delta}}\, \int_0^\infty \Omega_{\frac{d-2}{2}}(rt) Q_\delta(r) r^{d-1} dr \qquad(t\ge 0). \end{equation*} As a consequence, $\Psi_{d, \delta}$ is positive definite on $\R^d$. \end{itemize} \end{theorem} \smallskip As an immediate corollary, we obtain the following, in which \begin{equation} \eta_{\,d, \delta} =\frac{2\pi^{d/2}}{\Gamma(d/2)}\cdot \frac{1}{\tau_{d, \delta}}. \end{equation} \smallskip \begin{corollary}\label{smooth1} For $\,\delta> d/2,$ let $\,\Psi_{d, \delta} (\bx) = \Psi_{d, \delta} (|\bx|),\,\bx\in\R^d.$ Then $\Psi_{d, \delta}$ is continuous and positive definite with \begin{align*} \widehat{\Psi_{d, \delta}}(\xi) = \eta_{\,d, \delta}\,Q_\delta(|\xi|)\qquad(\xi\in\R^d). \end{align*} As a consequence, $\,\Psi_{d, \delta}(\bx-\by)$ is a reproducing kernel for the Sobolev space $\,H^{\,\delta}(\R^d)\,$ with respect to the inner product defined by \begin{align*} \bigl(u,\,v\bigr)_{\Psi_{d, \delta}(\R^d)} = (2\pi)^{-d}\cdot\frac{1}{\eta_{\,d, \delta}}\,\int_{\R^d} \frac{\widehat{u}(\mathbf{\xi})\overline{\,\widehat{v}(\mathbf{\xi})} \,d\mathbf{\xi}}{ Q_\delta\left(|\mathbf{\xi}|\right)}\,. \end{align*} \end{corollary} \smallskip \begin{remark} In view of parts (ii), (iii) of Lemmas \ref{lemma8} and \ref{lemma9}, it is evident that $\Psi_{d, \delta}$ is much smoother than $\Phi_{d, \delta}$ when both parameters $\,d, \delta\,$ are fixed. A possible disadvantage in practical applications, however, is that $\Psi_{d, \delta}$ involves higher algebraic powers than $\Phi_{d, \delta}$ does. \begin{itemize} \item[(a)] As illustrations, we have the following evaluations: \bigskip \bigskip \begin{tabular}{|l|lc|} \hline & \multicolumn{2}{|c|}{\textbf{$\Psi_{d, \delta}(r)$ on the interval $\,[0, 1]\,$}}\\\hline $d\ge 1$ & $\delta = \frac {d+1}{2}$ & $ (1-r)^{d+1}$ \\ \hline & $\delta = \frac 32$ & $\,\frac 12 \Big[( 2 + 13 r^2) \sqrt{ 1- r^2} -3 r^2 ( 4 + r^2) \log \left( \frac{1+ \sqrt{1- r^2}}{r}\right) \Big]$ \\ \cline{2-3} $d=1$ &$\delta = 2$ & $(1- r)^5 (1+5r)$\\ \cline{2-3} & $\delta =3$ &$ (1- r)^8 ( 1+ 8r + 21 r^2)$ \\ \hline & $\delta = \frac 52$ & $(1-r)^6 (1+6r)\,$ \\ \cline{2-3} $d=2$ &$\delta = \frac 72$ & $\frac 13 (1- r)^9 (3 + 27 r + 80 r^2)$\\ \cline{2-3} & $\delta = \frac 92$ &$ (1- r)^{12} (1 + 12 r + 57 r^2 + 112 r^3)$ \\ \hline $d=3$ & $ \delta =3$ & $ (1- r)^7 ( 1+ 7r)$ \\ \cline{2-3} &$ \delta = 4$ & $ (1-r)^{10} ( 1 + 10 r + 33 r^2) $\\ \hline \end{tabular} \bigskip \bigskip \item[(b)] As before, one may use the representation formula of part (i), Theorem \ref{smooth}, to express $Q_\delta$ in a closed form involving algebraic and trigonometric functions in some instances.
To illustrate, let us take $$Q_{2}(r) = {}_1F_2\left(2; 4, \,\frac 92; -\frac{r^2}{4}\right)\qquad(r\ge 0).$$ In this special case, we evaluate \begin{align*} Q_{2}(r) &= 140\int_0^\infty\Omega_{3/2}(rt)(1-t)_+^3\,t^3\,dt\\ &=\frac{420}{r^3}\left\{\int_0^1\sin(rt) (1-t)^3 dt - r\int_0^1\cos(rt) t(1-t)^3 dt\right\}\\ &= \frac{840}{r^7}\,\left( r^3 - 12 r + 15 \sin r - 3r \cos r\right), \end{align*} which is consistent with the asymptotic formula $$Q_{2}(r) =\frac{840}{r^4} \,+ O\left(r^{-6}\right).$$ \end{itemize} \end{remark} \section{Appendix: Bessel potential kernels} In addition to the Fourier transform formulas, the Bessel potential kernels $G_\delta$ (or Mat\'ern functions) possess a number of important properties and arise in many areas of Mathematics in various disguises. As we are concerned with constructing possible replacements of the Bessel potential kernels in the subject of reproducing kernels for Sobolev spaces, it may be instructive to recall some of their very basic properties (see \cite{AS}, \cite{Wa}). \begin{itemize} \item [(a)] Each $G_\delta$ is smooth away from the origin and subject to the asymptotic behavior, modulo multiplicative constants, described as follows: \begin{align*} &{\rm (i)}\quad\text{As}\quad |\bx|\to \infty,\quad G_\delta(\bx)\, \sim\, e^{-|\bx|}\,|\bx|^{\delta - \frac{d+1}{2}}\,.\\ &{\rm (ii)}\quad\text{As}\quad |\bx|\to 0,\quad\,\, G_\delta(\bx)\, \sim\,\left\{\begin{aligned} &{\quad\,\, 1} &{\quad\text{if}\quad \delta> d/2\,},\\ &{-\log |\bx|} &{\quad\text{if}\quad \delta = d/2\,},\\ &{\quad |\bx|^{2\delta-d}} &{\quad\text{if}\quad \delta< d/2\,}.\end{aligned}\right. \end{align*} \item[(b)] Due to Schl\"afli's integral representations, \begin{align}\label{K3} K_{\alpha}(z) &= \frac{\sqrt{\pi}}{\Gamma(\alpha + 1/2)}\,\left(\frac{z}{2}\right)^{\alpha} \int_{1}^{\infty} e^{-zt}\left(t^{2}-1\right)^{\alpha-\frac{1}{2}}\,dt\nonumber\\ &=\sqrt{\frac{\pi}{2}\,} \frac{e^{-z} z^\alpha}{\Gamma(\alpha+1/2)}\, \int_{0}^{\infty} e^{- zt} \left[ t\left( 1 + \frac t2\right)\right]^{\alpha - \frac 12}\,dt, \end{align} which are valid for $\,\alpha>-1/2\,$ and $\,z>0,$ it is easy to see $$ G_{\frac{d+1}{2}}(\bx) = \frac{K_{\frac 12}(|\bx|)\sqrt{|\bx|}}{2^{d-\frac 12}\,\pi^{\frac d2}\Gamma\left(\frac{d+1}{2}\right)} = \frac{e^{-|\bx|}}{2^{d}\,\pi^{\frac {d-1}{2}}\Gamma\left(\frac{d+1}{2}\right)}\,.$$ \item[(c)] More generally, if $m$ is a nonnegative integer, then \begin{equation} G_{m+ \frac{d+1}{2}}(\bx) = \frac{e^{-|\bx|}\,|\bx|^m}{2^{m+d}\,\pi^{\frac{d-1}{2}}\,\Gamma\left(m+ \frac{d+1}{2}\right)} \sum_{k=0}^m\frac{(m+k)!\,}{k! (m-k)!} \,(2|\bx|)^{-k}\,, \end{equation} which can be deduced easily from Schl\"afli's integrals. \end{itemize}
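For instance (a consistency check which we add for illustration), taking $\,m=1\,$ and $\,d=3\,$ in the last formula gives $$G_{3}(\bx) = \frac{e^{-|\bx|}\,|\bx|}{2^4\,\pi\,\Gamma(3)}\left( 1 + \frac{1}{|\bx|}\right) = \frac{e^{-|\bx|}\left( 1 + |\bx|\right)}{32\,\pi}\,,$$ which is, up to normalization, the familiar Mat\'ern function with smoothness parameter $\,\delta - d/2 = 3/2,\,$ and whose Fourier transform equals $\,(1+|\xi|^2)^{-3}\,$ in accordance with \eqref{G2}.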
\section{Introduction} Understanding the role of particle inertia in the late-time dispersion process is a problem of paramount importance in a variety of situations, mainly related to geophysics and atmospheric sciences. Airborne particulate matter in the atmosphere indeed plays a well-recognized role in the Earth's climate system because of its effect on the global radiative budget, by scattering and absorbing long-wave and short-wave radiation \cite{IPCC}. By way of example, one of the most intriguing issues in this context is related to the evidence of anomalously large fluctuations in the residence times of mineral dust observed in different experiments carried out in the atmosphere \cite{Denjean2016}. Those observations naturally lead to the idea that settling and dispersion of inertial particles, both contributing to the residence time of particles in the atmosphere, crucially depend on the peculiar properties of the carrier flow encountered in the specific experiment. For the gravitational settling, this question was addressed in Ref.~\cite{martins2008}. It turned out that the value of the Stokes number alone, $St$, directly related to the particle size, is not sufficient to decide whether the sedimentation is faster or slower than in a still fluid. With minor variations of the carrier flow, for a given $St$, it has been shown that either an increase or a reduction of the falling velocity is possible, thus affecting the particle residence time in the fluid in different ways. Our aim here is to shed some light on how the dispersion of inertial particles depends on relevant properties of the turbulent carrier flow. Our focus will be on the late-time evolution of the particle dynamics, a regime fully described in terms of eddy diffusivities \cite{frischeddy,F95,mamamu2012}. Our main question can thus be rephrased in terms of the behavior of the eddy diffusivity upon varying some relevant features of the carrier flow (e.g. the form of its auto-correlation function), for a given inertia of the particle.\\ This analysis for generic carrier flows is a task of formidable difficulty and forces one to resort to numerical approaches which, however, make it difficult to isolate the simple mechanisms of large-scale transport induced by inertia. To overcome the problem, we decided to focus on simple flow fields where the problem can be entirely grasped via analytic (or perturbative) techniques. As we will see, shear flows are natural candidates that allow an analytic treatment of large-scale transport. Let us consider the well-known model \cite{MR83,G83} for transport of heavy particles in $d$ spatial dimensions by an incompressible carrier flow $\boldsymbol{u}(\bm{\xi}(t),t)$: \begin{eqnarray} \label{sf:sdepos} \begin{array}{l} \mathrm{d}\boldsymbol{\xi}(t)=\,\boldsymbol{v}(t)\,\mathrm{d}t \\[0.1cm] \mathrm{d}\boldsymbol{v}(t)=-\,\left(\dfrac{\boldsymbol{v}(t) -\boldsymbol{u}(\boldsymbol{\xi}(t),t)}{\tau}\right)\,\mathrm{d}t +\dfrac{\sqrt{2\,D_0}}{\tau} \,\mathrm{d}\boldsymbol{\omega}(t) \end{array} \end{eqnarray} Here $\boldsymbol{v}$ denotes the particle velocity, $\bm{\xi}$ its trajectory, and $\tau$ is the Stokes time. Finally, $\boldsymbol{\omega}$ denotes a standard $d$-dimensional Wiener process \cite{Jacobs}. The increments $\mathrm{d}\boldsymbol{\omega}$, coupled to (\ref{sf:sdepos}) through the constant molecular diffusivity $D_0$, model, as customary, fast-scale chaotic forces acting on the inertial-particle acceleration \cite{R88}.
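Before specializing the carrier flow, it may help to fix ideas with a minimal numerical sketch of (\ref{sf:sdepos}) (an illustration we add here: the frozen sinusoidal shear, all parameter values and variable names below are arbitrary choices, not those analyzed in the text), namely an Euler--Maruyama integration in $d=2$, assuming NumPy is available.

\begin{verbatim}
# Minimal Euler-Maruyama sketch of the inertial-particle model in d = 2
# for an illustrative frozen shear u(x2) = U*cos(k*x2) e_1.
import numpy as np

tau, D0, U, k = 0.5, 0.1, 1.0, 2.0        # arbitrary parameters
dt, nsteps, npart = 1.0e-3, 100000, 1000
rng = np.random.default_rng(0)

xi = np.zeros((npart, 2))                  # particle positions
v = np.zeros((npart, 2))                   # particle velocities
for _ in range(nsteps):
    u = np.column_stack([U * np.cos(k * xi[:, 1]), np.zeros(npart)])
    dW = rng.normal(0.0, np.sqrt(dt), size=(npart, 2))
    v += -(v - u) / tau * dt + np.sqrt(2.0 * D0) / tau * dW
    xi += v * dt

t = nsteps * dt
print("D_eff along the shear  ~", np.var(xi[:, 0]) / (2.0 * t))
print("D_eff across the shear ~", np.var(xi[:, 1]) / (2.0 * t))
\end{verbatim}

At late times the cross-shear estimate approaches the bare value $D_0$, in agreement with the exact result derived below, while the along-shear component is enhanced by the carrier flow.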
\\ To start with, we assume that the carrier flow is a shear \begin{eqnarray} \boldsymbol{u}(\boldsymbol{x},t)=u(x_{2},\dots,x_{d},t)\,\boldsymbol{e}_{1} \nonumber \end{eqnarray} where $\boldsymbol{e}_{1}=(1,0,\dots,0)$ is the constant unit vector pointing along the first axis. This simple geometry readily enforces the incompressibility condition. We also assume that $u$ is a stationary and homogeneous Gaussian random field with mean and covariance specified by \begin{eqnarray} \label{cs} \begin{array}{l} \langle u(x_2,\dots,x_d,t) \rangle =0 \\[0.1cm] \langle u(x_2,\dots,x_d,t)\, u(0,\dots,0,0) \rangle= B(x_2,\dots,x_d,|t|) \end{array} \end{eqnarray} It is worth stressing that we assume the Eulerian statistics of the carrier flow to be independent of the Wiener process driving (\ref{sf:sdepos}). For a shear flow, (\ref{sf:sdepos}) is integrable by elementary techniques. We find \begin{subequations} \label{sol:n} \begin{eqnarray} \label{ppp} v_{n}(t)= e^{-\frac{t-t_{o}}{\tau}}\,{v}_{n}(t_{0})+\frac{\sqrt{2\,D_0}}{\tau}\int\limits_{t_{o}}^{t}\mathrm{d}{\omega}_{n}(s)\,e^{-\frac{t-s}{\tau}} \end{eqnarray} \begin{eqnarray} \label{POSITIONTRAN} \lefteqn{\hspace{-1.25cm} \xi_{n}(t)=\xi_{n}(t_{0})+\tau (1-e^{-\frac{t-t_{o}}{\tau}}){v}_{n}(t_{0}) } \nonumber\\&& +\sqrt{2\,D_0}\int\limits_{t_{o}}^{t}\mathrm{d}{\omega}_{n}(s)\,(1-e^{-\frac{t-s}{\tau}}) \end{eqnarray} \end{subequations} for $n\neq1$, and \begin{subequations} \label{sol:1} \begin{eqnarray} \label{vel1} \lefteqn{\hspace{-0.25cm} v_{1}(t) =e^{-\frac{t-t_{o}}{\tau}}\,{v}_{1}(t_{0})+\frac{\sqrt{2\,D_0}}{\tau}\int\limits_{t_{o}}^{t}\mathrm{d}{\omega}_{1}(s)\,e^{-\frac{t-s}{\tau}} } \nonumber\\&& +\frac{1}{\tau}\int\limits_{t_{o}}^{t}\mathrm{d} s\, u(\xi_{2}(s),\dots,\xi_{d}(s),s)\,e^{-\frac{t-s}{\tau}} \end{eqnarray} \begin{eqnarray} \label{qqq} \lefteqn{ \xi_{1}(t)={\xi}_{1}(t_{0})+\tau {v}_{1}(t_{0})(1-e^{-\frac{t-t_{o}}{\tau}})} \nonumber\\ &&+\sqrt{2\,D_0}\int\limits_{t_{o}}^{t}\mathrm{d}{\omega}_{1}(s)\,(1-e^{-\frac{t-s}{\tau}}) \nonumber\\ &&+\int\limits_{t_{o}}^{t}\mathrm{d} s\, u(\xi_{2}(s),\dots,\xi_{d}(s),s)\,(1-e^{-\frac{t-s}{\tau}}) \end{eqnarray} \end{subequations} for $n=1$. The stochastic integrals appearing in (\ref{sol:1}), (\ref{sol:n}) can be interpreted as limits of usual Riemann sums owing to the additive nature of the noise. A relevant indicator of the dispersion properties of a single particle trajectory is the effective diffusion tensor defined as \begin{eqnarray} \label{DEF} \mathsf{D}_{l n}^{\mathrm{eff}}=\lim_{t\uparrow \infty}\frac{\langle \xi_{l}(t)\,\xi_{n}(t) \rangle -\langle \xi_{l}(t)\rangle\,\langle\xi_{n}(t) \rangle}{2\,(t-t_{0})} \nonumber \end{eqnarray} or, equivalently, by a straightforward application of de l'H\^opital's rule, \begin{eqnarray} \label{Def} \mathsf{D}_{l n}^{\mathrm{eff}}=\lim_{t\uparrow \infty}\frac{\langle v_{l}(t)\,\xi_{n}(t) \rangle -\langle v_{l}(t)\rangle\,\langle\xi_{n}(t)\rangle +l \leftrightarrow n}{2} \end{eqnarray} Inspection of (\ref{sol:1}), (\ref{sol:n}) readily shows that the only non-vanishing elements of the effective diffusion tensor are diagonal and are specified by the correlations $\langle \xi_{n}(t)\,\xi_{n}(t) \rangle$, $n=1,\dots,d$ (here and in the following the Einstein convention on repeated indices is not adopted).
A straightforward calculation yields the explicit value of the correlations \begin{eqnarray} \label{nneq1} \lefteqn{\mathsf{D}_{n n}^{\mathrm{eff}}= \lim_{t\uparrow\infty} \langle{v}_{n}(t){\xi}_{n}(t)\rangle} \nonumber\\ &&=\frac{2\,D_0}{\tau}\int\limits_{0}^{\infty}\mathrm{d}s \,(1-e^{-\frac{s}{\tau}})\,e^{-\frac{s}{\tau}}=D_{0} \end{eqnarray} for $n\neq 1$. The carrier flow appears only in the correlation function for $n=1$. We find \begin{eqnarray} \label{n1} \lefteqn{ \lim_{t\uparrow\infty}\langle \xi_{1}(t) v_{1}(t)\rangle=D_0 +\lim_{t\uparrow\infty} } \\&&\hspace{-0.5cm} \times\int\limits_{(t_{0},t)^{2}} \hspace{-0.2cm}\mathrm{d}s\,\mathrm{d}s^{\prime}\, \frac{e^{-\frac{t-s}{\tau}}(1-e^{-\frac{t-s'}{\tau}}) \langle u(\boldsymbol{\eta}(s,t_0),s)\,u(\boldsymbol{\eta}(s^{\prime},t_0),s^{\prime})\rangle}{\tau}\nonumber \end{eqnarray} with $\boldsymbol{\eta}(s,t_0)=(\xi_{2}(s),\dots,\xi_{d}(s))$ and $\xi_{i}(t)$, $i=2,\dots,d$, given by Eq.~(\ref{POSITIONTRAN}). It is worth observing that the explicit dependence on $t_0$ in (\ref{n1}) actually disappears in the limit $t\uparrow\infty$. Without loss of generality we can thus set $t_0 = -\infty$ in (\ref{n1}) in order to obtain simpler expressions. The integrand in (\ref{n1}) is amenable to a more explicit form if we represent the Eulerian correlation function $B$ of the carrier flow, defined in (\ref{cs}), in terms of its Fourier transform. In such a case, the average over the Eulerian statistics of the carrier flow and the Lagrangian statistics of the last $d-1$ coordinates of the inertial particle factorizes as \begin{eqnarray} \label{correlazione} \lefteqn{ \langle u(\boldsymbol{\eta}(s,t_0),s)\,u(\boldsymbol{\eta}(s^{\prime},t_0),s^{\prime})\rangle }\nonumber\\&& = \int\limits_{\mathbb{R}^{d-1}}\frac{\mathrm{d}^{d-1}\boldsymbol{k}}{(2\,\pi)^{d-1}}\check{\mathsf{B}}(\boldsymbol{k},|s-s'|) \langle e^{\imath\, \boldsymbol{k}\cdot (\boldsymbol{\eta}(s,t_0)-\boldsymbol{\eta}(s^{\prime},t_0))}\rangle \end{eqnarray} After some tedious yet elementary manipulations involving Gaussian integration over the Wiener process and changes of variables in the plane $(s,s')$, we obtain \begin{eqnarray} \lefteqn{ \mathsf{D}_{11}^{\mathrm{eff}}=D_0+ } \nonumber\\&& \hspace{-0.5cm} \int\limits_{\mathbb{R}^{d-1}}\frac{\mathrm{d}^{d-1}\boldsymbol{k}}{(2\,\pi)^{d-1}} \int\limits_{0}^{\infty}\mathrm{d}t\, e^{-D_{0} \|\boldsymbol k\|^2\left[t-\,\tau \, \left( 1-e^{-\frac{t}{\tau }}\right)\right]} \check{\mathsf{B}}(\boldsymbol{k},t) \label{D11} \end{eqnarray} We therefore see that all the dynamically nontrivial information is encoded in the isotropic component of the effective diffusion tensor \begin{eqnarray} \label{eddyfinale} \lefteqn{\hspace{-0.3cm} D^{\mathrm{eff}}=\frac{1}{d}\sum_{n=1}^{d}\mathsf{D}_{n n}^{\mathrm{eff}}= D_0+} \nonumber\\&& \hspace{-0.3cm} \int\limits_{\mathbb{R}^{d-1}}\frac{\mathrm{d}^{d-1}\boldsymbol{k}}{(2\,\pi)^{d-1}} \int\limits_{0}^{\infty}\mathrm{d}t\, e^{-D_{0} \|\boldsymbol k\|^2\left[t-\,\tau \, \left( 1-e^{-\frac{t}{\tau }}\right)\right]} \frac{\check{\mathsf{B}}(\boldsymbol{k},t)}{d} \end{eqnarray} We emphasize that (\ref{D11}) and the resulting expression for the isotropic component of the effective diffusion tensor are exact results. There are several reasons why these simple results are interesting. To start with, we notice that although they are derived for the highly stylized case of a shear flow, they continue to hold, in suitable asymptotic senses, for much more general classes of carrier flows.
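As an aside on practical use: once a specific correlation $\check{\mathsf{B}}(\boldsymbol{k},t)$ is chosen, the remaining time integral in (\ref{eddyfinale}) is easily evaluated by standard quadrature. The Python sketch below does this for the single-mode correlation introduced later in the text, for which the $\boldsymbol{k}$ integral is trivial; the parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def d_eff(D0, tau, E0=1.0, k0=1.0, Tc=1.0, Omega=0.8, d=2):
    # Exact shear-flow result for a single-mode correlation
    # E0 exp(-t/Tc) cos(Omega t) at wave-numbers +/- k0
    # (integrating the two delta functions gives a factor 2 E0).
    def integrand(t):
        lag = t - tau * (1.0 - np.exp(-t / tau))   # inertia-modified lag
        return (2.0 * E0 * np.exp(-t / Tc) * np.cos(Omega * t)
                * np.exp(-D0 * k0**2 * lag))
    val, _ = quad(integrand, 0.0, np.inf, limit=500)
    return D0 + val / d

for St in (0.01, 0.5, 2.0, 10.0):    # St = tau in nondimensional units
    print(f"St = {St:5.2f} -> D_eff = {d_eff(0.05, St):.5f}")
\end{verbatim}
Such a direct evaluation can be compared against the Monte Carlo estimate from the sketch in the Introduction, or against the small-$D_0$ expansion derived below.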
First, our final result for the isotropic component $D^{\mathrm{eff}}$ of the effective diffusion tensor coincides with the one for tracer particles with colored noise derived in \cite{MaCa99}. More generally, $D^{\mathrm{eff}}$ admits the same expression if we compute the eddy diffusivity tensor in an infra-red perturbative expansion in the coupling of the carrier flow. The logic of the calculation is the same as in \cite{MazzVerg}, but applied to inertial rather than Lagrangian particles. First, we recast (\ref{sf:sdepos}) in the equivalent integral form \begin{eqnarray} \label{irsol} \begin{array}{l} \boldsymbol{v}(t)=\boldsymbol{v}^{(0)}(t) +\dfrac{1}{\tau}\int\limits_{t_{0}}^{t}\mathrm{d} s\, \boldsymbol{u}(\boldsymbol{\xi}(s),s)\,e^{-\frac{t-s}{\tau}} \\[0.3cm] \boldsymbol{\xi}(t)=\boldsymbol{\xi}^{(0)}(t) +\int\limits_{t_{0}}^{t}\mathrm{d} s\, \boldsymbol{u}(\boldsymbol{\xi}(s),s)\,(1-e^{-\frac{t-s}{\tau}}) \end{array} \end{eqnarray} where now $\boldsymbol{\xi}^{(0)}(t)$, $\boldsymbol{v}^{(0)}(t)$ are Gaussian processes with components (\ref{sol:n}), but now for $n=1,\dots,d$. Let us assume the carrier flow to be an incompressible Gaussian random field with homogeneous and stationary statistics \begin{eqnarray} \begin{array}{l} \langle\boldsymbol{u}(\boldsymbol{x},t)\rangle=0 \\[0.3cm] \langle\,u_{l}(\boldsymbol{x},t)\,u_{n}(\boldsymbol{0},t)\rangle=\mathsf{B}_{l n}(\boldsymbol{x},|t|) \end{array} \nonumber \end{eqnarray} Upon inserting (\ref{irsol}) into (\ref{Def}) and retaining the leading order in $\boldsymbol{u}$ (corresponding either to $\mathsf{B}$ small compared to $(D_0/L)^2$, $L$ being a characteristic length-scale of the flow, or to neglecting small deviations from the shear-flow geometry), we obtain \begin{eqnarray} \lefteqn{ \langle\boldsymbol{v}(t)\cdot\boldsymbol{\xi}(t)\rangle=\langle\boldsymbol{v}^{(0)}(t)\cdot\boldsymbol{\xi}^{(0)}(t)\rangle } \nonumber\\&& \hspace{-0.1cm} +\tau\int\limits_{t_{0}}^{t}\,\mathrm{d}s_{1}\int\limits_{t_{0}}^{s_{1}}\mathrm{d}s_{2}\,(1-e^{-\frac{t-s_{1}}{\tau}})(1-e^{-\frac{s_{1}-s_{2}}{\tau}}) C_{1} \nonumber\\&& +\int\limits_{t_{0}}^{t}\mathrm{d}s_{1}\int\limits_{t_{0}}^{s_{1}}\mathrm{d}s_{2}\,e^{-\frac{t-s_{1}}{\tau}}(1-e^{-\frac{s_{1}-s_{2}}{\tau}})C_{2} \nonumber\\&& +\int\limits_{(t_{0},t)^{2}}\mathrm{d}s_{1}\mathrm{d}s_{2}\,(1-e^{-\frac{t-s_{1}}{\tau}})e^{-\frac{t-s_{2}}{\tau}}C_{3}+\dots \nonumber \end{eqnarray} where the $\dots$ symbol stands for higher-order terms and, with $s=s_{1}$ and $s^{\prime}=s_{2}$, \begin{eqnarray} \begin{array}{l} C_{1}=\langle\boldsymbol{v}^{(0)}(t)\cdot(\boldsymbol{u}(\boldsymbol{\xi}^{(0)}(s^{\prime}),s^{\prime}) \cdot\partial_{\boldsymbol{\xi}^{(0)}(s)})\boldsymbol{u}(\boldsymbol{\xi}^{(0)}(s),s)\rangle \\[0.2cm] C_{2}=\langle\boldsymbol{\xi}^{(0)}(t)\cdot(\boldsymbol{u}(\boldsymbol{\xi}^{(0)}(s^{\prime}),s^{\prime})\cdot\partial_{\boldsymbol{\xi}^{(0)}(s)}) \boldsymbol{u}(\boldsymbol{\xi}^{(0)}(s),s)\rangle \\[0.2cm] C_{3}=\langle\boldsymbol{u}(\boldsymbol{\xi}^{(0)}(s^{\prime}),s^{\prime})\cdot\boldsymbol{u}(\boldsymbol{\xi}^{(0)}(s),s)\rangle \end{array} \nonumber \end{eqnarray} If we now invoke the incompressibility hypothesis for the carrier flow, we see that $C_{1}$ and $C_{2}$ vanish and that \begin{eqnarray} \hspace{-0.1cm}C_{3}\hspace{-0.05cm}=\hspace{-0.1cm} \int\limits_{\mathbb{R}^{d}}\hspace{-0.05cm} \frac{\mathrm{d}^{d}\boldsymbol{k}}{(2\,\pi)^{d}} \hspace{-0.05cm} \sum\limits_{n=1}^{d}\check{\mathsf{B}}_{n n}(\boldsymbol{k},|s-s^{\prime}|)
\langle\,e^{\imath\boldsymbol{k}\cdot(\boldsymbol{\xi}^{(0)}(s)-\boldsymbol{\xi}^{(0)}(s^{\prime}))}\rangle \label{gf} \end{eqnarray} which coincides with (\ref{correlazione}), in one extra dimension, once the trace of the Fourier transform of the correlation tensor $\mathsf{B}_{l n}$ is identified with the scalar $\check{\mathsf{B}}$. Having made the case for the general relevance of the expression for $D^{\mathrm{eff}}$, we now turn to analyzing its behavior as a function of the Stokes number and of the characteristic time scale of the carrier flow. Let us first consider the limit of small $D_0$, which makes the resulting integrals easier to carry out. A first-order expansion in $D_0$ of Eq.~(\ref{eddyfinale}) gives \begin{eqnarray} \label{EddyInerAppr} &D^{\mathrm{eff}}&=D_0+\frac{1}{d }\int\limits_{\mathbb{R}^{d-1}}\frac{\mathrm{d}^{d-1}\boldsymbol{k}}{(2\,\pi)^{d-1}}\int_0^\infty\mathrm{d}t\operatorname{tr} \check{\mathsf{B}}(\boldsymbol{k},t)\nonumber\\ &\times &\, \,\, \left(1-D_0\|\boldsymbol{k}\|^{2}\left(t-\, \tau \, \left( 1-e^{-\frac{t}{\tau }}\right)\right) \,\right)\,+\dots \end{eqnarray} or, in physical space, \begin{eqnarray} \lefteqn{ D^{\mathrm{eff}}=D_0 +\frac{1}{d }\int_0^\infty \mathrm{d}t \langle \boldsymbol u(\boldsymbol x, t) \cdot \boldsymbol u(\boldsymbol x,0)\rangle } \nonumber\\&& \hspace{-0.3cm} -D_0\sum_{\alpha,\beta=1}^{d} \int\limits_0^\infty \mathrm{d}t\frac{t-\, \tau \, ( 1-e^{-\frac{t}{\tau }})}{d}\, \langle[\partial_{\alpha} u_\beta(\boldsymbol x, t)] [\partial_{\alpha} u_{\beta}(\boldsymbol x, 0)]\rangle \nonumber\\&&+\dots \end{eqnarray} For $\tau\to 0$, the limit of vanishing inertia easily follows: \begin{eqnarray} &&D^{\mathrm{eff}}\longrightarrow_{\tau\to 0}D_0+\frac{1}{d }\int_0^\infty \mathrm{d}t\, \langle \boldsymbol u(\boldsymbol x, 0) \cdot \boldsymbol u(\boldsymbol x, t)\rangle\nonumber\\ && -\frac{D_0}{d}\sum_{\alpha,\beta=1}^{d} \int_0^\infty \mathrm{d}t\,t\, \langle[\partial_{\alpha} u_\beta(\boldsymbol x, 0)] [\partial_{\alpha} u_{\beta}(\boldsymbol x, t)]\rangle \nonumber\\ &&+\dots \end{eqnarray} which corresponds to the result reported in \cite{MazzVerg}. Returning to the heavy-particle case, in order to further simplify the expression for the eddy diffusivity, let us focus on a 2D carrier flow with a single wave-number $\boldsymbol k_0$. The correlation function we consider is \cite{Antonov} \begin{eqnarray} \label{corr} &\operatorname{tr}\check{\mathsf{B}}(\boldsymbol{k},|t_{}|)=(2\pi)^{d-1} E(\boldsymbol k_0) e^{-\frac{|t|}{T_c}} \cos (\Omega t)\nonumber\\ &\times [\delta(\boldsymbol k-\boldsymbol k_0)+\delta(\boldsymbol k+\boldsymbol k_0)] \end{eqnarray} $E(\boldsymbol k_0)$ being the turbulent kinetic energy associated with the wave-number. In principle, the decay time $T_c$ would depend on $\boldsymbol k$ itself, typically as $1/\|\boldsymbol k\|$ or $1/\|\boldsymbol k\|^2$ \cite{Kaneda, BoiMazzLac, dissip}. However, since we are considering a single-wave-number flow, we can treat it as a constant. We can now nondimensionalize the system by setting $\|\boldsymbol k_0\| = T_c = 1$, so that the Stokes number is $St=\tau$. By plugging Eq. (\ref{corr}) into Eq.
(\ref{EddyInerAppr}), one obtains \begin{eqnarray} \label{eddydiffD0} \begin{array}{l} D^{\mathrm{eff}}=D_0+E(\boldsymbol k_0)\left[ \dfrac{1}{d}\dfrac{2}{1+\Omega^2}+\dfrac{D_0}{ d}\,\mathcal K \right] \\[0.3cm] \mathcal K=\dfrac{2(1+\text{St})}{1+\Omega^2}-\dfrac{4}{(1+\Omega^2)^2} +\dfrac{2\text{St}^2(1+\text{St})^2}{(1+\text{St}(2+\text{St}+\text{St}\Omega^2))^2} \\[0.3cm] \hspace{0.6cm} +\dfrac{\text{St}^2(2+\text{St})}{4+\text{St}(4+\text{St}+\text{St}\Omega^2)} -\dfrac{\text{St}^2(4+3\text{St})}{1+\text{St}(2+\text{St}+\text{St}\Omega^2)} \end{array} \end{eqnarray} The above expression is uniform in $St$: $\mathcal K$ is a continuous function of $St\in[0,+\infty)$ and tends to 0 as $St\to+\infty$ for all $\Omega$, so it is bounded for any $St$. This means that the first-order perturbation expansion in $D_0$ can be used for any value of $St$. Note, however, that since $\max|\mathcal K|\leq 1$, a constraint on $D_0$ is required for the perturbation expansion to be uniform, namely $D_0\ll 2/(1+\Omega^2)$. The term $\mathcal K$ can be either positive or negative, depending on the importance of negatively correlated regions of the correlation function (\ref{corr}). This can be seen in Fig.~\ref{FigNuova}, where the regions in which $\mathcal K$ is positive (gray) and negative (white) are shown in the $St-\Omega$ plane. It is worth recalling that, for the tracer case, the condition for having $\mathcal K >0$ is simply $\Omega>1$. \begin{figure} \includegraphics[trim=0 0 0 0,clip,width=6cm,height=6cm]{interfer.eps} \caption{The sign of $\mathcal K$ in the $St-\Omega$ plane. Gray corresponds to $\mathcal K >0$; white to $\mathcal K <0$. The dotted line separates the region on its left, corresponding to transport enhancement due to inertia, from that on its right, corresponding to transport reduction.} \label{FigNuova} \end{figure} The presence of inertia thus causes a change of the sign of $\mathcal K$ from negative to positive in a subset of the $St-\Omega$ plane. In this region inertia thus acts to increase transport with respect to the tracer case. The region where transport is enhanced with respect to the tracer case actually extends up to the dotted line. To observe a reduction of transport, the Stokes time thus has to be sufficiently large, and larger and larger values are required as $\Omega$ increases. The behavior of $\mathcal K$ as a function of $St$ is reported in Fig.~\ref{fig1} for different values of $\Omega$. \begin{figure}[b] \includegraphics[trim=0 0 0 0,clip,width=6cm, height=5cm]{Omega1.eps} \includegraphics[trim=0 0 0 0,clip,width=6cm, height=5cm]{Omega2.eps}\\ \includegraphics[trim=0 0 0 0,clip,width=6cm, height=5cm]{Omega3.eps}\\ \caption{$\mathcal K$ vs $St$ at $\Omega=0.2$ (upper panel), $\Omega=0.8$ (middle panel) and $\Omega=1.1$ (lower panel).} \label{fig1} \end{figure} For sufficiently small $\Omega$, $\mathcal K$ is negative and inertia increases its value, thus enhancing transport. For sufficiently large $\Omega$, $\mathcal K$ is positive and inertia increases its value up to a certain value of $St$ (corresponding to the intersection with the dotted line of Fig.~\ref{FigNuova}), above which transport is reduced by inertia. The physical explanation of the resulting behavior of $\mathcal K$ vs $St$, for small $St$, can be traced back to the mechanism of transport enhancement induced by a colored noise discussed in \cite{CaMa98}.
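Since $\mathcal K$ is available in closed form, its sign structure is easy to explore numerically. The short Python sketch below simply transcribes the expression above and checks the tracer-limit criterion $\Omega>1$; the grid ranges are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def K(St, Omega):
    # Transcription of the closed-form inertial correction K(St, Omega)
    # (first order in D0, single-mode shear, Tc = k0 = 1)
    A = 1.0 + St * (2.0 + St + St * Omega**2)
    return (2.0 * (1.0 + St) / (1.0 + Omega**2)
            - 4.0 / (1.0 + Omega**2)**2
            + 2.0 * St**2 * (1.0 + St)**2 / A**2
            + St**2 * (2.0 + St) / (4.0 + St * (4.0 + St + St * Omega**2))
            - St**2 * (4.0 + 3.0 * St) / A)

# Sign map in the St-Omega plane (cf. the figure above)
St, Om = np.meshgrid(np.linspace(0.0, 10.0, 400), np.linspace(0.0, 3.0, 400))
sign = np.sign(K(St, Om))
print("fraction of sampled plane with K > 0:", (sign > 0).mean())

# Tracer limit: K(0,Omega) = 2/(1+Omega^2) - 4/(1+Omega^2)^2 > 0 iff Omega > 1
print("K(0, 0.9) < 0 < K(0, 1.1):", K(0.0, 0.9) < 0.0 < K(0.0, 1.1))
\end{verbatim}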
As anticipated, the random contribution to the inertial particle velocity in (\ref{vel1}) turns out to be a colored noise. That $\mathcal K$ goes to zero for large Stokes times is a simple consequence of the fact that, in this limit, the contribution of the noise to the particle trajectories becomes negligible because of the large particle inertia. A maximum of transport is thus guaranteed in all cases where $\mathcal K >0$ for $St=0$. In conclusion, by explicit computation, we have shown that the eddy diffusivities of inertial particles can be determined for the class of shear flows for all values of the Stokes number. Although the analysis has been confined here to the case of heavy particles, following the same line of reasoning it is not difficult to show that the present results actually hold for any density ratio of the particles (i.e.\ for any value of the added-mass term $\beta$ involved in the model (2.2) of Ref.~\cite{mamamu2012}). We have also shown that the analytical results we obtained for the class of shear flows correspond to the leading-order contribution either in the deviation from the shear-flow geometry or in the P\'eclet number of general random Gaussian velocity fields (i.e.\ not of shear type). \\ The results we obtained for the eddy diffusivity allowed us to investigate the role of inertia in the asymptotic transport regime. It turned out that both enhancement and reduction of transport (with respect to the tracer case) may occur, depending on the extension of anticorrelated regions of the Lagrangian auto-correlation function of the carrier flow. AM acknowledges with thanks the financial support from the PRIN 2012 project n.\ D38C13000610001 funded by the Italian Ministry of Education. We are also grateful for the financial support for the computational infrastructure from the Italian flagship project RITMARE.
\section{Introduction} Cosmic neutrinos in the exavolt (1 EeV $= 10^{18}$~eV) and higher energy range, though as yet undetected, are expected to arise through a host of energetic acceleration and interaction processes at source locations throughout the universe. However, in only one of these sources--the distributed interactions of the ultra-high energy cosmic ray flux--does the combination of observational evidence and interaction physics lead to a strong requirement for resulting high energy neutrinos. Whatever the sources of the highest energy cosmic rays, their observed presence in the local universe, combined with the expectation that their sources occur widely throughout the universe at all epochs, leads to the conclusion that their interactions with the cosmic microwave background radiation (CMBR)--the so-called GZK process (after Greisen, Zatsepin, and Kuzmin~\cite{GZK})--must yield an associated cosmogenic neutrino flux, as first noted by Berezinsky and Zatsepin~\cite{BZ}. These neutrinos are often called the GZK neutrinos, as they arise from the same interactions of the ultra-high energy cosmic rays (UHECR) that cause the GZK cutoff, but they are perhaps more properly referred to as the BZ neutrinos. In BZ neutrino production scenarios, current experimental UHECR measurements invariably point to the presence of an associated ultra-high energy neutrino flux. For UHECR above several times $10^{19}$~eV, intergalactic space is optically thick to UHECR propagation through the CMBR at a distance scale of several tens of Mpc. Each UHECR source at all epochs is thus subject to local conversion of its hadronic flux to secondary, lower energy particles over a distance scale of order 100~Mpc in the current epoch. Neutrinos are the only secondary particles that can propagate freely to cosmic distances, and the resulting neutrino flux at earth is thus related to the integral over the highest-energy cosmic ray history of the universe, back to the earliest epoch at which such sources occur. Although local sources may also contribute to the EeV-ZeV neutrino flux at earth, the bulk of the flux is generally believed to arise from a much wider spectral convolution, and will thus be imprinted with the cosmological source distribution in addition to effects from local sources. This leads to strong motivations to detect the BZ neutrino flux: first, it is required by standard model physics, and thus its absence could signal new physics beyond the standard model. Second, it is the only way to directly observe the UHECR source behavior over cosmic distance scales. Finally, once established, the spectrum and absolute flux of such neutrinos may afford a calibrated ``test beam'' for both particle physics and astrophysics experimentation, providing center-of-momentum energies on target nucleons of 100-1000 TeV, an energy scale not likely to be reached by other methods in the near future. The Antarctic Impulsive Transient Antenna (ANITA) was designed with the goal of measuring the BZ neutrino flux directly, or limiting it at a level which would provide compelling and useful constraints on the early UHECR source history. The BZ neutrino flux is potentially very low--of order 1 neutrino per square kilometer per week arriving over 2$\pi$ steradians is a typical estimate. This flux presents an extreme challenge to detection, since the low neutrino interaction cross section also means that any target volume will have an inherently low efficiency for converting any given neutrino.
ANITA's methodology centers on observing the largest possible volume of the most transparent possible target material: Antarctic ice, which has been demonstrated to provide extremely low-loss transmission of radio signals through its bulk over much of the continent. ANITA then exploits the Askaryan effect~\cite{Ask62}: coherent, impulsive radio emission from the charge asymmetry in the electromagnetic component of a high energy particle cascade in a dielectric medium. ANITA searches for cascades initiated by a primary neutrino interacting in the Antarctic ice sheet within its field of view from the Long-Duration Balloon (LDB) altitude of 35-37 km. The observed area of ice from these altitudes is of order $1.5\times10^{6}$~km$^2$. Combining this with the electromagnetic field attenuation length of ice, which is of order 1~km in ANITA's observation frequency range, ANITA is sensitive to a target volume of order $1$--$2\times10^{6}$~km$^3$. The acceptance, however, is constrained by the fact that at any location within the target, the allowed solid angle of arrival for a neutrino to be detectable at the several-hundred-km average distance of the payload is a small fraction of a steradian. Folding in these constraints, the volumetric acceptance is still of order hundreds to thousands of km$^3$ steradians over the range of energy overlap--$10^{18.5}$ to $10^{20}$~eV--with the BZ neutrino spectrum. This large acceptance, while tempered by the limited exposure in time provided by a balloon flight, still yields the largest sensitivity of any experiment to date for BZ neutrinos. In this report we document the ANITA instrument and our estimates of its sensitivity and performance for the first flight of the payload, completed in January of 2007. A separate report will detail the results on the neutrino flux. \section{Theoretical Basis for ANITA Methodology.} The concept of detecting high energy particles through the coherent radio emission from the cascade they produce can be traced back over 40 years to Askaryan~\cite{Ask62}, who argued persuasively for the presence of strong coherent radio emission from these cascades, and even suggested that any large volume of radio-transparent dielectric, such as an ice sheet, a geologic salt bed, or the lunar regolith, could provide the target material for such interactions and radio emission. In fact all of these approaches are now being pursued~\cite{RICE03,SalSA,GLUE04}. Although significant early efforts were successful in detecting radio emission from high energy particle cascades in the earth's atmosphere~\cite{Jelley_65}, it is important to emphasize that the cascade radio emission that ANITA detects is {\em unrelated to the primary mechanism for air shower radio emission.} Particle cascades induced by neutrinos in Antarctic ice are very compact, consisting of a ``plug'' of relativistic charged particles several cm in diameter and $\sim 1$~cm thick, which develops at the speed of light over a distance of several meters from the vertex of the neutrino interaction, before dissipating into residual ionization in the ice. The resulting radio emission is coherent Cherenkov radiation with a particularly clean and simple geometry, providing high information content in the detected pulses. In contrast, the radio emission from air showers is a complex phenomenon entangled with geomagnetic and near-field effects.
Attempts to understand and exploit this form of air shower emission for cosmic ray studies have been hampered by this complexity since its discovery in the mid-1960's, although this effort has seen a recent renaissance~\cite{EASradio}. Surprisingly little work was done on Askaryan's suggestions that solids such as ice could be important media for detection until the mid-1980's, when Gusev and Zheleznykh~\cite{Gusev}, and Markov \& Zheleznykh~\cite{Markov86} revisited these ideas. More recently a host of investigators including Zheleznykh~\cite{Zhe88}, Dagkesamansky \& Zheleznykh~\cite{Dag89}, Frichter, Ralston, \& McKay~\cite{FRM}, Zas, Halzen, \& Stanev~\cite{ZHS92}, Alvarez-Mu\~niz \& Zas~\cite{Alv97}, and Razzaque et al.~\cite{Razz02}, among others, have taken up these suggestions and confirmed the basic results through more detailed analysis. Of equal importance, a set of experiments at the Stanford Linear Accelerator Center have now clearly confirmed the effect and explored it in significant detail~\cite{Sal01,SalSA,Mio06,slac07}. \subsection{First-Order Energy Threshold \& Sensitivity.} To illustrate the methodology, we consider a specific example. The coherent radio Cherenkov emission in an electromagnetic $e^+e^-$ cascade arises from the $\sim 20\%$ electron excess in the shower, which is itself produced primarily by Compton scattering and positron annihilation in flight. Considering deep-inelastic scattering charged-current interactions of a high energy neutrino $\nu$ with a nucleon $N$, given generically by $\nu + N \rightarrow \ell^{\pm} + X$, the charged lepton $\ell^{\pm}$ escapes while the hadronic debris $X$ leads to a hadronic cascade. If the initial neutrino has energy $E_{\nu}$, the resulting hadronic cascade energy will be $E_c = yE_{\nu}$, where $y$ is the Bjorken inelasticity, with a mean of $\langle y \rangle \simeq 0.22$ at very high energies, and a very broad distribution. The average number of electrons and positrons $N_{e+e-}$ near total shower maximum is of order the cascade energy expressed in GeV, or \begin{equation} N_{e+e-}~\simeq~ {E_c \over 1~{\rm GeV}}~. \end{equation} Consider a case with $E_{\nu}=10^{19}$~eV and a slightly positive fluctuation above the mean giving $y=0.4$. This leads to $E_c= 4 \times 10^{18}$ eV, giving $N_{e+e-}\sim 4 \times 10^{9}$. The radiating charge excess is then of order $N_{ex} \simeq 0.2N_{e+e-}$. Single-charged-particle Cherenkov radiation gives a total radiated energy, for track length $L$ over a frequency band from $\nu_{min}$ to $\nu_{max}$, of: \begin{equation} W_{tot}~=~ \left ({\pi h \over c}\alpha \right ) L \left ( 1 - {1 \over n^2\beta^2} \right ) \left ( \nu_{max}^2 - \nu_{min}^2 \right ) \end{equation} where $\alpha\simeq 1/137$ is the fine structure constant, $h$ and $c$ are Planck's constant and the speed of light, and $n$ and $\beta$ are the refractive index of the medium and the particle velocity relative to $c$, respectively. For a collection of $N$ charged particles radiating coherently (i.e., with mean spacing small compared to the mean radiated wavelength), the total energy will be of order $W_{tot} = N^2 w$, where $w$ is the single-particle radiated energy given above. In solid dielectrics with density comparable to ice or silica sand, the cascade particle bunch is compact, with transverse dimensions of several cm and longitudinal dimensions of order 1 cm. Thus coherence will obtain up to several GHz or more.
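These relations suffice for a first-order numerical estimate. The Python sketch below evaluates them end to end as a companion to the worked example completed in the next paragraph, using the example values quoted there ($N_{ex}=8\times10^{8}$, $L=6$~m, $n=1.8$, a 600~MHz band centered on 0.6~GHz, $D=480$~km, Fresnel transmission 0.12, $T_{sys}=320$~K, $A_{eff}=0.2$~m$^2$) with $\beta=1$ assumed; it reproduces the quoted figures to rounding:
\begin{verbatim}
import numpy as np

h, c, kB, alpha = 6.626e-34, 3.0e8, 1.381e-23, 1.0 / 137.0
n, L = 1.8, 6.0                  # ice refractive index, track length [m]
nu_lo, nu_hi = 0.3e9, 0.9e9      # 600 MHz band centered on 0.6 GHz [Hz]
N_ex = 8e8                       # radiating charge excess
D, fresnel = 480e3, 0.12         # mean distance [m], surface transmission
T_sys, A_eff = 320.0, 0.2        # system temperature [K], antenna area [m^2]

# single-particle radiated energy, then coherent N^2 scaling
w = np.pi * h * alpha / c * L * (1 - 1 / n**2) * (nu_hi**2 - nu_lo**2)
W_tot = N_ex**2 * w                                # ~ 1e-7 J

theta_c = np.arccos(1.0 / n)                       # Cherenkov angle
dtheta = c * np.sin(theta_c) / (0.6e9 * L)         # diffractive cone width
Omega_c = 2 * np.pi * dtheta * np.sin(theta_c)     # ~ 0.36 sr

dnu = nu_hi - nu_lo
dt = 1.0 / dnu                                     # single temporal mode
fluence = fresnel * (W_tot / Omega_c) / D**2       # J/m^2 at the payload
S_c = fluence / (dt * dnu)                         # W m^-2 Hz^-1
Delta_S = kB * T_sys / A_eff                       # thermal RMS flux density
print(f"W_tot ~ {W_tot:.1e} J, Omega_c ~ {Omega_c:.2f} sr")
print(f"S_c ~ {S_c/1e-26:.1e} Jy, Delta_S ~ {Delta_S/1e-26:.1e} Jy, "
      f"SNR ~ {S_c/Delta_S:.1f}")
\end{verbatim}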
For a $4 \times 10^{18}$ eV cascade, $N_{ex} \simeq 8 \times 10^{8}$, and $L\simeq 6$~m in the vicinity of shower maximum in a medium of density $\sim 0.9$~g~cm$^{-3}$ with $n\sim 1.8$, as in Antarctic ice. Taking the mean radio frequency to be 0.6~GHz with an effective bandwidth of 600 MHz, the net radiated energy is $W_{tot}= 10^{-7}$ J. This energy is emitted into a restricted solid angle defined by the Cherenkov cone at an angle $\theta_c$ defined by $\cos\theta_c = (n\beta)^{-1}$, with a width determined (primarily from diffraction considerations) by $\Delta \theta_c \simeq c \sin \theta_c / (\bar{\nu}L)$. The implied total solid angle of emission is $\Omega_c \simeq 2\pi\Delta \theta_c \sin \theta_c = 0.36$~sr. The pulse is produced by coherent superposition of the amplitudes of the Cherenkov radiation shock front, which yields extremely broadband spectral power over the specified frequency range. The intrinsic pulse width is less than 100~ps~\cite{Mio06}, and this pulse thus excites a single temporal mode of the receiver, with characteristic time $\Delta t = (\Delta \nu )^{-1}$, or about 1.6 ns in our case. Radio source intensity in radio astronomy is typically expressed in terms of flux density, for which the customary unit is the jansky (Jy), where 1 Jy = $10^{-26}$~W~m$^{-2}$~Hz$^{-1}$. The energy per unit solid angle derived above, $W_{tot}/\Omega_c = 2.7 \times 10^{-7}$ J/sr in a 600 MHz bandwidth, produces an instantaneous peak flux density of $S_c= 1.4 \times 10^7$ Jy at the mean geometric distance $D=480$~km of the ice in view, after accounting for the fact that at this distance the geometry constrains the Fresnel coefficient for transmission through the ice surface to $\sim 0.12$, since the radiation emerges at angles close to the total-internal-reflectance (TIR) angle. The sensitivity of a radio antenna is determined by its collecting aperture and the thermal noise background, characterized by the system temperature $T_{sys}$. The RMS level of power fluctuations in this thermal noise, expressed in W~m$^{-2}$~Hz$^{-1}$, is given by \begin{equation} \Delta S ~=~ { ~k~T_{sys} \over A_{eff} \sqrt{\Delta t \Delta \nu}}~ {\rm W ~m^{-2}~ Hz^{-1}} \end{equation} where $k$ is Boltzmann's constant and $A_{eff}$ is the effective area of the antenna. Note that in our case, because the pulse is band-limited, the term $\sqrt{\Delta t \Delta \nu} \simeq 1$. For ANITA, a single on-board antenna has a frequency-averaged effective area of 0.2~m$^2$. For observations of ice the system temperature is dominated by the ice thermal emissivity, with $T_{sys} \leq 320$ K assuming $\sim 140$~K receiver noise temperature. The implied RMS noise level is thus $\Delta S = 2 \times 10^6$~Jy, giving a signal-to-noise ratio of 6.3 in this case. These simple arguments show that the expected threshold for neutrino detection is of order $10^{19}$~eV even to the edges of the area viewed by ANITA. In practice, events may be detected at lower energies due to fluctuations or interactions closer to the payload, but more detailed simulations of the energy-dependent acceptance of ANITA do not depart greatly from this first-order example. \section{Instrument Design} \subsection{Overview of Technical Approach} \label{Techoverview} As indicated by its acronym, ANITA is conceptually an antenna or antenna array optimized to detect impulsive RF events with a characteristic signature established by careful modeling and experimental measurements.
The array of antennas should view most of the entire Antarctic ice sheet beneath the balloon, out to the horizon, to retain sensitivity to most of the potential ice volume available for neutrino event production. It should have the ability to trigger with high efficiency on events of interest, and should have the lowest feasible intrinsic noise levels in its receivers to maximize sensitivity. It should have broad radio spectral coverage and dual-polarization capability to improve its ability to identify the signals and reject the backgrounds. It must have immunity to transient or steady radio-frequency interference. It must have sufficient spatial resolution for the sources of measured pulses to determine whether they match expected signal sources, and to allow for first-order geolocation and subsequent sky mapping if an event is found to be consistent with a neutrino. It must make as many distinct and statistically independent measurements as possible of each impulse that triggers the system, covering all available degrees of freedom (spatial, temporal, polarization, and spectral), because the number of potential neutrino events among these triggers may be close to zero, and this potential rarity of events demands that the information content of each measured event be maximized. These guiding principles have led to a technical approach that centers around dual-polarization, broadband antenna clusters with overlapping fields of view, combined with a trigger system based on a heritage of RF impulse detection instruments, both space-based (the FORTE satellite~\cite{Jacobson_99,FORTE04}) and ground-based (the GLUE and RICE experiments~\cite{GLUE04,RICE03}). The need for direction determination, combined with the constraints on usable radio frequency range dictated by ice parameters, leads to an overall geometry, for both individual antennas and the entire array, that is governed by the requirements for radio pulse interferometry over the spectral band of interest. The key challenges for ANITA are in the area of background rejection and management of electromagnetic interference (EMI). Impulsive interference events are likely to be primarily from anthropogenic sources, and in most cases do not mimic real cascade Cherenkov radio impulses because they lack many of the required properties such as polarization, spectral, and phase coherence. A subset of impulsive anthropogenic interference, primarily from systems where spark gaps or rapid solid-state switching relays are employed, can produce events which are difficult to distinguish from events of interest to ANITA, and thus the task of pinpointing the origin of any impulsive event is of high importance to the final selection of neutrino candidates. If, after rejection of all impulsive events associated with any known current or prior human activity, there remains a class of events which are distributed across the integrated field of view of the payload, and in time, in a way that is inconsistent with human origin, we may then begin to consider this event class as containing neutrino candidates. Whether they survive with that designation will ultimately depend on our ability to exclude all other known possibilities. In a previous balloon experiment~\cite{Anitalite}, we found Antarctica to be relatively radio quiet at balloon altitudes once the payload leaves the vicinity of the largest bases.
What continuous-wave (CW) interference is present can be managed with careful trigger configuration and threshold adjustment, servoing the thresholds to compensate for the temporary increases in narrow-band power that are occasionally seen. With regard to impulsive interference, we found triggers due to it to be relatively infrequent away from the main bases, though even the smaller bases did occasionally produce bursts of triggers. A less well-understood background may arise from ultra-high energy air showers, which can produce a tail of radio emission out to ANITA frequencies; these events, though they may produce triggers, are eliminated on the basis of their direction, arriving from above the horizon, and their loss of coherence at VHF and UHF frequencies. However, in all cases above, ANITA may be presented with unexpected challenges. \subsection{Background Interference Issues.} \label{EMIdisc} Because ANITA operates with extremely high radio bandwidth over frequencies that are not reserved for scientific use, the problem of radio backgrounds, both anthropogenic and natural, is crucial to the development of a robust mission design. We have noted previously that the thermal noise power \footnote{$P_N= k_B T_{sys} \Delta f$, for $k_B=1.38 \times 10^{-23}$~J/K, $T_{sys}=$ the system noise temperature in Kelvin, and $\Delta f=$ the bandwidth in Hz.} provides the ultimate background limitation, for both impulsive and time-averaged measurements, in much the same way that photon noise provides one of the ultimate limits to optical imaging systems. Electromagnetic interference may take different forms: near-sinusoidal CW interference can have very high narrow-band power and saturate the system, or it can appear at a low level, sometimes as a composite of contributions from many bands, effectively acting to raise the aggregate system noise. Impulsive EMI often arises from electronic switching phenomena, and may trigger the system even if it cannot be mistaken for signals of interest, since the trigger should be as inclusive as possible. ANITA has only one chance per true neutrino event to detect and characterize the radio wavefront as it passes by the payload; thus it must be as efficient as possible at triggering on anything similar to the events of interest. In the end, it is the information content of a given triggered measurement that will determine the confidence with which we can ascribe it to a neutrino origin. This conclusion is the primary mission design driver for the type of payload and the number of antennas. The design of the mission, payload, ballooncraft, and all ancillary instrumentation must therefore be evaluated in the light of whether it produces EMI, mitigates it, responds appropriately to it, or facilitates rejection of it. In the end, when all background interference has been rejected, what is left becomes the substance for ANITA science. \subsubsection{Anthropogenic Backgrounds.} \label{AnthropoEMI} Backgrounds from man-made sources do not in general pose a risk of being mistaken for the signals of scientific interest, unless they arise from locations where no human activity was previously known. As we will show later in this report, ANITA's angular reconstruction ability for terrestrial interference events gives accuracies of order 1 degree or better, enabling ground location of event sources to a level more than adequate to remove events that originate from known camps or other anthropogenic sources.
Human activity in Antarctica is highly controlled, and the locations of all such activity are logged with high reliability during a season. However, man-made sources can still pose a significant risk of interfering with the operation of the instrument. Interference from man-made terrestrial or orbital sources is a ubiquitous problem in all of radio astronomy. In this respect ANITA faces a variety of potential interfering signals with various possible impacts on the data acquisition and analysis. \paragraph{Satellite signals.} Orbiting satellite transmitter power is generally low in the bands of interest. For example, the GPS constellation satellites, at an altitude of 21000 km, have transmit powers of order 50~W in the 1227 MHz and 1575 MHz bands, with antenna gains of 11-13 dBi. The implied power at the earth's surface is $-127$~dBW~m$^{-2}$ maximum in the 1227 MHz band. The implied RMS noise voltage for ANITA, given the antenna's effective area at this frequency, is of order 0.7 $\mu$V, far below the RMS thermal noise voltage ($\sim 10$--$15~\mu$V RMS) referenced to the receiver inputs. Current satellite systems do not typically operate in ANITA's band; however, some legacy systems can produce detectable power within it. As we will discuss in a later section, ANITA has encountered some satellite interference in the 200-300 MHz range, but it has not caused significant performance degradation to date. Satellites do not in general intentionally produce nanosecond-scale impulsive signals; however, such signals may be produced by solid-state relay or actuator activity on a satellite that is changing its configuration. Such signals would appear to come from above the horizon, but might also show up in reflection off the ice surface. In this latter case, the Fresnel coefficient for such a reflection will in general significantly boost the horizontal polarization of the reflection, and this characteristic provides a strong discriminator if the initial above-the-horizon impulse was for some reason not detected. \paragraph{Terrestrial signals.} The primary risk for terrestrial signals is not that they trigger the system. Terrestrial sources often do produce significant impulsive interference, and will trigger our system at significant rates anytime the payload is within view of such anthropogenic sources. However, such triggers are easily selected against in post-analysis, since their directions can be precisely associated with known sources in Antarctica. The greater issue for ANITA occurs if there is a strong transmitter in the field of view which saturates the LNA, causing its gain to decrease so that the sensitivity in that antenna is lost. The present LNA design tolerates up to about 1 dBm output before saturation, with an input stage gain of 36 dB. Thus a signal of $0.25~\mu$W coupled into the antenna would pose a risk of saturation and temporary loss of sensitivity. Since the antenna effective area is of order 0.6 m$^2$ at the low end of the band, ANITA therefore tolerates up to a 0.2 MW in-band transmitter at or near the horizon, or a several-kW in-band transmitter near the nadir, accounting for the off-axis response of the ANITA antennas. Most of the higher power radar and other transmitters in use in Antarctica are primarily at the South Pole and McMurdo stations. Such systems did reduce our sensitivity when the payload was in close proximity to McMurdo station and, to a lesser degree, when in view of the South Pole station.
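Both interference estimates above (the satellite flux and the LNA saturation level) follow from the same simple link-budget arithmetic. The Python sketch below reproduces the GPS example; the effective area at 1227 MHz and the 50~$\Omega$ system impedance are assumed values chosen for illustration:
\begin{verbatim}
import numpy as np

P_tx, G_dBi, d = 50.0, 13.0, 21_000e3   # transmit power [W], gain, range [m]
A_eff, Z0 = 0.05, 50.0                  # assumed eff. area [m^2], impedance

eirp = P_tx * 10 ** (G_dBi / 10)        # effective isotropic radiated power
flux = eirp / (4 * np.pi * d**2)        # power flux at the surface [W/m^2]
print(f"flux  ~ {10 * np.log10(flux):.0f} dBW/m^2")   # ~ -127 dBW/m^2

P_rx = flux * A_eff                     # power coupled into one antenna
V_rms = np.sqrt(P_rx * Z0)              # RMS voltage across a matched load
print(f"V_rms ~ {V_rms * 1e6:.1f} microvolts")        # ~ 0.7 uV
\end{verbatim}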
\subsubsection{Other possible backgrounds.} \paragraph{Lightning.} Lightning is known to produce intense bursts of electromagnetic energy, but these have a spectrum that falls steeply with frequency, with very little power extending into the UHF and microwave regimes. Although lightning does occur over the Southern Ocean~\cite{Jacobson_01,Jacobson_00}, it is unknown on the Antarctic continent. We do not expect lightning to comprise a significant background to ANITA. \paragraph{Cosmic Ray Air Shower backgrounds.} Cosmic ray extensive air showers (EAS) at EeV energies also produce an electromagnetic pulse, known from observations since the late 1960's. The dominant RF emission comes from synchrotron radiation in the geomagnetic field. This emission is coherent below about 100~MHz, transitioning to partial coherence above about 200~MHz in the ANITA band. Although there has been a recent increase in activity to measure the radio characteristics of EAS events in the coherent regime below 100~MHz~\cite{Huege_Falcke05}, there is still little reliable information regarding the partially coherent regime where ANITA is sensitive to such events, although in fact several of the early detections of such events were at 500~MHz~\cite{Fegan}. The radio emission from EAS is highly beamed, so the acceptance for such events is naturally suppressed by geometry. They are also expected to have a steeply falling radio spectral signature, and thus an inverted spectrum compared to events originating from the Askaryan process, which has an intrinsically rising spectrum over the frequency region where coherence obtains, and a slow plateau and decline above those frequencies. ANITA may detect such events either by direct signals or by signals reflected off the ice surface, in a manner similar to that mentioned above for possible impulses from satellites. The EAS signals are known to be linearly polarized, with the plane of polarization determined by the local geomagnetic field direction. Since the field is largely vertical in the polar regions, there is a tendency for the EAS radio emission to be horizontally polarized for air showers with large zenith angles. ANITA's field of view, which has maximum sensitivity near the horizon, thus favors EAS events with these large zenith angles. Such events when observed directly arrive from angles above the horizon, but under the right circumstances they may also be seen in reflection, thus appearing to originate from below the horizon. They might thus be confused with neutrino-like events originating from under the ice if their radio-spectral and polarization signature were not considered. In an appendix we will address this possible physics background and show why it is straightforward to separate it from the events of interest. \subsection{CSBF Support Instrumentation Package.} \begin{figure}[htb!] \begin{center} \epsfig{file=ANITA-SIP.eps,width=3.95in} \caption{\it Block diagram of the CSBF SIP. \label{SIPfig}} \end{center} \end{figure} Support for NASA long-duration balloon payload launches and in-flight services is provided through the staff of the Columbia Scientific Balloon Facility (CSBF), based in Palestine, Texas, USA. CSBF has developed a ballooncraft Support Instrument Package (SIP), an integrated suite of computers, sensors, actuators, relays, transmitters, and antennas, for use with all LDB science instruments.
The CSBF SIP is controlled by a pair of independent flight computers which handle science telemetry, balloon operations, navigation, ballast control, and the final termination and descent of the payload. A system diagram of the SIP is provided in Figure~\ref{SIPfig}. A {\em Science Stack}, a configurable set of block modules, is also available as an option to the SIP, providing such functions as a simple science flight computer, analog-to-digital conversion, and open-collector command outputs for additional instrument command and control. The SIP also provides the telemetry link between the ANITA flight computer and data acquisition system and ground-based operations. Data from the ANITA computer is sent over serial lines to the SIP, which handles routing and transmission over line-of-sight (LOS), Tracking and Data Relay Satellite System (TDRSS), and IRIDIUM communication pathways. ANITA utilizes the CSBF SIP Science Stack to provide the ability to command the flight CPU system off and on and to reboot the computer during flight. The computational resources of the SIP are designed to fulfill existing LDB requirements, including preservation of a full archive of all telemetered data that is passed through the SIP from the science instrument. This function thus provides an additional redundant copy of the telemetered data that can be used if there is telemetry loss or corruption. One important characteristic of the SIP relevant to ANITA is that it is not highly shielded against producing local EMI, at least to the extremely low level required for compatibility with ANITA science goals. Of necessity the SIP was thus enclosed in an external Faraday housing, with connectors and penetrators designed in a manner similar to what was done for the ANITA primary electronics instrumentation. \subsection{Gondola Structure.} \label{gondola} \begin{figure*}[!htb] \centerline{ \includegraphics[width=3.95in]{Payl_LV.eps}~~\includegraphics[width=2.5in]{Payload06a.eps}} \begin{small} \caption{ ANITA payload in flight-ready configuration with launch vehicle.\label{payload06a}} \end{small} \end{figure*} The gondola structure consists primarily of an aircraft-grade aluminum alloy frame. A matrix of tubular components is pinned together via a combination of socket joints, tongue-and-clevis joints, and quarter-turn cam-lock fasteners. The structural elements of the gondola are visible in the pre-launch views of Fig.~\ref{payload06a}. The frame is based on octagonal symmetry, where eight vertical members, plus cross bracing, provide an internal backbone that allows for the attachment of spoke-like trusses to which the horn antennas fasten. Three ring-shaped clusters of quad-ridged horn antennas constitute the primary ANITA sensors. The top two antenna clusters have eight antennas each. Positioned around the perimeter of the base is a sixteen-horn cluster. Both eight-horn clusters have a 45$^\circ$~azimuthal offset between adjacent antennas, with a 22.5$^\circ$~azimuthal offset between the top two rings. The antennas in the sixteen-horn ring are offset from each other by 22.5$^\circ$. All of the antennas, in both the upper two eight-horn rings and the sixteen-horn ring, are canted down 10$^\circ$~below horizontal to optimize their sensitivity, based on Monte Carlo studies of the effects of the tapering of the antenna beam when convolved with the neutrino arrival directions and energy spectrum.
The nearly circular plane established by the sixteen-antenna ring, near the base of the gondola, provides a large deck area for most of the other payload components. This region is covered by lightweight panels made of Dacron sailcloth on the topside and a reflective layer on the underside to maintain thermal balance. The ANITA electronics housing, the NASA/CSBF SIP, and the battery packs are mounted on the structural ribs of the deck. Most external metallic structure is painted white to avoid overheating in the continuous sunlight, and critical components such as the instrument housings and receivers are covered with silver-backed Teflon-coated tape to provide high reflective rejection of solar radiation and high emissivity for internal heat dissipation. \subsection{Power subsystem.} The ANITA power system is composed of a photovoltaic (PV) array, a charge controller, batteries, relays, and DC-to-DC converters. The PV array is omni-directional, consisting of eight panels configured in an octagon, with the panels hanging vertically (see Fig.~\ref{payload06a}). Although PV panels flown on high-altitude balloons are typically oriented at $\sim 23^\circ$ to the horizontal, in Antarctica the large solar albedo from the ice results in more irradiance incident on the panels (for most conditions) if they hang vertically. Each panel consists of 84 solar cells electrically connected in series. They were mounted on frames made of aircraft-grade spruce wood with a coarse webbing (Shearweave style 1000-P02) stretched on the frames. The PV arrays were designed and fabricated by SunCat Solar. The solar cells used were SunPower A-300 cells with a rated efficiency of 21.5\%, dimensions 12.5~cm square, and thickness 260~$\mu$m. Bypass diodes were placed in parallel with successive groups of 12 cells within a panel (7 diodes per panel) to mitigate the effect of a possible single-cell open-circuit failure during flight. Additionally, a blocking diode was placed between each panel output and the charge controller to prevent cross-charging of panels with different output voltages resulting from different illuminations and temperatures. To reduce Fresnel reflection losses for high-refractive-index silicon ($n=3.46$ at 700~nm), the silicon cells had two anti-reflective (AR) coatings applied. An AR coating with refractive index $n=1.92$ was applied by the solar cell manufacturer. Additionally, during fabrication of the panels by SunCat Solar, a second AR coating with refractive index 1.47 was applied. This results in calculated Fresnel losses of 13--14\% for incidence angles from 0 to 40$^{\circ}$. The maximum power point (MPP) voltage and current generated by these cells under standard test conditions (STC) are 0.560~V and 5.54~A, respectively. However, the actual voltage and current vary considerably depending upon the irradiance and cell temperature. The single-cell temperature coefficient for the voltage is $-1.9$~mV/$^\circ$C. PV panel temperatures varied over the range of $-10\,^\circ$C to $+95\,^\circ$C, depending upon the irradiance incident upon the cells. The temperatures were measured by semiconductor temperature sensors (AD590) glued to the backs of cells. PV array circuit components (diodes) also introduce losses in the output voltage and power. The actual measured PV voltage input to the charge controller during the flight ranged from 42.5 to 47 V (in good agreement with estimates using the cell temperature and temperature coefficient), and the current was about 9~A, giving a total power of about 400 W.
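These PV numbers are internally consistent, as the following back-of-envelope check suggests; the assumed average cell temperature is our own illustrative choice, not a value from the flight data:
\begin{verbatim}
# Consistency check on the quoted PV string voltage
n_cells, v_mpp = 84, 0.560        # cells per panel, MPP voltage at STC [V]
tc = -1.9e-3                      # temperature coefficient [V/C per cell]

print("string voltage at STC:", round(n_cells * v_mpp, 1), "V")   # 47.0 V
# With an assumed average cell temperature ~30 C above the 25 C STC:
v_hot = n_cells * (v_mpp + tc * 30.0)
print("warm-panel estimate  :", round(v_hot, 1), "V")  # ~42 V, cf. 42.5-47 V
\end{verbatim}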
The omni-directional array is inherently an unbalanced system; i.e., the irradiance incident on each panel differs. For a given orientation of the gondola, some panels are directly irradiated by sunlight plus solar albedo from the ice and others are irradiated only indirectly from solar albedo. Additionally, for those that are directly irradiated, the solar incidence angle differs. This results in individual panels that generate very different currents at any given time. Because of the differing temperatures, the individual panels also feed significantly different output voltages into the charge controller (the voltage differences are small compared to the current differences). As mentioned above, the blocking diodes prevent cross-charging of panels generating different voltages. When using an unbalanced array, to achieve the maximum power output it is important to use a charge controller that senses and operates at the actual MPP, as opposed to one that operates at a constant offset voltage from the array open-circuit voltage. We used an Outback MX-60 charge controller to supply power to the ANITA instrument. Conductive heat sinks were installed on the power FETs and transistors, and the heat was conducted to the instrument radiator plate. We operated in the 24~V mode and flew nine pairs of 12~V Panasonic LC-X1220P (20~Ah) lead-acid batteries that were charged by the charge controller and would have provided 12 hours of power in case of PV array failure. The Instrument Power box consisted of the MX-60 charge controller, solid-state power relays, and Vicor DC/DC converters for the external radio-frequency conditioning module (RFCM) amplifiers. The main power relays for the cPCI crate were controlled by discrete commands from the SIP. All other solid-state relays were controlled by the CPU, either under software control or by commands from the ground. The DC/DC box consisted of Vicor DC/DC converters which provided the $+5$, $+12$, $-12$, $+3.3$, and $+1.5$~V supplies required by the cPCI crate and peripherals. All voltages and currents were read by the housekeeping system. \begin{figure}[!htb] \centering \centerline{\includegraphics[width=3.75in]{qrhorn.eps}~~\includegraphics[width=3.0in]{SeaveyPlot.eps}} \begin{small} \caption{ Left: A photograph of an ANITA quad-ridged dual-polarization horn. Right: Typical transmission coefficient for signals into the quad-ridged horn as a function of radio frequency. \label{qrhorn}} \end{small} \end{figure} \subsection{Radio Frequency subsystem} \subsubsection{Antennas.} Figure~\ref{payload06a} shows the ANITA payload configuration just prior to launch in late 2006 at Williams Field, Antarctica. The individual horns are a custom design produced for ANITA by Seavey Engineering, Inc., now a subsidiary of Antenna Research Associates, Inc. These horns are the primary ANITA antennas, and may be thought of as a flared quad-ridged waveguide section; the back of the horn does in fact terminate in a short section of waveguide. The mouth is of order 0.8~m across, and the horns can be close-packed with minimal disturbance of the beam response, since the fringing fields outside the mouth of the horn are small. Figure~\ref{qrhorn} shows an individual antenna prior to painting, and a corresponding typical transmission curve indicating the efficiency for coupling power into the antenna as a function of radio frequency.
\begin{figure}[!htb] \centering \centerline{\includegraphics[width=3.5in]{comp_Vgain.eps}~~\includegraphics[width=3.5in]{comp_Hgain.eps}} \begin{small} \caption{ Left: Antenna vertical-polarization directivity in dB relative to the peak gain, for both E- and H-planes. Right: the same quantities for the horizontal polarization. Nine different antennas are shown. Gains are frequency-averaged for a flat-spectrum impulse across the band and shown as a function of angle. \label{VHgain}} \end{small} \end{figure} The average full-width-at-half-maximum (FWHM) beamwidth of the antennas is about $45^{\circ}$, with a corresponding directivity gain (the ratio of $4\pi$ to the main beam solid angle) of approximately 10 dBi averaged across the band. Fig.~\ref{VHgain} illustrates this for nine different ANITA antennas, showing the frequency-averaged response relative to peak response along the principal antenna planes (E-plane and H-plane for both polarizations) as a function of angle. The choice of beam pattern for these antennas also determined the $22.5^{\circ}$ angular offsets in azimuth, as this was chosen to provide good overlap between the response of adjacent antennas while still maintaining reasonable directivity for determination of source locations. By arranging an azimuthally symmetric array of two cluster groups of 8+8 (upper) and 16 (lower) antennas, each with a downward cant of about $10^{\circ}$, we achieve complete coverage of the horizon down to within $40^{\circ}$ of the nadir, virtually all of the observable ice area. The antenna beams in this configuration overlap within their 3 dB points, giving redundant coverage in the horizontal plane. The $\sim 3$~m separation between the upper and lower clusters of 16 antennas provides a vertical baseline for establishing pulse direction in elevation angle. Because the pulse from a cascade is known to be highly linearly polarized, we convert the two linear polarizations of the antenna into dual circular polarizations using standard $90^{\circ}$ hybrid phase-shifting combiners. This is done for two reasons: first, a linearly polarized pulse will produce equal amplitudes in both circular polarizations, and thus some background rejection is gained by accepting only linearly-polarized signals; and second, the use of circular polarizations removes any bias in the trigger toward horizontally or vertically polarized impulses. \begin{figure}[htb!] \centering \centerline{\includegraphics[width=3.55in]{comp_height_el0az0.eps}~~\includegraphics[width=3.25in]{Impulse07.eps}} \vspace{3mm} \caption{ Left: Antenna impulse response as measured for nine ANITA antennas, here in units that also give the instantaneous effective height~\cite{Mio06}. Right: ANITA impulse response as it appears at various stages of the signal chain.\label{impulse}} \end{figure} Because ANITA's sensitivity to neutrino events depends crucially on its ability to trigger on impulses that rise above the intrinsically impulsive thermal noise floor, ANITA's antennas and receiving system must preserve the narrow impulsive nature of any signal that arrives at the antenna. Fig.~\ref{impulse} shows the measured behavior of the system impulse response at various stages. On the left, we show details of the measured impulse response of nine of the flight antennas, in units that give the instantaneous effective height $h_{eff}(t)$.
The actual voltage time response $\mathcal{V}(t)$ at the antenna terminals, assuming they are attached to a matched load, is then just the convolution of this function with the incident field $\mathcal{E}(t)$: $$\mathcal{V}(t) ~=~ \frac{1}{2} \mathcal{E}(t) \otimes h_{eff}(t)$$ where the convolution operator is indicated by the symbol $\otimes$. This equation can also be expressed in an equivalent frequency-domain form, though in that case the quantities are in general complex. On the right, we show the evolution of an Askaryan impulse through the ANITA system. The initial Askaryan impulse (a) is completely unresolved by the ANITA system, since its intrinsic width is of order 100~ps (reference~\cite{Mio06} provides the actual measured data that form the basis for this). The horn antenna response (b) includes group delay at the edges of the frequency band, which leads to the low-frequency tail, and such group delay variation is more pronounced after the system bandpass filters are applied (c). However, the voltage envelope is slightly misleading, as the total power (or intensity) response (d) of the system is still confined to 1.5~ns FWHM. \begin{figure*}[htb!] \centering \includegraphics[width=6.5in]{ANITA07schematic.eps} \begin{small} \caption{Block diagram of the primary RF subsystems for ANITA.} \label{Block07} \end{small} \end{figure*} \subsubsection{Receivers.} The RF front end for ANITA consists of a bandpass filter, followed by a low-noise-amplifier (LNA)/power-limiter combination, followed in turn by a second-stage booster amplifier. An example of one of the receivers is shown in Fig.~\ref{RFCM}(left), with its enclosure cover removed to show the internal components. These elements are all in close proximity to the horn antennas to avoid transfer losses through cables, and are enclosed in a Faraday box for additional EMI immunity. Once the signals are boosted by the second-stage amplifier, they are transmitted via coaxial cable to the receiving section of the trigger and digitizer, which is contained in a large shielded Faraday enclosure. Once the signals arrive at this location, a second bandpass filter is applied to remove the out-of-band noise from the LNA, and the signals are then ready for insertion into the trigger/digitizer system. \begin{figure}[htb!] \centering \centerline{\includegraphics[width=3.85in]{rfcm_photo.eps}~~\includegraphics[width=3.05in]{NF1.eps}} \begin{small} \caption{ Left: Photograph of an ANITA receiver module. Right: Gain and noise measurements for all ANITA receivers. \label{RFCM}} \end{small} \end{figure} Figure~\ref{RFCM}(right) shows a composite overlay of measurements of the total gains and noise temperatures for all ANITA channels, including cable attenuation losses up to the inputs of the digitizers. The gain slope is dominated by the intrinsic amplifier response. The noise temperature arises from the LNAs, with about 90~K intrinsic amplifier noise, combined with the front-end bandpass filter, which creates most of the additional $\sim 50$~K due to emissivity along the signal path. The combined average noise temperature of $\langle T_{sys}\rangle \simeq 140$~K was found to contribute at most about 40--45\% of the total noise while at float, as we describe in a later section. Several of the receivers also had a coupler section added to allow for insertion of a calibration pulse into their respective antennas; these have a higher noise figure, but were not used for the primary signal triggering.
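The convolution relation $\mathcal{V}(t)=\frac{1}{2}\mathcal{E}(t)\otimes h_{eff}(t)$ quoted earlier in this subsection is straightforward to evaluate numerically. In the Python sketch below, both waveforms are analytic stand-ins chosen for illustration (a $\sim$100~ps bipolar Askaryan-like field and a damped oscillation near the band center), not the measured curves of Fig.~\ref{impulse}:
\begin{verbatim}
import numpy as np

dt = 0.05e-9                                 # 50 ps sampling
t = np.arange(0.0, 20e-9, dt)

g = np.exp(-((t - 2e-9) / 50e-12) ** 2)      # ~100 ps Gaussian
E = np.gradient(g, dt)                       # toy bipolar field impulse
E /= np.abs(E).max()

f0, tau_r = 0.6e9, 1.0e-9                    # band center, ringdown time
h_eff = np.exp(-t / tau_r) * np.sin(2 * np.pi * f0 * t)  # toy eff. height

V = 0.5 * np.convolve(E, h_eff)[: t.size] * dt   # antenna-terminal voltage
P = V ** 2                                       # power (intensity) response
fwhm = dt * np.count_nonzero(P > 0.5 * P.max())  # rough envelope width
print(f"power response FWHM ~ {fwhm * 1e9:.1f} ns")  # ns scale, cf. 1.5 ns
\end{verbatim}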
\subsection{Trigger and Digitizer Subsystem} \subsubsection{Trigger System.} In a long-duration balloon flight, primary power is solar, which places tight restrictions on the payload power budget. Just as severe is the need to eliminate the heat generated by the payload electronics, which places practical limits of a kW or less on the entire payload instrumentation. ANITA seeks to digitize a large number of radio-frequency channels at rates of at least Nyquist frequencies for a bandwidth that extends up to 1.2~GHz, implying Nyquist sampling of at least 2.4 Gsamples/second. Commercial digitizers that run continuously at such rates are generally high-power devices, typically 5 watts per channel at the time of ANITA electronics development. For 80 channels, the required power for only the digitizers alone would be several hundred Watts, and the downstream electronics then required to parse and decimate the huge data rate (several Terabits per second) would use a comparable amount of power. Folding in the requirements for amplifiers and other analog and digital power needs, the payload power budget was not viable using commercial digitizers. To make this problem tractable within the power, space, and weight budget of an LDB payload, we elected to develop a separate analog trigger system which would detect the presence of an incoming plane-wave impulse, then only digitize a time window around this pre-detected signal. The as-built design for the low-powered RF trigger and digitization electronics is summarized in Table~\ref{specs}. A divide-and-conquer strategy to address the power and performance issues raised by these specifications is shown in Fig.~\ref{block_diag}. \begin{table}[h] \begin{center} \caption{ANITA Electronics Specifications.} \begin{tabular}{|l|l|} \hline \textbf{Design Parameter} & \textbf{As-built Value \& Comments} \\ \hline \# of RF channels & 80 = 32 top, 32 bottom, 8 monitor \\ \hline Sampling rate & 2.6 GSa/s, greater than Nyquist \\ \hline Sample resolution & $ \geq$ 9 bits = 3 bits noise + dynamic range \\ \hline Samples in window & 260 for a 100 ns window \\ \hline Buffer depth & 4 to allow rapid re-trigger \\ \hline Power/channel & $ <$ 10W including LNA \& triggering \\ \hline \hline \# of Trigger bands & 4, with roughly equal power per band \\ \hline \# of Trigger channels & 8 per antenna (4 bands $\times$ 2 pols.) \\ \hline Trigger threshold & $\leq 2.3\sigma $ above Gaussian thermal noise \\ \hline Accidental trigger rate & $\leq 5$ Hz, gives ``heartbeat'' rate \\ \hline Raw event size & $\sim 35$ kB, uncompressed waveform samples \\ \hline \end{tabular} \label{specs} \end{center} \end{table} \begin{figure}[h] \includegraphics[width=6.35in]{block_diag.eps} \caption{In order to minimize the power required, signals from the antennas are split into analog sampling and trigger paths. To provide trigger robustness, the full 1~GHz bandwidth is split into 4 separate frequency bands, which serve as separate trigger inputs. \label{block_diag} } \end{figure} \paragraph{Triggering at Thermal Noise Levels} Actual neutrino signals are not expected to be observed at a rate greater than of order several per hour at most, if all previous bounds are saturated. However, to avoid creating too restrictive a trigger condition, the trigger was designed not to depend on exact time-alignment (or phase) of the incoming signal, over a time window of order 10 ns.
This allows accidentals from random thermal noise to also trigger the system, so that a continuous stream of events is recorded, allowing continuous sampling of the instrument health. Since the thermal noise floor in radio measurements is very well defined by the overall system temperature, this ensures that the sensitivity of the instrument remains high. As long as these thermal noise triggers do not saturate the data acquisition system, causing deadtime to actual events of interest, this methodology is effective. The thermal noise events so recorded have negligible probability of time alignment to mimic an actual signal. Appendix~\ref{Thermal_App} gives further results demonstrating this conclusion. \paragraph{Trigger Banding} In order to provide optimal robustness in the presence of unknown but potentially incapacitating Electro-Magnetic Interference (EMI) backgrounds, a system of non-overlapping frequency bands has been adopted. Typical anthropogenic backgrounds are narrow-band, and while a strong emitter in a given band would likely raise the trigger threshold (at constant rate) such that it would be effectively disabled, the other trigger bands could continue to operate at thermal noise levels. \begin{figure*}[thb!] \centerline{\epsfig{file=Trigger1.eps,width=2.65in}\includegraphics[width=3.85in]{run220trigscan.eps}} \caption{ Left: The baseline single-antenna (L1) triggering is illustrated schematically. In practice the double-sided voltage thresholds shown are implemented by first using a tunnel-diode square-law detector to rectify the band voltage signals into a unipolar pulse. Right: Plot of measured global (L3) trigger efficiency vs. threshold in standard deviations above the RMS thermal noise voltages for ANITA, using a shaped Askaryan-like pulse from a pulse generator under pure thermal noise conditions.\label{trigger}} \end{figure*} Signals from the vertical and horizontal polarizations of the quad-ridge horn antennas are amplified and conditioned in the RF chain described in the preceding subsection. These RF signals are then split into two paths, a trigger (lower) and digitizer (upper) path as indicated in Fig.~\ref{Block07}. The trigger path first passes through the $90^\circ$ hybrid combiner which converts the H \& V polarizations to LCP and RCP. The signals then enter a sub-band splitter section where they are divided into frequency bands with centers at $\nu_c = $265, 435, 650, and 990~MHz and bandwidths of $\Delta\nu =$130, 160, 270, 415~MHz, or fractional bandwidths $\Delta\nu/\nu_{c} \simeq 44\%$ on average. This partitioning is performed in order to provide rejection power against narrow-bandwidth anthropogenic backgrounds. In contrast, as true Askaryan pulses are temporally compact and coherent, significant RF power is expected across several bands. In addition, the thermal noise in each of the bands is statistically independent, and requiring a multi-band coincidence thus permits operation at a lower effective thermal noise floor. This is illustrated schematically in Fig.~\ref{trigger}. \paragraph{Level 1 trigger.} The frequency sub-band signals are then led into a square-law-detector section which uses a tunnel-diode, a common technique in radio astronomy.
The resulting output unipolar pulses (negative-going in this case, typically 7~ns FWHM) are filtered and sent to a local-trigger unit (a field-programmable gate array or FPGA) on one of the nine 8-input-channel Sampling Unit for Radio Frequencies (SURF) boards, within the compact-PCI ANITA crate, and therefore attached to the host computer and global trigger bus. Within the SURF FPGA, the square-law detector outputs are led through a discriminator section with a programmable threshold. The single-band thresholds are set in a noise-riding mode where they servo on their rate, with a time constant of several seconds, maintaining typical rates of 2.6-2.8~MHz under pure thermal noise conditions, corresponding approximately to the $2.3\sigma_V$ ($\sigma_V = V_{rms}$, the root-mean-square received voltage) level mentioned above. The SURF FPGA also then applies the single-antenna trigger requirement: the eight sub-bands generate a 10~ns logic level for each signal above threshold, and when any three of these logic gates overlap, a single-antenna trigger is generated. These triggers are denoted Level 1 (L1) triggers, and occur at typical rates of 150~kHz per antenna for thermal-noise conditions. To determine the expected ANITA L1 accidental rate $R_{L1}$ of $k$-fold coincidences among the $n=8$ sub-band single-antenna channels, consider a trial event, defined by a hit in any one of the $n$ channels, which then triggers a logic transition out of the discriminator to the logic TRUE state for a duration $\tau$. Then consider the probability during this trial that $k-1$ or more ($k=3$ for ANITA) additional sub-band discriminator logic signals arrive while the first is still in TRUE state, corresponding to a hit above threshold for that channel. The rate of TRUE states per channel is $r$. We do not for now assume $r\tau \ll 1$. The probability to observe exactly $k-1$ out of $n-1$ additional channels in the TRUE state after one channel has changed its state is given by the binomial (e.g., the $k$ out of $n$ ``coin toss'') probability: $$ P(k-1:n-1) = \frac{(n-1)!}{ (k-1)!(n-k)!} p^{k-1} (1-p)^{n-k} ~.$$ The single channel ``coin-toss'' probability $p$ is just given by the fractional occupancy of the TRUE state per channel: $p~=~r\tau$. The probability per trial to observe $k-1$ or more out of $n-1$ channels is then just the cumulative probability of the binomial distribution: $$ P(\geq k-1:n-1) = \sum_{j=k-1}^{n-1} \frac{(n-1)!}{ j!(n-1-j)!} (r\tau)^j [1-r\tau]^{n-1-j} ~.$$ For $r\tau \ll 1$ as it often is in practice, this simplifies to $$ P(\geq k-1:n-1) \simeq \frac{(n-1)!}{ (k-1)!(n-k)!} (r\tau)^{k-1} $$ since only the leading term in the sum contributes significantly and the term $1-r\tau \simeq 1$. The rate is then determined by multiplying the single-trial probability by the number of ensemble trials per second, which is just equal to the total number of channels times the singles rate per channel. The singles rate per channel is given simply by $r$, and the total singles rate across all channels is $nr$. Thus the total rate in the limit of $r\tau \ll 1$ is: $$ R_{L1} = nr P(\geq k-1:n-1) \simeq n \frac{(n-1)!}{ (k-1)!(n-k)!} r^k\tau^{k-1} = \frac{n!}{ k!(n-k)!} k r^k\tau^{k-1}~. $$
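This estimate is easy to evaluate numerically; the sketch below (Python) plugs in the values quoted above, with the one assumption that the effective coincidence window $\tau$ is set by the $\sim$7~ns width of the unipolar pulses rather than the full 10~ns gate.
\begin{verbatim}
from math import comb

n   = 8        # sub-band channels per antenna (4 bands x 2 pols.)
k   = 3        # required coincidence multiplicity
r   = 2.6e6    # singles rate per channel, Hz
tau = 7e-9     # effective overlap window, s (assumed ~ pulse FWHM)

# Leading-order accidental rate: R_L1 = k * C(n,k) * r^k * tau^(k-1)
R_L1 = k * comb(n, k) * r**k * tau**(k - 1)
print(f"accidental L1 rate: {R_L1/1e3:.0f} kHz per antenna")
# -> ~145 kHz, consistent with the ~150 kHz observed in thermal noise
\end{verbatim}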
\paragraph{Level 2 trigger.} When a given SURF module detects an L1 trigger, indicating either a possible signal or (most probably) a thermal noise fluctuation, it reports this immediately to the Trigger Unit for Radio Frequencies (TURF) module, which occupies a portion of the compact-PCI backplane common to all of the SURF modules. The TURF contains another FPGA, which determines whether a level 2 (L2) trigger, which corresponds to two L1 events in any adjacent antennas of either the upper or lower ring, within a 20~ns window, has occurred. L2 triggers occur at a rate of about 2.5~kHz per antenna pair, or about 40~kHz aggregate rate for thermal noise. \paragraph{Level 3 trigger.} If a pair of L2s occur in the upper and lower rings within a 30~ns window and any up-down pair of the antennas share the same azimuthal sector (known as a ``phi sector''), a level 3 (L3) global trigger is issued, and the digitization of the event proceeds. These occur at a rate of about 4-5~Hz for thermal noise. Fig.~\ref{trigger}(right) shows a measurement of the effective threshold for L3 triggers in terms of the peak pulse SNR above thermal noise. Here three upper and three lower antennas were stimulated with a shaped pulse and the L3 rate was measured as a function of the input pulse SNR to estimate the effective global threshold, here in Gaussian standard deviations above the thermal noise RMS voltage. ANITA begins to respond at about $4\sigma_V$, reaches of order 50\% efficiency at $5.4 \sigma_V$, and is fully efficient at of order $7\sigma_V$. In Appendix~\ref{Thermal_App}, we further analyze the rate of accidentals in terms of their ability to reconstruct coherently to mimic a true signal event, and we find that the chance probability for this is of order 0.003 events for the ANITA flight, presenting a negligible background. \begin{figure}[htb!] \centerline{\includegraphics[width=3.25in]{sigchainloss.eps}~~\includegraphics[width=3.4in]{SURFatten.eps}} \caption{Left: Detailed view of input signal chain insertion losses, up to the input to the SURF. The losses in the trigger path are slightly lower since the 20~ns delay cable is omitted. Right: Typical insertion losses for the SURF digitizer inputs used to record the waveforms. The rolloff above about 850~MHz leads to a loss of SNR of the recorded waveforms compared to the actual RF trigger path. \label{SURFloss} } \end{figure} \subsubsection{Digitizer System.} The upper path in Figure~\ref{Block07} is the digitization path. A low-power, high sampling-speed Switched Capacitor Array (SCA) continuously samples the raw RF inputs over the entire 1 GHz of analog bandwidth defined by the upstream RF conditioning, at a sample rate of 2.6~Gsamples/s. Sampling is halted and the analog waveform samples held for readout upon the fulfillment of a trigger condition. The SCA sampler, which does not actually digitize its stored samples until commanded to do so for a trigger, uses far lower power than traditional high speed continuous-digitizing samplers (such as oscilloscopes). Without the custom development of this technology by G. Varner of the ANITA team, the power budget for ANITA would have grown substantially, from of order 1 W/ch to perhaps 10~W per channel for a commercial digitizer as was used for ANITA-lite~\cite{Anitalite}. In addition, the continuously sampled data would have added a processing load of order 200~Gbyte/sec to the trigger system. Fig.~\ref{SURFloss} shows measurements of the signal chain and SURF channel insertion loss vs. radio frequency.
On the left, the losses up to the input of the SURFs are shown; these are primarily cable and second-stage bandpass filter losses. Similar losses apply to the trigger path as well, though slightly lower since there is no 20~ns delay cable in that path. On the right, the SURF insertion losses are shown, and these are unique to the waveform recording path. The losses above about 850~MHz tend to significantly reduce the intrinsic peak voltages in the most impulsive waveforms compared to what is seen by the analog trigger inputs. These amplitude losses can be corrected to some degree as shown in a later section, but there is still a net loss of SNR in the deconvolved waveforms compared to what is seen by the trigger path. It is evident from this plot that ANITA's digitizers did not fully achieve the design input bandwidth span of 200-1200 MHz; the last quarter of this band has reduced response compared to the design goal. The main impact of this is not to reduce the trigger sensitivity of the instrument, since the digitizer does not constrain the input bandwidth of the trigger system. Rather, the primary impact is in evaluation of potential neutrino candidates, for which we would like to be able to reconstruct an accurate spectral density for the received radio signal. For the current digitizer system, the reconstructed spectral content above 900~MHz will thus be subject to errors that increase with frequency; however, this frequency region is also the region where ice attenuation is rising quickly~\cite{icepaper}. \subsection{Navigation, attitude, \& timing} \paragraph{Absolute Orientation.} In order to geometrically reconstruct neutrino events, accurate position, altitude, absolute time, and pointing information are required. To provide such data on an event-by-event basis, a pair of Global Positioning System (GPS) units was used. They provide more than sufficient accuracy to fulfill the science requirements (see Table~\ref{GPS}). In addition these units provide the ability to synchronously trigger and read out the system on an absolute timing mark (such as the nearest second), a feature which is essential to the ground-to-flight calibration sequence, where a ground transmitter needs to be globally synchronized to the system during flight, including a propagation delay offset. ANITA had a mission-critical requirement for accurate payload orientation knowledge, to ensure that the free rotation of the payload would not preclude reconstruction of directions for events at the sub-degree level of accuracy. Such measurements were accomplished with a redundant system of 4 sun-sensors, a magnetometer, and a differential GPS attitude measurement system (Thales Navigation/Magellan ADU5). These systems performed well in flight and met the mission design goals. In calibration done just prior to ANITA's 2006 launch, we measured a total $(\Delta \phi)_{RMS} = 0.071^{\circ}$, very close to the limit of the ADU5 sensor specification, and well within our allocated error budget. The ADU5 is connected to the flight computer with a pair of RS-232 interfaces; one carries the attitude information packets for the housekeeping readout, and the second reports the time within the UTC second at which a digital trigger line was asserted at the ADU5. The second GPS unit, a G12 sensor from Thales, provides an independent trigger-timing measurement over one serial line, as well as timing information for the flight computer's NTP (Network Time Protocol) internal clock. Position and attitude information is updated every second.
The NTP server also receives an update every second to maintain accurate overall clock time on the flight computer. Trigger times are also tied directly to GPS time: the GPS second comes from the 1-second readout, and the fraction of a second from the phototiming data block. Additional pointing information is derived from the sun sensors and a tip-tilt sensor mounted near the experiment's center of mass. They provide a simple crosscheck to the attitude data with very different systematics and also offer a measure of redundancy. \begin{table*}[bt] \caption{ Navigation, attitude, and timing sensor requirements and provided accuracy.} \label{GPS} \begin{center} \begin{tabular}{|l|l|l|l|} \hline Parameter &Determination method & Required Accuracy & System Accuracy \\ \hline \hline Position/Altitude & Ordinary GPS & 10m horiz./20m vert.& 5m Horizontal/10m Vertical \\ UTC & GPS Phototiming pulse & 20ns & $<10$ns \\ Pointing & Short-baseline differential GPS & 0.3$^{\circ}$ rotation/0.3$^{\circ}$ tip & $<0.07^{\circ}$/$<0.14^{\circ}$ \\ Pointing (backup) & Sun-sensor \& tip-tilt sensor & 0.3$^{\circ}$ rotation/0.3$^{\circ}$ tip & 1$^{\circ}$/1$^{\circ}$ \\ \hline \end{tabular} \end{center} \end{table*} \section{Flight Software \& Data Acquisition} The ANITA-I flight computer was a standard c-PCI single board computer, based on the Intel Mobile Pentium III CPU. The operating system was Red Hat 9 Linux, which was selected due to driver availability. The design philosophy behind the ANITA flight software was to create, as far as possible, autonomous software processes. Inter-process data transfer consists of FIFO queues of data files and links on the system ramdisk; these queues are managed using standard Linux filesystem calls. Process control is achieved using configuration files and a small set of POSIX signals (corresponding to: re-read config files, restart, stop, and kill). A schematic overview of the flight software processes is shown in Figure~\ref{f:overview}. \begin{figure}[hbt] \centering \resizebox{\textwidth}{!}{\includegraphics{ryanOverviewPlain.eps}} \caption{A schematic overview of the flight software processes. The open arrows show software interaction with hardware components, and closed arrows indicate data transfer between processes. The telemetry data flow across the ramdisk is indicated by the dot-dashed (purple) line, and permanent data storage to the on-board drives is shown by the dotted (red) lines.} \label{f:overview} \end{figure} The flight software processes break down into three main areas, which will be discussed in the following sections. These areas are: \begin{enumerate} \item Data acquisition -- processes which control specific hardware and through which all data from that hardware is obtained. \item Event processing -- processes which augment or analyze the waveform data in the on-line environment. \item Control and telemetry -- processes which control the telemetry hardware and those which are responsible for process control. \end{enumerate} \subsection{Data Acquisition} The bulk of the data, around 98\%, acquired during the flight is in the form of digitized waveform data from the SURF boards. The remaining 2\% consists of auxiliary information necessary to process and interpret the waveform data (payload position, trigger thresholds, etc.) and data that is used to monitor the health of the instrument during flight (temperatures, voltages, disk spaces, etc.).
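The file-queue style of inter-process transfer described above can be illustrated with a minimal sketch (Python); the directory name and the atomic-rename idiom here are illustrative assumptions, not the flight code itself.
\begin{verbatim}
import os, tempfile

QUEUE_DIR = "/tmp/anita_queue"   # hypothetical ramdisk queue directory
os.makedirs(QUEUE_DIR, exist_ok=True)

def enqueue(payload: bytes, name: str) -> None:
    # Write to a temporary file, then atomically rename it into the
    # queue, so a consumer never sees a half-written file.
    fd, tmp = tempfile.mkstemp(dir=QUEUE_DIR)
    with os.fdopen(fd, "wb") as f:
        f.write(payload)
    os.rename(tmp, os.path.join(QUEUE_DIR, name))

def dequeue_all():
    # A consumer process lists, reads, and unlinks files in name order.
    for name in sorted(os.listdir(QUEUE_DIR)):
        path = os.path.join(QUEUE_DIR, name)
        with open(path, "rb") as f:
            yield name, f.read()
        os.unlink(path)
\end{verbatim}
The appeal of this design is that each daemon can be stopped, restarted, or debugged independently, with the pending queue state surviving on the ramdisk.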
\subsubsection{Waveform Data} The process that is responsible for the digitization and triggering hardware, the SURF and TURF, is Acqd (the Acquisition Daemon). The Acqd process has four main tasks: \begin{itemize} \item Acquiring waveform data from the SURFs \item Acquiring trigger and timing data from the TURF (via TURFIO). \item Acquiring housekeeping data from the SURF (scaler rates, read-back thresholds, and RF power). \item Setting the thresholds and trigger band masks to control the trigger, dynamically adjusting the thresholds to maintain a constant rate. \end{itemize} Once the TURF has triggered an event and the SURFs have finished digitizing, the event data is available for transfer across the c-PCI backplane to the flight computer. The flight computer polls the SURFs to check when an event has finished digitization and the data is ready to be transferred across the c-PCI backplane. An event consists of 260 16-bit waveform data words per channel; there are 9 channels per SURF and 9 SURFs in the c-PCI crate. A complete raw event is approximately 41~kB. To achieve better compression (see Section~\ref{s:compression}), the raw waveform data is pedestal-subtracted before being written to the queue for event processing. \subsubsection{Trigger Control} In addition to acquiring the waveform and trigger data, Acqd is also responsible for setting the thresholds and trigger band masks that control the trigger. There are three handles through which Acqd can control the trigger: \begin{itemize} \item The single channel trigger thresholds (256 channels). \item The trigger band masks (8 channels per antenna). \item The antenna trigger mask (32 antennas in total). \end{itemize} The default mode of operation is to have all of the masks off, such that every trigger band and every antenna can participate in the trigger. In the thermal regime, i.e.\ away from anthropogenic RF noise sources such as camps and bases, the trigger control operates by dynamically adjusting the single channel thresholds to ensure that each trigger channel triggers at the same rate (typically 2-3~MHz). The dynamic adjusting of the thresholds is necessary as even away from man-made noise sources the RF power in view varies with the temperature of the antenna and its field of view, i.e.\ with the position of the sun and galactic center with respect to the antenna. The thresholds are varied using a simple PID (proportional-integral-differential) servo loop that was tuned in the laboratory using RFCMs with terminated inputs; a minimal illustrative sketch of such a loop is given below. During times when the balloon is in view of large noise sources, such as McMurdo station, a different triggering regime is necessary to avoid swamping the downstream processes with an unmanageable event rate. To allow for this, all of the trigger control options are commandable from the ground (see Section~\ref{s:commanding} for more details on commanding). Using these commands some of the available options for controlling the trigger rate are: \begin{itemize} \item Adjust the global desired single trigger channel rate. \item Adjust individual single channel rates independently. \item Remove individual trigger channels (i.e. frequency bands) from the antenna level (L1) trigger. \item Remove individual antennas from the L2 and L3 triggers. \end{itemize}
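The following sketch (Python) shows the general shape of such a noise-riding threshold servo; the gain constants and the one-second update cadence are illustrative assumptions rather than the flight values.
\begin{verbatim}
class ThresholdServo:
    """Toy PID servo: adjust a discriminator threshold so that the
    measured singles rate tracks a goal rate."""
    def __init__(self, goal_rate_hz, kp=0.5, ki=0.1, kd=0.05):
        self.goal = goal_rate_hz
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, measured_rate_hz, dt=1.0):
        # err > 0 means the channel fires too often -> raise threshold
        err = (measured_rate_hz - self.goal) / self.goal
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        # returned value: threshold adjustment, e.g. in DAC counts
        return self.kp * err + self.ki * self.integral + self.kd * deriv

servo = ThresholdServo(goal_rate_hz=2.6e6)
step = servo.update(measured_rate_hz=2.9e6)  # rate high -> positive step
\end{verbatim}
\subsubsection{Housekeeping Data} In addition to the waveform data, housekeeping data is also continuously captured, both for use in event analysis and also for monitoring the health of the instrument during flight.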
Table~\ref{t:housekeeping} is a summary of the various types of housekeeping data acquired by the flight software processes. \begin{table}[hbt] \centering \begin{tabular}{| c | l | c |} \hline Process & Housekeeping Data & Rate \\ \hline Acqd & Trigger rates and average RF power & up to 5\,Hz \\ Calibd & Relay status & 0.03\,Hz \\ GPSd & GPS position, velocity, attitude, satellites, etc. & up to 5\,Hz \\ Hkd & Voltages, currents, temperatures, pressures, etc. & up to 5\,Hz \\ Monitord & CPU and disk drive status & 0.03\,Hz\\ \hline \end{tabular} \caption{The types of housekeeping data acquired by flight software processes.} \label{t:housekeeping} \end{table} \subsection{On-line Event Processing} At altitude the bandwidth for downloading data from the payload to the ground systems is very limited (see Section~\ref{s:datadown}). In order to maximize the usage of this limited resource, the events are processed on-line to determine the event priority, and they are compressed and split into a suitable format for telemetry. \subsubsection{Prioritization} The Prioritizerd daemon is responsible for determining the priority of an event. This priority is used to determine the likelihood of a given event being telemetered to the ground during flight. The prioritizer looks at a number of event characteristics to determine priority. The hierarchical priority determination is described below: \begin{itemize} \item Priority 9 -- If too many waveforms (configurable) have a peak in the FFT spectrum (configurable), the event is given this low priority, to veto events from strong narrow-band noise sources. \item Priority 8 -- If too many channels peak simultaneously, determined via matched filter cross-correlation, the event is assumed to be generated on-payload and is rejected. \item Priority 7 -- Compares the RMS of the beginning and end of waveforms to veto non-impulsive events. \item Priority 6 -- This is the default priority. Thermal noise events will be assigned this priority if they are not demoted for one of the above reasons, or promoted for one of the below reasons. \item Priority 5 -- An event is promoted to priority five if it passes the test of N of M (configurable) neighboring antennas peaking within a time window (configurable). Events must satisfy this condition to be considered for promotion to priorities 1-4. \item Priority 4 -- A cross-correlation is performed with boxcar smoothing, and there are peaks in 2-of-2 antennas in one ring and 1-of-2 in the other. \item Priority 3 -- A cross-correlation is performed with boxcar smoothing, and there are peaks in 2-of-3 antennas in both rings. \item Priority 2 -- A cross-correlation is performed with boxcar smoothing, and there are peaks in 2-of-2 antennas in both rings. \item Priority 1 -- A cross-correlation is performed with boxcar smoothing, and there are peaks in 3-of-3 antennas in both rings. \end{itemize} \subsubsection{Compression} \label{s:compression} Several encoding methods were investigated for the telemetry of waveform data. Of the methods investigated, the optimum method was determined to be a combination of binary and Fibonacci encoding. The steps involved in the compression are described below: \begin{itemize} \item The waveform is pedestal subtracted and zero-meaned. \item The data is truncated to 11-bits (the lowest bit in the data is shorted to the next to lowest) \item A binary bit size is determined based upon the smallest power of two above the three standard deviation point of the waveform.
\item All samples that lie within the range of values that can be encoded with a binary number of this dimension are encoded as their binary value; those outside are assigned the maximal value. \item The difference between the maximal value and the actual sample value is then encoded using Fibonacci coding in an overflow array. Fibonacci coding is useful for this purpose as it efficiently encodes large numbers, and it has some built-in immunity to data corruption as each encoded number ends with 11. \item The full encoded waveform then consists of 260 $n$-bit binary numbers followed by $M$ Fibonacci encoded overflow values. \end{itemize} Figure~\ref{fig:compress} shows the performance of the binary/Fibonacci encoding method in comparison with the other methods considered. This method was chosen as it proved to give the best compression of the lossless methods considered; a minimal sketch of the Fibonacci encoder is given below. \begin{figure}[hbt] \centering \resizebox{0.6\textwidth}{!}{\includegraphics{compressionMethods.eps}} \caption{A comparison of the performance of the different lossless and lossy compression methods tested on ANITA waveform data with adjustable Gaussian noise levels. The binary/Fibonacci method detailed in the text proved to be the most effective method for encoding the telemetry data. The mu-law methods were the only lossy methods that were considered.} \label{fig:compress} \end{figure}
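The Fibonacci (Zeckendorf) coding step can be sketched as follows (Python); this is our illustration of the standard algorithm, not the flight implementation.
\begin{verbatim}
def fib_encode(n: int) -> str:
    """Fibonacci code for an integer n >= 1; every codeword ends '11',
    which is what gives the scheme its resynchronization property."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                    # largest Fibonacci number <= n
    bits, rem = [], n
    for f in reversed(fibs):      # greedy Zeckendorf decomposition
        if f <= rem:
            bits.append("1")
            rem -= f
        else:
            bits.append("0")
    return "".join(reversed(bits)) + "1"

# fib_encode(1) -> '11', fib_encode(4) -> '1011', fib_encode(12) -> '101011'
\end{verbatim}
\subsection{Control and Telemetry} One critical aspect of the flight software for a balloon experiment is the control and telemetry software. This software represents the only link between the experiment and the scientists on the ground. As such the software needs to be both very robust, to withstand a long flight away from human interaction, and also flexible enough to cope with unexpected failures during the flight. \subsubsection{Data Downlink} \label{s:datadown} During the flight it is critical to telemeter enough information that the scientists on the ground can determine whether the instrument is operating and acquiring sensible data. All of this information needs to be pushed through one of two narrow (bandwidth limited) pipes to the ground. When close to the launch site the data is sent over a Line of Sight (LOS) transmitter with a maximum bit rate of 300~kbps. Once over the horizon the only data links available are satellite links: i) a continuous 6\,kbps using the TDRSS satellite network; and ii) a maximum of 248 bytes every 30 seconds using the IRIDIUM network. The IRIDIUM link is only used to monitor payload health during times when the other, higher bandwidth, links are unavailable (due to satellite visibility and other issues). There are several data streams that are telemetered; these are summarized in Table~\ref{tab:telem}. Clearly, only a tiny fraction of the total data written to disk is able to be telemetered to the ground using the satellite links. \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline Data Type & Size (bytes) & TDRSS (packets/min) & LOS (packets/min) \\ \hline Header & 74 & 150 & 300 \\ Waveform & 14,000 & 2 & 120 \\ GPS & 88 & 6 & 300 \\ Housekeeping & 150 & 6 & 60 \\ RF Scalers & 1380 & 1 & 30 \\ CPU Monitor & 64 & 1 & 1 \\ \hline \end{tabular} \caption{A summary of the data types, sizes and telemetry rates over the LOS and TDRSS downlinks.} \label{tab:telem} \end{table} \begin{table}[hbt!]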
\centering \begin{tabular}{|c|c|c|} \hline Command Name & Description & Number \\ \hline Send Config Files & Telemeter config file & $\sim 300$ \\ Set PID Goal & Adjust the single band trigger rate & $\sim 250$ \\ Send Log Files & Telemeter log file & $\sim 250$ \\ Kill Process & Tries to kill the selected process & $\sim 150$ \\ Restart Process & Tries to restart the selected process & $\sim 100$ \\ Set Band Mask & In/Exclude frequency bands & $\sim 100$ \\ Turn RFCM On/Off & Turn on/off the power to the amplifiers & $\sim 45$ \\ Set Antenna Mask & In/Exclude antennas from the trigger & $\sim 20$ \\ GPS Trigger Flag & Enable/Disable GPS triggered events & $\sim 20$ \\ Start New Run & Starts a new run & $\sim 20$ \\ Set Channel Scale & Change the goal rate of an individual band & $\sim 15$ \\ \hline \end{tabular} \caption{The most common commands used during the ANITA-I flight, with a brief description of the intended command outcome, and an indication of the number of times the command was used during the ANITA-I flight.} \label{tab:commands} \end{table} \subsubsection{Commanding} \label{s:commanding} During the flight it is possible to send commands from the ground to the payload over one of the three available links: line of sight, TDRSS and IRIDIUM. Each command contains up to 20 bytes of information. The most common commands used during the ANITA-I flight are shown in Table~\ref{tab:commands}. \section{Monte Carlo Simulation Sensitivity Estimates} ANITA's primary goal is the detection of the cosmogenic neutrino flux. All first-order models of such fluxes assume they are quasi-isotropic; that is to say, although the neutrinos originate from sources of small angular extent and retain their directional information throughout their propagation trajectory to earth, the sources are presumed to be distributed in a spatially uniform manner as projected on the sky and viewed from Earth. Thus to a good approximation we model the incoming neutrino flux as sampling a parent distribution that is uniform across the spherical sky. Because of the complexity of the neutrino event acceptance for the target volume of ice that ANITA's methodology employs, a complete analytic estimate of the neutrino detection sensitivity is very difficult, and we have relied on detailed Monte Carlo simulation methods to perform the appropriate multi-dimensional integration which determines the acceptance as a function of neutrino energy. \begin{figure}[htb!] \includegraphics[width=5.85in]{Anitageom.eps} \caption{ Schematic view of the geometry for a neutrino interaction in the field-of-view of ANITA. \label{Anitageom} } \end{figure} One may visualize the acceptance by first considering a volume element located somewhere within the horizon of the ANITA payload at some instant of time, at a depth within several electric field attenuation lengths of the ice surface. In the absence of the surrounding ice and earth below this volume element, the neutrino flux passing through such a volume is isotropic. However, the attenuation due to the earth and surrounding ice modifies the flux and resulting acceptance, so that, at EeV energies where neutrino attenuation lengths are several hundred km water equivalent for the standard model cross sections, most of the neutrino acceptance below the horizon is lost. A schematic view of the geometry is shown in Fig.~\ref{Anitageom}.
The volume element $dV$, at position $\vec{r}$ with respect to the payload's instantaneous location, encloses an interaction location which is subject to a neutrino flux over solid angle element $d\Omega_{\nu}$, at offset angles $\phi,\theta$ with respect to the radial azimuth from the payload and local horizontal directions. Note that the payload angular acceptance $d\Omega$ is related to $d\Omega_{\nu}$ by convolution with the detectability of neutrinos at every location within the field of view. The geometry of the Cherenkov cone and refraction effects then complicate the convolution even further, and Monte Carlo integration becomes an attractive way to deal with this complexity. We may still define an average volumetric acceptance $\langle \mathcal{V}\Omega \rangle$, in km$^3$-water-equivalent steradians (km$^3$we sr), as the physical target density-times-volume $\rho V$ of the detector multiplied by the weighted solid angle $\Omega_\nu$ over which initially isotropic neutrino fluxes produce interactions within the detector, and then multiplied again by the fraction $N_{det}/N_{int}$ of such interactions that are detected. For any given volume element of the target, the latter term will depend on the convolution of the emission solid angle for detectable radio signals with the arrival solid angle of the neutrinos. Although the symbol $\mathcal{V}\Omega$ appears to imply that the two parts of the acceptance (volume and solid angle) can be factored, this is generally not true in practice since they tend to be strongly convolved with one another as a function of energy. That is to say, any given volume element may provide events detectable over a certain solid angle centered on some portion of the sky, but a different volume element will in general have both a different net solid angle for detection and also a different angular region over which those detections occur, all as a function of energy. However, we can define an average acceptance solid angle $\langle \Omega \rangle$ and this factor is a useful quantity in this calculation. The differential number of neutrino interactions per unit time, per unit neutrino arrival solid angle, per unit volume element in the detector target, can be written as: \begin{equation} \frac{d^3 N_{int}}{dt~d\Omega_{\nu}~dV} ~=~ \int_{E_{thr}}^{\infty} dE_{\nu}~ F_{\nu}(E_{\nu})~ \sigma_{\nu N,e}(E_{\nu})~\rho(\vec{r})~N_{A}~P_{surv}(E_{\nu},\vec{r},\theta_{\nu},\phi_{\nu}) \end{equation} where $E_{thr}$ is the threshold energy for the detector, $\sigma_{\nu N,e}$ the neutrino total cross section on nucleons (or electrons, for $\nu_e+e$ scattering), $F_{\nu}(E_{\nu})$ is the neutrino flux as a function of energy, $\rho(\vec{r})$ is the ice density at interaction position $\vec{r}$, $N_{A}$ is Avogadro's number, and $P_{surv}(E_{\nu},\vec{r},\theta_{\nu},\phi_{\nu})$ is the survival probability for a neutrino of energy $E_{\nu}$, arriving from the direction defined by $\theta_{\nu},\phi_{\nu}$, to reach position $\vec{r}$, which is the location of the volume element under consideration. This probability can be further written: \begin{equation} P_{surv}(E_{\nu},\vec{r},\theta_{\nu},\phi_{\nu}) = e^{-X(\vec{r},\theta_{\nu},\phi_{\nu})/L_{int}(E_{\nu})} \end{equation} with the neutrino interaction length $L_{int} = (\sigma_{\nu N,e}N_A)^{-1}$ (in units of column density), and the function $X(\vec{r},\theta_{\nu},\phi_{\nu})$ the total column density along direction $\theta_{\nu},\phi_{\nu}$ from point $\vec{r}$ within the detector. The function $X$ thus contains the earth-attenuation dependence.
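To get a feel for the strength of this earth attenuation, the sketch below (Python) evaluates $P_{surv}$ along straight chords for arrival directions a few degrees below the horizon; the chord geometry, the 300~km w.e. interaction length, and the mean rock density are illustrative round numbers, not the values used in the full simulations.
\begin{verbatim}
import math

R_EARTH = 6371.0   # km, mean Earth radius
L_INT   = 300.0    # km water equivalent, representative EeV value
RHO     = 2.9      # g/cm^3, assumed mean rock density along the chord

def p_surv(angle_below_horizon_deg):
    """Survival probability along a straight chord through the Earth."""
    theta = math.radians(angle_below_horizon_deg)
    chord_km = 2.0 * R_EARTH * math.sin(theta)  # chord length in rock
    x_kmwe = chord_km * RHO                     # column depth, km w.e.
    return math.exp(-x_kmwe / L_INT)

for ang in (1, 3, 5, 10):
    print(f"{ang:3d} deg below horizon: P_surv = {p_surv(ang):.1e}")
# -> 1.2e-01, 1.6e-03, 2.2e-05, 5.1e-10: only directions skimming the
#    horizon retain appreciable flux, as stated above.
\end{verbatim}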
The number of neutrino interactions is thus \begin{equation} N_{int}~=~ \int_0^T dt \int_0^{4\pi} d\Omega_{\nu} \int_0^{V} dV \frac{d^3 N_{int}}{dt~d\Omega_{\nu}~dV} \label{nint} \end{equation} To determine the average acceptance solid angle $\langle \Omega \rangle$, imagine the detector target volume completely isolated from any exterior target matter of the earth, bathed in an isotropic flux of neutrinos from $4\pi$ solid angle, with the assumption that the interaction length is large enough compared to the target thickness that we may neglect self-attenuation within the target. The number of interactions for this idealized case is \begin{equation} N_{int,0}~=~ \int_0^T dt \int_{4\pi} d\Omega_{\nu} \int_{V} dV \int_{E_{thr}}^{\infty} dE_{\nu}~F_{\nu}(E_{\nu})~\sigma_{\nu N,e}(E_{\nu})~\rho(\vec{r})~N_{A} \end{equation} or the same integral as equation~\ref{nint} except that the survival probability $P_{surv}=1$ for all arrival directions. With this prescription, the average acceptance solid angle is given by \begin{equation} \langle \Omega \rangle ~=~ 4\pi~\frac{N_{int}}{N_{int,0}} \end{equation} The corresponding number of detections depends both on the interactions and the probability $P_{det}$ of detection of the resulting shower. To first order, deep inelastic neutrino charged-current interactions lead to an immediate local hadronic shower and a single charged lepton which escapes the vertex and can subsequently interact. For electron neutrinos at $10^{18-20}$~eV, this lepton interaction usually takes place rapidly, and produces an immediate secondary electromagnetic shower, which will be elongated due to Landau-Pomeranchuk-Migdal (LPM) suppression of the bremsstrahlung and pair-production cross sections. For other neutrino flavors, the secondary lepton will propagate long distances through the medium but can produce detectable secondary showers through electromagnetic (bremsstrahlung or pair production) or photo-hadronic processes. Since the average Bjorken inelasticity $\langle y(E_{\nu}) \rangle \simeq 0.22$ at these energies, the secondary lepton in these charged current interactions leaves with most of the energy on average, so a secondary shower with any appreciable fraction of the lepton's energy may exceed the hadronic vertex in shower energy. Accounting for this, the number of detections can be written \begin{equation} N_{det}=\int_0^T dt \int_{4\pi} d\Omega_{\nu} \int_{V} dV \frac{d^3 N_{int}}{dt~d\Omega_{\nu} dV} \left [ P_{det,h}(yE_{\nu},\vec{r}_{h},\theta_{\nu},\phi_{\nu}) + \alpha_{cc}P_{det,\ell}((1-y)E_{\nu},\vec{r}_{\ell},\theta_{\nu},\phi_{\nu}) \right ] \end{equation} where $P_{det,h}(yE_{\nu},\vec{r_{h}},\theta_{\nu},\phi_{\nu})$ is the detection probability for the hadronic showers as a function of shower energy $yE_{\nu}$, shower centroid position $\vec{r}_h$, and shower momentum angles $\theta_{\nu},\phi_{\nu}$. The corresponding detection probability for subsequent lepton showers is written as $$\alpha_{cc}P_{det,\ell}((1-y)E_{\nu},\vec{r}_{\ell},\theta_{\nu},\phi_{\nu})$$ where $\vec{r}_{\ell}$ is the centroid of the leptonic shower and $\alpha_{cc} = 1,0$ for charged- or neutral-current interactions respectively (the latter have zero detection probability since the outgoing lepton is a neutrino; for simplicity in this present treatment we do not include subsequent possible neutrino interactions from neutral-current events within the detector and their contribution is small in any case). 
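The ratio structure of these integrals makes them natural targets for Monte Carlo evaluation: one throws isotropic trial neutrinos, weights each by $P_{surv}$ (and, in the full codes, by the detection probabilities), and normalizes by the unattenuated trial count. The toy sketch below (Python) forms $\langle \Omega \rangle$ this way for a deliberately crude geometry, a thin ice layer over an earth chord, which is an illustrative stand-in for the real ray-traced geometry.
\begin{verbatim}
import math, random

L_INT = 300.0  # km w.e., representative EeV interaction length

def column_depth_kmwe(cos_theta):
    """Toy X: downgoing rays (cos_theta >= 0) see only ~2 km of ice;
    upgoing rays see an Earth chord of rock (density 2.9 g/cm^3)."""
    if cos_theta >= 0.0:
        return 2.0 * 0.9
    chord_km = 2.0 * 6371.0 * abs(cos_theta)
    return chord_km * 2.9

def average_omega(n_trials=200_000):
    acc = 0.0
    for _ in range(n_trials):
        cos_theta = random.uniform(-1.0, 1.0)   # isotropic arrivals
        acc += math.exp(-column_depth_kmwe(cos_theta) / L_INT)
    # N_int0 is the same sum with P_surv = 1, i.e. just n_trials:
    return 4.0 * math.pi * acc / n_trials

print(f"<Omega> ~ {average_omega():.2f} sr of 4*pi = 12.57 sr")
# -> ~6.3 sr: the downgoing hemisphere survives essentially intact,
#    while only a sliver of upgoing directions contributes.
\end{verbatim}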
Once the two integrals given by $N_{int}$ and $N_{det}$ are determined, the volumetric acceptance is given by \begin{equation} \langle \mathcal{V}\Omega \rangle = \frac{N_{det}}{N_{int}}~V_0 \Omega ~=~ 4\pi V_0 \frac{N_{det}}{N_{int,0}} \end{equation} where $V_0$ is the total physical volume over which the trial neutrino flux arrives. \subsection{Monte Carlo Implementation} We have developed two largely independent simulation codes, one at the University of Hawaii, hereafter denoted the Hawaii code, and a second originally developed at UCLA, but now maintained at University College London, hereafter denoted the UCL code. Although some empirical parameterizations (for example, ice refractive index vs. depth) may use common code or source data, the methodologies of the two codes are entirely independent. \begin{figure}[htb!] \centerline{\includegraphics[width=4.5in]{E_skymaps.eps}} \caption{Simulated false-color images of relative E-field strength of the emerging Cherenkov cone as projected onto the sky due to a neutrino event at energy of order $3 \times 10^{18}$~eV, with an upgoing angle of several degrees relative to the local horizon. The different panes show the response at different radio frequencies as indicated. The color scale is normalized to the peak (blue) in each pane; in fact the higher frequencies peak at higher field strengths. \label{skyimg} } \end{figure} Both the Hawaii and UCL simulations of the ANITA experiment estimate the experiment's sensitivity from a sample of simulated ultra-high energy neutrino interactions in Antarctic ice. The programs use variations of ray tracing methods to propagate the signal from the interaction to the payload antennas, and then through the three-level ANITA instrument trigger. The neutrino events can be drawn from a parent sample that matches any given source spectrum, or, for more general results, the acceptance can be estimated as a function of energy by stepping through a series of monoenergetic neutrino fluxes to map out the energy dependence. This latter approach then allows for estimates of the acceptance which can be convolved with any input neutrino flux. \begin{figure} \begin{center} \centerline{\epsfig{figure=upperlayers.eps,width=3in}~~\epsfig{figure=lowerlayers.eps,width=3in}} \caption{Left: Altitude of the upper four layers given in Crust 2.0 along the $75^{\circ}$ S latitude line. The horizontal axis is degrees in longitude. Right: Altitude of the lower three layers given in Crust 2.0 along the $75^{\circ}$ S latitude line. \label{fig:layers}} \end{center} \end{figure} The two simulations also differ in the areas in which they focus their highest fidelity. The Hawaii code attempts to model the entire pattern of the sky brightness of radio emission produced by a neutrino interaction, creating a library of these events which is later sampled for different relative sky positions of the payload with respect to the events. Fig.~\ref{skyimg}(left) shows one example of this type of modeling, in this case for a smooth ice surface. This sky-brightness modeling also includes a first-order model for the wavelength-scale surface roughness of the ice. The method uses a facet model of the surface, and probes it with individual ray traces that are distributed in their number density according to the Cherenkov cone intensity (see Fig.~\ref{ERays} included in an Appendix).
This surface ray-sampling thus gives a first-order approximation that has been found to give reasonable fidelity for such effects as enhanced transmission coefficients beyond the total-internal-reflectance angle, for example. Additional details regarding surface roughness effects are included in an Appendix. The Hawaii code does not attempt detailed modeling of the physical topography of Antarctica but instead uses a standard ice volume with homogeneous characteristics, and estimates of the actual ANITA flight acceptance use a weighted average based on the piecewise geography of the flight path. The UCL simulation, in contrast, does include a detailed modeling of the Antarctic ice sheets and subcontinent based on the CSEDI Crust 2.0 model of the Earth's interior, and the BEDMAP model for Antarctic ice thicknesses. Figure~\ref{fig:layers} shows the elevations of the seven crustal layers included in Crust 2.0; the BEDMAP database provides similar resolution. This approach provides higher fidelity estimates of the instantaneous ice volume below the payload for any given time during the flight. This approach toward event modeling, with its limited sampling of ray propagation, does not easily lend itself to a treatment of surface roughness, however; it is instead optimized for speed, using a principal-ray approximation for the neutrino interaction-to-payload radiation path. Thus each event is observed along a single ray refracted through the surface from the cascade location. \begin{figure}[htb!] \includegraphics[width=3.85in]{VeffSr.eps} \caption{Curves of simulated acceptance vs. energy for the two independent ANITA Monte Carlos. \label{Veffsr} } \end{figure} Despite their distinct differences in approach, the two methods have converged to good agreement in the total neutrino aperture, when compared for a given standard ice volume. Details of the two methods are described in two appendices. In Fig.~\ref{Veffsr} we show the results of the two estimates of total effective volumetric acceptance (volume times acceptance solid angle for an isotropic source such as the cosmogenic neutrinos) as estimated by the UH and UCL simulations. The values may be thought of as the total water-equivalent volume of material that has 1 steradian of acceptance to a monoenergetic flux of neutrinos at each plotted energy. It is evident that there is reasonably good agreement for ANITA's acceptance, with values that differ by of order a factor of 2 at most. In practice the solid angle of acceptance for any volume element of the ice sheet surveyed by ANITA is in fact very small, and the acceptance is also quite directional, covering only a relatively small band of sky at any given moment. The actual sky coverage will be shown in the next section. \section{Pre-flight Calibration} In June of 2006, prior to the ANITA flight, the complete ANITA payload and flight system was deployed in End Station A at the Stanford Linear Accelerator Center (SLAC), about 13~m downstream of a prepared target of 7.5 metric tons of pure carving-grade ice maintained at $-20^{\circ}$C in a refrigerated container. Using the 28 GeV electron bunches produced by SLAC, electromagnetic showers were created in the ice target, resulting in Askaryan impulses that were detected by the payload with its complete antenna array, trigger, and data acquisition system.
This beam test experiment thus provided an end-to-end calibration of the flight system, and yielded on- and off-axis antenna response functions for ANITA as well as a separate verification of the details of the Askaryan effect theory in ice as the dielectric medium. A complete report of the results for the Askaryan effect in ice has been presented in a separate report~\cite{slac07}, which we denote as paper I, and we do not reproduce that material here. However, antenna response details were not reported in paper I, so we provide those here since they are relevant to the ANITA-1 flight. Fig.~\ref{wfm_ref}(top) shows an antenna impulse response function, and (bottom) the same set of response functions for the actual Askaryan signals recorded during the SLAC beamtest. The dashed lines show the raw waveforms, and the solid lines show versions partially deconvolved to remove the lowest-frequency component, which produces the long tail of $\sim 200$~MHz ringing that is seen in the plots. This tail is due to the rapid rise in group-delay as one approaches the low-frequency cutoff of the antenna, and although it can be useful as a kind of ``fingerprint'' of the antenna response in the presence of noise, it does not contribute much power to the actual trigger. The deconvolution is performed with a wavelet technique, and the solid lines thus show the dominant power components of the response. These data may be compared to the laboratory measurements in Fig.~\ref{impulse} above. \begin{figure}[htb!] \includegraphics[width=5.5in]{wfm_ref.eps} \caption{ Top: measured ANITA quad-ridge horn impulse response. Dashed lines include the strong nonlinear group-delay component near the horn low-frequency cutoff, and the solid line removes this low-power component using wavelet deconvolution, for better clarity. Bottom: waveform measured by ANITA antennas during SLAC calibration using direct Askaryan impulse signals from ice; dashed and solid lines are the same as in the top pane of the figure. \label{wfm_ref} } \end{figure} \section{In-flight Performance} ANITA's primary operational mode during the flight was to maintain the highest possible sensitivity to radio impulses with nanosecond-scale durations, but the trigger was designed to be as loose as possible to also record many different forms of impulsive interference. The single-band thresholds were allowed to vary relative to the instantaneous radio noise. The system that accomplished this noise-riding behavior utilized a Proportional-Integral-Differential (PID) servo-loop with a typical response time of several seconds, based on several-Hz sampling of the individual frequency band singles rates. \subsection{Antenna temperature} \begin{figure*}[htb!] \includegraphics[width=6.95in]{Tant_Thr.eps} \caption{Top: Antenna effective temperature vs. event number (at an average rate of about 4.5~Hz) during the ANITA flight. The effective temperature is the average over the antenna beam including 230K ice and 5-100K sky, with varying contributions from the Sun and Galactic Center which produce the diurnal modulation. Modulation due to free rotation of the payload over several minute timescales has been averaged out. Bottom: The noise-riding antenna average threshold is shown on the same horizontal scale, showing the servo response between the instrument threshold and the apparent noise power. This noise-riding approach was used to retain a relatively constant overall trigger rate, giving stability to the data acquisition system.
\label{rfpower} } \end{figure*} The power received by the antennas in response to the ambient RF environment directly determines the limiting sensitivity of the instrument. This received system power $P_{SYS}$, referenced to the input of the low-noise amplifier (LNA), is often expressed in terms of the effective temperature, via the Nyquist relation~\cite{Nyquist}, which can be expressed in simple terms as: \begin{equation} P_{SYS} ~=~ kT_{SYS}\,\Delta\nu ~=~ k(T_{ANT}~+~T_{RCVR})\,\Delta\nu \end{equation} where $k$ is Boltzmann's constant, $\Delta\nu$ is the receiver bandwidth, $T_{ANT}$ is the effective antenna temperature, which includes contributions from the apparent sky (or sky+ice in our case) temperature in the field of view, along with antenna mismatch losses; $T_{RCVR}$ is the additional noise added by the front-end LNA, cables, and filters, and second-stage amplifiers and other signal-conditioning devices, all of which constitute the receiver for the incident signal. $T_{RCVR}$, which is an irreducible noise contribution, is usually fixed by the system design, and is calibrated separately (cf. Fig.~\ref{RFCM}). It is typically stable during operation and can be used as a reference to estimate additive noise. In contrast $T_{ANT}$ may vary significantly during a flight and must be monitored to ensure the best possible sensitivity of the instrument. Absolute monitoring of the sensitivity for ANITA was accomplished by scaling the measured total-band power from the calibrated noise figure of the front-end receivers. This power was separately sampled at several Hz for each of the input channels, using a radio-frequency power detector/amplifier based on the MAXIM Integrated Circuits MAX4003 device, calibrated with external noise diodes. The absolute noise power was also cross-calibrated by comparison with known noise temperatures for the galactic plane and sun, which were in the fields-of-view of the antennas throughout the flight, modulated by the payload rotation and by diurnal elevation changes. \begin{figure*}[htb!] \includegraphics[width=5.5in]{sky20.eps} \caption{A false-color map of sky exposure quantified by total neutrino aperture (here given for $E_{\nu}=10^{20}$~eV) as a function of right-ascension and declination for the ANITA-1 flight. \label{skymap} } \end{figure*} A plot of the measured antenna temperature, derived from the RF power measurements, is shown in Fig.~\ref{rfpower} as a function of the payload event number recorded during flight, which may be converted to payload time in seconds using a mean trigger rate of about 4~Hz. Several features of this plot require explanation. A firmware error in a commercial Global Positioning System (GPS) unit used for flight synchronization caused a series of in-flight timekeeping errors which led to a drift in payload time vs. real time; this was corrected in the data after the flight. Starting on day 21 (roughly event number 7M) of the flight, the flight computer began to experience anomalies which led to a series of reboots of the system and increased deadtime. This behavior persisted until the termination of the flight after 35 days aloft. Beyond day 21, although we continued to take triggered data with an effective livetime of about 40\%, the environmental sensor data is fragmented and only a portion is shown here. Values which rise off-scale on the vertical axis indicate periods when EMI dominated the system noise, as ANITA's trajectory came within the horizon of either McMurdo Station or Amundsen-Scott Station, both of which had strong transmitters in the ANITA bands.
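In practice, the conversion from a power-detector reading to the plotted antenna temperature is a one-line inversion of the relation above; the sketch below (Python) illustrates it, where the 75~dB net gain figure and the example detector reading are purely illustrative assumptions.
\begin{verbatim}
k_B = 1.380649e-23   # J/K

def antenna_temperature(p_meas_w, gain_db, t_rcvr_k=140.0, bw_hz=1.0e9):
    """Refer a measured band power back to the LNA input through the
    known receiver gain, then subtract the calibrated receiver noise."""
    p_input = p_meas_w / 10.0 ** (gain_db / 10.0)
    t_sys = p_input / (k_B * bw_hz)
    return t_sys - t_rcvr_k

# e.g. ~0.12 mW measured after an assumed 75 dB of net gain:
print(f"T_ant ~ {antenna_temperature(0.12e-3, 75.0):.0f} K")  # -> ~135 K
\end{verbatim}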
Other periods are largely quiescent as indicated by the plot; in particular the period between events 4M and 6.6M. During these periods the observed antenna temperature is dominated by the 230K ice in the lower portion of the antenna fields of view, averaged with the cold sky (3-10 K over our band) in the upper portion. The plotted values are averaged over all payload rotation angles (ANITA was allowed to freely rotate around its balloon suspension point). The observed modulation of the antenna temperature is thus due to the diurnal changes in elevation angles of the Galactic Center and Sun, and has the expected amplitude for these sources over our frequency band, dominated by the nonthermal rise at the lower end of our band near 200~MHz, where the antenna fields of view are also the largest. \subsection{Navigation performance} Figure~\ref{GPSfig} summarizes the navigation performance during the flight. Reconstruction of the ground position of any source that is detected during flight depends on accurate knowledge of the payload position and attitude. There are six degrees of freedom: latitude, longitude, altitude, orientation, pitch, and roll, and these are all determined via the ADU5 system on ANITA. Redundant information with somewhat lower precision is also obtained by a sun-sensor and magnetometer suite. \begin{figure*}[htb!] \centerline{\includegraphics[width=6.5in]{GPS.eps}} \caption{ A composite set of plots representing the in-flight performance of ANITA's navigation system. The left side plots are for geodetic position and altitude, and the right-side plots show the attitude parameters. The top 6 plots zoom in on a single data run, covering several flight days. The corresponding plots on the bottom half of the figure show the total time series for the full flight. Data for the orientation changes rapidly as the payload is allowed to freely rotate; thus the individual rotation cycles are difficult to see in the entire flight's data but show up clearly for the single data run. \label{GPSfig} } \end{figure*} Fig.~\ref{GPSfig} shows several features that illustrate important aspects of the navigation system. The payload was freely rotating; that feature is illustrated by the orientation plots in the figure, which show the often rapid change of azimuth for the fixed reference coordinate system on the payload as it rotated with periods varying from minutes to hours during the flight. The payload pitch and roll are somewhat ill-defined for an azimuthally symmetric payload, but they do indicate that the reference plane of the GPS was slightly tilted, by about $1^{\circ}$ in the ``roll'' direction with respect to the horizontal. Overall, the performance of this navigation subsystem was within the flight requirements, and helped to ensure the accuracy of the direction reconstruction of impulsive radio sources detected throughout the flight. \subsection{Flight path} \begin{figure*}[htb!] \centerline{\includegraphics[width=3.1in]{Flightpath08.eps}~~\includegraphics[width=3.5in]{depthAvgCan_06_26.eps}} \caption{Left: Flight path for ANITA-1, with depth of ice plotted. Only ice within the actual field-of-view of the payload is colored. Ice depths: red: $>4$km; yellow: 3-4km; green: 2-3km; light blue: 1-2km; blue: $<1$km. Right: average depth of ice within the payload field of view during the flight. Because the average includes all ice within the field of view, the minima and maxima of this curve do not correspond to the minima and maxima in the actual localized ice depths.
\label{flightpath} } \end{figure*} During the austral summer in the polar regions, the polar vortex usually establishes a nearly circumpolar rotation of the upper stratosphere, and payload trajectories are often circular on average, even over several orbits. During the austral summer of 2006--2007, however, the polar vortex did not stabilize in a normal configuration, but remained rather weak and centered over West Antarctica. This led to an overall offset of the centroid of the orbital trajectories for ANITA-1, as seen in Fig.~\ref{flightpath}(left). This anomalous polar vortex had two consequences for ANITA's in-flight performance. First, it led to a much larger dwell time in regions with higher concentrations of human activity, primarily McMurdo and Amundsen-Scott South Pole Stations. The presence of these and other bases within the field of view of the payload led to a higher-than-expected rate of noise triggers and compromised the instrument sensitivity over a significant portion of the flight. Second, the higher-than-normal dwell time over the smaller and thinner West Antarctic ice sheet, rather than the main East Antarctic ice sheet, led to a lower overall exposure of ANITA to deep ice. The first consequence is apparent in Fig.~\ref{rfpower}, where it is evident that as much as 40\% of the flight suffered some impact on sensitivity due to the presence of noise sources in the field of view. The second consequence is quantified in Fig.~\ref{flightpath}(right), where we plot the field-of-view-averaged depth of the ice along the flight trajectory, along with the overall average. Despite the large dwell time over West Antarctica, ANITA still achieved an average depth of ice during the entire flight of more than 1500~m. \subsection{Sky coverage} In Fig.~\ref{skymap} we show a false-color map of the relative neutrino sky exposure for ANITA's flight. The exposure is determined both by the geometric constraints of the flight path, and by the neutrino absorption of the earth, which limits the acceptance to within several degrees of the horizon at any physical location over the ice. It is evident from this plot that ANITA had significant exposure both to the galactic plane and to the out-of-plane sky, and although the total sky solid angle covered is only of order 10\%, the exposure is adequate to constrain a diffuse neutrino flux component, as well as provide some constraints on point sources within the exposed regions. \subsection{Data quality and volume} ANITA recorded approximately 8.2~M RF-triggered events during the 35-day flight, and for each of these events all of the antennas and both polarizations were read out and archived to disk. The system was able to maintain a trigger rate of between 4 and 5~Hz during normal operation without any significant loss, and this rate, which was determined by the individual antenna thresholds, met the pre-flight sensitivity goal for periods when the payload was not in view of strong anthropogenic noise sources. In practice, less than 1\% of all recorded triggers actually arise from a coherent plane-wave impulse arriving from an external source. The vast majority are random thermal-noise coincidence events, which are recorded to provide an ongoing measure of the instrument health and a separate validation of the sensitivity. \begin{figure*}[tb!]
\includegraphics[width=6.5in]{7767328.eps} \caption{Plot of an example triggered event (attributed to interference from a West Antarctic encampment) during the ANITA flight, as viewed with the realtime data viewers, both as the set of the entire payload's waveforms (left) and the trigger antenna geometry (right). Colors indicate the trigger hierarchy: yellow indicates an L1 trigger, green an L2, and blue a global L3 trigger. \label{7767328} } \end{figure*} The performance of the data acquisition system for non-thermal events is best summarized by showing a typical impulsive event as recorded by the system and displayed by the in-flight monitor on the ground receiving computer system. A partial display of event 7767328 is shown in Fig.~\ref{7767328}. On the left are the recorded waveforms for this event, which clearly show an impulsive signal superposed on thermal noise; all 72 antenna signals are displayed. Vertical columns are organized in azimuth around the payload, with horizontal rows providing both the vertically and horizontally polarized signals from the vertically offset antenna rings of the payload. The physical geometry of the arriving signal is indicated in the payload view to the right, where colors indicate which antennas produced the actual triggers and at what level. These colors also match up with the colored waveforms on the left. These raw-data waveforms, while appearing somewhat noisy, actually have quite high information content, with up to ten usable signals detectable above thermal noise in this event even before processing. After processing, which typically includes a matched-filter correlation designed to optimize the SNR of broadband impulsive events, such signals are used to provide strong geometric constraints on the arrival direction of the event, as we will detail in a later section. \subsection{Ground-to-payload calibration pulsers} The ANITA instrument requires the highest possible precision in reconstructing the arrival direction of any incoming source event so that it may reject any impulses that might mimic neutrino signals but arise from anthropogenic electromagnetic interference. In addition, ANITA must verify the in-flight sensitivity of the trigger system to external events. To accomplish this, we developed a ground-based calibration approach that relied on pulse transmitters which could be directed at the payload from several sites during the flight, and used to establish both pointing accuracy and sensitivity. Here we describe the performance and results using these systems. \label{sec:ground_pulser} \subsubsection{Description} \label{sec:gp:description} We deployed four ground-to-payload transmitter antennas at two distinct sites during the 2006--2007 ANITA flight. The antennas transmitted radio impulses to the payload in order to verify instrument health and provide a sample of neutrino-like signals for testing our analysis codes. A field team traveled to Taylor Dome ($77^{\circ}52'$~S, $150^{\circ}27'$~E) and operated a surface quad-ridged horn antenna and a borehole discone-type antenna in a 100~meter deep borehole, about 30~m below the firn-ice boundary. Results from concurrent Taylor Dome experiments by the ANITA team on ice properties at this site are presented in a separate report~\cite{Besson08}. The team at Williams Field at the LDB launch site operated a surface quad-ridged horn antenna and a discone in a 26~meter deep borehole.
All four antennas transmitted a kilovolt-scale impulse $\approx 500$~ps wide with a flat frequency spectrum. The impulse was chosen to be as close as possible to theoretical predictions of a neutrino Askaryan signal. The surface antenna system at McMurdo allowed adjustment of the amplitude and polarization of the impulse and occasionally transmitted brief continuous wave (CW) signals. The surface antennas at both McMurdo and Taylor Dome were quad-ridged horn antennas identical to those on the ANITA payload. The borehole antennas were both discone antennas designed and built at the University of Hawaii for use in ice. The payload used two methods of detecting impulses: the standard RF trigger and a forced trigger using a preset time offset relative to the GPS pulse-per-second (PPS). Anthropogenic EMI from McMurdo restricted our ability to operate in RF-trigger mode when the base was above the horizon. Despite this noise, we were able to use RF triggers for several hours during which noise levels were low. Both as a supplement to the RF-triggered events and also for easy event identification, we timed the McMurdo antennas' transmissions such that the signals always arrived at the payload at a constant time offset from the GPS PPS. These PPS triggers allowed us to record calibration signals during periods that would have otherwise been unusable due to anthropogenic noise. \begin{figure}[thb!] \centerline{\epsfig{figure=Cal_rf_map.eps,width=3.3in}\epsfig{figure=bh_max_v_vs_dist_for_instrument_paper_v2.eps,width=3.5in}} \caption{Left: The flight path of ANITA in the first day after launch, with the reconstructed positions of signals from the McMurdo ground calibration systems. Red dots are the projected positions of the reconstructed events, the line is the trajectory of the ANITA flight, and the red portion of the line marks the flight segment used for this reconstruction. Right: Distance of the ANITA payload from Williams Field versus the maximum E-field of the pulse received from the McMurdo ground pulser borehole antenna. The solid line is a fit to a $1/r$ curve multiplied by an angle-dependent Fresnel factor. The dashed line is a pure $1/r$ curve with arbitrary normalization for reference. (The overall normalization agreed with the absolute scale to about 10\%.) \label{cal_rf_map}} \label{fig:gp:dist_vs_max_v} \end{figure} \begin{figure}[ht!] \centerline{\epsfig{figure=seavey_real_162008_raw.eps,width=3.in}~~~\epsfig{figure=seavey_real_162008_deconv.eps,width=3.in}} \caption{Left: A ground calibration pulse from the McMurdo surface antenna, recorded during the 2006--2007 ANITA flight. Right: The same pulse, with instrument gains and group delays removed.} \label{fig:gp:deconvolution} \end{figure} \subsubsection{Uses of Ground Calibration Pulses} \label{sec:gp:uses} The ANITA payload successfully recorded tens of thousands of ground pulser signals. ANITA detected signals from the borehole at Williams Field at ranges up to 260~km; Fig.~\ref{cal_rf_map} shows a portion of the first day's flight path with a segment of the path from which events were efficiently reconstructed due to a lull in surface noise activity near McMurdo Station. The surface antennas at McMurdo and Taylor Dome could be seen from even further away.
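The $1/r$-times-Fresnel model used for the solid curve in Fig.~\ref{fig:gp:dist_vs_max_v} can be sketched as follows (a schematic Python illustration; the refractive index, the parallel-polarization form of the Fresnel factor, and the function names are assumptions for the sketch, not the values or code used in the actual fit):
\begin{verbatim}
import numpy as np

def fresnel_t_parallel(theta_i, n1=1.35, n2=1.0):
    # Schematic Fresnel transmission (parallel polarization) for a ray
    # leaving the firn (index n1) into air (n2); theta_i is measured
    # from the surface normal. Total internal reflection is ignored.
    sin_t = np.clip((n1 / n2) * np.sin(theta_i), 0.0, 1.0)
    theta_t = np.arcsin(sin_t)
    return 2.0 * n1 * np.cos(theta_i) / (
        n2 * np.cos(theta_i) + n1 * np.cos(theta_t))

def peak_efield(r_m, theta_i, amplitude):
    # Peak E-field model: 1/r spreading times the Fresnel factor.
    return amplitude * fresnel_t_parallel(theta_i) / r_m

# Fitting `amplitude` to (distance, angle, peak-field) triplets from
# the navigation and waveform data would then constrain the long-term
# gain stability, as discussed in the text below.
\end{verbatim}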
ANITA's flight path took it no closer than about 250~km to Taylor Dome, and the limited power of the borehole pulser there, combined with the higher loss in the longer cable needed to accommodate the much deeper borehole, made it difficult to detect the Taylor Dome borehole pulses; most were below the ANITA detection threshold. However, of order 20 of these borehole impulses were detected, all very close to the instrument hardware threshold. Figure~\ref{fig:gp:dist_vs_max_v} shows the maximum E-field strength from the McMurdo borehole antenna at the location of the ANITA instrument, as a function of distance. More than 10,000 events are plotted here. The line is the predicted peak of the E-field strength, taking into account Fresnel factors at the payload's location and the distance of the payload from the transmitting antenna. This plot illustrates the utility of a calibration source for measuring the long-term stability of the instrument gain. Changes in ANITA's gain are limited to the $10\%$ level over the day covered by the data in Figure~\ref{fig:gp:dist_vs_max_v}, and this limit includes the effects of any variation in surface roughness within the surface Fresnel zone of the borehole transmitter antenna. \begin{figure}[htb!] \centerline{\includegraphics[width=3.3in]{ev316475_td_coherent_wfm.eps}~~\includegraphics[width=3.3in]{ev316475_td_coherent_wfm_deconv.eps}} \caption{Left: Raw impulse from the Taylor Dome borehole. Right: Deconvolved impulse from the same waveform data, showing that the pulse remains highly coherent despite the propagation effects of the ice, firn, and surface through which the impulse was transmitted. \label{TDwfm} } \end{figure} The amplifiers, filters, and other components of the flight system introduce frequency-dependent gains and group delays in the measured waveforms. Using lab measurements of the instrument's RF properties, we can deconvolve the instrument's impulse response from our data, retrieving the E-field that was incident upon our antennas. We also use the plane-wave ground-pulser signals to perform a final adjustment of the antenna timing and to check the impulse response measured in the lab. The ground calibration data allow us to test the deconvolution process, since we know what the incoming signal was. Figure~\ref{fig:gp:deconvolution} is an example of such a test. The deconvolved signal is much sharper. In fact, the deconvolved signal is the impulse response of an ANITA quad-ridged horn antenna -- which is also the result of transmitting an impulse such as the one used in the ANITA ground calibration system. This deconvolution process is also useful in establishing the coherent transmission properties of the ice, both internally as the pulse passes through the ice bulk, and at the interface as it passes through the ice surface, which is potentially rough on scales that may not be negligible compared to the wavelength of the radiation. This propagation test was a focus of our calibration system at Taylor Dome. In Fig.~\ref{TDwfm} we show a pair of waveforms, the left one the raw waveform of a Taylor Dome pulse, and the right one the deconvolved impulse, which is highly coherent.
The fact that this impulse was transmitted of order 150~m (the slant depth to the surface exit point) through the ice and firn, and through the firn surface, with no apparent loss of coherence (which would appear as distortion or splitting of the pulse) provides good evidence that the methodology employed by ANITA is effective in the detection of Askaryan impulse events from high energy showers. \section{Angular Reconstruction} Angular reconstruction of RF signals is a crucial part of the ANITA data analysis. It provides powerful rejection of incoherent thermal noise events, which comprise $\ge$95\% of the data set, of anthropogenic RF events from existing bases and field camps, and of radio Cherenkov events from air showers. If ANITA observes candidate neutrino events, angular reconstruction will be important for first-order neutrino energy estimation, for providing directional information, for determining the propagation distance $r$ for $1/r^2$ corrections, and for determining the RF refraction angle at the ice-air boundary for the Fresnel correction. Although ANITA's trigger uses circularly polarized signals to provide an unbiased sensitivity to both linear polarizations, the signals are separately recorded as their horizontally- and vertically-polarized waveform components. Our Monte Carlo analysis has indicated that the neutrino signals of interest tend to strongly favor vertical over horizontal polarization, and thus for simplicity we analyze the H-pol and V-pol waveforms as separate data sets. In the following description, the analysis steps are applied to both polarizations independently and in parallel. All steps of the analysis are optimized on either a 10\% fraction of the data or on the full data set with the payload orientation kept hidden until the analysis was complete. This is done to eliminate analyst bias in developing the data cuts. The separation of the data streams into H-pol and V-pol sets also helps to eliminate any selection bias, since the cuts are applied in identical fashion between the two sets. \subsection{Cross Correlation} Direction reconstruction of the RF signal uses the arrival timing difference ($\Delta t$) information between pairs of antennas. $\Delta t$ is determined by using a cross-correlation technique between the recorded waveforms of the antennas. The cross-correlation coefficient ($R$) is defined by \begin{equation} R= \frac{{\displaystyle \sum_{t}}(V_i(t)-\bar{V_i})(V_j(t+ \delta t)- \bar{V_j})}{\sqrt{{\displaystyle \sum_{t}} (V_i(t)-\bar{V_i})^2} \sqrt{ {\displaystyle \sum_{t}}(V_j(t+\delta t)-\bar{V_j})^2}}. \end{equation} Here, $V(t)$ is the recorded voltage value at time bin $t$ and $\delta t$ is the time delay; $i$ and $j$ denote channel numbers. For fast calculation, the mean voltage $\bar{V}$ is first subtracted to normalize the mean to zero. In order to minimize the effects of any anthropogenic continuous wave interference, we use a limited time window of the signal waveform of antenna $i$: $|{t-t_{\rm peak}}| \le 15~{\rm ns}$, where $t_{\rm peak}$ is the time bin in which the maximum peak voltage, $V_{\rm peak}$, occurs. For a given pair of antennas $i$ and $j$, we search for the time lag $\Delta t$ which gives the maximum correlation coefficient, $R_{max}$, while varying $\delta t$ within $\pm 25~\rm{ns}$.
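A minimal sketch of this windowed cross-correlation and lag search is shown below (Python; the array names and constants are illustrative, and, as described next, the actual analysis first oversamples the waveforms to a finer time base):
\begin{verbatim}
import numpy as np

def best_lag(v_i, v_j, peak_idx, dt_ns=0.38, window_ns=15.0,
             max_lag_ns=25.0):
    # Normalized cross-correlation between a window around the peak of
    # channel i and the corresponding samples of channel j, scanned
    # over lags within +/- max_lag_ns; returns (lag_ns, R_max).
    half = int(round(window_ns / dt_ns))
    lo, hi = max(0, peak_idx - half), peak_idx + half + 1
    x = v_i[lo:hi] - v_i[lo:hi].mean()
    max_lag = int(round(max_lag_ns / dt_ns))
    best_r, best_bins = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lo + lag < 0 or hi + lag > len(v_j):
            continue  # lag would run off the end of the record
        y = v_j[lo + lag : hi + lag]
        y = y - y.mean()
        r = np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))
        if r > best_r:
            best_r, best_bins = r, lag
    return best_bins * dt_ns, best_r
\end{verbatim}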
In order to obtain more precise time resolution than the 0.38~ns sampling interval of the ANITA digitization system, we use interpolated waveforms with a 7.6~ps bin width, a fifty-fold oversampling of the original waveforms. \subsection{Group Delay Calibration and Time Resolution} A precise timing measurement using the cross-correlation technique allows an accurate calibration of the fixed group delay for each channel. Although all fixed group delays were also measured in the laboratory before the flight, calibration during the flight was performed to reduce systematic uncertainties such as temperature dependence and possible antenna coordinate changes after the launch. We measured all relative group delays using the impulse signal transmitted from the ground-based calibration pulser system described in Sec.~\ref{sec:ground_pulser}. The group delay difference ($\delta T^{ij}$) between two channels $i$ and $j$ is obtained by comparing the observed timing difference $\Delta t_{\rm obs}$ with its expectation $\Delta t_{\rm exp}$: \begin{equation} \delta T^{ij}= T_{\rm delay}^{i} - T_{\rm delay}^{j} = \Delta t_{\rm obs}^{ij}-\Delta t_{\rm exp}^{ij}, \end{equation} where $\Delta t_{\rm exp}^{ij}$ is given by the difference of the RF propagation times from the transmitter location to each receiver antenna. Figure~\ref{fig:dt} shows distributions of the time differences $dt=\Delta t_{\rm obs}-\Delta t_{\rm exp}$ after applying the group delay correction for all channels. The time resolution obtained for upper/lower antenna pairs in the same $\Phi$-sector is $\sigma_{\rm same~\phi}=47~{\rm ps}$. Because of the additional intrinsic jitter of the time synchronization between different SURF boards allocated to different $\Phi$-sectors (Sec.~\ref{SURF}), a slightly worse time resolution of $\sigma_{\rm diff~\phi}=66~{\rm ps}$ is obtained for antenna pairs in different $\Phi$-sectors. \begin{figure}[thb!] \centerline{ \epsfxsize=6in \epsfbox{dt_distributions.eps}} \caption[dt distributions]{$dt$ distributions: Upper/lower antenna pairs in the same $\Phi$-sector (left) and antenna pairs in different $\Phi$-sectors (right).} \label{fig:dt} \end{figure} Once the calibrations are applied, a first-order method to establish the presence of a coherent signal source is to sum the two-dimensional cross-correlated intensities of all baseline pairs into a summed-power interferometric image, as shown in Fig.~\ref{TDimagemap} for one of the surface antenna pulses from Taylor Dome. In this procedure each pair of antennas gives a fringe across the sky at an angle corresponding to the baseline direction, and the fringes sum together with the greatest strength at the source location. The virtue of this method is that it can also be used to identify the location of sidelobes in the image, so that later quantitative fitting of the source location can be tested to ensure that it has not produced a misreconstruction at one of the sidelobe locations. Such an image is highly analogous to a ``dirty map'' in radio astronomical usage, and a reduction in sidelobes is possible with further image processing. However, we have found in general that for this type of ``pulse-phase interferometry,'' these maps are adequate since all sources are unresolved for ANITA. We have also studied phase closure methods as applied to our data, and we find that further work in this area is justified and may yield even more robust methods for this type of source imaging. \begin{figure}[htb!]
\centerline{\includegraphics[width=3.9in]{ev249475_xcorr_image_v3.eps}} \caption{Interferometric image of a single impulsive event from the Taylor Dome ground calibration pulser. \label{TDimagemap} } \end{figure} \subsection{Good event selection and hit cleaning} For the angular reconstruction, we use antennas in the three $\Phi$-sectors around the maximum $\Phi$-sector, defined as the sector where the average peak voltage of the upper and lower antennas is the highest among all $\Phi$-sectors. The upper antenna in the maximum $\Phi$-sector is used as the reference channel for the cross correlation. Channels having anomalous or outlying timing information are not used for the reconstruction. We compare $\Delta t_{\rm obs}$ for all fifteen possible baselines among the six antennas around the maximum $\Phi$-sector, and we regard a channel as an isolated hit if multiple baselines associated with this channel deviate by more than 12~ns. A selection for good events is applied before the angular fit in order to speed up the analysis and reduce misreconstructions. We require the number of good antenna hits $N_{\rm hit} > 5$, the peak voltage $V_{\rm peak}>35$~mV, and a signal-to-noise ratio $SNR=V_{\rm peak}/V_{\rm rms}>3.5$ for both the upper and lower antennas in the maximum $\Phi$-sector, where $V_{\rm rms}$ is the root mean square of the voltage in the first third of the recorded waveform, which is well separated from the signal region near the center of the record. Note that the SNR recorded in the digitizer waveform is generally lower than the SNR of the analog RF trigger, since there are additional insertion losses associated with the digitizer itself, as described in a previous section. \begin{figure}[ht!] \centerline{ \epsfxsize=3.65in \epsfbox{Baselines.eps}} \caption[Illustration of basic set of baselines for the angular fit] {Illustration of the basic set of baselines for the angular fit. Not all of the fifteen possible baselines for 6 antennas are shown.} \label{fig:baseline} \end{figure} \begin{figure}[ht!] \centerline{ \epsfxsize=6in \epsfbox{dtheta_dphi_xcol.eps}} \caption[Angular resolution]{Angular resolution: $d \theta$ distribution (left) and $d \phi$ distribution (right).} \label{fig:angle_resol} \end{figure} \subsection{Angular Fit} The number of timing baselines $N_{\rm base}$ we use in the fit is $N_{\rm hit}-1$. A selection from among the fifteen possible baselines for six antennas is made in a way that minimizes timing uncertainties and correlations between the baselines. Figure~\ref{fig:baseline} illustrates some of the possible baselines for an event with a $\Phi$-sector containing the maximum possible vertical baseline; either adjacent $\Phi$-sector would involve a shorter principal baseline paired with longer adjacent ones. We perform a $\chi^2$ fit in order to find the direction of the RF signal. It minimizes \begin{equation} \chi^2 = \sum_{k}^{N_{\rm base}} \left ( \frac{\Delta t_{\rm obs}^{k} - \Delta t_{\rm hypo}^{k}(\theta,\phi)}{\sigma_{k}} \right )^2. \label{eq:chi2} \end{equation} Here, $N_{\rm base}$ is the number of baselines, $\Delta t_{\rm obs}^{k}$ is the timing difference of the antenna channels participating in baseline $k$, $\Delta t_{\rm hypo}^{k}(\theta, \phi)$ is its expectation for an RF signal with a given hypothesis of a plane-wave direction ($\theta, \phi$), and $\sigma_{k}$ is the time resolution of the corresponding baseline.
Since the time resolution varies with the signal strength, we use an $SNR$-dependent time resolution: \begin{equation} \sigma = \sigma^{\rm ch1}_{SNR} \oplus \sigma^{\rm ch2}_{SNR} \oplus \sigma_{\rm sys}, \label{eq:sigma} \end{equation} where $\sigma_{SNR}$ is the single-channel timing error caused by thermal noise, $\sigma_{\rm sys}$ is the intrinsic system error, and ch1 and ch2 denote the two channels forming the baseline. The functional form of $\sigma_{SNR}$ is obtained from a Monte Carlo (MC) simulation study of the effect of thermal noise on the time resolution of the impulsive signal. We use the measured time resolutions for $\sigma_{\rm sys}$, which can be either $\sigma_{\rm same~\phi}$ or $\sigma_{\rm diff~\phi}$ depending on the geometry of the baseline. We perform 10 iterations of the fit to reduce misreconstructions caused by trapping of the solution in local minima of $\chi^2$. In each iteration, the initial fit parameters $(\theta, \phi)$ are uniformly varied within $\theta>90^\circ$ and $\phi_{\rm max}-25^\circ < \phi < \phi_{\rm max}+25^\circ$, where $\phi_{\rm max}$ is the azimuth angle of the boresight of the antenna in the maximum $\Phi$-sector. We require $\chi^2<4$ for a well reconstructed event. Figure~\ref{fig:angle_resol} shows the angular resolution obtained with the ground-based calibration pulsers. Here $d \theta$ ($d \phi$) is the deviation of the zenith (azimuth) angle from its expected value. The achieved resolutions are $0.2^\circ$ and $0.8^\circ$, respectively. The poorer resolution in $d \phi$ relative to $d \theta$ is due to the shorter baselines in the $\phi$ direction and the worse time resolution of the inter-$\phi$ baselines. \subsection{Misreconstruction Rejection} A misreconstruction is a fit result that deviates from the expected source direction while still having a good fit quality, or low $\chi^2$. Misreconstruction is one of the most important potential background sources for the neutrino search in the ANITA experiment, because a misreconstructed event of anthropogenic origin from, for example, a known encampment might become a neutrino candidate. The main sources of misreconstruction are incorrect timing information and a fit that becomes trapped in a false local minimum. Incorrect timing means timing with a much larger error than the expected $\sigma$. This may be caused by an unresolved cycle ambiguity of the signal pulse in the cross-correlation procedure. In order to reduce this, we compare the timing with that from another cross-correlation using a different time window of the signal waveform, $|{t-t_{\rm peak}}| \le 12~{\rm ns}$, and reject the event if the difference is greater than 0.5~ns in any of the $N_{\rm base}$ baselines. We also reduce misreconstructions using the excellent directivity of the ANITA horn antennas. Because of the relatively high gain of the horn antennas, and the correspondingly narrow width of their angular response, the maximum $\Phi$-sector should be consistent with the signal direction. Therefore, we require $|\phi - \phi_{\rm max}| < 22.5^\circ$, where the $22.5^\circ$ corresponds to the azimuthal coverage of one antenna. A further requirement for misreconstruction reduction is an internal consistency check with subsets of the baselines. We perform angular fits, each omitting one of the baselines in the original baseline set, and reject the event if any of these subset fits, each using $N_{\rm base}-1$ baselines, yields a space-angle result that differs from the original by more than $5^\circ$.
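A schematic version of this $\chi^2$ minimization is sketched below (Python with SciPy; the plane-wave geometry is simplified, and the antenna positions, seeding ranges, and function names are placeholders rather than the flight analysis code):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

C_NS = 0.299792458  # speed of light [m/ns]

def plane_wave_dt(theta, phi, pos_a, pos_b):
    # Expected arrival-time difference (ns) between antennas at pos_a
    # and pos_b (meters, payload frame) for a plane wave arriving from
    # zenith angle theta and azimuth phi (radians).
    k = -np.array([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])  # propagation direction
    return np.dot(k, pos_a - pos_b) / C_NS

def chi2(angles, baselines, dt_obs, sigma):
    theta, phi = angles
    dt_hypo = np.array([plane_wave_dt(theta, phi, a, b)
                        for a, b in baselines])
    return np.sum(((dt_obs - dt_hypo) / sigma) ** 2)

def fit_direction(baselines, dt_obs, sigma, phi_max, n_seeds=10,
                  rng=None):
    # Repeat the minimization from varied seeds to reduce trapping in
    # local minima, mimicking the iteration strategy in the text.
    if rng is None:
        rng = np.random.default_rng(0)
    best = None
    for _ in range(n_seeds):
        seed = (np.radians(90.0 + 60.0 * rng.random()),
                phi_max + np.radians(50.0 * rng.random() - 25.0))
        res = minimize(chi2, seed, args=(baselines, dt_obs, sigma),
                       method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best
\end{verbatim}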
\subsection{Efficiency} \begin{figure}[ht!] \centerline{ \epsfxsize=5.5in \epsfbox{Recon_eff1.eps}} \caption[Reconstruction Efficiency vs. $SNR$ ]{Reconstruction efficiency vs. $SNR$ for both polarizations analyzed independently. Dots are the efficiency measured with the ground-based calibration pulses, and the curves are the instrument-simulation Monte Carlo estimates. The left panel is for vertical polarization, which has a known SNR bias due to low-level EMI contamination in the event signals; the right panel shows the same efficiency for H-pol, which is less affected by EMI contamination.} \label{fig:eff_vs_snr} \end{figure} The final reconstruction efficiency is important to understanding the overall detector efficiency, and is measured with the ground-based calibration data and additional simulation analysis. Figure~\ref{fig:eff_vs_snr} shows the efficiency as a function of $SNR$ for events with acceptable fits, for both polarizations. There is good agreement between the ground-based calibration data and the MC for $SNR>5.5$, while a discrepancy is found in the lower-$SNR$ region for the V-pol events. The likely source of this discrepancy is the effect of low-level interference from Williams Field, where the ground calibration system was operated, which causes an error in the SNR estimate. Due to the preponderance of vertically polarized signals from the transmitters near Williams Field, V-pol is affected to a greater degree than H-pol. The global efficiency at higher SNR for both polarizations with the ground calibration system is 96\%, while the misreconstruction rate is less than 0.16\%. \subsection{RF Projection on the Surface} As noted above, reconstruction of the RF source position on the surface is critical to identify the anthropogenic noise sources associated with known bases. In order to obtain more precise position measurements we take into account the surface elevation variation based on the Bedmap data~\cite{bedmap}. A simple model is implemented for a fast calculation. We find line-sphere intersections assuming a spherical earth, and take the solution with the shorter distance to the ANITA payload. For the initial calculation, we assume the earth radius $R^{\rm hypo}_{\rm earth}$ is the geoid plus the surface elevation at the ANITA payload coordinates (longitude and latitude). Each subsequent iteration then uses the geoid and the surface elevation at the coordinates found in the previous iteration. We found that the results converge within a few iterations. The resulting source position resolution at the surface is 2.7~km for a line-of-sight distance of 170~km at an altitude of 37~km. \section{Summary} We have presented the basis for a balloon-borne payload search for ultra-high energy neutrinos from an altitude of 37~km above the Antarctic ice sheets. The ANITA payload's first flight resulted in a large database of impulsive events from locations all across Antarctica. When these events indicated a ground-based point source, the source could be geolocated to a precision of order $0.3^{\circ} \times 1.0^{\circ}$ in angle, which projects to error ellipses of several square km at typical distances of several hundred km. We have demonstrated that operation down to near-thermal-noise levels is achievable, and that electromagnetic interference in Antarctica, while still problematic, is manageable and largely confined to a relatively small number of camps. Results from the analyses of all these data in the search for neutrino candidates will be presented separately.
\section{Introduction} \label{introduction} More than 30 years ago 't Hooft \cite{'t Hooft:1976up,'tHooft:fv} explored some of the physical consequences of topological structures \cite{Belavin:1975fg} in non-Abelian gauge theories. The issues are directly tied to chiral anomalies, and the phenomena discussed ranged from the mass of the $\eta^\prime$ meson to the existence of baryon decay in the standard model. The latter effect is too small for observation; nevertheless, the fact that it must exist is crucial for any fundamental formulation of the theory. In particular, it must appear in any valid attempt to formulate the standard model on the lattice \cite{Eichten:1985ft}. Although the underpinnings of these ideas have been established for some time, recent controversies strikingly show that the issues are not fully understood. For example, the rooting algorithm used to adjust the number of quark flavors in the staggered fermion lattice algorithm is inconsistent with the expected form of the 't Hooft vertex \cite{Creutz:2007rk}. This has led to a rather bitter controversy involving a large subset of the lattice gauge community \cite{ Creutz:2007rk, Creutz:2007yg, Bernard:2006vv, Creutz:2007pr, Kronfeld:2007ek,Bernard:2007eh}. A second dispute involves the speculation that a vanishing up quark mass might solve the strong CP problem. The 't Hooft vertex gives rise to non-perturbative contributions to the renormalization group flow of quark masses \cite{Georgi:1981be,Banks:1994yg}. When the quark masses are non-degenerate, these involve an additive shift and show that the vanishing of a single quark mass is renormalization scheme dependent. As such it cannot be a fundamental concept \cite{Creutz:2003xc}. This conflicts with the conventional perturbative arguments that the renormalization of fermion masses is purely multiplicative, something that is only true for multiple degenerate flavors. Nevertheless, various attempts to go beyond the standard model continue to build in a vanishing up-quark mass as an escape from the strong CP problem; for a few examples see \cite{Srednicki:2005wc,Davoudiasl:2007zx,Davoudiasl:2005ai}. All these issues are closely tied to quantum anomalies and axial symmetries. Indeed, when expressed in terms of the 't Hooft interaction, the qualitative resolution of most of these effects becomes fairly obvious. The fact that lingering controversies continue suggests that it is worthwhile to revisit the underpinnings of the mechanism. That is the purpose of this paper. Although the mechanism applies also to the weak interactions through the predicted baryon decay, here I will restrict my discussion to the strong interactions of quarks and gluons. No single item in this discussion is new in and of itself. The main goal of this paper is to elucidate their unification through the 't Hooft interaction. I will occasionally use lattice language for convenience, such as referring to an ultraviolet cutoff $a$ as a ``lattice spacing.'' Nevertheless, the issues are not specific to lattice gauge theory. The topic is basic non-perturbative issues within the standard quark-confining dynamics of the strong interactions. I will rely heavily on chiral symmetries, and only assume that I have a well regulated theory that maintains these symmetries to a good approximation. I organize the discussion as follows. Section \ref{vertex} starts with a review of how the 't Hooft effective interaction arises and discusses some of its general properties.
In section \ref{etamass} I turn to the historically most significant use of the effect, the connection to the $\eta^\prime$ mass. The robustness of the zero modes responsible for the vertex is discussed in section \ref{index}. The remainder of the paper goes through a variety of other physical consequences that are perhaps somewhat less familiar. Section \ref{reps} explores the discrete chiral symmetries that appear with quarks in higher representations than the fundamental, as motivated by some unified models. Section \ref{theta} discusses how the effective vertex is tied to the well known possibility of CP violation in the strong interactions through a non-trivial phase in the quark mass matrix. Section \ref{mu} connects the vertex to the ill-posed nature of proposing a vanishing up quark mass to solve the strong CP problem. Building on this, section \ref{kaplanmanohar} relates this result to the effective chiral Lagrangian ambiguity discussed by Kaplan and Manohar \cite{Kaplan:1986ru}. In section \ref{axions} I briefly discuss the axion solution to the strong CP problem, noting that the axion does acquire a mass from the anomaly but observing that as long as the coupling of the axion to the strong interactions is small this mass along with inherited CP violating effects will naturally be small. In section \ref{rooting} I discuss why the rooting procedure used in lattice gauge theory with staggered quarks mutilates the interaction, thus introducing an uncontrolled approximation. Finally, the basic conclusions are summarized in section \ref{summary}. \section{The vertex} \label{vertex} I begin with a brief reminder of the strategy of lattice simulations. Consider the basic path integral, or ``partition function,'' for quarks and gluons \begin{equation} Z=\int (dA)(d\psi)(d\overline\psi) \exp(-S_g(A)-\overline\psi D(A)\psi). \end{equation} Here $A$ denotes the gauge fields and $\overline\psi,\psi$ the quark fields. The pure gauge part of the action is $S_g(A)$ and the matrix describing the fermion part of the action is $D(A)$. Since direct numerical evaluation of the fermionic integrals appears to be impractical, the Grassmann integrals are conventionally evaluated analytically, reducing the partition function to \begin{equation} Z=\int (dA)\ e^{-S_g(A)}\ |D(A)|. \end{equation} Here $|D(A)|$ denotes the determinant of the Dirac matrix evaluated in the given gauge field. Thus motivated, the basic lattice strategy is to generate a set of random gauge configurations weighted by $\exp(-S_g(A))\ |D(A)|$. Given an ensemble of such configurations, one then estimates physical observables by averages over this ensemble. This procedure seems innocent enough, but it can run into trouble when one has massless fermions and corresponding chiral symmetries. To see the issue, write the determinant as a product of the eigenvalues $\lambda_i$ of the matrix $D$. In general $D$ may not be a normal matrix; so one should pick either left or right eigenvectors at one's discretion. This is a technical detail that will not play any further role here. In order to control infrared issues with massless quarks, let me introduce a small explicit mass $m$ and reduce the path integral to \begin{equation} Z=\int (dA)\ e^{-S_g(A)}\ \prod_i (\lambda_i+m). \end{equation} Now suppose we have a configuration where one of the eigenvalues of $D(A)$ vanishes, {\it i.e.} assume that some $\lambda_i=0$. As we take the mass to zero, any configurations involving such an eigenvalue will drop out of the ensemble.
At first one might suspect this would be a set of measure zero in the space of all possible gauge fields. However, as discussed later, the index theorem ties gauge field topology to such zero modes of the Dirac operator. This shows that such modes can be robust under small deformations of the fields. Under the traditional lattice strategy these configurations have zero weight in the massless limit. The naive conclusion is that such configurations are irrelevant to physics in the chiral limit. It was this reasoning that 't Hooft showed to be incorrect. Indeed, he demonstrated that it is natural for some observables to have $1/m$ factors when zero modes are present. These can cancel the terms linear in $m$ from the determinant, leaving a finite contribution. As a simple example, consider the quark condensate \begin{equation} \langle \overline\psi \psi\rangle= {1\over VZ} \int (dA)\ e^{-S_g}\ |D|\ \ {\rm Tr}D^{-1}. \end{equation} Here $V$ represents the system volume, inserted to give an intensive quantity. Expressing the fermionic factors in terms of the eigenvalues of $D$ reduces this to \begin{equation} \langle \overline\psi \psi\rangle= {1\over VZ} \int (dA)\ e^{-S_g}\ \prod_i (\lambda_i+m) \ \ \sum_i {1\over \lambda_i+m}. \end{equation} Now if there is a mode with $\lambda_i=0$, the factor of $m$ is canceled by a $1/m$ piece in the trace of $D^{-1}$. Configurations containing a zero mode give a constant contribution to the condensate that survives in the chiral limit. Note that this effect is unrelated to spontaneous breaking of chiral symmetry and appears even at finite volume. This contribution to the condensate is special to the one-flavor theory. Because of the anomaly, this quark condensate is not an order parameter for any symmetry. With more fermion species there will be additional factors of $m$ from the determinant. Then the effect is of higher order in the fermion fields and does not appear directly in the condensate. For two or more flavors the standard Banks-Casher picture \cite{Banks:1979yr} of an eigenvalue accumulation leading to the spontaneous breaking of chiral symmetry should apply. The conventional discussion of the 't Hooft vertex starts by inserting fermionic sources into the path integral \begin{equation} Z(\eta,\overline\eta)=\int (dA)\ (d\psi)\ (d\overline\psi)\ e^{-S_g-\overline\psi (D+m) \psi +\overline\psi \eta+ \overline\eta\psi}. \end{equation} Differentiation, in the Grassmannian sense, with respect to these sources can generate the expectation for an arbitrary product of fermionic operators. Integrating out the fermions reduces this to \begin{equation} Z=\int (dA)\ e^{-S_g{+\overline\eta(D+m)^{-1}\eta}}\ \prod_i (\lambda_i+m). \end{equation} Consider a zero mode $\psi_0$ satisfying {$D\psi_0=0$}. If the source has an overlap with the mode, that is $(\psi_0^\dagger\cdot\eta)\ne 0$, then a factor of {$1/m$} in the source term can cancel the {$m$} from the determinant. Although non-trivial topological configurations do not contribute to $Z$, their effects can survive in correlation functions. For the one-flavor theory the effective interaction is bilinear in the fermion sources and is proportional to \begin{equation} (\overline\eta\cdot\psi_0)(\psi_0^\dagger\cdot\eta). \label{bilinear} \end{equation} As discussed later, the index theorem tells us that in general the zero mode is chiral; it appears in either {$\overline\eta_L\eta_R$} or {$\overline\eta_R\eta_L$}, depending on the sign of the gauge field winding.
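The cancellation between the determinant and the $1/m$ pole can be illustrated with a toy numerical example (a sketch only, not a lattice computation: the ``eigenvalues'' below are an arbitrary conjugate-pair spectrum with one mode placed exactly at zero):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu = rng.uniform(0.5, 2.0, size=3)
# Toy spectrum of D for one configuration: conjugate imaginary pairs
# plus a single exact zero mode.
lam = np.concatenate(([0.0], 1j * mu, -1j * mu))

for m in [1e-1, 1e-3, 1e-6]:
    det_factor = np.prod(lam + m).real         # fermion determinant
    trace_dinv = np.sum(1.0 / (lam + m)).real  # Tr D^{-1}
    print(m, det_factor, det_factor * trace_dinv)
# The determinant vanishes linearly in m, but the product entering the
# condensate stays finite: the 1/m pole of the zero mode cancels the
# factor of m from the determinant.
\end{verbatim}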
With $N_f\ge2$ flavors, the cancellation of the mass factors in the determinant requires source factors from each flavor. This combination is the 't Hooft vertex. It is an effective $2N_f$-fermion operator. In the process, every flavor flips its spin, as sketched in Fig.~\ref{instanton}. Indeed, this is the chiral anomaly; left and right helicities are not separately conserved. \begin{figure*} \centering \includegraphics[width=2.5in]{instanton.eps} \caption{ The 't Hooft vertex for $N_f$ flavors is a $2N_f$ effective fermion operator that flips the spin of every flavor.} \label{instanton} \end{figure*} Because of Pauli statistics, the multi-flavor vertex can be written in the form of a determinant. This clarifies how the vertex preserves flavored chiral symmetries. With two flavors, call them $u$ and $d$, Eq.~(\ref{bilinear}) generalizes to \begin{equation} \left\vert \matrix{ (\overline u \cdot\psi_0)(\psi_0^\dagger\cdot u) & (\overline u \cdot\psi_0)(\psi_0^\dagger\cdot d)\cr (\overline d \cdot\psi_0)(\psi_0^\dagger\cdot u) & (\overline d \cdot\psi_0)(\psi_0^\dagger\cdot d)\cr } \right\vert. \end{equation} Note that the effect of the vertex is non-local. In general the zero mode $\psi_0$ is spread out over some finite region. This means there is an inherent position-space uncertainty on where the fermions are interacting. A particular consequence is that fermion number conservation is only a global symmetry. In Minkowski space language, this non-locality can be thought of in terms of states sliding in and out of the Dirac sea at different locations. \section{ The $\eta^\prime$ mass} \label{etamass} The best known consequence of the 't Hooft interaction is the explanation of why the $\eta^\prime$ meson is substantially heavier than the other pseudo-scalars. Consider the three-flavor theory with up, down, and strange quarks, $u,d,s$. The quark model indicates that this theory should have three neutral non-strange pseudo-scalars. Experimentally these are the $\pi_0$ at 135 MeV, the $\eta$ at 548 MeV, and the $\eta^\prime$ at 958 MeV. In the quark model, these should be combinations of the quark bound states $\overline u\gamma_5 u,\ \overline d\gamma_5 d,\ \overline s\gamma_5 s$. In the standard chiral picture, the squares of the Goldstone boson masses are linear in the quark masses. The strange quark is the heaviest of the three, with its mass related to $m_K=$ 498 MeV. The maximum mass a Goldstone boson could have would occur if it were pure $\overline s s$. Ignoring the light quark masses, this maximum value is $\sqrt 2 m_K = 704\ \hbox{MeV}$, substantially less than the observed mass of the ${\eta^\prime}$. From this we are driven to conclude that the $\eta^\prime$ must be something else and not a Goldstone boson. When viewed in the context of the 't Hooft interaction, the problem disappears. The vertex directly breaks the naive $U(1)$ axial symmetry, and thus there is no need for a corresponding Goldstone boson. Thus, the mass of the $\eta^\prime$ should be of order the strong interaction scale plus the masses of the contained quarks. Indeed, when compared to a vector meson mass, such as the $\phi$ at 1019 MeV, the 958 MeV of the $\eta^\prime$ seems quite normal. Even though this resolves the issue, it is perhaps interesting to look a bit further into the differences between the singlet and the flavored pseudo-scalar mesons. For the one-flavor case there are two effects that give extra contributions to the singlet mass.
First, the vertex itself gives a direct mass shift to the quark, and, second, the vertex directly couples the quark-antiquark content to gluonic intermediate states. In general it is expected that \begin{equation} \langle \eta^\prime|F\tilde F|0\rangle\ne 0. \end{equation} The $\eta^\prime$ can be created not just by quark operators but also by a pseudo-scalar gluonic combination. With more quark species, flavored pseudo-scalar Goldstone bosons should exist. Their primary difference from the singlet is the absence of the gluonic intermediate states. It is the 't Hooft vertex that non-locally couples $F\tilde F$ to the quark-antiquark content of the flavor singlet meson. Note that with multiple flavors the vertex involves more than two fermion lines; the contributions of the extra lines can be absorbed in the condensate, as sketched in Fig.~\ref{etamassfig}. This large mass generation is sometimes referred to as coming from ``constituent'' quark masses, as opposed to the ``current'' quark masses that vanish in the chiral limit. \begin{figure*} \centering \includegraphics[width=3in]{etamass.eps} \caption{The 't Hooft vertex couples the quark-antiquark content of the pseudo-scalar meson with gluonic intermediate states. Additional quark lines associated with the vertex can be absorbed in the condensate.} \label{etamassfig} \end{figure*} The renormalization group provides useful information on the coupling constant dependence of the $\eta^\prime$ mass as the cutoff is removed. These equations read \begin{equation} a{dg\over da}=\beta(g)=\beta_0 g^3+\beta_1 g^5 +\ldots +{\rm non{\hbox{-}}perturbative} \end{equation} for the bare coupling constant $g$ and \begin{equation} a{dm\over da}=m\gamma(g)=m(\gamma_0 g^2+\gamma_1 g^4 +\ldots) +{\rm non{\hbox{-}}perturbative} \label{mrg} \end{equation} for the bare quark mass $m$. I include this latter equation for later discussion. As is well known, the coefficients $\beta_0$, $\beta_1$, and $\gamma_0$ are independent of the renormalization scheme. It is important to remember that the separation of the perturbative and non-perturbative parts of the renormalization group equations is scheme dependent. Indeed, different definitions of the coupling constant will in general differ by non-perturbative parts. This will play an important role later when I discuss non-perturbative changes in the definition of quark masses. Renormalizing by holding the $\eta^\prime$ mass fixed allows the solution of the coupling constant equation with the result \begin{equation} m_{\eta^\prime}= C\ { e^{-1/(2\beta_0 g^2)}\ g^{-\beta_1/\beta_0^2}\over a} \times(1+O(g^2))=O( \Lambda_{qcd}). \label{mofa} \end{equation} Here $C$ is a dynamically determined constant which could in principle be determined in numerical simulations. Because the inverse of the coupling appears in the exponent, this dependence is non-perturbative. Indeed, this equation is a simple restatement of asymptotic freedom, the requirement that $g(a)\rightarrow 0$ logarithmically as $a\rightarrow 0$. The importance of this relation is that similar expressions involving exponential dependences on the inverse coupling are natural and expected to occur in any quantities where non-perturbative effects are important. \section{Robustness and the index theorem} \label{index} The reason these zero modes remain crucial is their robustness through the connection to the index theorem \cite{Atiyah:1971rm}. Otherwise they could be argued to contribute a set of measure zero to the path integral.
When the gauge field is smooth, the difference in the number of right-handed and left-handed zero modes is tied to a topological wrapping of the gauge fields at infinity around the gauge group. Being topological, this winding is robust under small deformations of the gauge fields. Therefore exact zero modes are not accidental but required whenever gauge configurations have non-trivial topology. The robustness of these zero modes can also be seen directly from the eigenvalue structure of the Dirac operator. This builds on $\gamma_5$ hermiticity \begin{equation} D^\dagger=\gamma_5 D\gamma_5, \end{equation} a condition true in the naive continuum theory as well as most lattice discretizations. A direct consequence is that non-real eigenvalues of $D$ occur in complex conjugate pairs. All eigenvalues $\lambda$ satisfy $|D-\lambda|=0$, where the vertical lines denote the determinant. This plus the fact that $|\gamma_5|=1$ gives \begin{equation} |D-\lambda^*|=|\gamma_5 (D^\dagger-\lambda^*)\gamma_5|= |D^\dagger-\lambda^*|=|D-\lambda|^*=0. \end{equation} Thus all eigenvalues either occur in complex pairs or are real. Close to the massless continuum limit, $D$ is predominantly anti-Hermitian. Small real eigenvalues correspond to the zero modes that generate the 't Hooft vertex. Ignoring possible lattice artifacts, $D$ should approximately anticommute with $\gamma_5$. If $D\psi=0$, then $D\gamma_5\psi=-\gamma_5 D\psi=0.$ Thus $D$ and $\gamma_5$ commute on the subspace spanned by all the zero modes. Restricted to this space, these matrices can be simultaneously diagonalized. Since all eigenvalues of $\gamma_5$ are plus or minus unity, the trace of $\gamma_5$ restricted to this subspace must be an integer. Because of this quantization, this integer will generically be robust under small variations of the gauge fields. This defines the index for the given gauge field. This approach of defining the index directly through the eigenvalues of the Dirac operator has the advantage over the topological definition in that the gauge fields need not be differentiable. For smooth fields the definitions are equivalent through the index theorem. But in general path integrals are dominated by non-differentiable fields. Also, on the lattice the gauge fields lose the precise notion of continuity and topology. In the fermion approach to the index other subtleties do arise. In general a regularization can introduce distortions so that the real eigenvalues are not necessarily exactly at the same place. In particular, for Wilson lattice fermions the real eigenvalues spread over a finite range. In addition to the small eigenvalues near zero, the Wilson approach has additional real eigenvalues far from the origin that are associated with doublers. Applying $\gamma_5$ to a small eigenvector will in general mix in a small amount of the larger modes. This allows the trace of $\gamma_5$ on the subspace involving only the low modes to deviate from an exact integer. The overlap operator \cite{Neuberger:1997fp} does constrain the small real eigenvalues to be at the origin, and the earlier argument goes through. In this case additional real eigenvalues do occur far from the origin. These are required so that the trace of $\gamma_5$ over the full space will vanish. Since the overlap operator keeps the index discrete, it is forced to exhibit discontinuous behavior as the gauge fields vary between topological sectors.
In the vicinity of these discontinuities the gauge fields can be thought of as ``rough,'' and the precise value of the index can depend on the details of the kernel used to project onto the overlap matrix. For multiple light fermion flavors this ambiguity in the index is expected to be suppressed in the continuum limit. Nevertheless, the issues discussed later for a single massless quark suggest that the situation may be more subtle for the zero or one species case. \section{Fermions in higher representations of the gauge group} \label{reps} When the quarks are massless, the classical field theory corresponding to the strong interactions has a $U(1)$ axial symmetry under the transformation \begin{equation} \psi\rightarrow e^{i\theta\gamma_5}\psi\qquad \overline\psi\rightarrow \overline\psi e^{i\theta\gamma_5}. \label{thetarot} \end{equation} It is the 't Hooft vertex that explains how this symmetry does not survive quantization. In this section I discuss how in some special cases, in particular when the quarks are in non-fundamental representations of the gauge group, discrete subgroups of this symmetry can remain. While these considerations do not apply to the usual theory of the strong interactions, there are several reasons to study them anyway. At higher energies, perhaps as will be probed at the upcoming Large Hadron Collider, one might well discover new strong interactions that play a substantial role in the spontaneous breaking of the electroweak theory. Also, many grand unified theories involve fermions in non-fundamental representations. As one example, massless fermions in the 10 representation of $SU(5)$ possess a $Z_3$ discrete chiral symmetry. Similarly the left handed 16 covering representation of $SO(10)$ gives a chiral gauge theory with a surviving discrete $Z_2$ chiral symmetry. Understanding these symmetries may play some role in an eventual discretization of chiral gauge theories on the lattice. I build here on generalizations of the index theorem relating gauge field topology to zero modes of the Dirac operator. In particular, fermions in higher representations can involve multiple zero modes for a given winding. To be generic, consider representation $X$ of a gauge group $G$. Denote by $N_X$ the number of zero modes that are required per unit of winding number in the gauge fields. That is, suppose the index theorem generalizes to \begin{equation} n_r-n_l=N_X\nu \end{equation} where $n_r$ and $n_l$ are the number of right and left handed zero modes, respectively, and $\nu$ is the winding number of the associated gauge field. The basic 't Hooft vertex receives contributions from each zero mode, resulting in an effective operator which is a product of $2N_X$ fermion fields. Schematically, the vertex is modified along the lines $\overline\psi_L \psi_R \longrightarrow (\overline\psi_L \psi_R)^{N_X}$. While this form still breaks the $U(1)$ axial symmetry, it is invariant under $\psi_R\rightarrow e^{2\pi i/N_X}\psi_R$. In other words, there is a $Z_{N_X}$ discrete axial symmetry. There are a variety of convenient tools for determining $N_X$. Consider building up representations from lower ones. Take two representations $X_1$ and $X_2$ and form the direct product representation $X_1\otimes X_2$. Let the matrix dimensions for $X_1$ and $X_2$ be $D_{X_1}$ and $D_{X_2}$, respectively. Then for the product representation we have \begin{equation} N_{X_1\otimes X_2}= N_{X_1} D_{X_2}+N_{X_2} D_{X_1}.
\end{equation} To see this, start with $X_1$ and $X_2$ representing two independent groups $G_1$ and $G_2$. With $G_1$ having winding, there will be a zero mode for each of the dimensions of the matrix index associated with $X_2$. Similarly there will be multiple modes for winding in $G_2$. These modes are robust and all should remain if we now constrain the groups to be the same. As a first example, denote the fundamental representation of $SU(N)$ as $F$ and the adjoint representation as $A$. Then using $\overline F \otimes F = A+1$ in the above gives $N_A=2N$, as noted some time ago in Ref.~\cite{Witten:1982df}. With $SU(3)$, fermions in the adjoint representation will have six-fold degenerate zero modes. For another example, consider $SU(2)$ and build up towards arbitrary spin $s\in\{0,{1\over 2}, 1, {3\over 2},\ldots\}$. Recursing the above relation gives the result for arbitrary spin \begin{equation} N_s=s(2s+1)(2s+2)/3. \end{equation} Another technique for finding $N_X$ in more complicated groups begins by rotating all topological structure into an $SU(2)$ subgroup and then counting the corresponding $SU(2)$ representations making up the larger representation of the whole group. An example to illustrate this procedure is the antisymmetric two-indexed representation of $SU(N)$. This representation has been extensively used in \cite{Corrigan:1979xf,Armoni:2003fb,Sannino:2003xe,Unsal:2006pj} for an alternative approach to the large $N_c$ limit. The basic $N(N-1)/2$ fermion fields take the form \begin{equation} \psi_{ab}=-\psi_{ba}, \qquad a,b\in \{1,2,\ldots,N\}. \end{equation} Consider rotating all topology into the $SU(2)$ subgroup involving the first two indices, i.e. 1 and 2. Because of the anti-symmetrization, the field $\psi_{12}$ is a singlet in this subgroup. The field pairs $(\psi_{1,j},\psi_{2,j})$ form a doublet for each $j\ge 3$. Finally, the $(N-2)(N-3)/2$ remaining fields do not transform under this subgroup and are singlets. Overall we have $N-2$ doublets under the $SU(2)$ subgroup, each of which gives one zero mode per winding number. We conclude that the 't Hooft vertex leaves behind a $Z_{N-2}$ discrete chiral symmetry. Specializing to the 10 representation of $SU(5)$, this is the $Z_3$ mentioned earlier. Another example is the group $SO(10)$ with fermions in the 16 dimensional covering group. This forms the basis of a rather interesting grand unified theory, where one generation of fermions is placed into a single left handed 16 multiplet \cite{Georgi:1979dq}. This representation includes two quark species interacting with the $SU(3)$ subgroup of the strong interactions. Rotating a topological excitation into this subgroup, we see that the effective vertex will be a four-fermion operator and preserve a $Z_2$ discrete chiral symmetry. \begin{figure*} \centering \includegraphics[width=2.5in]{sufivealt.eps} \caption{With massless fermions in the 10 representation of gauge group $SU(5)$ there exists a discrete $Z_3$ chiral symmetry. If this is spontaneously broken one expects three phase transitions to meet at the origin in complex mass space, as sketched here. (From Ref.~\cite{Creutz:2006ts}).} \label{sufivealt} \end{figure*} \begin{figure*} \centering \includegraphics[width=2.5in]{sufive.eps} \caption{If the discrete chiral symmetry is not broken spontaneously, $SU(5)$ gauge theory with fermions in the 10 representation should behave smoothly in the quark mass as it passes through zero.
Such a smooth behavior is similar to that expected for the one-flavor theory in the fundamental representation. (From Ref.~\cite{Creutz:2006ts}).} \label{sufive} \end{figure*} It is unclear whether these discrete symmetries are expected to be spontaneously broken. Since they are discrete, such breaking is not associated with Goldstone bosons. But the quark condensate does provide an order parameter; so when $N_X>1$, any such breaking would be conceptually meaningful. Returning to the $SU(5)$ case with fermions in the 10, a spontaneous breaking would give rise to discrete jumps in this order parameter as a function of the complex mass plane, as sketched in Fig.~\ref{sufivealt}. Alternatively, the unbroken theory would have a phase diagram more like that in Fig.~\ref{sufive}. In these figures I assume that for large mass a spontaneous breaking of parity does occur when the strong CP violation angle is set to $\pi$. Such a jump is expected even for the one-flavor theory with fermions in the fundamental representation \cite{Creutz:2006ts}. Which of these behaviors is correct could be determined in lattice simulations, although there are issues in how the lattice formulation is set up. The Wilson approach involves irrelevant chiral symmetry breaking operators that will in general distort the three fold symmetry of these models. Even the overlap operator \cite{Neuberger:1997fp}, which respects a variation of the continuous chiral symmetries, appears to break these discrete symmetries \cite{Edwards:1998dj}. Nevertheless, as one comes sufficiently close to the continuum limit, it should be possible to distinguish between these scenarios. \section{The Theta parameter and the 't Hooft vertex} \label{theta} When the quarks are massless, the classical field theory corresponding to the strong interactions has a $U(1)$ axial symmetry under the transformation in Eq.~(\ref{thetarot}). On the other hand, a fermion mass term, say $m\overline\psi\psi$, breaks this symmetry explicitly. Indeed, under the chiral rotation of Eq.~(\ref{thetarot}) \begin{equation} m\overline\psi\psi\rightarrow m\cos(2\theta)\overline\psi\psi +im\sin(2\theta)\overline\psi\gamma_5\psi. \label{m5} \end{equation} If the classical chiral symmetry of the kinetic term were not broken by quantum effects, then a mass term of the form of the right hand side of this equation would be physically completely equivalent to the normal mass term. But because of the effect of the 't Hooft interaction, the theory with the rotated mass is physically inequivalent to the unrotated theory. Regardless of how the theory is regulated, it is essential that the cutoff distinguish between the two terms on the right hand side of Eq.~(\ref{m5}). With a Pauli-Villars scheme it is the mass of the heavy regulator field that fixes the angle $\theta$. For Wilson fermions the Wilson term selects the chiral direction. This carries over to the overlap formulation, built on a projection from the Wilson operator. Unfortunately it is the absence of such a distinction that lies at the heart of the failure of the rooting prescription for staggered fermions, as discussed later.
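As a quick cross-check of the factor of two in Eq.~(\ref{m5}) (a numerical illustration added here, not part of the original argument), the matrix identity $e^{i\theta\gamma_5}e^{i\theta\gamma_5}=\cos(2\theta)+i\gamma_5\sin(2\theta)$, which drives the rotation of the mass term, can be verified directly; the chiral basis for $\gamma_5$ is assumed.
\begin{verbatim}
# Check that exp(i t g5) exp(i t g5) = cos(2t) 1 + i sin(2t) g5,
# which turns m psibar psi into the two terms of Eq. (m5).
import numpy as np
from scipy.linalg import expm

g5 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)  # chiral basis
t = 0.37                                              # arbitrary test angle
lhs = expm(1j * t * g5) @ expm(1j * t * g5)
rhs = np.cos(2 * t) * np.eye(4) + 1j * np.sin(2 * t) * g5
print(np.allclose(lhs, rhs))  # True
\end{verbatim}
The above rotation is often described by complexifying the mass term.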
If we write \begin{equation} \overline\psi\psi=\overline\psi_L\psi_R+\overline\psi_R\psi_L \end{equation} with \begin{eqnarray} \psi_{R,L}={1\pm\gamma_5\over 2}\ \psi\\ \overline\psi_{R,L}=\overline\psi\ {1\mp\gamma_5\over 2}, \end{eqnarray} then our generalized mass term takes the form $$ m\overline\psi_L\psi_R+m^*\overline\psi_R\psi_L $$ with $m=|m|e^{i\theta}$ a complex number. In this latter notation, the effect of the 't Hooft vertex is to make the phase of the mass matrix an observable quantity. This phase is connected to the strong $CP$ angle, usually called $\Theta$. Indeed, because of this effect, the real and the imaginary parts of the quark masses are actually independent parameters. The two terms $\overline\psi\psi$ and $i\overline\psi\gamma_5\psi$, which are naively equivalent, are in fact distinct possible ways to break the chiral symmetry. It is the 't Hooft vertex which distinguishes one of them as special. With usual conventions $i\overline\psi\gamma_5\psi$ is a CP odd operator; therefore, its interference with the vertex can generate explicit CP violation. The non-observation of such in the strong interactions indicates this term must be quite small; this lies at the heart of the strong CP problem. With multiple flavors the possibility of flavored axial chiral rotations allows one to move the phase of the mass between the various species without changing the physical consequences. One natural choice is to place all phases on the lightest quark, say the up quark, and keep all others real. Equivalently one could put all the phase on the top quark, but this would obscure the effects on low energy physics. If one gives all quarks a common phase $\theta$, then that phase is related to the physical parameter by $\theta=\Theta/N_f$. \section{The strong CP problem and $m_u=0$} \label{mu} One of the puzzles of the strong interactions is the experimental absence of CP violation, which would not be the case if the imaginary part of the mass were present. This would be quite unnatural if at some higher energy the strong interactions were unified with the weak interactions, which are well known not to satisfy CP symmetry. On considering the strong interactions at lower energies, some residual effect of this breaking would naturally appear in the basic parameters, in particular through the imaginary part of the quark mass. The apparent experimental absence of such is known as the strong CP puzzle. An old suggestion to resolve this puzzle is that one of the quark masses might vanish. Indeed, this is a bit of a tautology since if it vanishes as a complex parameter, so does its imaginary part. But the imaginary part is really an independent parameter, and so it seems quite peculiar to tie it to the real part. While phenomenological models suggest that the up quark mass is in fact far from vanishing, various attempts to go beyond the standard model continue to build in a vanishing up-quark mass at some high scale as an escape from the strong CP problem \cite{Srednicki:2005wc,Davoudiasl:2007zx,Davoudiasl:2005ai}. It is through consideration of the 't Hooft vertex that one sees that this solution is in fact ill posed \cite{Creutz:2003xc}. As discussed earlier, for the one-flavor case the vertex introduces a shift in the quark mass of order $\Lambda_{qcd}$. The amount of this shift will in general depend on the details of the renormalization group scheme and the scale of definition.
The concept of a vanishing mass is not a renormalization group invariant, and as such it should not be relevant to a fundamental issue such as whether the strong interactions violate CP symmetry. This point carries over into the theory with multiple flavors as long as they are not degenerate. The experimental fact that the pion mass does not vanish indicates that two independent flavors cannot both be massless. If one considers the multiple flavor 't Hooft vertex, then one can always absorb the heavy quark lines that it involves using their masses, as sketched in Fig.~\ref{thooft}. This leaves behind a residual bilinear fermion vertex of order the product of the heavier quark masses. For the three flavor theory, this gives an ambiguity in the definition of the up quark mass of order $m_dm_s/\Lambda_{qcd}$ \cite{Georgi:1981be,Banks:1994yg}. Note that it is the mass associated with chiral symmetry, i.e. the ``current'' quark mass, that is being considered here; thus, the heavy quark lines cannot be absorbed in the condensate as they were in the earlier discussion of the $\eta^\prime$ mass. \begin{figure*} \centering \includegraphics[width=2.5in]{thooft.eps} \caption{With three non-degenerate flavors the lines representing the heavier quarks can be joined to the 't Hooft vertex in such a way that the combination gives an ambiguity in the light quark mass of order the product of the heavier masses. (From Ref.~\cite{Creutz:2003xc}).} \label{thooft} \end{figure*} Can we define a massless quark via its bare value? This approach fails at the outset due to the perturbative divergences inherent in the bare parameters of any quantum field theory. Indeed, the renormalization group tells us that the bare quark mass must vanish in the continuum limit, regardless of the physical hadronic spectrum. One immediate consequence is that it does not make sense to take the continuum limit before taking the mass to zero; the two limits are intricately entwined through the renormalization group equations. To see this explicitly, recall the renormalization group equation for the mass, Eq.~(\ref{mrg}). This is easily solved to reveal the small cutoff behavior of the bare mass \begin{equation} m=M_R\ g^{\gamma_0/\beta_0} (1+O(g^2)). \end{equation} This goes to zero as $a\rightarrow 0$ since $g$ does so by asymptotic freedom and $\gamma_0/\beta_0>0$. Here $M_R$ denotes an integration constant which might be regarded as a ``renormalized mass.'' One cannot sensibly use $M_R$ to define a vanishing mass since it has an additive ambiguity. For example, consider a non-perturbative redefinition of the bare mass \begin{equation} \tilde m=m- g^{\gamma_0/\beta_0}\times { e^{-1/(2\beta_0 g^2)}\ g^{-\beta_1/\beta_0^2}\over a}\times {\Delta\over \Lambda_{qcd}}. \label{tildem} \end{equation} This is still a solution of the renormalization group equation, but involves the shift \begin{equation} M_R\rightarrow M_R-\Delta. \end{equation} Since the parameter $\Delta$ can be chosen arbitrarily, a vanishing of the renormalized mass for a non-degenerate quark is meaningless. While the exponential factor in Eq.~(\ref{tildem}) may look contrived, non-perturbative forms like this are in fact natural. Indeed, compare this expression with that for the eta prime mass, Eq.~(\ref{mofa}). \section{Connections with the Kaplan-Manohar ambiguity} \label{kaplanmanohar} In 1986 Kaplan and Manohar \cite{Kaplan:1986ru}, working in the context of next-to-leading-order chiral Lagrangians, pointed out an inherent ambiguity in the quark masses.
This takes a similar form to that found above, being proportional to the product of the heavier quark masses. The appearance of this form from the 't Hooft vertex is illustrative of a fundamental connection to the phenomenological chiral Lagrangian models. In this section I slightly rephrase the Kaplan-Manohar argument. Consider the three flavor theory with mass matrix \begin{equation} M=\pmatrix{ m_u & 0 & 0\cr 0 & m_d & 0\cr 0 & 0 & m_s\cr }. \label{massmatrix} \end{equation} Chiral symmetry manifests itself in the massive theory as an invariance of physical quantities under changes in the quark mass matrix. Under a rotation of the form \begin{equation} M\rightarrow g_L M g_R^{-1} \label{chiralm} \end{equation} the basic physics of particles and their scatterings will remain unchanged. Here $g_L$ and $g_R$ are arbitrary elements of the flavor group, here taken as $SU(3)$. To proceed, consider the invariance of the antisymmetric tensor under the flavor group \begin{equation} \epsilon_{abc}=g_{ad}g_{be}g_{cf}\epsilon_{def}. \end{equation} Using this, it is straightforward to show that the combination \begin{equation}\epsilon_{acd}\epsilon_{bef} M^\dagger_{ec}M^\dagger_{fd} \label{chiralm1} \end{equation} transforms exactly the same way as $M$ under the change in Eq.~(\ref{chiralm}). This symmetry allows the renormalization group equations to mix the term in Eq.~(\ref{chiralm1}) with the starting mass matrix. Under a change of scale, the mass matrix can evolve to a combination along the lines \begin{equation} M_{ab}\rightarrow \alpha M_{ab}+\beta\epsilon_{acd}\epsilon_{bef} M^\dagger_{ec}M^\dagger_{fd}. \end{equation} Writing this in terms of the three quark masses in Eq.~(\ref{massmatrix}) gives \begin{equation} m_u\rightarrow \alpha m_u+2 \beta m_s m_d. \end{equation} This is exactly the same form as generated by the 't Hooft vertex. For the three flavor theory this is a next-to-leading-order chiral ambiguity in $m_u$. Dropping down to fewer flavors the issue becomes sharper, being a leading-order mixing of $m_u$ with $m_d^*$ in the two flavor case. For one flavor it is a zeroth-order effect, leaving a mass ambiguity of order $\Lambda_{qcd}$. Increasing the number of light flavors tends to suppress topologically non-trivial gauge configurations. Effectively the fermions act to smooth out rough gauge fields. If we drop down in the number of flavors even further, towards the pure Yang-Mills theory, the fluctuations associated with topology should become still stronger. Indeed, the issues present in the one-flavor case suggest that there may be a residual ambiguity in defining the topological susceptibility for the pure glue theory \cite{Creutz:2004ir}. Since the theta parameter arises from topological issues in the gauge theory, one might wonder how its effects can be present in the chiral Lagrangian approach, where the gauge fields are effectively hidden. The reason is tied to the constraint that the effective fields are in the group $SU(3)$ rather than $U(3)$. The chiral Lagrangian imposes from the outset the correct symmetry of the theory including anomalies. Had one worked with a $U(3)$ effective field, then one would need to add a term to break the unwanted axial $U(1)$. A term involving the determinant of the effective matrix accomplishes this, as in Ref.~\cite{Di Vecchia:1980ve}. \section{Axions and the strong CP problem} \label{axions} Another approach to the strong CP issue is to make the imaginary part of the quark mass a dynamical quantity that naturally relaxes to zero.
Excitations of this new dynamical field are referred to as axions, and this is known as the axion solution to the strong CP problem. The basic idea is to replace a quark mass term with a coupling to a new dynamical field ${\cal A}(x)$ \begin{equation} m\overline\psi_L \psi_R + \hbox{h.c.}\rightarrow m \overline\psi_L \psi_R + i\xi {\cal A}(x) \overline\psi_L \psi_R + \hbox{h.c.} + (\partial_\mu {\cal A}(x))^2/2. \end{equation} Here $\xi$ is a parameter that allows one to adjust the strength of the axion coupling to hadrons; if $\xi$ is sufficiently small, the axion would not be observable in ordinary hadronic interactions. Any imaginary part in $m$ can then be shifted away, thus removing CP violation. This is the Peccei-Quinn symmetry \cite{Peccei:1977hh}. At this level the axion is massless. However, the operator coupled to the axion field can create eta prime mesons, so this term will mix the axion and the eta prime. Since non-perturbative effects give the eta prime a large mass, this mixing will in general not leave the physical axion massless; indeed it should acquire a mass of order $\xi^2$. This requirement for a renormalization of the axion mass shows how the anomaly forces a breaking of the Peccei-Quinn symmetry. As that symmetry was motivated by the strong CP problem, one might wonder if the axion really still solves this issue. As long as CP violation is present in some unified theory, and the shift symmetry is not exact, the reduction to the strong interactions could leave behind a linear term in the axion field, i.e. something that cannot be shifted away. The fact that we are taking $\xi$ small suggests that such a term would naturally be of order $\xi^2$, i.e. something of order the axion mixing with the eta prime. As long as the axion mass is not large, the visible CP violations in the strong interactions will remain small and the axion solution to the strong CP problem remains viable. \section{Consequences for rooted determinants} \label{rooting} Among the more controversial consequences of the 't Hooft vertex is the fact that it is severely mutilated by the ``rooting trick'' popular in many lattice gauge simulations \cite{Creutz:2007yg,Creutz:2007rk}. This represents a serious flaw in these algorithms. Indeed, the main purpose of lattice gauge theory is to obtain non-perturbative information on field theories, and the 't Hooft interaction is one of the most important non-perturbative effects. Nevertheless, the large investments that have been made in such algorithms have led some authors to attempt refuting this flaw \cite{Bernard:2006vv,Kronfeld:2007ek,Bernard:2007eh}. The problem arises because the staggered formulation for lattice quarks starts with an inherent factor of four in the number of species \cite{Kogut:1974ag,Susskind:1976jm,Sharatchandra:1981si,Karsten:1980wd}. These species are sometimes called ``tastes.'' Associated with this degeneracy is one exact chiral symmetry, which corresponds to a flavored chiral symmetry amongst the tastes. As one approaches the continuum limit, the 't Hooft vertex continues to couple to all tastes. All of these states will be involved as intermediate states in the interactions between topological objects. They give a contribution which is constant as the mass goes to zero, with a factor of $m^4$ from the determinant being canceled by a factor of $m^{-4}$ from the sources. The rooting ``trick'' is an attempt to reduce the theory from one with four tastes per flavor to only one.
This is done by replacing the fermion determinant with its fourth root. However, this process preserves any symmetries of the determinant, including the one exact chiral symmetry of the staggered formulation. This is a foreboding of inherent problems since the one-flavor theory is not allowed to have any chiral symmetry. This symmetry forbids the appearance of the mass shift that is associated with the 't Hooft vertex of the one flavor theory. The main issue with rooting is that, even after the process, four potential tastes remain in the sources. The effective vertex will couple to all of them and be a multi-linear operator of the same order as it was in the unrooted theory. Furthermore, it will have a severe singularity in the massless limit since the rooting reduces the $m^4$ factor from the determinant to simply $m$, while the $m^{-4}$ from the sources remains. For the one-flavor theory the issue is particularly extreme. In this case the bilinear 't Hooft vertex should be a mass shift. However such an effect is forbidden by the exact chiral symmetry of the rooted formulation. Another way to see the issue with staggered fermions is via the chiral rotation in Eq.~(\ref{m5}). As discussed there, it is essential that two types of inequivalent mass terms be present. For staggered fermions the role of $\gamma_5$ is played by the parity of the site, i.e. $\pm1$ depending on whether one is on an even or odd lattice site. Unfortunately, the exact chiral symmetry of the staggered formulation gives physics which is completely independent of the angle $\theta$. For the unrooted theory this is acceptable since it is actually a flavored chiral rotation amongst the tastes, with two rotating one way and two with the opposite effective sign for $\theta$. But on rooting this symmetry is preserved, and thus the regulator cannot be complete. \section{Summary} \label{summary} I have discussed a variety of consequences of the 't Hooft operator. This is a rather old topic, but many of these consequences remain poorly understood. The approach remains the primary route towards understanding the quantum mechanical loss of the classical axial $U(1)$ symmetry and the connection of this with the $\eta^\prime$ mass. The vertex is a direct consequence of the robust nature of exact zero modes of the Dirac operator. These modes are tied to the topology of the gauge fields through the index theorem. Their stability under small perturbations of the gauge fields follows from their chiral nature. The form of the vertex exposes interesting discrete symmetries in some potential models for unification. Understanding these properties may be helpful towards finding a non-perturbative regulator for gauge theories involving chiral couplings to fermions. This effective interaction ties together and gives a qualitative understanding of several controversial ideas. In particular, the flaws in the rooting trick used in lattice gauge theory become clear in this context, although they are only beginning to be appreciated. Also, various attempts to formulate theories beyond the standard model continue to speculate on a vanishing up quark mass, despite this being an ill-posed concept. \section*{Acknowledgments} This manuscript has been authored under contract number DE-AC02-98CH10886 with the U.S.~Department of Energy. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S.~Government purposes.
\section{Introduction} Since the relativistic and higher-order $\alpha_s$ corrections are less important for bottomonia than for any other $q\bar q$ systems, studies of bottomonia may offer more direct information about the hadron configuration and the application of perturbative QCD. The key problem is how to deal with the hadronic transition matrix elements, which are fully governed by non-perturbative QCD effects. Many phenomenological models have been constructed and applied. Each of them has achieved relative success, but since none of them is based on a well established underlying theory, their model parameters must be obtained by fitting data. In doing so, some drawbacks of a given model are exposed when it is applied to different phenomenological processes. Thus one needs to continuously modify the model or re-fit its parameters, if not abandon it completely. The light front quark model is one such model. It has been applied to calculate hadronic transitions and is generally considered a successful one. The model contains a Gaussian-type wavefunction whose parameters should be determined in a certain way. The Gaussian-type wavefunction was recommended by the authors of Refs. \cite{Jaus:1999zv,Cheng:2003sm}, and most frequently the harmonic-oscillator wavefunction is adopted, which we refer to as the traditional LFQM wavefunction. When we employed the traditional LFQM wave functions to calculate the branching ratios of $\Upsilon(nS)\rightarrow\eta_b+\gamma$, some obvious contradictions between the theoretical predictions and experimental data emerged. Namely, the predicted $\mathcal{B}(\Upsilon(2S)\rightarrow\eta_b+\gamma)$ was one order of magnitude larger than the experimental upper bound \cite{Ke:2010tk}. Moreover, a careful investigation of the wave functions reveals a more serious problem. If the traditional wave functions were employed, the decay constants of $\Upsilon(nS)$ ($f_V$) would increase for higher $n$. This obviously contradicts the experimental data and the physical picture: the decay constant of an $nS$ state is proportional to its wavefunction at the origin, which measures the probability that the two constituents spatially merge, and for excited states this probability should decrease. Thus the decay constants should be smaller as $n$ grows. The experimental data confirm this trend, but the theoretical calculations with the traditional wave functions result in the inverse order. To overcome these problems, one may adopt different model parameters (namely $\beta$) by fitting the decay constant of each $n$ individually, as done in \cite{Ke:2010tk,Wang:2010np}, but then the orthogonality among the $nS$ states is broken. In this work, we try to modify the harmonic oscillator functions and introduce an explicit $n$-dependent form for the wave functions. Keeping the orthogonality among the $nS$ states ($n=1,...5$), we modify the LFQM wave functions. By fitting the decay constants of $\Upsilon(nS)$, the relevant model parameters are fixed. Besides fitting the decay constants of the $\Upsilon(nS)$ family, one should test the applicability of the model in other processes. We choose the radiative decays $\Upsilon(nS)\rightarrow \eta_b+\gamma$ as the probe. As a matter of fact, those radiative decays are of great significance for understanding the hadronic structure of the bottomonium family.
Indeed, the spin-triplet states of bottomonia $\Upsilon(nS)$ and the P-states $\chi_b(nP)$ were discovered decades ago; however, the singlet state $\eta_b$ evaded detection for a long time, even though much effort was made. Much phenomenological research on $\eta_b$ has been carried out by various groups \cite{Hao:2007rb,Ebert:2002pp,Motyka:1997di,Liao:2001yh,Recksiegel:2003fm, Gray:2005ur,Eichten:1994gt,Ke:2007ih}. Different theoretical approaches result in different level splittings $\Delta M=m_{\Upsilon(1S)}-m_{\eta_b(1S)}$. In \cite{Recksiegel:2003fm} the authors used an improved perturbative QCD approach to get $\Delta M=44$ MeV; using the potential model suggested in \cite{Buchmuller:1980su}, Eichten and Quigg estimated $\Delta M=87$ MeV \cite{Eichten:1994gt}; in Ref. \cite{Motyka:1997di} the authors selected a non-relativistic Hamiltonian with spin dependent corrections to study the spectra of heavy quarkonia and got $\Delta M$=57 MeV; the lattice prediction is $\Delta M$=51 MeV \cite{Liao:2001yh}, whereas the lattice result calculated in Ref. \cite{Gray:2005ur} was $\Delta M=64\pm14$ MeV. Ebert $et\, al.$ \cite{Ebert:2002pp} directly studied the spectra of heavy quarkonia in the relativistic quark model and gave $m_{\eta_b}=9.400$ GeV. The dispersion of these values may imply that there exist some ambiguities in our understanding of the structure of the $b\bar b$ family. \begin{center} \begin{figure}[htb] \begin{tabular}{c} \scalebox{1.2}{\includegraphics{DeltaM.eps}} \end{tabular} \caption{$\Delta M$ from different experimental measurements and theoretical works.}\label{DM} \end{figure} \end{center} The BaBar Collaboration \cite{:2008vj} first measured $\mathcal{B}(\Upsilon(3S)\rightarrow\gamma\eta_b)=(4.8\pm0.5\pm0.6)\times10^{-4}$, and determined $m_{\eta_b}=9388.9^{+3.1}_{-2.3}\pm2.7$ MeV, $\Delta M= 71.4^{+3.1}_{-2.3}\pm2.7$ MeV in 2008. New data, $m_{\eta_b}=9394.2^{+4.8}_{-4.9}\pm2.0$ MeV and $\mathcal{B}(\Upsilon(2S)\rightarrow\gamma\eta_b)=(3.9\pm1.1^{+1.1}_{-0.9})\times10^{-4}$, were released in 2009 \cite{:2009pz}. More recently the CLEO Collaboration \cite{Bonvicini:2009hs} confirmed the observation of $\eta_b$ using a database of 6 million $\Upsilon(3S)$ decays; assuming $\Gamma(\eta_b)\approx$10 MeV, they obtained $\mathcal{B}(\Upsilon(3S)\rightarrow\gamma\eta_b)=(7.1\pm1.8\pm1.1)\times10^{-4}$, $m_{\eta_b}=9391.8\pm6.6\pm2.0$ MeV and the hyperfine splitting $\Delta M= 68.5\pm6.6\pm2.0$ MeV, whereas using the database with 9 million $\Upsilon(2S)$ decays they obtained $\mathcal{B}(\Upsilon(2S)\rightarrow\gamma\eta_b)<8.4\times10^{-4}$ at 90\% confidence level. It is noted that the data of the two collaborations agree on $m_{\eta_b}$, but the central values of $\mathcal{B}(\Upsilon(3S)\rightarrow\gamma\eta_b)$ are different. However, if the experimental errors are taken into account, the difference is within one standard deviation. Some theoretical works \cite{Radford:2009qi,Colangelo:2009pb,Seth:2009ba} are devoted to accounting for the experimental results. In Ref. \cite{Ebert:2002pp} the authors studied these radiative decays and estimated $\mathcal{B}(\Upsilon(3S)\rightarrow\eta_b+\gamma)=4\times10^{-4}$, $\mathcal{B}(\Upsilon(2S)\rightarrow\eta_b+\gamma)=1.5\times10^{-4}$ and $\mathcal{B}(\Upsilon(1S)\rightarrow\eta_b+\gamma)=1.1\times10^{-4}$ with the mass $m_{\eta_b}$ = $9.400$ GeV. Their results for $m_{\eta_b}$ and $\mathcal{B}(\Upsilon(3S)\rightarrow\eta_b+\gamma)$ are close to the data. The authors of Ref.
\cite{Choi:2007se} systematically investigated the magnetic dipole transition $V\rightarrow P\gamma$ in the light-front quark model (LFQM) \cite{Jaus:1999zv,Cheng:2003sm,Hwang:2006cua,Wei:2009nc}. In the QCD-motivated approach there are several free parameters, i.e., the quark mass and the $\beta$ in the wave function (the notation $\beta$ was defined in the aforementioned literature), which are fixed by the variational principle; $\mathcal{B}(\Upsilon(1S)\rightarrow\eta_b+\gamma)$ was then calculated, with the central value $8.4\,({\rm or}\,7.7)\times 10^{-4}$ \footnote{The different values correspond to the different potentials adopted in the calculations.}. It is also noted that the mass $m_{\eta_b}=9.657\,({\rm or} \,9.295)$ GeV presented in Ref. \cite{Choi:2007se} deviates from the data listed above, so we re-fix the parameter $\beta$ in a different way, namely by fitting data. Since, experimentally, $m_{\eta_b}$ is determined via $\mathcal{B}(\Upsilon(nS)\rightarrow\eta_b+\gamma)$, and a study of the radiative decays can offer us much information about the characteristics of $\eta_b$, one should carefully investigate these transitions within a relatively reliable theoretical framework. That is the aim of the present work, namely to evaluate the hadronic matrix element in terms of our modified LFQM. This paper is organized as follows. After this introduction, in section II we discuss how to modify the traditional wave functions in the LFQM. We present the formulas for calculating the form factors for $V\rightarrow P\gamma$ in the LFQM and the numerical results in section III. Section IV is devoted to our conclusion and discussion. \section{The modified wave functions for the radially excited states} When the LFQM is employed to calculate decay constants and form factors, one needs the wave functions of the hadrons concerned. In most cases, the wave functions of the harmonic oscillator are adopted. In the works \cite{Jaus:1999zv, Cheng:2003sm, Hwang:2006cua,Wei:2009nc,Choi:2007se,Ke:2009mn}, only the wave function of the radially ground state is needed, but when radially excited states are involved in the processes under consideration, their wave functions should also be available. In \cite{Faiman:1968js, Isgur:1988gb}, the traditional wave functions $\varphi$ for the $1S$ and $2S$ states in configuration space from the harmonic oscillator are given as \begin{eqnarray} \label{eq:12S} \varphi^{1S}(r)&=&\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\beta^2\mathbf{r}^2\Big),\nonumber\\ \varphi^{2S}(r)&=&\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\beta^2\mathbf{r}^2\Big)\frac{1}{\sqrt{6}} \Big(3-2\beta^2\mathbf{r}^2\Big). \end{eqnarray} In order to maintain the orthogonality among the $nS$ states, the parameter $\beta$ in the above two functions is the same. The wave functions for the other $nS$ states can be found in the Appendix. The decay constants of the $nS$ states are directly proportional to the wave function at the origin, \begin{eqnarray}\label{24} f_V\propto \varphi(r=0). \end{eqnarray} If we simply adopt the wave functions of the harmonic oscillator for all of them, as we do for the $1S$ state, then we find that the wave function at the origin, $\varphi(r=0)$ (see the Appendix for details), rises with increasing $n$ (the principal quantum number), which means the decay constants would increase for larger $n$. For example, by Eq. (\ref{eq:12S}) the ratio of the wave functions of the $2S$ and $1S$ states at the origin is $3/\sqrt{6}>1$.
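This trend is easy to make explicit. The short script below (an illustration added here, not part of the original text) evaluates the magnitudes of the traditional $nS$ wave functions of the Appendix at the origin, in units of $(\beta^2/\pi)^{3/4}$:
\begin{verbatim}
# |phi^{nS}(0)| for the traditional harmonic-oscillator wave
# functions of the Appendix, in units of (beta^2/pi)^{3/4}.
import math

phi0 = {
    1: 1.0,
    2: 3 / math.sqrt(6),
    3: math.sqrt(2 / 15) * 15 / 4,
    4: 105 / (12 * math.sqrt(35)),  # absolute values; the overall
    5: 945 / (72 * math.sqrt(70)),  # sign is irrelevant for |f_V|
}
for n, v in phi0.items():
    print(f"{n}S: |phi(0)| = {v:.4f}")
# 1S: 1.0000  2S: 1.2247  3S: 1.3693  4S: 1.4790  5S: 1.5687
\end{verbatim}
The monotonic growth with $n$ translates, through Eq.~(\ref{24}), into decay constants that grow with $n$, in conflict with the experimental trend discussed next.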
The decay constants $f_V$ of $\Upsilon(nS)$ are extracted from the measured widths $\Gamma(\Upsilon(nS)\rightarrow e^+e^-)$ via \begin{eqnarray}\label{25} \Gamma(V\rightarrow e^+e^-)=\frac{4\pi}{27}\frac{\alpha^2}{m_V}f^2_{V}, \end{eqnarray} where $V$ represents $\Upsilon(nS)$ and $m_V$ its mass. By use of the experimental data from the PDG \cite{PDG08}, we obtain the experimental values of $f_V$ listed in Table \ref{tab:decay}. Obviously, the decay constant becomes smaller as $n$ grows. In the LFQM, the formula for calculating the vector meson decay constant is given by \cite{Jaus:1999zv,Cheng:2003sm} \begin{eqnarray}\label{26} f_V&=&\frac{\sqrt{N_c}}{4\pi^3M}\int dx\int d^2k_\perp\frac{\phi(nS)}{\sqrt{2x(1-x)}\tilde M_0} \biggl[xM_0^2-m_1(m_1-m_2)-k^2_\perp+\frac{m_1+m_2}{M_0+m_1+m_2}k^2_\perp\biggr], \end{eqnarray} where $m_1=m_2=m_b$ and the other notations are collected in the Appendix. In the calculation we set $m_b=5.2$ GeV following \cite{Choi:2007se}, and the decay constant of $\Upsilon(1S)$ is used as input to determine the parameter $\beta_\Upsilon$. We obtain $\beta_\Upsilon=1.257\pm0.006$ GeV corresponding to $f^{\rm exp}_{\Upsilon(1S)}=715\pm 5$ MeV. In order to illustrate the dependence of our results on $m_b$, we re-set $m_b=4.8$ GeV and repeat the calculation; by fitting the same data we fix $\beta_\Upsilon=1.288\pm0.006$ GeV, and all the results are shown in the following tables. The $f_\Upsilon^{\rm T}$ in Table \ref{tab:decay} are the decay constants calculated with the traditional wave functions. These results expose the contradictory trend explicitly. Thus, our calculation indicates that if the traditional wave functions are used, the obtained decay constants of $\Upsilon(nS)$ sharply contradict the experimental data. \begin{table} \caption{ The decay constants of $\Upsilon(nS)$ (in units of MeV). The column ``$f_\Upsilon^{\rm T}$'' represents the theoretical predictions with the traditional wave functions in the LFQM. The column ``$f_\Upsilon^{\rm M}$'' represents the predictions with our modified wave functions; the values in brackets correspond to $m_b=4.8$ GeV as input. (The other values correspond to $m_b=5.2$ GeV.)} \label{tab:decay} \begin{tabular}{c|c|c|c}\hline\hline ~~~~~~~nS~~~~~~~ & ~~~~~~$f_\Upsilon^{\rm exp}$~~~~~~ & ~~~~~~$f_\Upsilon^{\rm T}$~~~~~~ & ~~~~~~$f_\Upsilon^{\rm M}$~~~~~~ \\\hline 1S & 715$\pm$5 & 715$\pm$5 & 715$\pm 5$ (715$\pm 5$) \\ 2S & 497$\pm$5 & 841$\pm$7 & 497$\pm 5$ (498$\pm 5$) \\ 3S & 430$\pm$4 & 925 $\pm$8 & 418$\pm 5$ (419$\pm 4$) \\ 4S & 340$\pm$19 & 993 $\pm$8 & 378$\pm 4$ (397$\pm 4$) \\ 5S & 369$\pm$42 & 1040 $\pm$9 & 349$\pm 4$ (351$\pm 4$) \\\hline\hline \end{tabular} \end{table}
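The experimental entries in Table~\ref{tab:decay} follow from inverting Eq.~(\ref{25}). As a quick illustration (a sketch added here; the masses and leptonic widths below are assumed PDG-2008-era values, with $\alpha=1/137.036$):
\begin{verbatim}
# f_V from Eq. (25): f_V = sqrt(27 m_V Gamma_ee / (4 pi alpha^2)).
import math

alpha = 1 / 137.036
data = {                      # name: (m_V [GeV], Gamma_ee [GeV])
    "Upsilon(1S)": (9.4603, 1.340e-6),
    "Upsilon(2S)": (10.0233, 0.612e-6),
}
for name, (m, gee) in data.items():
    f = math.sqrt(27 * m * gee / (4 * math.pi * alpha**2))
    print(f"{name}: f_V ~ {1000 * f:.0f} MeV")
# Upsilon(1S): f_V ~ 715 MeV,  Upsilon(2S): f_V ~ 498 MeV
\end{verbatim}
As aforementioned, the wave functions must be modified. Our strategy is to establish a new Gaussian-type wave function which is different from that of the harmonic oscillator.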
In modifying the wave functions, several principles must be respected: (1) the wave function of the $1S$ state should not change, because its application to various processes has been tested and the results indicate that it works well; (2) the number of nodes of the $nS$ state should not be changed; (3) a factor may be added into the wave functions which depends only on $n$, in analogy to the wave functions of hydrogen-like atoms, which are written as $R_n(r)=P^{\rm hydr}_n(r)e^{-{Zr\over na_0}}$, where $P^{\rm hydr}_n(r)$ is a polynomial, $Z$ is the atomic number and $a_0$ is the Bohr radius; (4) with the new Gaussian-type wave functions, the contradiction in the decay constants should be resolved. In the LFQM, we only need the wave functions in momentum space. Fourier transformation gives us the corresponding forms in momentum space; see the Appendix for details. The $1S$ wave function is retained and used to fix the model parameter. Now let us investigate the wave function of the $2S$ state. In analogy to the hydrogen-like atom, we introduce a factor $g_2$, which carries the $n$-dependence, into the exponential of the $2S$ wave function, which thus becomes \begin{eqnarray}\label{2S} \psi_{_M}^{2S}(\mathbf{p}^2)=\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-g_2\frac{\mathbf{p}^2}{2\beta^2} \Big) \Big(a+b\frac{\mathbf{p}^2}{\beta^2}\Big), \end{eqnarray} where the subscript $M$ denotes the modified function. Then, by requiring it to be orthogonal to the $1S$ wave function and properly normalized, we determine the parameters $a$ and $b$ in the modified wave function of the $2S$ state (a numerical sketch of this construction is given below). With this new $2S$ wave function, we demand that the theoretical decay constant be consistent with the data, so $g_2$ falls into a range determined by the experimental errors. Proceeding in the same way, we obtain the modified wave functions of the $3S$, $4S$ and $5S$ states. The modified wave functions of the $nS$ states are thus more complicated than the traditional ones. Following the principles discussed above we obtain a series of numerical values $g_n$, and we then look for an analytical form of $g_n$ close to this numerical series. We find that setting $g_n=n^\delta$ (with $\delta=1/1.82$) almost recovers the numerical series. Thus the wave function of the $nS$ state in momentum space can be written as \begin{eqnarray} \psi_{_M}^{nS}({\bf p}^2)=P_n({\bf p}^2){\exp}\Big(-n^\delta\frac{\mathbf{p}^2}{2\beta^2} \Big), \end{eqnarray} where $P_n({\bf p}^2)$ is a polynomial in ${\bf p}^2$. The corresponding wave function of the $nS$ state in configuration space can be written as \begin{eqnarray} \psi_{_M}^{nS}(r)=P'_n({\bf r}^2){\exp}\Big(-\frac{\beta^2\mathbf{r}^2}{2n^\delta} \Big). \end{eqnarray} This may be compared with the case of hydrogen-like atoms, for which the $nS$ wave functions in configuration space are written as \begin{eqnarray} R_{n0}(r)=P^{\rm hydr}_n(r)\exp\Big({-Zr\over na_0}\Big), \end{eqnarray} where $P^{\rm hydr}_n(r)$ is a polynomial in $r$. There the factor $1/n$ in the exponent is obtained by solving the Schr\"odinger equation with only the Coulomb potential. Here, to modify the wave functions, we obtain the factors numerically for all the $nS$ states and then ``guess'' their analytical form; in the LFQM, the factor $1/n^\delta$ is introduced to fit the experimental data for the $nS$ decay constants. Definitely this analytical form is not derived from an underlying theory, such as that for the hydrogen atom; thus the dependence on $n$ is only an empirical expression.
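To make the construction concrete, the sketch below (an illustration added here, not part of the original text; it assumes $\beta=1$, $\delta=1/1.82$ and the momentum-space conventions of the Appendix) determines the $2S$ coefficients from orthogonality to the $1S$ state and unit normalization:
\begin{verbatim}
# Fix the 2S coefficients of Eq. (2S) by imposing <1S|2S> = 0 and
# <2S|2S> = 1, with beta = 1 and g_2 = 2**delta, delta = 1/1.82.
import numpy as np
from scipy.integrate import quad

delta = 1 / 1.82
g2 = 2 ** delta
norm = np.pi ** -0.75          # (1/(beta^2 pi))^{3/4} with beta = 1

def radial(f):
    """4 pi * int_0^inf p^2 f(p) dp, the s-wave inner-product measure."""
    return quad(lambda p: 4 * np.pi * p**2 * f(p), 0, np.inf)[0]

psi1 = lambda p: norm * np.exp(-p**2 / 2)        # traditional 1S
env = lambda p: norm * np.exp(-g2 * p**2 / 2)    # modified 2S envelope

# orthogonality fixes the ratio r = a/b; normalization fixes the scale
r = radial(lambda p: psi1(p) * env(p) * p**2) / radial(lambda p: psi1(p) * env(p))
b = radial(lambda p: (env(p) * (r - p**2))**2) ** -0.5
print(f"a = {r * b:.4f}, b = {b:.4f}")   # a ~ 1.8869, b ~ 1.5495
\end{verbatim}
The output agrees well with the coefficients $a^\prime_2$ and $b^\prime_2$ quoted in the Appendix; applying the same procedure recursively against the lower states yields the $3S$--$5S$ coefficients.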
Nevertheless, we are confident that if the model is correct and our guess is reasonable, this form should eventually be derivable from QCD (presumably non-perturbative QCD). It is noted that the experimental errors are large, so that other forms for $g_n$ might also be possible. The theoretical estimates of the decay constants of $\Upsilon(nS)$ ($f_\Upsilon^{\rm M}$) are also presented in Table \ref{tab:decay}. The modified wave functions seem to work well, and they can be used to evaluate $\mathcal{B}(\Upsilon(nS)\rightarrow\eta_b+\gamma)$. \section{The transition $\Upsilon(nS)\to \eta_b +\gamma$} In this section, we calculate the branching ratios of $\Upsilon(nS)\to \eta_b +\gamma$ in terms of the modified wave functions derived in the previous section. \subsection{Formulation of $\Upsilon(nS)\to \eta_b +\gamma$ in the LFQM} The Feynman diagrams describing $\Upsilon(nS)\to \eta_b+ \gamma$ are plotted in Fig. \ref{fig:LFQM}. The transition amplitude of $\Upsilon(nS)\to \eta_b+ \gamma$ can be expressed in terms of the form factor $\mathcal{F}_{\Upsilon(nS)\to\eta_b}(q^2)$, which is defined as \cite{Choi:2007se,Hwang:2006cua} \begin{eqnarray}\label{2S1} &&\langle \eta_b(\mathcal{P}')|J_{em}^\mu|\Upsilon(\mathcal{P},h)\rangle =ie\,\varepsilon^{\mu\nu\rho\sigma}\epsilon_\nu(\mathcal{P},h)q_\rho \mathcal{P}_\sigma\mathcal{ F}_{\Upsilon(nS) \to\eta_b}(q^2), \end{eqnarray} where $\mathcal{P}$ and $\mathcal{P}'$ are the four-momenta of $\Upsilon(nS)$ and $\eta_b$, $q=\mathcal{P}-\mathcal{P}'$ is the four-momentum of the emitted photon, and $\epsilon_\nu(\mathcal{P},h)$ denotes the polarization vector of $\Upsilon(nS)$ with helicity $h$. To apply the LFQM, we first let the photon be virtual, i.e. we move off its mass shell $q^2=0$ into the unphysical region $q^2<0$. Then $\mathcal{ F}_{\Upsilon(nS)\to \eta_b}(q^2)$ can be obtained in the $q^+=0$ frame with $q^2=q^+q^- - {\bf q}^2_\perp=-{\bf q}^2_\perp<0$. We then analytically extrapolate $\mathcal{ F}_{\Upsilon(nS) \to\eta_b}({\bf q}^2_\perp)$ from the space-like region to the time-like region ($q^2\geq 0$). By taking the limit $q^2\rightarrow 0$, one obtains $\mathcal{ F}_{\Upsilon(nS)\to \eta_b}( q^2=0)$. \begin{center} \begin{figure}[htb] \begin{tabular}{cc} \scalebox{0.8}{\includegraphics{diagram1.eps}}&\raisebox{-2.5em} {\scalebox{0.8}{\includegraphics{diagram2.eps}}} \end{tabular} \caption{Feynman diagrams depicting the radiative decay $\Upsilon(nS)\to \eta_b+\gamma$.\label{fig:LFQM}} \end{figure} \end{center} By means of the light front quark model, one can obtain the expression for the form factor $\mathcal{ F}_{\Upsilon(nS)\to \eta_b}(q^2)$ \cite{Choi:2007se}: \begin{eqnarray}\label{21} \mathcal{ F}_{\Upsilon(nS)\to\eta_b}(q^2)= e_bI(m_1,m_2,q^2) + e_{b} I(m_2,m_1,q^2), \end{eqnarray} where $e_{b}$ is the electric charge of the bottom quark, $m_1=m_2=m_b$ and \begin{eqnarray}\label{22} I(m_1,m_2,q^2) &=&\int^1_0 \frac{dx}{8\pi^3}\int d^2{\bf k}_\perp \frac{\phi(x, {\bf k'}_\perp)\phi(x,{\bf k}_\perp)}{x_1\tilde{M_0}\tilde{M'_0}} \times \biggl\{{\cal A} + \frac{2} {{\cal M}_0} [{\bf k}^2_\perp - \frac{({\bf k}_\perp\cdot{\bf q}_\perp)^2}{{\bf q}^2_\perp}]\biggr\}, \end{eqnarray} where ${\cal A}=x_2m_1+x_1m_2$, $x=x_1$, and the other variables in Eq. (\ref{22}) are defined in the Appendix. In the covariant light-front quark model, the authors of \cite{Hwang:2006cua} obtained the same form factor $\mathcal{ F}_{\Upsilon(nS)\to \eta_b}(\mathbf{q}^2)$.
The decay width for $\Upsilon(nS)\rightarrow \eta_b+\gamma$ is then readily obtained: \begin{eqnarray}\label{23} \Gamma(\Upsilon(nS)\rightarrow \eta_b+\gamma)=\frac{\alpha}{3}\bigg[\frac{m_{\Upsilon(nS)}^2-m_{\eta_b}^2}{2m_{\Upsilon(nS)}}\bigg]^3 \mathcal{ F}^2_{\Upsilon(nS) \to\eta_b}(0), \end{eqnarray} where $\alpha$ is the fine-structure constant and $m_{\Upsilon(nS)},\; m_{\eta_b}$ are the masses of $\Upsilon(nS)$ and $\eta_b$ respectively. \subsection{Numerical results} Now we evaluate the transition rates of $\Upsilon(nS)\rightarrow\eta_b+\gamma$ with the modified wave functions. We still use the values $m_b=5.2$ GeV and $\beta_\Upsilon=1.257\pm 0.006$ GeV given in the last section. The parameter $\beta_{\eta_b}$ is unknown; we determine it from the $\Upsilon(2S)\rightarrow\gamma\eta_b$ process. Comparing with the data $\mathcal{B}(\Upsilon(2S)\rightarrow\gamma\eta_b)=3.9\times 10^{-4}$ \cite{:2009pz}, we obtain $\beta_{\eta_b}=1.246\pm0.005$ GeV, which is consistent with our expectation, namely that it is close to the value $\beta_{\Upsilon}=1.257$ GeV. In the heavy quark limit they should be exactly equal, and the deviation must be of order $\mathcal{O}(1/m_b)$, which is small \cite{Isgur:1989vq}. With these parameters, we can calculate the branching ratios $\mathcal{B}(\Upsilon(1S)\rightarrow\eta_b+\gamma)$, $\mathcal{B}(\Upsilon(3S)\rightarrow\eta_b+\gamma)$, $\mathcal{B}(\Upsilon(4S)\rightarrow\eta_b+\gamma)$, and $\mathcal{B}(\Upsilon(5S)\rightarrow\eta_b+\gamma)$. The numerical results are presented in the column ``$\mathcal{B}_{\rm I}^{\rm M}$'' of Table \ref{tab:etab2}. Indeed, the b-quark mass is an uncertain parameter which cannot be directly measured, and different values have been adopted in the literature. To see how sensitive the results are to the b-quark mass, we also present the numerical results with $m_b=4.8$ GeV, $\beta_\Upsilon=1.288\pm 0.006$ GeV and $\beta_{\eta_b}=1.287\pm0.005$ GeV in the column ``$\mathcal{B}_{\rm II}^{\rm M}$'' of Table \ref{tab:etab2}. The results in the column ``$\mathcal{B}^{\rm T}$'' of Table \ref{tab:etab2} are obtained with the traditional wave functions. Apparently, when the modified wave functions are employed, the theoretical predictions for the branching ratios of the radiative decays are much improved, i.e. the deviations from the data are diminished. Some comments on the numerical results are in order: (1) Comparing the results shown in column $\mathcal{B}_{\rm I}^{\rm M}$ with those in column $\mathcal{B}_{\rm II}^{\rm M}$, we find that they are not sensitive to $m_b$. (2) For the decay $\Upsilon(1S)\rightarrow\eta_b+\gamma$, our prediction for the branching ratio is about $2.0\times 10^{-4}$. This mode should be observed soon in forthcoming experiments. Our prediction is consistent with the results of Refs. \cite{Ebert:2002pp,Choi:2007se}. The branching ratio is not sensitive to $\beta_{\eta_b}$, but is sensitive to the mass splitting $\Delta M$. That is easy to understand: since the decay width is proportional to $(\Delta M)^3$, when $\Delta M$ is small, i.e. when the masses of the initial and daughter mesons are close to each other, any small change of $m_{\eta_b}$ can lead to a remarkable difference in the predicted branching ratio. Thus an accurate measurement of $\mathcal{B}(\Upsilon(1S)\rightarrow\eta_b+\gamma)$ will greatly help to determine $m_{\eta_b}$.
(3) The process $\Upsilon(2S)\rightarrow\eta_b+\gamma$ is used as input to determine the parameter of $\eta_b$. The prediction for $\Upsilon(3S)\rightarrow\eta_b+\gamma$ agrees with the experimental data in order of magnitude; once the experimental and theoretical errors are taken into account, they can be consistent. These results could have relatively large errors, because we use only four parameters ($m_b$, $\beta_\Upsilon$, $\beta_{\eta_b}$, $\delta$) to determine five decay constants and three branching ratios for $\Upsilon(1S,2S,3S)\to \eta_b+\gamma$, all of which possess certain errors. (4) The branching ratios for the processes $\Upsilon(4S)\rightarrow\eta_b+\gamma$ and $\Upsilon(5S)\rightarrow\eta_b+\gamma$ are of order $10^{-8}$, so they are nearly impossible to observe in the near future unless other mechanisms enhance them. (5) As an application, we predict the decay constant of $\eta_b$ in terms of the model parameters obtained above. We calculate the branching ratio $\mathcal{B}(\Upsilon(2S)\rightarrow\gamma\eta_b)$ in the LFQM. By fitting data we fix the relevant model parameters for $\eta_b$, and with them we then predict the decay constant of $\eta_b$ in the same LFQM framework \cite{Jaus:1999zv,Cheng:2003sm}. In the calculations, the b-quark mass $m_b$ and $\beta_{\eta_b}$ are input parameters. To show how sensitive the results are to these parameters, we use the two sets of input parameters given above, with the corresponding results: $f_{\eta_b}=567$ MeV for $m_b=5.2$ GeV and $\beta_{\eta_b}=1.246$ GeV; $f_{\eta_b}=604$ MeV for $m_b=4.8$ GeV and $\beta_{\eta_b}=1.287$ GeV. For comparison, we deliberately change only $m_b$ while keeping $\beta_{\eta_b}$ unchanged and repeat the calculation, obtaining $f_{\eta_b}=591$ MeV for $m_b=5.2$ GeV and $\beta_{\eta_b}=1.287$ GeV. It is noted that $f_{\eta_b}$ is more sensitive to $\beta_{\eta_b}$ than to $m_b$. \begin{table} \caption{ The branching ratios of $\Upsilon(nS)\rightarrow\gamma\eta_b$. In the column ``$\mathcal{B}_{\rm I}^{\rm M}$'', $m_b=5.2$ GeV, $\beta_\Upsilon=1.257\pm 0.006$ GeV and $\beta_{\eta_b}=1.246\pm 0.005$ GeV. In the column ``$\mathcal{B}_{\rm II}^{\rm M}$'', $m_b=4.8$ GeV, $\beta_\Upsilon=1.288\pm 0.006$ GeV and $\beta_{\eta_b}=1.287\pm0.005$ GeV.
In the column ``$\mathcal{B}^{\rm T}$", $m_b=5.2$GeV, $\beta_\Upsilon=1.257\pm 0.006$ GeV and $\beta_{\eta_b}=1.249\pm0.005$ GeV.} \begin{ruledtabular} \begin{tabular}{ccccc} Decay mode & $\mathcal{B}_{\rm I}^{\rm M}$ & $\mathcal{B}_{\rm II}^{\rm M}$ & $\mathcal{B}^{\rm T}$& Experiment \\\hline $\Upsilon(1S)\rightarrow \eta_b+\gamma$ & $(1.94\pm 0.41)\times 10^{-4}$ & $(2.24\pm 0.47)\times 10^{-4}$ & $(1.94\pm 0.42)\times 10^{-4}$ & - \\\hline $\Upsilon(2S)\rightarrow \eta_b+\gamma$ & $(3.90\pm 1.49)\times 10^{-4}$ & $(3.90\pm 1.49)\times 10^{-4}$ & $(3.90\pm 1.49)\times 10^{-4}$ & $(3.9\pm1.1^{+1.1}_{-0.9})\times 10^{-4}$ \cite{:2009pz} \\\hline $\Upsilon(3S)\rightarrow \eta_b+\gamma$ & $(1.87\pm 0.71)\times 10^{-4}$ & $(1.68\pm 0.72)\times 10^{-4}$ & $(1.05\pm 0.40)\times 10^{-5}$ & $(4.8\pm 0.5\pm 0.6)\times 10^{-4}$ \cite{:2008vj} \\ & & & & $(7.1\pm 1.8\pm 1.1)\times 10^{-4}$ \cite{Bonvicini:2009hs} \\\hline $\Upsilon(4S)\rightarrow \eta_b+\gamma$ & $(8.81\pm 3.32)\times 10^{-8}$ & $(7.82\pm 3.35)\times 10^{-8}$ & $(2.25\pm 0.88)\times 10^{-10}$& - \\\hline $\Upsilon(5S)\rightarrow \eta_b+\gamma$ & $(1.17\pm 0.43)\times 10^{-8}$ & $(1.02\pm 0.45)\times 10^{-8}$ & $(1.57\pm 0.52)\times 10^{-12}$ & - \\ \end{tabular} \end{ruledtabular}\label{tab:etab2} \end{table} \section{Conclusion} The LFQM has been successful in phenomenological applications. It is believed that it could be a reasonable model for dealing with the hadronic transitions where the non-perturbative QCD effects dominate. However, it seems that the wave function adopted in the previous literature has to be modified. As we study the decay constant of $\Upsilon(nS)$, we find that there exists a sharp contradiction between the theoretical prediction and data as long as the traditional harmonic oscillator wave functions were employed. Namely, the larger $n$ is, the larger the predicted decay constant would be. It is obviously contradict to the physics picture that for higher radially excited states, the wave function at origin should be smaller than the lower ones. But the old wave functions would result in an inverse tendency. If enforcing all the decay constants of $\Upsilon(nS)$ to be fitted to the data in terms of the traditional wave functions, the orthogonality among all the $nS$ states must be abandoned, but it is not acceptable according to the basic principle of quantum mechanics. Thus we modify the wave functions of the radial excited states based on the common principles. Namely, we keep the orthogonality among the wave functions and their proper normalization. Moreover, we require the wave functions $\varphi_M(r)$ at origin $r=0$ to be consistent with the data, i.e. the decay constants for higher $n$ must be smaller than that of the lower states. Concretely, we modify the exponential function in the wave functions by demanding the power not to universal for all $n$'s, but be dependent on $n$. Concretely we add a numerical factor $g_n$ into $\exp(g_n{-{\bf p}^2\over 2\beta^2})$ and by fitting the data of the decay constants of $\Upsilon(nS)$ we obtain a series of the numbers of $g_n$. Within a reasonable error range, we approximate $g_n$ as $g(n)=n^\delta$ and calculate the value for $\delta$. It is an alternative way which is different from that adopted in Ref. \cite{Choi:2007se}, to fix the parameter. With the modified wave functions of $\Upsilon(nS)$, we calculate the branching ratios of $\Upsilon(nS)\rightarrow\eta_b+\gamma$ in the LFQM. 
First, by fitting the well-measured central value of $\mathcal{B}(\Upsilon(2S)\to \eta_b+\gamma)$ \cite{:2009pz}, we obtain the parameter $\beta_{\eta_b}$. According to heavy quark effective theory, in the heavy quark limit the spin singlet and triplet of the $b\bar b$ system should be degenerate; hence the parameters $\beta_{\Upsilon(1S)}$ and $\beta_{\eta_b}$ should be very close. Our numerical result confirms this expectation. Then we estimate the other $\Upsilon(nS)\to \eta_b+\gamma$ modes. The order of magnitude of our numerical results is consistent with the data. Even though the predicted branching ratios still do not precisely coincide with the data, the situation is much improved. The branching ratios of the processes $\Upsilon(4S)\to \eta_b+\gamma$ and $\Upsilon(5S)\to \eta_b+\gamma$ are predicted to be of order $10^{-8}$. They will be difficult to measure in the future as long as there is no new physical mechanism to greatly enhance them. By studying the radiative decays $\Upsilon(nS)\to \eta_b+\gamma$, we can learn much about the hadronic structure of $\eta_b$. Even though much effort has been made to explore the spin singlet $\eta_b$, in the 2008 Particle Data Group (PDG) compilation $\eta_b$ was still omitted from the summary table \cite{PDG08}. In fact, the determination of the mass of $\eta_b$ is made via the radiative decays $\Upsilon(nS)\to \eta_b+\gamma$ \cite{:2008vj}, and the recent data give $m_{\eta_b}=9388.9^{+3.1}_{-2.3}(stat)\pm 2.7(syst)$ MeV from the $\Upsilon(3S)$ data and $m_{\eta_b}=9394.2^{+4.8}_{-4.9}(stat)\pm 2.0(syst)$ MeV from the $\Upsilon(2S)$ data \cite{:2009pz}. Penin \cite{Penin:2009wf} reviewed the progress in determining the mass of $\eta_b$ and indicated that an accurate theoretical prediction of $m_{\eta_b}$ would be a great challenge. Indeed, determining the wave function of $\eta_b$ would be even more challenging. We have carefully studied the transition rates of the radiative decays, which would help to extract information about $m_{\eta_b}$. The transition rate of $\Upsilon(1S)\to\eta_b+\gamma$ is very sensitive to the mass splitting $\Delta M=m_{\Upsilon(1S)}-m_{\eta_b}$ due to the phase space constraint; thus an accurate measurement of this radiative decay may be especially useful for learning about the spin dependence of bottomonia. \section*{Acknowledgments} This project is supported by the National Natural Science Foundation of China (NSFC) under Contracts Nos. 10705001, 10705015 and 10775073; the Foundation for the Author of National Excellent Doctoral Dissertation of P.R. China (FANEDD) under Contract No. 200924; the Doctoral Program Foundation of Institutions of Higher Education of P.R. China under Grant No. 20090211120029; the Special Grant for the Ph.D. program of the Ministry of Education of P.R. China; the Program for New Century Excellent Talents in University (NCET) by the Ministry of Education of P.R. China; the Fundamental Research Funds for the Central Universities; and the Special Grant for New Faculty from Tianjin University.
\section*{Appendix} \subsection{The radial wave functions} The traditional wave functions $\varphi$ in configuration space from the harmonic oscillator \cite{Faiman:1968js} are \begin{eqnarray}\label{app4} \varphi^{1S}(r)&=&\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\beta^2\mathbf{r}^2\Big),\nonumber\\ \varphi^{2S}(r)&=&\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\beta^2\mathbf{r}^2\Big)\frac{1}{\sqrt{6}} \Big(3-2\beta^2\mathbf{r}^2\Big), \nonumber\\ \varphi^{3S}(r)&=&\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\beta^2\mathbf{r}^2\Big) \sqrt{\frac{2}{15}} \Big(\frac{15}{4}-5\beta^2\mathbf{r}^2+\beta^4\mathbf{r}^4\Big),\nonumber \\ \varphi^{4S}(r)&=&\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\mathbf{r}^2\beta^2\Big)\frac{1}{{12\sqrt{35}}} \Big(-105+210\mathbf{r}^2{\beta^2}-84\mathbf{r}^4{\beta^4}+8\mathbf{r}^6{\beta^6}\Big), \nonumber\\ \varphi^{5S}(r)&=&\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\mathbf{r}^2\beta^2\Big)\frac{1}{{72\sqrt{70}}} \Big(945-2520{\beta^2}\mathbf{r}^2+1512{\beta^4}\mathbf{r}^4 -288{\beta^6}\mathbf{r}^6+ 16{\beta^8}\mathbf{r}^8\Big), \end{eqnarray} and their Fourier transforms are \begin{eqnarray}\label{app5} \psi^{1S}(\mathbf{p}^2)&=&\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\frac{\mathbf{p}^2}{\beta^2}\Big),\nonumber\\ \psi^{2S}(\mathbf{p}^2)&=&\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\frac{\mathbf{p}^2}{\beta^2}\Big)\frac{1}{\sqrt{6}} \Big(3-2\frac{\mathbf{p}^2}{\beta^2}\Big), \nonumber\\ \psi^{3S}(\mathbf{p}^2)&=&\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\frac{\mathbf{p}^2}{\beta^2}\Big) \sqrt{\frac{2}{15}} \Big(\frac{15}{4}-5\frac{\mathbf{p}^2}{\beta^2}+\frac{\mathbf{p}^4}{\beta^4}\Big),\nonumber \\ \psi^{4S}(\mathbf{p}^2)&=&\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\frac{\mathbf{p}^2}{\beta^2}\Big)\frac{1}{{12\sqrt{35}}} \Big(-105+210\frac{\mathbf{p}^2}{\beta^2}-84\frac{\mathbf{p}^4}{\beta^4}+8\frac{\mathbf{p}^6}{\beta^6}\Big), \nonumber\\ \psi^{5S}(\mathbf{p}^2)&=&\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\frac{\mathbf{p}^2}{\beta^2}\Big)\frac{1}{{72\sqrt{70}}} \Big(945-2520\frac{\mathbf{p}^2}{\beta^2}+1512\frac{\mathbf{p}^4}{\beta^4} -288\frac{\mathbf{p}^6}{\beta^6}+ 16\frac{\mathbf{p}^8}{\beta^8}\Big).
\end{eqnarray} The modified wave functions $\varphi_M$ in configuration space are defined as \begin{eqnarray}\label{app6} \varphi_{\rm_M}^{1S}(r)&&=\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\beta^2\mathbf{r}^2\Big),\nonumber\\ \varphi_{\rm_M}^{2S}(r)&&=\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2\times{2}^\delta}\beta^2\mathbf{r}^2\Big) \Big(a_2 - b_2\beta^2\mathbf{r}^2\Big), \nonumber\\ \varphi_{\rm_M}^{3S}(r)&&=\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2\times{3}^\delta}\beta^2\mathbf{r}^2\Big) \Big(a_3 - b_3\beta^2\mathbf{r}^2+c_3\beta^4\mathbf{r}^4\Big),\nonumber\\ \varphi_{\rm_M}^{4S}(r)&&=\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2\times{4}^\delta}\mathbf{r}^2\beta^2\Big) \Big(-a_4 + b_4\mathbf{r}^2{\beta^2}-c_4\mathbf{r}^4{\beta^4}+d_4\mathbf{r}^6{\beta^6}\Big), \nonumber\\ \varphi_{\rm_M}^{5S}(r)&&=\Big(\frac{\beta^2}{\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2\times{5}^\delta}\mathbf{r}^2\beta^2\Big) \Big(a_5 - b_5{\beta^2}\mathbf{r}^2+c_5{\beta^4}\mathbf{r}^4 -d_5{\beta^6}\mathbf{r}^6 +e_5{\beta^8}\mathbf{r}^8\Big) \end{eqnarray} with coefficients, which are irrational numbers quoted to five digits after the decimal point, \begin{eqnarray*} \begin{array}{|c|c|c|c|c|c|}\toprule[1pt] n&a_n&b_n&c_n&d_n&e_n\\\hline 2& 0.72817& 0.40857&-&-&-\\ 3& 0.62920& 0.54138&0.06712&-&-\\ 4&0.57834&0.61887&0.12838&0.00614&-\\ 5&0.54747&0.67621&0.18332&0.01558&0.00038 \\\bottomrule[1pt] \end{array}\,. \end{eqnarray*} The corresponding modified wave functions in momentum space are \begin{eqnarray}\label{app7} \psi_{\rm_M}^{1S}(\mathbf{p}^2)&&=\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{1}{2}\frac{\mathbf{p}^2}{\beta^2}\Big),\nonumber\\ \psi_{\rm_M}^{2S}(\mathbf{p}^2)&&=\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{{2}^\delta}{2}\frac{\mathbf{p}^2}{\beta^2}\Big) \Big(a^\prime_2 -b^\prime_2\frac{\mathbf{p}^2}{\beta^2}\Big),\nonumber\\ \psi_{\rm_M}^{3S}(\mathbf{p}^2)&&=\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{{3}^\delta}{2}\frac{\mathbf{p}^2}{\beta^2}\Big) \Big(a^\prime_3 - b^\prime_3\frac{\mathbf{p}^2}{\beta^2}+c^\prime_3\frac{\mathbf{p}^4}{\beta^4}\Big),\nonumber\\ \psi_{\rm_M}^{4S}(\mathbf{p}^2)&&=\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{{4}^\delta\mathbf{p}^2}{2\beta^2}\Big) \Big(-a^\prime_4+ b^\prime_4\frac{\mathbf{p}^2}{\beta^2} -c^\prime_4\frac{\mathbf{p}^4}{\beta^4}+d^\prime_4\frac{\mathbf{p}^6}{\beta^6}\Big),\nonumber\\ \psi_{\rm_M}^{5S}(\mathbf{p}^2)&&=\Big(\frac{1}{\beta^2\pi}\Big)^{3/4}{\exp}\Big(-\frac{{5}^\delta}{2}\frac{\mathbf{p}^2}{\beta^2}\Big) \Big(a^\prime_5 -b^\prime_5\frac{\mathbf{p}^2}{\beta^2}+c^\prime_5\frac{\mathbf{p}^4}{\beta^4} -d^\prime_5\frac{\mathbf{p}^6}{\beta^6}+e^\prime_5\frac{\mathbf{p}^8}{\beta^8}\Big) \end{eqnarray} with coefficients \begin{eqnarray*} \begin{array}{|c|c|c|c|c|c|}\toprule[1pt] n&a^\prime_n&b^\prime_n&c^\prime_n&d^\prime_n&e^\prime_n\\\hline 2&1.88684& 1.54943&-&-&-\\ 3&2.53764& 5.67431&1.85652&-&-\\ 4&3.1439&12.58984&10.05113&1.88915&-\\ 5&3.67493&22.58205&31.06666&13.51792&1.70476 \\\bottomrule[1pt] \end{array}\,. \end{eqnarray*} \subsection{Some notation used in the LFQM} The incoming (outgoing) meson in Fig.
\ref{fig:LFQM} has the momentum $P^{(\prime)}=p_1^{(\prime)}+p_2$, where $p_1^{(\prime)}$ and $p_2$ are the momenta of the off-shell quark and antiquark, and \begin{eqnarray}\label{20} p^+_1&=&x_1 P^+,\qquad p^+_2 = x_2 P^+, \nonumber\\ { p}_{1\perp}&=& x_1{ P}_\perp + { k}_\perp,\qquad { p}_{2\perp}= x_2{ P}_\perp - { k}_\perp, \nonumber\\ p'^+_1&=&x_1 P^+,\qquad p'^+_2 = x_2 P^+, \nonumber\\ { p'}_{1\perp}&=& x_1{ P'}_\perp + { k'}_\perp, \qquad{ p'}_{2\perp}= x_2{ P'}_\perp - { k'}_\perp\nonumber \end{eqnarray} with $x_1+x_2=1$, where $x_i$ and $k_\perp$ ($k'_\perp$) are internal variables. $M_0$ and $\tilde {M_0}$ are defined as \begin{eqnarray}\label{app2} &&M_0^2=\frac{k^2_\perp+m^2_1}{x_1}+\frac{k^2_\perp+m^2_2}{x_2},\nonumber\\&& \tilde {M_0}=\sqrt{M_0^2-(m_1-m_2)^2}.\nonumber \end{eqnarray} The wave functions $\phi_{\rm M}$ are transformed into \begin{eqnarray}\label{app8} \phi_{\rm M}(1S)&&=4\Big(\frac{\pi}{\beta^2}\Big)^{3/4}\sqrt{\frac{\partial k_z}{\partial x}}\exp\Big(-\frac{k^2_z+k^2_\perp}{2\beta^2}\Big),\nonumber\\ \phi_{\rm M}(2S)&&=4\Big(\frac{\pi}{\beta^2}\Big)^{3/4}\sqrt{\frac{\partial k_z}{\partial x}}\exp\Big(-\frac{{2}^\delta}{2}\frac{k^2_z+k^2_\perp}{\beta^2}\Big) \Big(a_2^\prime -b_2^\prime\frac{k^2_z+k^2_\perp}{\beta^2}\Big), \nonumber\\ \phi_{\rm M}(3S)&&=4\Big(\frac{\pi}{\beta^2}\Big)^{3/4}\sqrt{\frac{\partial k_z}{\partial x}}\exp\Big(-\frac{{3}^\delta}{2}\frac{k^2_z+k^2_\perp}{\beta^2}\Big) \Big(a_3^\prime - b_3^\prime\frac{k^2_z+k^2_\perp}{\beta^2}+c_3^\prime\frac{(k^2_z+k^2_\perp)^2}{\beta^4}\Big),\nonumber\\ \phi_{\rm M}(4S)&&=4\Big(\frac{\pi}{\beta^2}\Big)^{3/4}\sqrt{\frac{\partial k_z}{\partial x}}\exp\Big(-\frac{{4}^\delta}{2}\frac{k^2_z+k^2_\perp}{\beta^2}\Big) \Big(-a_4^\prime + b_4^\prime\frac{k^2_z+k^2_\perp}{\beta^2} -c_4^\prime\frac{(k^2_z+k^2_\perp)^2}{\beta^4}+d_4^\prime\frac{(k^2_z+k^2_\perp)^3}{\beta^6}\Big), \nonumber\\ \phi_{\rm M}(5S)&&=4\Big(\frac{\pi}{\beta^2}\Big)^{3/4}\sqrt{\frac{\partial k_z}{\partial x}}\exp\Big(-\frac{{5}^\delta}{2}\frac{k^2_z+k^2_\perp}{\beta^2}\Big) \Big(a_5^\prime - b_5^\prime\frac{k^2_z+k^2_\perp}{\beta^2}+c_5^\prime\frac{(k^2_z+k^2_\perp)^2}{\beta^4} -d_5^\prime\frac{(k^2_z+k^2_\perp)^3}{\beta^6}+e_5^\prime\frac{(k^2_z+k^2_\perp)^4}{\beta^8}\Big).\nonumber\\ \end{eqnarray} More information can be found in Ref. \cite{Cheng:2003sm}.
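As a quick consistency check (ours, not part of Ref.~\cite{Faiman:1968js}), the orthonormality of the traditional wave functions in Eq.~(\ref{app4}) can be verified numerically. A minimal Python sketch, with an arbitrary illustrative value of $\beta$, is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta = 1.3  # oscillator parameter (arbitrary units); any beta > 0 works

# Polynomial parts of Eq. (app4), written as functions of x = (beta*r)^2
POLYS = {
    1: lambda x: 1.0,
    2: lambda x: (3 - 2 * x) / np.sqrt(6),
    3: lambda x: np.sqrt(2 / 15) * (15 / 4 - 5 * x + x**2),
    4: lambda x: (-105 + 210 * x - 84 * x**2 + 8 * x**3) / (12 * np.sqrt(35)),
    5: lambda x: (945 - 2520 * x + 1512 * x**2 - 288 * x**3 + 16 * x**4)
       / (72 * np.sqrt(70)),
}

def phi(n, r):
    """varphi^{nS}(r) = (beta^2/pi)^{3/4} exp(-beta^2 r^2/2) * poly_n."""
    x = (beta * r) ** 2
    return (beta**2 / np.pi) ** 0.75 * np.exp(-x / 2) * POLYS[n](x)

# <nS|mS> = int_0^inf phi_n phi_m 4 pi r^2 dr, which should equal delta_nm
for n in range(1, 6):
    for m in range(n, 6):
        val, _ = quad(lambda r: phi(n, r) * phi(m, r) * 4 * np.pi * r**2,
                      0, np.inf)
        print(f"<{n}S|{m}S> = {val:+.6f}")
\end{verbatim}
All diagonal overlaps should return $1$ and all off-diagonal overlaps $0$, to the accuracy of the quadrature.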
\section{Introduction} \label{sec:intro} In a series of articles on biological evolution published in the Journal of Statistical Physics, it is natural to ask what expertise and insights statistical physicists can bring to the study of evolution, and in what way their approach to the subject might differ from that of biologists. If the subject is the one that will largely interest us in this paper --- the study of evolution within the framework of population genetics --- these questions are more easily answered. This is because in a system containing a large, but finite, number of individuals with given genetic characteristics, genetic drift leads to a stochastic dynamics which has many of the features which allow the application of the ideas and techniques of non-equilibrium statistical physics. We will use this formalism, but in addition our approach will have parallels with the traditional methodology of theoretical physicists. Firstly, we will stress the fundamental nature of the microscopic description. That is, we will start with genes as basic constituents, which can be in different states according to their type (allele), location (if the individual carrying the gene is located on an island), the sex of the individual carrying the gene, etc. Secondly, since the microscopic description contains too much detail which is irrelevant at the macroscale (or in our case, the mesoscale, where some stochastic element is retained) we will derive a \textit{reduced} or \textit{effective} model whose parameters depend on those of the microscopic description, and so encapsulate its relevant aspects. Thirdly, we will be interested in \textit{generic} behaviour. By this we mean that we will attempt to formulate a microscopic description that does not have inbuilt assumptions that make the model more easily solvable. Instead we will try to formulate the model in such a way that it could be generalised to include many more effects, without changing its structure. In this philosophy, the simplifying assumptions are brought in during the process of obtaining the effective model, and should be clearly stated. Finally, although the whole basis of our work is mathematical, we will use intuition to explore the admissibility of the techniques we use outside of their regime of strict applicability, and check their correctness through the use of computer simulations. Although these ideas are familiar to the theoretical physics community, they tend to be utilised less in the biological sciences. For example, many biologists may use quite complex verbal arguments to gain insights. Conversely, our methodology may differ from that of many mathematical biologists, since rigorous justification will not be a central feature of our approach. In addition, many mathematical biologists are focussed on the deterministic dynamics found at the macroscale. Nevertheless, we view the approach we will discuss here as able to form a bridge between the intuition gleaned by biologists and the more analytic investigations of mathematicians. In this way we hope that our methods prove of interest to a wider audience outside of the theoretical physics community. In a previous paper~\cite{mckane_2014}, we have reviewed the process of setting up a description of this class of biological systems in terms of its basic constituents, and from this deriving the mesoscopic equations governing the dynamics which generalise what might be the more familiar macroscopic equations.
In particular, in Ref.~\cite{mckane_2014}, we give formulae for writing down the form of the mesoscopic dynamics in terms of quantities which appear in the microscopic formulation. This essentially is the first point of our methodology described above, and so while we will discuss it here, we will refer the reader to this earlier paper for more details. Instead we will focus on the second point above, namely obtaining an effective theory that is more amenable to analysis than the original. There may be several ways of reducing the complexity of the model, but here we will concentrate on one which is based on time-scale separation arguments. That is, we will seek to identify \textit{fast} modes which die away relatively quickly, and \textit{slow} modes which endure at long times. The dynamics of systems featuring such timescale separation are illustrated in Fig.~\ref{fig:figure_1}. This is, of course, a well-known procedure, perhaps the most famous example being in hydrodynamics, where the microscopic molecular dynamics can be replaced by a macroscopic dynamics with a few long-lasting variables. Although this dynamics is macroscopic, a mesoscopic extension can also be derived along the same lines~\cite{Fox1970}. In the theory of dynamical systems, the concept of a centre manifold (CM) is another manifestation of these ideas. During our discussion of this methodology, there will be several illustrations of the third and fourth points discussed above, namely the wish to use generic structures and the use of numerical simulations to check the precision of the approximations we utilise. As we have already mentioned, one of our aims in writing this article is to make the ideas and techniques available to a larger audience. To help to achieve this we will present an application of the method in a pedagogical manner in Section 2. We have chosen one of the simplest possible systems: haploid individuals on one of two islands of equal size which can migrate from one island to the other. It will be assumed that there are only two possible alleles, the dynamics being modelled by a Moran process. After this informal, and hopefully easily accessible, introduction to the method, we will describe its application to a number of models in Section 3. These include: a haploid Moran model on an arbitrary number of islands with selection and mutation; a stochastic Lotka-Volterra competition model with an arbitrary number of islands and a stochastic Lotka-Volterra competition model with an arbitrary number of species; a derivation of the Hardy-Weinberg approximation from first principles; a model of epidemic spread on a network. In Section 4, we will illustrate the method in the slightly more technical case where noise-induced dynamics are present. We will see that noise-induced selection can cause selection for genotypes that are neutral in a deterministic setting, and that further, this noise-induced selection can, under certain conditions, be strong enough to reverse the direction of deterministic selection. We will illustrate this behaviour with reference to Lotka-Volterra competition models, where we will see that this effect can help alleviate the dilemma of cooperation, and a model of transitions between sex-chromosome systems. Finally, in Section 5 we conclude with a discussion.
\begin{figure}[th] \begin{center} \includegraphics[width=0.4\textwidth]{JSP_fig1a.pdf}\hspace{0.08\textwidth}\includegraphics[width=0.4\textwidth]{JSP_fig1b.pdf} \includegraphics[width=0.4\textwidth]{JSP_fig1c.pdf}\hspace{0.08\textwidth}\includegraphics[width=0.4\textwidth]{JSP_fig1d.pdf} \caption{Illustration of four systems featuring timescale separation that can be analysed with the methods reviewed in this paper. \textbf{Top left panel}: Phase plot for a haploid Moran model with two alleles on two islands with strong migration (described in \sref{sec:example}). The deterministic dynamics rapidly collapse to a slow subspace, indicated here by the blue dashed line. \textbf{Top right panel}: Deterministic trajectories (grey arrows) for a system similar to that in the top left panel but with three islands, addressed in \sref{sec:D_island_Moran}. Again, the deterministic dynamics rapidly collapse to a one-dimensional subspace indicated by the blue dashed line. \textbf{Bottom left panel}: Genotype frequencies as a function of time for a population genetic model, described in \sref{sec:heterogamety}. Stochastic trajectories (solid lines) initially relax rapidly along quasi-deterministic trajectories (inset) before reaching a one-dimensional slow subspace along which the system moves on a slower timescale. \textbf{Bottom right panel}: A neutral three-species Lotka-Volterra model, addressed in \sref{sec:M_allele_SLVC}. Stochastic trajectories (orange) rapidly collapse along quasi-deterministic trajectories onto a two-dimensional slow subspace (blue surface), about which they remain confined.} \label{fig:figure_1} \end{center} \end{figure} \section{Pedagogical Example} \label{sec:example} In this section we will explain as simply as possible how to apply the ideas discussed in the Introduction to a concrete example. The example we choose is a Moran model with migration. We ask how this can be reduced to an effective one-island model. \subsection{Setting-up the model} \label{sec:setup} We set up the model at the microscale, that is, at the level of haploid individuals who each carry an allele of type $1$ or of type $2$. The individuals can reside on one of two islands, both of which can only carry a fixed number of individuals, which we denote by $N$. We therefore denote by $n_1$ the number of individuals carrying allele $1$ on island $1$, by $N-n_1$ the number of individuals carrying allele $2$ on island $1$, by $n_2$ the number of individuals carrying allele $1$ on island $2$, and by $N-n_2$ the number of individuals carrying allele $2$ on island $2$. So the state of the whole system is given by only two variables, which we can form into the two-dimensional vector $\bm{n}=(n_1,n_2)$. We would like to reduce this description to one involving only one variable, which gives the fraction of individuals in the system carrying allele $1$. This would allow us to calculate, for example, the probability that allele $1$ or allele $2$ fix, and the mean time to fixation. There is not enough information about the birth, death and migration of individuals to model them in any other way than as random processes, so the model is specified by giving the probability per unit time that the system transitions from its current state, given by the vector $\bm{n}$, to a new state $\bm{n}'$. We write these transition rates as $T(\bm{n}'|\bm{n})$, with the initial state on the right and the final state on the left (some authors use the reverse convention).
The probability distribution function (pdf) of the system, $p(\bm{n},t)$, is then given by the master equation \begin{equation} \frac{\mathrm{d}p(\bm{n},t)}{\mathrm{d}t} = \sum_{\bm{n}' \neq \bm{n}}\left[ T(\bm{n}|\bm{n}')p(\bm{n}',t) - T(\bm{n}'|\bm{n})p(\bm{n},t)\right]. \label{master_generic} \end{equation} This is relatively easy to understand. The first term on the right-hand side is the probability of being in state $\bm{n}'$ multiplied by the probability per unit time of making a transition from that state to state $\bm{n}$; it therefore represents the rate of probability flow from state $\bm{n}'$ into state $\bm{n}$. In the same way the second term on the right-hand side represents the rate of probability flow out of state $\bm{n}$ into state $\bm{n}'$. Their difference, summed over all states $\bm{n}'$ different from $\bm{n}$, gives the rate of increase of $p(\bm{n},t)$ with time. The form we take for the $T(\bm{n}'|\bm{n})$ depends on the model choice. Here we choose the Moran model, because it is simple: it amalgamates births and deaths and asks that birth, death and migration events happen in such a way that the population size of each island, $N$, is kept fixed. These are not the most realistic assumptions, and we discuss ways to relax them later in the paper, but they have the merit that the number of model parameters is kept to a minimum. The method of constructing $T(\bm{n}'|\bm{n})$ may also seem a little more complicated, due to the requirement of keeping a fixed number of individuals on each island. This is done as follows: \begin{itemize} \item[(i)] Pick an island (with probability $1/2$, since the islands are identical) and then pick an individual randomly from that island. Allow the individual to reproduce by duplication. \item[(ii)] With probability $m$, the progeny migrates to the other island. In this case choose an individual on the other island at random to die. \item[(iii)] With probability $(1-m)$, the progeny remains on the same island. In this case choose an individual on the same island at random to die. \end{itemize} It should be noticed that the processes of birth and death are inextricably linked and that they are assumed to happen at rate $1$, this choice being possible through a choice of time units. On top of this process, migration is imposed with a probability of occurrence equal to $m$ ($0 \leq m \leq 1$). Following these rules, if the model is neutral the transition rate from the state $(n_1,n_2)$ to the state $(n_1 + 1,n_2)$ is \begin{equation} T(n_1 + 1,n_2|n_1,n_2) = \frac{1}{2}\,(1-m)\,\frac{n_1}{N}\frac{(N-n_1)}{N} + \frac{1}{2}\,m\,\frac{n_2}{N}\frac{(N-n_1)}{N}. \label{neut_transition_rate_simple} \end{equation} Similar expressions can be found for $T(n_1 - 1,n_2|n_1,n_2)$ and for $T(n_1, n_2 \pm 1|n_1,n_2)$. However, we would like to include selection in the model. In this case we have to weight the choice of picking an allele by the relative fitness of that allele on a particular island. Since we are aiming to be as simple as possible to illustrate the basic ideas, we will assume that this fitness weighting is the same on both islands, though it is simple enough to relax this condition. Therefore we will denote the fitness weighting of allele $1$ to be $W^{(1)}(\bm{n})$ and of allele $2$ to be $W^{(2)}(\bm{n})$.
Then the four transition rates from state $(n_1,n_2)$ to the new state are \begin{eqnarray} T(n_1 + 1,n_2|n_1,n_2) &=& \frac{1}{2}\,\left( 1 - m \right)\,\frac{W^{(1)}(\bm{n})n_1}{\mathcal{W}_1(\bm{n})}\,\frac{(N-n_1)}{N} \nonumber \\ &+& \frac{1}{2}\,m\,\frac{W^{(1)}(\bm{n})n_2}{\mathcal{W}_2(\bm{n})}\,\frac{(N-n_1)}{N}, \nonumber \\ T(n_1 - 1,n_2|n_1,n_2) &=& \frac{1}{2}\,\left( 1 - m \right)\,\frac{W^{(2)}(\bm{n})(N-n_1)}{\mathcal{W}_1(\bm{n})}\,\frac{n_1}{N} \nonumber \\ &+& \frac{1}{2}\,m\,\frac{W^{(2)}(\bm{n})(N-n_2)}{\mathcal{W}_2(\bm{n})}\,\frac{n_1}{N}, \nonumber \\ T(n_1,n_2 + 1|n_1,n_2) &=& \frac{1}{2}\,\left( 1 - m \right)\,\frac{W^{(1)}(\bm{n})n_2}{\mathcal{W}_2(\bm{n})}\,\frac{(N-n_2)}{N} \nonumber \\ &+& \frac{1}{2}\,m\,\frac{W^{(1)}(\bm{n})n_1}{\mathcal{W}_1(\bm{n})}\,\frac{(N-n_2)}{N}, \nonumber \\ T(n_1,n_2 - 1|n_1,n_2) &=& \frac{1}{2}\,\left( 1 - m \right)\,\frac{W^{(2)}(\bm{n})(N-n_2)}{\mathcal{W}_2(\bm{n})}\,\frac{n_2}{N} \nonumber \\ &+& \frac{1}{2}\,m\,\frac{W^{(2)}(\bm{n})(N-n_1)}{\mathcal{W}_1(\bm{n})}\,\frac{n_2}{N}, \label{transition_rate_simple} \end{eqnarray} where $\mathcal{W}_i(\bm{n})= W^{(1)}(\bm{n})n_i + W^{(2)}(\bm{n})(N-n_i)$, $i=1,2$, is the fitness of the individuals on island $i$. Here superscripts are a label for the two different alleles, whereas subscripts are a label for the two different islands. For further background on how to arrive precisely at the forms given by Eq.~(\ref{transition_rate_simple}) the reader is referred to previous discussions in the literature~\cite{blythe_mckane_models_2007,mckane_2014,constable_phys}. If $W^{(1)}$ and $W^{(2)}$ are independent of $\bm{n}$, then the selection is known as frequency independent selection. This will be assumed in this pedagogical treatment, and we write $W^{(1)} = 1 + \alpha^{(1)}s + \mathcal{O}(s^2)$ and $W^{(2)} = 1 + \alpha^{(2)}s + \mathcal{O}(s^2)$, where $s$ is a selection coefficient and $\alpha^{(1)},\alpha^{(2)}$ are constants. Since $s$ is typically very small, we do not expect it to be necessary to include the order $s^2$ terms in the expressions for $W^{(1)}$ and $W^{(2)}$. Equations (\ref{master_generic}) and (\ref{transition_rate_simple}) define the microscopic model, and once an initial condition $p(\bm{n},0)$ for the pdf has been given, specify the dynamics for all $t$. All other systems discussed in this paper will have a similar structure; the form of the transition rates will differ depending on the system, but in all cases the substitution of these rates into the master equation (\ref{master_generic}) will give the dynamics. As we discussed in the Introduction, the validity of our methods and approximations is checked via computer simulations, and these use the microscopic model defined by Eqs.~(\ref{master_generic}) and (\ref{transition_rate_simple}). The simulations use the Gillespie algorithm~\cite{gillespie_1976,gillespie_1977} which is developed within the same formalism discussed in this section. However, the master equation is difficult to study analytically. It is for this reason that we make the diffusion approximation, replacing the microscopic model with a mesoscopic version. The diffusion approximation was applied very early on in the development of population genetics~\cite{fisher_1922} and is widely used~\cite{crow_kimura_into}.
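As a concrete illustration of how such simulations can be set up (this sketch is ours; variable names and parameter values are illustrative and not taken from the papers cited above), a minimal Python implementation of the Gillespie algorithm for the transition rates of Eq.~(\ref{transition_rate_simple}) might read:
\begin{verbatim}
import numpy as np

def rates(n, N, m, s, a1, a2):
    """The four transition rates of Eq. (transition_rate_simple).
    n = (n1, n2): numbers of allele 1 on islands 1 and 2."""
    n1, n2 = n
    W1, W2 = 1.0 + a1 * s, 1.0 + a2 * s     # frequency-independent fitnesses
    Wb = (W1 * n1 + W2 * (N - n1), W1 * n2 + W2 * (N - n2))
    return np.array([
        0.5 * ((1 - m) * W1 * n1 / Wb[0] + m * W1 * n2 / Wb[1]) * (N - n1) / N,
        0.5 * ((1 - m) * W2 * (N - n1) / Wb[0] + m * W2 * (N - n2) / Wb[1]) * n1 / N,
        0.5 * ((1 - m) * W1 * n2 / Wb[1] + m * W1 * n1 / Wb[0]) * (N - n2) / N,
        0.5 * ((1 - m) * W2 * (N - n2) / Wb[1] + m * W2 * (N - n1) / Wb[0]) * n2 / N,
    ])

NU = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])  # stoichiometric vectors

def gillespie(n0, N, m, s, a1, a2, rng):
    """One realisation: returns (fixation indicator of allele 1, fixation time)."""
    n, t = np.array(n0), 0.0
    while 0 < n.sum() < 2 * N:
        T = rates(n, N, m, s, a1, a2)
        t += rng.exponential(1.0 / T.sum())          # exponential waiting time
        n = n + NU[rng.choice(4, p=T / T.sum())]     # pick and execute one reaction
    return int(n.sum() == 2 * N), t

rng = np.random.default_rng(1)
runs = [gillespie((50, 50), N=100, m=0.1, s=0.005, a1=1.0, a2=0.0, rng=rng)[0]
        for _ in range(500)]
print("estimated fixation probability of allele 1:", np.mean(runs))
\end{verbatim}
Averaging the fixation indicator over many such runs gives a Monte Carlo estimate of the fixation probability of allele $1$, which can later be compared with the prediction of the reduced model.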
We do insist, however, that the diffusion approximation should be derived from an underlying microscopic model, since there are potentially many microscopic models that give the same mesoscopic model, and so simply defining the model at the mesoscale can lead to ambiguities. The idea itself is very simple: if $N$ is large enough, the ratios $n_i/N$, which are formally fractions, are assumed to be continuous, and denoted by $x_i$. At the same time the master equation is expanded in powers of $N^{-1}$, and terms of order $N^{-3}$ and higher are neglected. This process can be carried out directly, but formulae exist for the analogues of the transition rates which appear in the mesoscopic equations~\cite{mckane_2014}. To use these we need to introduce what are in effect stoichiometric vectors corresponding to the four ``reactions'' in Eq.~(\ref{transition_rate_simple}). In other words we write the final state, $\bm{n}'$, in terms of the initial state, $\bm{n}$, as $\bm{n}' = \bm{n} + \bm{\nu}_\mu$, where $\mu=1,\ldots,4$ labels the four reactions. So for example, in the first reaction of Eq.~(\ref{transition_rate_simple}), $n_1$ increases by $1$ and $n_2$ does not change, so $\bm{\nu}_1 = (1,0)$. Similarly, $\bm{\nu}_2 = (-1,0), \bm{\nu}_3 = (0,1)$ and $\bm{\nu}_4 = (0,-1)$. These identifications allow us to use Eqs.~(18) and (21) of Ref.~\cite{mckane_2014} to show that the $\bm{A}$ and $\bm{B}$ functions which appear in the mesoscopic equations are \begin{eqnarray} A_1(\bm{x}) &=& -\frac{1}{2}m\left(x_1 - x_2\right) \nonumber \\ &+& \frac{1}{2}\left(\alpha^{(1)} - \alpha^{(2)}\right)s\left[ (1-m)x_1\left( 1 - x_1 \right) + m x_2\left( 1 - x_2 \right) \right] + \mathcal{O}\left( s^2 \right), \nonumber \\ A_2(\bm{x}) &=& -\frac{1}{2}m\left(x_2 - x_1\right) \nonumber \\ &+& \frac{1}{2}\left(\alpha^{(1)} - \alpha^{(2)}\right)s\left[ (1-m)x_2\left( 1 - x_2 \right) + m x_1\left( 1 - x_1 \right) \right] + \mathcal{O}\left( s^2 \right), \nonumber \\ \label{A_simple} \end{eqnarray} and \begin{eqnarray} B_{11}(\bm{x}) &=& x_1\left(1-x_1\right) + \frac{1}{2}m\left(x_1 - x_2\right)\left(2x_1 - 1 \right) + \mathcal{O}(s), \nonumber \\ B_{22}(\bm{x}) &=& x_2\left(1-x_2\right) + \frac{1}{2}m\left(x_2 - x_1\right)\left(2x_2 - 1 \right) + \mathcal{O}(s), \label{B_simple} \end{eqnarray} with $B_{12}=B_{21}=0$. These then specify the model, and the general dynamical equations which allow us to find the dynamics are either the Fokker-Planck equation (FPE) \begin{equation} \frac{\partial P(\bm{x},t)}{\partial t} = - \frac{1}{N}\,\sum_{i=1}^2 \frac{\partial }{\partial x_i} \left[ A_i(\bm{x}) P(\bm{x},t) \right] + \frac{1}{2N^2} \sum_{i,j=1}^2 \frac{\partial^2 }{\partial x_i \partial x_j} \left[ B_{ij}(\bm{x}) P(\bm{x},t) \right], \label{FPE_simple} \end{equation} or the equivalent It\={o} stochastic differential equation (SDE) \begin{equation} \frac{\mathrm{d}x_i}{\mathrm{d}\tau} = A_i(\bm{x}) + \frac{1}{\sqrt{N}} \eta_i(\tau), \ \ \ \ i=1,2, \label{SDE_simple} \end{equation} where $\tau = t/N$ is a rescaled time and $\eta_i(\tau)$ is a Gaussian white noise with zero mean and with a correlator \begin{equation} \left\langle \eta_i(\tau) \eta_j(\tau') \right\rangle = B_{i j}(\bm{x}) \delta\left( \tau - \tau' \right), \ \ \ i,j=1,2. \label{correlator_simple} \end{equation} As for the microscopic model, substitution of the specific forms given by Eqs.~(\ref{A_simple}) and (\ref{B_simple}) into the generic forms of the FPE or SDE gives the behaviour of the mesoscopic model for all time, provided an initial condition $P(\bm{x},0)$ is given.
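The mesoscopic model can likewise be simulated directly. A minimal Euler-Maruyama sketch of the SDE (\ref{SDE_simple}) with the drift (\ref{A_simple}) and noise covariance (\ref{B_simple}) is given below (again ours and purely illustrative; the clipping of the $x_i$ to $[0,1]$ is a crude treatment of the boundaries, not a method taken from the literature cited here):
\begin{verbatim}
import numpy as np

def A(x, m, s, da):
    """Drift vector of Eq. (A_simple); da = alpha^(1) - alpha^(2)."""
    x1, x2 = x
    return np.array([
        -0.5 * m * (x1 - x2)
        + 0.5 * da * s * ((1 - m) * x1 * (1 - x1) + m * x2 * (1 - x2)),
        -0.5 * m * (x2 - x1)
        + 0.5 * da * s * ((1 - m) * x2 * (1 - x2) + m * x1 * (1 - x1)),
    ])

def B_diag(x, m):
    """Diagonal of the noise covariance, Eq. (B_simple); B12 = B21 = 0."""
    x1, x2 = x
    return np.array([
        x1 * (1 - x1) + 0.5 * m * (x1 - x2) * (2 * x1 - 1),
        x2 * (1 - x2) + 0.5 * m * (x2 - x1) * (2 * x2 - 1),
    ])

def euler_maruyama(x0, m, s, da, N, dt, steps, rng):
    """Integrate dx/dtau = A + N^(-1/2)*eta, <eta_i eta_j> = B_ij delta."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        # B is diagonal here, so the noise components are independent Gaussians
        noise = rng.normal(size=2) * np.sqrt(
            np.clip(B_diag(x, m), 0.0, None) * dt / N)
        x = np.clip(x + A(x, m, s, da) * dt + noise, 0.0, 1.0)
        traj.append(x.copy())
    return np.array(traj)

rng = np.random.default_rng(0)
traj = euler_maruyama((0.9, 0.2), m=0.1, s=0.005, da=1.0,
                      N=200, dt=0.01, steps=20000, rng=rng)
print(traj[-1])  # after the fast collapse, x1 and x2 remain close together
\end{verbatim}
Running this reproduces the behaviour shown in the top left panel of Fig.~\ref{fig:figure_1}: a rapid collapse towards the line $x_1=x_2$, followed by a slow stochastic drift along it.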
For more details on the derivation and meaning of these equations, standard texts on the theory of stochastic processes~\cite{risken_1989,gardiner_2009} may be consulted, or previous articles on the application of these ideas in a biological context~\cite{blythe_mckane_models_2007,mckane_2014,constable_phys}. We end this section with two general points. First, if we take the limit $N \to \infty$ of Eq.~(\ref{SDE_simple}) we obtain the \textit{macroscopic} model, which is deterministic, since the noise is eliminated by taking the limit. The dynamics of the deterministic model is then given by $\mathrm{d}x_i/\mathrm{d}\tau = A_i(\bm{x})$, $i=1,2$. Second, we keep terms of order $s$ in $\bm{A}$, but neglect them in $\bm{B}$, since we are envisaging keeping terms of order $s/N$ or $1/N^2$ in the FPE, but discarding terms of order $s^2/N, s/N^2$ and $1/N^3$. This essentially assumes that $s \sim N^{-1}$, although we will keep $s$ and $N$ to be independent variables throughout the paper. \subsection{First stage of the reduction process: identifying the fast and slow variables} \label{sec:first_stage} Although the mesoscopic equations (\ref{FPE_simple}) and (\ref{SDE_simple}) are potentially more manageable than the differential-difference equation (\ref{master_generic}), they are still formidably difficult to analyse --- equation (\ref{FPE_simple}) is a partial differential equation in three variables, and in most of the other systems discussed later in this paper, the corresponding equation may have tens or even hundreds of variables. To find a simpler, or reduced, form we want to eliminate the variables which decay away quickly, since they are not relevant to making predictions about the medium- to long-term behaviour. This subset of slow variables will form a slow subspace (SS), so that instead of allowing the system to explore the whole space of variables, we only allow it to move within this subspace. In practice, instead of searching for a SS directly, we frequently search for a CM which is composed not of slow variables, which hardly change with time, but of conserved variables that do not change with time at all. In population genetics, for example, neutral theories may contain conserved quantities, due to symmetries in the system (the different alleles behave in the same way), and the effects of selection can be added as perturbative corrections, given the extremely small size of selection coefficients. The CM is found by looking for fixed points of the macroscopic equation (the macroscopic limit of the SDE with no noise term present). This first stage of the reduction therefore consists of the following steps: \begin{itemize} \item[1.] Identify a CM, perhaps by setting some parameters to zero in order to increase the symmetry of the deterministic equations (this could be the neutral limit of the deterministic dynamics, for example). \item[2.] Find the Jacobian at the fixed points that constitute the CM, and so find the eigenvalues and eigenvectors of the Jacobian evaluated on the CM. \item[3.] Form a projection operator from the eigenvectors found in step 2, which is used to operate on quantities in the full mesoscopic system in order to eliminate the fast variables. \item[4.] Use this projection operator, or use conservation laws, to find the point where the system first reaches the CM. This will be the new initial condition for the reduced system. \end{itemize} We will now illustrate these four steps on the pedagogical example.
\begin{itemize} \item[(i)] Setting $s=0$ in Eq.~(\ref{A_simple}), we see that there is a line of fixed points $x_1 = x_2$. This is the CM. \item[(ii)] The Jacobian of the deterministic system $\mathrm{d}x_i/\mathrm{d}\tau = A_i(\bm{x})$, $i=1,2$, with $s=0$, is \begin{equation} J = \left( \begin{array}{cc} -m/2 & \ m/2 \\ \ m/2 & -m/2 \end{array} \right)\,. \label{Jacobian_simple} \end{equation} This matrix has zero determinant and a trace equal to $-m$, which immediately gives its two eigenvalues as $\lambda^{\{ 1\} }=0$ and $\lambda^{\{ 2\} }=-m$. Two typical features we would expect are illustrated here: the number of zero eigenvalues equals the number of dimensions of the CM (since there is no dynamics at all on the CM --- it is composed only of fixed points) and the non-zero eigenvalue has a real part which is negative (so that it can be identified as the fast mode which dies away quickly). In this case the non-zero eigenvalue is real, which is a reflection of the fact that the Jacobian, $J$, is symmetric. For a general matrix we would need to find both the right- and left-eigenvectors of $J$, but a further consequence of $J$ being symmetric is that these coincide, and they are given by \begin{align}\label{eigenvectors_simple} \bm{v}^{\{ 1\} } = \frac{1}{\sqrt{2}}\,\left( \begin{array}{c} 1 \\ 1 \end{array} \right), \ \ \bm{v}^{\{ 2\} } = \frac{1}{\sqrt{2}}\,\left( \begin{array}{c} 1 \\ -1 \end{array} \right). \end{align} We would expect that the eigenvector corresponding to the zero eigenvalue would lie in the CM, and indeed $\bm{v}^{\{ 1\} }$ lies on the line $x_1 = x_2$. The normalisation of the eigenvectors has been chosen so that they are orthonormal: $\sum^2_{i=1} v^{\{ \mu\} }_iv^{\{ \nu\} }_i = \delta_{\mu \nu}$, where $\mu,\nu=1,2$ and $\delta_{\mu \nu}$ is the Kronecker delta. \item[(iii)] The projection operator is defined by $P_{i j} = v^{\{ 1\} }_i v^{\{ 1 \} }_j$, constructed only from the eigenvectors of the zero mode(s). To illustrate its use, we operate with it on a general vector of the full system given by $\phi_i = C_1 v^{\{ 1\} }_i + C_2 v^{\{ 2\} }_i$, where $C_1$ and $C_2$ are constants. Then $\sum^2_{j = 1} P_{i j} \phi_j = C_1 v^{\{ 1\} }_i$, that is, the term involving the fast mode(s) in $\phi_i$, $C_2 v^{\{ 2\} }_i$, has been wiped out using the orthogonality of the eigenvectors. \item[(iv)] If the point at which the system begins is $x^{\rm IC}_i$, then we would expect it to reach the CM at $x^{\rm CMIC}_i = \sum^{2}_{j=1} P_{i j} x^{\rm IC}_j$, since only the slow (zero) modes will have survived by this time. Here $i=1,2$ and `IC' and `CMIC' stand for `initial condition' and `centre-manifold initial condition' respectively. Applying the projection operator one finds that $x^{\rm CMIC}_1 = x^{\rm CMIC}_2 = (x^{\rm IC}_1 + x^{\rm IC}_2)/2$. Another way to obtain this result is to use the conserved quantity which exists in this degenerate system. From Eq.~(\ref{A_simple}), one sees that $\mathrm{d}(x_1 + x_2)/\mathrm{d}\tau = 0$ when $s=0$. Therefore, $x_1 + x_2$ is unchanged in time, and so $x^{\rm IC}_1 + x^{\rm IC}_2 = x^{\rm CMIC}_1 + x^{\rm CMIC}_2 = 2 x^{\rm CMIC}_1$ or $2 x^{\rm CMIC}_2$. \end{itemize} These results will be used to construct the reduced model. As an illustration of the fourth general point made in the Introduction relating to intuition and the checking of approximations through simulations, we note that the trajectory from $\bm{x}^{\rm IC}$ to $\bm{x}^{\rm CMIC}$ is stochastic, not deterministic.
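As an aside, steps (ii)-(iv) above are easily checked numerically; the following minimal numpy sketch (ours, with an illustrative value of $m$ and an arbitrary initial condition) reproduces the eigenvalues, the projection operator, and $\bm{x}^{\rm CMIC}$:
\begin{verbatim}
import numpy as np

m = 0.1                                   # illustrative migration probability
J = np.array([[-m / 2,  m / 2],
              [ m / 2, -m / 2]])          # Jacobian of Eq. (Jacobian_simple)

lam, V = np.linalg.eigh(J)                # symmetric J: orthonormal eigenvectors
print("eigenvalues:", lam)                # -> [-m, 0]

v1 = V[:, np.argmax(lam)]                 # zero-mode eigenvector (CM direction)
P = np.outer(v1, v1)                      # projection operator P_ij = v1_i v1_j

x_ic = np.array([0.9, 0.2])               # an arbitrary initial condition
print("x_CMIC:", P @ x_ic)                # -> [0.55, 0.55] = (x1 + x2)/2
\end{verbatim}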
Despite this stochasticity, we will use $\bm{x}^{\rm CMIC}$ as the initial condition for the reduced system on the CM, even though it has been deduced through a deterministic argument. We expect the deterministic dynamics to dominate the collapse from $\bm{x}^{\rm IC}$ to the CM under the condition that the rate of migration (which controls the collapse of the system to the CM) is much stronger than the rate of genetic drift (which causes deviations from the deterministic collapse to the CM). Since the strength of genetic drift decays as the inverse of the population size (see \eref{FPE_simple}), this condition can be expressed as $m \gg N^{-1}$. In addition, the projection of the IC onto the CM as described is only strictly true when trajectories to the CM are linear (as in this case) or when the initial condition is close to the CM (in which case the linear approximation is applicable). If the deterministic trajectories to the CM are non-linear, more sophisticated mathematical techniques may be necessary to calculate $\bm{x}^{\rm CMIC}$ when $\bm{x}^{\rm IC}$ lies far from the CM~\cite{roberts_1989}. We will also continue to use the eigenvectors of the neutral model when constructing the $s \neq 0$ reduced model. It would be possible to find perturbative corrections to these in $s$, but we expect the effects to be sufficiently small as to be completely negligible. These are judgements made on the basis of intuition. Their validity will be examined through numerical simulations, and a comparison made of analytic results found on the basis of these assumptions with results obtained by simulations of the original model. Finally it is worth noting that, as addressed in point (iv) above, an alternative line of attack is possible in which one transforms into the fast-slow basis of the problem, removes the fast variables and then transforms back into the original, biologically relevant, variables of the problem (for an illustration of such an approach, see \cite{constable_2013}). For the system at hand, the fast-slow basis is straightforward to obtain and is given by $(x_1 - x_2)$ and $(x_1 + x_2)$. However in more general problems such a basis may not be straightforward to obtain analytically (we will explore such a scenario in \sref{sec:heterogamety}) while the projection method that we develop here will continue to yield insight. We therefore continue to explore the current pedagogical example using the projection formalism. \subsection{Second stage of the reduction process: construction of the reduced model} \label{sec:second_stage} In the first stage we worked with a version of the model which had a CM, but our real interest is in the actual, realistic, model which will typically only have a SS. This will not materially change the process of collapse described in Sec.~\ref{sec:first_stage}: we still expect what is in effect a deterministic collapse onto the SS. However, we would like to be able to assume that the system would then stay in the subspace which is defined by the slow variables. This is true (at least deterministically) when there is a CM, since the system ceases to move once it reaches the CM (because $\bm{A}=\bm{0}$). But this is not true when there is a SS. We therefore demand that $\bm{A}$ has no component in the fast directions. These conditions give the equation for the SS. There will still be a (weak) dynamics in the direction of the slow variables. We will also ask that there is no noise in the fast directions, only in the slow directions.
In this way, the system is effectively constrained to evolve in the SS. This second stage of the reduction therefore consists of the following steps: \begin{itemize} \item[1.] Ask that $\bm{A}$ has no components in the fast directions. This gives the equation that the SS must have for this to be true. \item[2.] Apply the projection operator to the SDE of the full system, to obtain the SDE of the reduced system. \end{itemize} We can now illustrate these steps on the pedagogical example. \begin{itemize} \item[(i)] The condition that $\bm{A}$ has no component in the fast direction is $\bm{v}^{\{ 2 \} }\cdot \bm{A} = 0$, where $\bm{A}$ is the full ($s \neq 0$) form given by Eq.~(\ref{A_simple}). This condition is simply $A_1(\bm{x}) - A_2(\bm{x})=0$. Substituting $x_i = X_i^{(0)} + sX_i^{(1)} + \mathcal{O}(s^2)$ into this equation we can determine the SS. We know that when $s=0$ the SS is the CM and that $X_1^{(0)} = X_2^{(0)}$, however using $A_1(\bm{x}) - A_2(\bm{x})=0$ we find that it is also true that $X_1^{(1)} = X_2^{(1)}$, so the SS is defined by $x_1 = x_2$ to order $s$. It therefore has exactly the same form as the CM, and is linear. This is not true in general, and is only a feature of this simple pedagogical example. The variable $x_1$, or $x_2$, will be denoted by $z$ on the SS. \item[(ii)] Applying the projection operator to the SDE (\ref{SDE_simple}) gives $( 2/\sqrt{2}) (\mathrm{d}z/\mathrm{d}\tau)$ for the left-hand side, since $x_1 = x_2\equiv z$ on the SS. The first term on the right-hand side gives $(2/\sqrt{2}) \bar{A}(z)$ for a similar reason: $A_1(\bm{x})=A_2(\bm{x})$ on the SS $x_1=x_2$, and we write $A_1$ or $A_2$ on the SS in terms of $z$ as $\bar{A}(z)$. Therefore the reduced SDE is given by \begin{equation} \frac{\mathrm{d}z}{\mathrm{d}\tau} = \bar{A}(z) + \frac{1}{\sqrt{N}} \zeta(\tau), \label{reduced_SDE_simple} \end{equation} where \begin{equation} \bar{A}(z) = \frac{1}{2}\left(\alpha^{(1)} - \alpha^{(2)}\right)s\,z\left( 1 - z \right) + \mathcal{O}\left( s^2 \right), \label{A_bar_simple} \end{equation} and where \begin{equation} \zeta(\tau) = \frac{1}{\sqrt{2}}\,\frac{1}{\sqrt{2}}\left[ v^{\{ 1\} }_1 \eta_1(\tau) + v^{\{ 1\} }_2 \eta_2(\tau) \right] = \frac{1}{2\sqrt{2}} \left[ \eta_1(\tau) + \eta_2(\tau) \right]. \label{zeta_defn} \end{equation} From the properties of the noise $\bm{\eta}(\tau)$, including the correlation function in Eq.~(\ref{correlator_simple}), it follows that $\zeta(\tau)$ is a Gaussian white noise with zero mean and with a correlator \begin{equation} \left\langle \zeta(\tau) \zeta(\tau') \right\rangle = \frac{1}{8} \left [B_{1 1} + B_{2 2} \right] \delta\left( \tau - \tau' \right) \equiv \bar{B}(z) \delta\left( \tau - \tau' \right), \label{reduced_correlator_simple} \end{equation} where \begin{equation} \bar{B}(z) = \frac{1}{4}\,z\left( 1 - z \right) + \mathcal{O}\left( s \right). \label{B_bar_simple} \end{equation} \end{itemize} The reduction we have described here has produced an effective mesoscopic model defined by Eqs.~(\ref{reduced_SDE_simple}), (\ref{A_bar_simple}), (\ref{reduced_correlator_simple}) and (\ref{B_bar_simple}), which has only one variable, and which is sufficiently simple that it can be analysed mathematically. The pedagogical example was chosen on the grounds that its simplicity allowed the method to be clearly explained, and unfortunately this means that the reduction does not give that much useful information.
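To make the simplicity of the reduced model concrete: for $\bar{A}(z)$ and $\bar{B}(z)$ as above, the standard backward-equation result for a one-dimensional diffusion (see, e.g., Ref.~\cite{crow_kimura_into}) gives the fixation probability of allele $1$ in closed form, $Q(z_0) = \left(1-e^{-\gamma z_0}\right)/\left(1-e^{-\gamma}\right)$ with $\gamma = 4N\left(\alpha^{(1)}-\alpha^{(2)}\right)s$. A short sketch evaluating this (ours; parameter values illustrative) is:
\begin{verbatim}
import numpy as np

def Q_reduced(z0, N, s, dalpha):
    """Fixation probability of allele 1 from the reduced 1-D model:
    Q(z0) = (1 - exp(-g*z0)) / (1 - exp(-g)), g = 4*N*(alpha1 - alpha2)*s."""
    g = 4.0 * N * dalpha * s
    if abs(g) < 1e-12:
        return z0                     # neutral limit: Q(z0) = z0
    return np.expm1(-g * z0) / np.expm1(-g)

# illustrative values: N = 100 per island, s = 0.005, alpha1 - alpha2 = 1
for z0 in (0.1, 0.25, 0.5, 0.75):
    print(z0, Q_reduced(z0, 100, 0.005, 1.0))
\end{verbatim}
These values can be compared directly with Monte Carlo estimates from Gillespie simulations of the full two-island model, such as the sketch in Sec.~\ref{sec:setup}; note also that $\gamma$ contains no dependence on $m$, which anticipates the point made next.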
For instance, the final results do not depend on $m$, and all we have done is to verify a known result~\cite{maruyama1969} that with this type of selection and with the same selection pressure on all islands, the population behaves in a similar way to a well-mixed population of size equal to that of the two islands added together. Actually, if one takes the calculation to higher order in $s$, then one can find an exception to this: migration-selection balance can occur if selection acts in opposing directions on the different islands. In later sections we will describe applications of the method which will yield informative reduced models. Although the essentials of the reduction method will be the same as those that we have described in this section, a few aspects will make it seem slightly more complicated. Apart from the number of variables being greater, requiring additional indices, we mention \begin{itemize} \item[(a)] The generalisation of the Jacobian (\ref{Jacobian_simple}) will typically not be a symmetric matrix. This means that the non-zero eigenvalues will be complex in general, and the left- and right-eigenvectors will not coincide. We will use the notation $\bm{u}^{\{ \mu\} }$ and $\bm{v}^{\{ \mu\} }$ respectively for the left- and right-eigenvectors corresponding to the eigenvalue $\lambda^{\{ \mu\} }$, where $\mu=1,2,\ldots$. They will be chosen to be orthonormal, that is, $\bm{u}^{\{ \mu\} }\cdot\bm{v}^{\{ \nu\} } = \delta_{\mu \nu}$. \item[(b)] As a consequence of this, a typical term in the projection operator will involve left- and right-eigenvectors, and the condition for there to be no deterministic dynamics in the $\mu$-direction will be $\bm{u}^{\{ \mu \} }\cdot\bm{A}(\bm{x}) = 0$, that is, will involve $\bm{u}^{\{ \mu \} }$ rather than $\bm{v}^{\{ \mu \} }$. \end{itemize} These are simply small technical details, and the method as discussed here is in essence that used in the other applications which we will now go on to discuss. However, having accounted for these points, we can define more general forms of $\bar{A}(z)$ and $\bar{B}(z)$ that will be relevant for later problems. In particular, for a system with initially $M$ variables we find an effective one-dimensional approximation of the form \eref{reduced_SDE_simple} with \begin{eqnarray} \bar{A}(z) = \sum_{i=1}^M u^{ \{ 1\} }_i \left. A_i(\bm{x}) \right|_{\mathrm{CM}} \label{eq_general_ABar} \end{eqnarray} and \begin{eqnarray} \bar{B}(z) &=& \sum_{i,j=1}^M u^{ \{ 1\} }_i u^{ \{ 1\} }_j B_{ij}(\bm{x}) |_{\mathrm{CM} } \,, \label{eq_general_BBar} \end{eqnarray} and where we recall that $\bm{u}^{ \{ 1\} }$ is the left-eigenvector corresponding to the zero eigenvalue. Since $\bm{u}^{ \{ 1\} }$ is perpendicular to all of the fast directions~\cite{constable_phys}, these two terms can be viewed as, respectively, the deterministic and noisy components of the full problem projected onto the SS, that is, using $P_{ij} = v^{ \{ 1 \} }_i u^{ \{ 1 \} }_j $. Finally, since one of the themes of this paper relates to the utilisation of techniques from theoretical physics in population genetics and other areas of theoretical biology, we could import the use of the bra-ket notation from quantum mechanics~\cite{dirac_1958}. In this notation the right-eigenvector $\bm{v}^{\{ \nu \} }$ is written as the ket $| \nu \rangle$ and the left-eigenvector $\bm{u}^{\{ \mu \} }$ as the bra $\langle \mu |$.
Then the orthogonality relation becomes $\langle \mu | \nu \rangle = \delta_{\mu \nu}$ and the projection operator $P_{ij} = v^{ \{ 1 \} }_i u^{ \{ 1 \} }_j $ may be written as $P = | 1 \rangle \langle 1 |$. Though undoubtedly more elegant, for consistency with earlier work we shall not use the bra-ket notation here. \section{Applications of the fast-mode elimination procedure} \label{sec:applications} In this section we will discuss some applications of the formalism which we have described in Sec.~\ref{sec:example}, making reference to previous work, notable features of the various models and possible future work. \subsection{The $\mathcal{D}$-island Moran model} \label{sec:D_island_Moran} The modelling and analysis of migration effects in population genetics have always been challenging, since they involve a spatial aspect in an essential way. Historically, it was Wright~\cite{wright_1931} who first studied migration models in population genetics; however, he in fact did not assume a spatial structure, since migratory individuals were chosen from a global, well-mixed, population. The stepping stone model~\cite{kimura_1964_SSM} was one of the first which did have real spatial structure. It consisted of a line of islands, where migration could only take place between an island and its neighbours on either side. If we view migration as an interaction event between two islands, this is a one-dimensional model with nearest-neighbour interactions. Expressed in this way, an obvious generalisation is to a network of islands with interaction strengths between the islands proportional to the probability of migration between the islands. Such a model was investigated by Nagylaki~\cite{nagylaki1980SM}, although with discrete generations and strong migration, where the probability of a migration event is of the same order as a birth or a death. The assumptions made either rendered the analysis difficult to follow or were not thought to be widely applicable, and many further studies of this kind followed, each making a different set of assumptions. These, and further discussions and analysis, can be found in the book by Rousset~\cite{rousset_2004}, while more recent results that utilise probability generating functions~\cite{reichl_1998} are developed in \cite{houch_2011}. In Sec.~\ref{sec:example} a simple two island model was introduced. The $\mathcal{D}$-island model is a generalisation of this but with added features. Details are given in earlier papers of ours~\cite{constable_phys,constable_bio,constable2015c}, but examples of more general features are the migration probability to island $i$ from island $j$, $m_{ij}$, which will in general not be symmetric ($m_{ij} \neq m_{ji}$), and the fact that islands will be allowed to contain different numbers of individuals ($\beta_i N$ on island $i$). The migration matrix has so far not been defined for $i = j$; however, if one sets the probability of the chosen individual not migrating equal to $m_{ii} = 1 - \sum^{\mathcal{D}}_{j \neq i} m_{ji}$, then this defines $m_{ij}$ for all $i,j$. The factor $N$ which appears in the population size is the only large parameter in the model, and the approximations made in Sec.~\ref{sec:example} are reliant on this. So, for example, $\beta_i$ should not be so small or so large that we cannot treat the factor $\beta_i N$ as being of a similar magnitude to $N$. Similarly, the number of islands $\mathcal{D}$ should not be so large that it can be thought of as being of order $N$.
It is likely that in some cases the approximations will continue to be good outside of their strict range of validity, but at present this can only be tested by comparing the analytic results with simulations. We will discuss the model with and without mutation separately, since typical questions which we are interested in answering differ. We begin with the model with no mutation. \subsubsection{The $\mathcal{D}$-island model without mutation} \label{sec:D_island_Moran_without} This model has been analysed in detail in Refs.~\cite{constable_phys,constable_bio}, where further details may be found. Here we only note, in addition to the points already made, that the probability $f_i$ of choosing the island on which an individual is then chosen to die or migrate has to be proportional to $\beta_i$, if we are to get sensible results. When the approximations described in Sec.~\ref{sec:example} are made, one again finds that the reduced model has only one degree of freedom, and is given by Eqs.~(\ref{reduced_SDE_simple}) and (\ref{reduced_correlator_simple}). Even the functional forms of $\bar{A}(z)$ and $\bar{B}(z)$ are unchanged, although now they contain parameters which are functions of most of the parameters of the starting model: \begin{equation} \bar{A}(z) = a_1 s\,z\left( 1 - z \right) + \mathcal{O}\left( s^2 \right), \ \ \ \bar{B}(z) = b_1 z\left( 1 - z \right) + \mathcal{O}\left( s \right), \label{A_and_B_bar_Dislands} \end{equation} where \begin{equation} a_1 = \sum^\mathcal{D}_{i,j=1} u^{\{ 1\} }_i \frac{m_{ij}f_j}{\beta_i} \alpha_j, \ \ \ b_1 = \sum^\mathcal{D}_{i,j=1} \left[ u^{\{ 1\} }_i \right]^2\frac{m_{ij}f_j}{\beta^2_i}. \label{aone_bone} \end{equation} Here $\alpha_i$ is the relative fitness of the first allele over the second on island $i$ and $\bm{u}^{\{ 1 \} }$ is the left-eigenvector corresponding to the zero eigenvalue. So even with the added complexity of each island differing in size, arbitrary migration probabilities, selective advantage of allele $1$ over allele $2$ varying from island to island, and the number of islands itself arbitrary, the reduction gives a standard (i.e.~non-spatial) Moran model with selection, with effective parameters which can be seen from Eq.~(\ref{aone_bone}) to contain information from virtually all the parameters of the original model. It should be noted that in order to prove results pertaining to the spectrum of the eigenvalues of the Jacobian, we assumed that the migration matrix had a structure which in effect meant that no subgroup of islands was isolated from any other~\cite{constable_phys}, and so the results displayed in Eq.~(\ref{aone_bone}) are subject to this restriction. This rules out, for example, the case where the islands may be divided into two subgroups, with no migration between the subgroups. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{JSP_fig2a.pdf} \includegraphics[width=0.45\textwidth]{JSP_fig2b.pdf} \caption{Plots for the probability of fixation (left panel) and mean time to fixation (right panel) as a function of the projected initial conditions for the $\mathcal{D}=3$ island Moran model. The solid lines are calculated from the reduced model and the various symbols indicate the results obtained from simulations. For each colour/symbol a different $\bm{\alpha}$ vector is used: green squares, $\bm{\alpha}=(1,1,-1)$, red triangles, $\bm{\alpha}=(-1,-1,1)$, and blue circles, $\bm{\alpha}=(1,-2,-1)$.
All other parameters are kept constant: $s=0.005$, $N=200$, $\bm{\beta}=(3,2,1)$ and the migration matrix $m$ is fixed, though not given here. Simulation results are the average of $5000$ runs.} \label{fig:Moran_Dislands} \end{figure} As mentioned, the reduced model is the mesoscopic version of the Moran model (or, in fact, the Wright-Fisher model) with selection. The probability of either allele $1$ or allele $2$ fixing for a given initial state and also the mean time to fixation may be straightforwardly found~\cite{crow_kimura_into}, given as they are by the solution to second-order ordinary differential equations~\cite{constable_bio}. These are denoted by $Q(z_0)$ and $T(z_0)$ respectively. Here $z_0$ is the initial condition for the reduced system, which was referred to as $\bm{x}^{\rm CMIC}$ in the final paragraph of Sec.~\ref{sec:first_stage} on the first stage of the reduction process. We have changed notation to the new coordinate $z$ and also indicated that it is an initial condition through use of the subscript $0$, rather than the superscript CMIC. It is clear that both $Q$ and $T$ have to depend on precisely where the reduced system is initialised. In Fig.~\ref{fig:Moran_Dislands}, fixation probabilities and mean times to fixation calculated from the reduced model and from simulations of the underlying microscopic model, from which it was derived, are shown. The very good agreement between the two gives us confidence in the method, with more comparisons~\cite{constable_phys,constable_bio} also giving further support. The calculation can also be taken to next order in $s$, and an effective term of order $s^2$ added to the expression for $\bar{A}(z)$ given to first order in $s$ in Eq.~(\ref{A_and_B_bar_Dislands}). In this case $s$ is tentatively assumed to be of order $N^{-1/2}$ (although a direct identification is not made), so that terms of order higher than $N^{-2}$ have been ignored in the expansion of the master equation. Novel effects can be investigated through use of the $s^2$ term, and a greater range of parameters explored. We refer the reader to the original literature for a discussion of these~\cite{constable_phys,constable_bio}. \subsubsection{The $\mathcal{D}$-island model with mutation} \label{sec:D_island_Moran_with} So far in this article we have examined the effects of migration, selection and genetic drift, but not that other important process of population genetics: mutation. The inclusion of mutation has a drastic effect on the long term behaviour of the system, since it is now possible in principle for one allele to mutate to another at any time, and so concepts such as fixation probabilities and mean time to fixation do not apply. On the other hand, the pdf of the allele frequencies is now non-trivial and becomes stationary at long times. This is an interesting quantity which characterises the model and we use it to test the accuracy of the effective model found through the reduction procedure. The microscopic model is constructed in a similar way to that described in Sec.~\ref{sec:example}. There is more than one way that mutation can be included; here we follow the procedure described in Ref.~\cite{constable2015c}. In this case we allow birth/death/migration events to happen a fraction $\xi$ of the time, and mutation events a fraction $(1-\xi)$ of the time. The mutation rate from the first allele to the second on island $i$ is denoted by $\kappa^{(1)}_i$ and from the second to the first on island $i$ by $\kappa^{(2)}_i$.
It may be a little unusual to allow rates to vary from one island to another, but allowing for this does not markedly increase the complexity of the calculation and so we include it. If we denote the rates without mutation (such as are shown in Eq.~(\ref{transition_rate_simple}) for the case of two islands) as $T_{\rm S}(\bm{n}'|\bm{n})$, where the subscript S indicates that selection has been included, but not mutation, then the corresponding rates with mutation and selection included are \begin{eqnarray} T_{\rm MS}(n_i + 1|n_i) &=& \xi T_{\rm S}(n_i + 1|n_i) + \left( 1 - \xi \right) \kappa^{(1)}_i \frac{(\beta_i N - n_i)}{\beta_i N}, \nonumber \\ T_{\rm MS}(n_i - 1|n_i) &=& \xi T_{\rm S}(n_i - 1|n_i) + \left( 1 - \xi \right) \kappa^{(2)}_i \frac{n_i}{\beta_i N}. \label{rates_with_mutation} \end{eqnarray} Here the dependence of the transition rates, $T_{\rm S}(\bm{n}'|\bm{n})$, on elements of $\bm{n}$ that do not change in the transition has been suppressed. We may rescale time in the master equation by a factor of $\xi$ and absorb a factor of $(1-\xi)/\xi$ in the mutation rates. This effectively means that we can drop the factors $\xi$ and $(1-\xi)$ from Eq.~(\ref{rates_with_mutation}). Making the diffusion approximation, the mesoscopic model is given by Eq.~(\ref{SDE_simple}) with the noise correlator given by Eq.~(\ref{correlator_simple}). In this case \begin{eqnarray} \left. A_i(\bm{x}) \right|_{\rm MS} &=& \left. A_i(\bm{x}) \right|_{\rm S} + \frac{1}{\beta_i} \left[ \kappa^{(1)}_i - \left( \kappa^{(1)}_i + \kappa^{(2)}_i \right) x_i \right], \nonumber \\ \left. B_{i j}(\bm{x}) \right|_{\rm MS} &=& \left. B_{i j }(\bm{x}) \right|_{\rm S} + \mathcal{O}\left( \bm{\kappa}^{(1)}, \bm{\kappa}^{(2)} \right), \label{A_B_mut} \end{eqnarray} where $\bm{\kappa}^{(1)} = (\kappa^{(1)}_1,\ldots,\kappa^{(1)}_\mathcal{D})$ and $\bm{\kappa}^{(2)} = (\kappa^{(2)}_1,\ldots,\kappa^{(2)}_\mathcal{D})$. Since mutation has been modelled as a linear process, the $\kappa$ dependence in $A_i(\bm{x})$ in Eq.~(\ref{A_B_mut}) is exact. We will neglect the $\kappa$ dependence in $B_{ij}(\bm{x})$ for precisely the same reasons that we neglected the dependence on the selection coefficients: we are assuming that the elements of $\bm{\kappa}^{(1)}$ and $\bm{\kappa}^{(2)}$ are so small that they can be thought to be of the same order as $N^{-1}$. We therefore only keep terms of order $\bm{\kappa}^{(1)}/N$, $\bm{\kappa}^{(2)}/N$ and $1/N^2$ in the FPE. The reduction process itself is similar to that discussed previously since, as just mentioned, mutation rates are generally very small, and so they can be treated as perturbations of the neutral model in exactly the same way as was done for selection strengths. Therefore the reduced model is given by Eqs.~(\ref{reduced_SDE_simple}) and (\ref{reduced_correlator_simple}) with $\bar{B}(z)$ unchanged from the form given in Eq.~(\ref{A_and_B_bar_Dislands}). Perhaps not surprisingly $\bar{A}(z)$ is modified by the addition of an extra term depending on the mutation rates~\cite{constable2015c}: \begin{equation} \bar{A}_{\rm MS}(z) = \bar{A}_{\rm S}(z) + \hat{\kappa}^{(1)} - \left(\hat{\kappa}^{(1)} + \hat{\kappa}^{(2)}\right) z = a_1 s\,z\left( 1 - z \right) + \hat{\kappa}^{(1)} - \left(\hat{\kappa}^{(1)} + \hat{\kappa}^{(2)}\right) z, \label{A_bar_with_mut} \end{equation} where \begin{equation} \hat{\kappa}^{(1)} = \sum^{\mathcal{D}}_{i=1} \frac{u^{\{ 1 \} }_i\kappa^{(1)}_i}{\beta_i}, \quad \hat{\kappa}^{(2)} = \sum^{\mathcal{D}}_{i=1} \frac{u^{\{ 1 \} }_i\kappa^{(2)}_i}{\beta_i},
\label{kappa_hats} \end{equation} and where $a_1$ is defined in Eq.~(\ref{aone_bone}). Here, as in Sec.~\ref{sec:D_island_Moran_without}, we retain terms only to order $s$. The stationary pdf of the effective theory can be straightforwardly found from the FPE corresponding to the SDE (\ref{reduced_SDE_simple}). Using the explicit forms for $\bar{A}(z)$ and $\bar{B}(z)$ one finds that~\cite{constable2015c} \begin{equation} p_{\rm st}(z) = \mathcal{N} z^{c_1} \left( 1 - z \right)^{c_2}\exp{ (c_3 z)}, \label{P_stat_mut} \end{equation} where $\mathcal{N}$ is a normalisation constant and \begin{equation} c_1 = \frac{N}{b_1}\hat{\kappa}^{(1)} - 1, \quad c_2 = \frac{N}{b_1}\hat{\kappa}^{(2)} - 1, \quad c_3 = a_1 s \frac{N}{b_1}. \label{c_1_c_2_c_3} \end{equation} A comparison between simulations of the original model and calculations from the reduced model in Fig.~\ref{fig:Moran_mutation} shows that the reduced model captures well the features of the full model. \begin{figure}[t] \includegraphics[width=0.70\textwidth]{JSP_fig3.pdf} \caption{The stationary pdf for the $\mathcal{D}$-island Moran model on the slow subspace for a range of systems with various parameters which are omitted here for brevity but which can be found in Ref.~\cite{constable2015c}. The solid black line is obtained from an analysis of the reduced model, the orange histogram from simulations of the original microscopic model, and the dashed line from a well-mixed model with the same total system size and average mutation rates (weighted by island size).} \label{fig:Moran_mutation} \end{figure} There are several ways in which the work discussed here and in the previous section could be taken forward. There is scope for biologists to tailor the technique to their own interests, perhaps including additional processes with their own set of parameters and dropping others. Mathematicians may be able to provide conditions under which the approximations made would be expected to be valid, perhaps giving upper bounds for the negative real parts of the non-zero eigenvalues. Another extension is to perform a similar analysis, but starting not from the Moran model, but from one which is closer to those used in ecological modelling. We now go on to discuss this. \subsection{The stochastic Lotka-Volterra competition model} \label{sec:SLVC} The theory of evolution has had a very convoluted history, and a reflection of this is the significant contribution that theoretical studies have made to the subject, at least compared to other areas of the biological sciences. This had repercussions on the nature of the mathematical models used in these studies: they tended to be unrelated to broader questions relating to the organism, and more focussed on the combinatorics of allele selection. This was a good strategy when trying to test the ideas of Darwinian evolution, but it tended to isolate the theoretical development of the subject from developments elsewhere. An example is the Wright-Fisher model~\cite{wright_1931,fisher_1930}, one of the first models of genetic drift, as well as the precursor of the Moran model~\cite{moran_1957}. In this model there is no competition between individuals --- even though competition is obviously a central feature of Darwinian evolution. The population therefore grows very quickly, but is kept under control by sampling from the very large pool of individuals that come into existence, in order to form the next generation.
Only a fixed number, $N$, of individuals are retained to form each new generation (Wright-Fisher) or births and deaths are coupled so the population at any given time is always equal to $N$ (Moran). This leads to an artificiality in the way that the models are set up. In this section we will use as a starting point models in which the population is regulated by competition, rather than by a fictitious constraint which fixes the population size. This is a closer reflection of reality, and indeed the formulation of aspects of the models seems less contrived. It is a constant theme in the biological modelling literature that models of evolution should have a more ecological flavour, and this approach conforms to these views. An apparent disadvantage is that the number of variables is increased. To see this, recall that the Moran model of Sec.~\ref{sec:D_island_Moran} had only $\mathcal{D}$ variables, since the number of individuals carrying the second allele on island $i$ could be expressed in terms of the number of individuals carrying the first allele on the same island \textit{i.e.}~$\beta_i N-n_i$. If the population of an island is not fixed, the number of individuals carrying the first and second allele can independently vary, and so the number of variables will double. Below we will describe how application of the reduction methods to models with competition shows that they reduce to Moran-like models in the medium-to-long term~\cite{constable_2015,constable_2017}. The competition will be chosen to be of the simple Lotka-Volterra type~\cite{roughgarden_1979}, but in principle more complex competitive processes could be utilised. Since these are stochastic models, we will refer to them as stochastic Lotka-Volterra competition (SLVC) models. \subsubsection{The $\mathcal{D}$-island SLVC model} \label{sec:D_island_SLVC} As indicated above this system has $2\mathcal{D}$ variables which in the microscopic model are $n^{(\alpha)}_i$, where $i=1,\ldots,\mathcal{D}$ labels the islands and $\alpha=1,2$ labels the allele. The comment made above concerning SLVC models being less contrived can be illustrated here in the way that migration is modelled. The procedure for doing this in the Moran model involves considerable care in making sure that there are no biases built into the way the transition rates are constructed (see Eq.~(\ref{transition_rate_simple}) in the simple two-island case) while keeping the population of both islands fixed. For example, one has to ensure that a death on island $j$ occurs before a migration from island $i$ to this island is allowed. In the SLVC model, one simply specifies birth, death and competition rates, respectively $b^{(\alpha)}_i, d^{(\alpha)}_i$ and $c^{(\alpha \beta)}_i$, which are all independent of each other. For the neutral version of the model, the birth, death and competition rates are the same for all alleles (and are denoted by a superscript $0$); selection is introduced through a small perturbation in $\epsilon$, where $\epsilon$ is the selection strength: \begin{equation} b^{(\alpha)}_i = b^{(0)}_i \left( 1 + \epsilon \hat{b}^{(\alpha)}_{i} \right), \ \ \ d^{(\alpha)}_i = d^{(0)}_i \left( 1 + \epsilon \hat{d}^{(\alpha)}_{i} \right), \ \ \ c^{(\alpha \beta)}_{i} = c^{(0)}_i \left( 1 + \epsilon \hat{c}^{(\alpha \beta)}_i \right).
\label{parameters_non_neut} \end{equation} The diffusion approximation is applied in the same way as in the Moran model, although now the large parameter is not $N$, which is no longer present in the definition of the model, but $V$, which is some measure of the size of the system, such as the volume. Although there are $2\mathcal{D}$ variables initially, $2\mathcal{D}-1$ of these are fast, and so the reduced model has again only one variable. It may be possible in some parameter regimes to see a clear-cut decay first to the $\mathcal{D}$ variables of a Moran-type model with fixed populations on each island, and then a slower decay to an effective one-island model, which parallels the discussion in Sec.~\ref{sec:D_island_Moran}, but in many cases these time-scales will be similar or will overlap. The time-scales are related to the inverse of the eigenvalues of the Jacobian and, in general, these are complicated functions of all the model parameters. While it is true that the reduced SLVC $\mathcal{D}$-island model does reduce to a system which has a mesoscopic description given by Eqs.~(\ref{reduced_SDE_simple}) and (\ref{reduced_correlator_simple}) --- although with $N$ replaced by $V$ --- and with $\bar{B}(z)=b_1 z(1-z)$ to leading order in $\epsilon$, the form of $\bar{A}(z)$ is a little different. It is found to be given by~\cite{parrarojas_2017} \begin{equation} \bar{A}(z) = \epsilon\,z\left( 1 - z \right)\left( a_1 - a_2 z \right) + \mathcal{O}\left( \epsilon^2 \right). \label{A_bar_Dislands} \end{equation} In this case $a_1, a_2$ and $b_1$ are functions of the rates which appear in Eq.~(\ref{parameters_non_neut}), of the $\beta_i$, and of the left-eigenvector of the Jacobian corresponding to the zero eigenvalue. This change in the form of $\bar{A}(z)$ may be slight, but it could give significantly different fixation probabilities and mean times to fixation. One reason for this is that it is now possible to have a fixed point in the deterministic dynamics. These dynamics are given by Eq.~(\ref{reduced_SDE_simple}) without the noise term, and so fixed points are solutions of $\bar{A}(z)=0$. When $\bar{A}(z)$ has the structure shown in Eq.~(\ref{A_bar_Dislands}), an internal fixed point (\textit{i.e.}~one not at the boundaries $z=0$ or $z=1$) is possible if $a_2 \neq 0$: $z^* = a_1/a_2$, where the asterisk denotes a fixed point. It will only exist if $0 < a_1/a_2 < 1$, but if it is stable, it may prolong the time taken for the system to fix (reach the points $z=0$ or $z=1$). Similarly, if it is unstable, it may lead to a shorter mean time to fixation. To test the approximation we again compare the fixation probabilities and mean fixation time derived from the reduced model with those found from simulations of the original model. Although the form of $\bar{A}(z)$ is slightly more complicated than before, it is nevertheless straightforward to work with the ordinary differential equations for $Q(z_0)$ and $T(z_0)$. The results shown in Fig.~\ref{fig:SLVC_Dislands} indicate that the reduction method is again working well in this case. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{JSP_fig4a.pdf} \includegraphics[width=0.45\textwidth]{JSP_fig4b.pdf} \caption{Fixation probability of allele $1$ (left panel) and mean unconditional time to fixation (right panel) as a function of the projected initial condition $z_0$ for an SLVC model with $\mathcal{D}=4$ islands in the neutral case (blue) and in the case with selection (red, $\epsilon=0.05$).
Symbols: mean obtained from 3000 stochastic simulations of the microscopic system; lines: theoretical predictions for the fixation probability and mean time to fixation obtained from the reduced model. Here the parameter $V$ is equal to $150$.} \label{fig:SLVC_Dislands} \end{figure} \subsubsection{The $M$-allele SLVC model} \label{sec:M_allele_SLVC} So far we have only discussed individuals which are haploid and carry one of two possible alleles. Here we discuss the generalisation to $M$ alleles. This is interesting for a number of reasons, not least because we are able to recognise patterns that are not apparent in the two-allele case. As in Sec.~\ref{sec:D_island_SLVC}, the variables in the microscopic model are $n^{(\alpha)}$, where now $\alpha=1,\ldots,M$ and there is no island label, because we will assume that the population is well-mixed. In the corresponding haploid multiallelic Moran model there are only $M-1$ variables $n^{(a)}$, $a=1,\ldots,M-1$, since the fixed population constraint $\sum^M_{\alpha=1} n^{(\alpha)} = N$ means that $n^{(M)}$ can be expressed in terms of the other $M-1$ variables: $n^{(M)}= N - \sum^{M-1}_{a=1} n^{(a)}$. Here the Greek indices $\alpha, \beta,\ldots$ will always run from $1$ to $M$ and the Roman indices $a, b,\ldots$ will always run from $1$ to $M-1$. Previously we used the reduction method to obtain an effective model which was amenable to analysis. Here we will have a different perspective: we will ask if we can reduce the multiallelic SLVC model with $M$ variables to the multiallelic Moran model with $M-1$ variables. If this is so, then the more natural SLVC model will give the same results as the Moran model at medium and long times. From a mathematical point of view, the difference between the reduction described in Sec.~\ref{sec:D_island_Moran} and in Sec.~\ref{sec:D_island_SLVC} is that previously there were $\mathcal{D}-1$ or $2\mathcal{D}-1$ fast modes, and a single slow mode, whereas here there is one fast mode and $M-1$ slow modes, thus giving an effective model which is $(M-1)$-dimensional~\cite{constable_2017}. The reduction procedure has been carried out in Ref.~\cite{constable_2017}. An equation analogous to Eq.~(\ref{reduced_SDE_simple}) is found, but now in $(M-1)$ variables: \begin{equation} \frac{\mathrm{d}z^{(a)}}{\mathrm{d}\tau} = A^{(a)}(\underline{z}) + \frac{1}{\sqrt{V}}\,\zeta^{(a)}(\tau), \ \ a=1,\ldots,M-1, \label{SDE_reduced_Mallele} \end{equation} where $\zeta^{(a)}(\tau)$ is a Gaussian noise with zero mean and with a correlator \begin{equation} \left\langle \zeta^{(a)}(\tau) \zeta^{(b)}(\tau') \right\rangle = \frac{2 b^{(0)} c^{(0)}}{\left( b^{(0)} - d^{(0)} \right)^2}\,\left[ z^{(a)} \delta_{a b} - z^{(a)} z^{(b)} \right]\,\delta \left( \tau - \tau' \right). \label{reduced_correlator_Mallele} \end{equation} Here, the notation of underlining a vector is used for $(M-1)$-dimensional vectors and bold for $M$-dimensional vectors. The correlation function is given to zeroth order in $\epsilon$, which is the neutral model result. It is exactly the form found in the $M$-allele Moran model, up to some rescalings which are absorbed into the new time $\tau$~\cite{constable_2017}. In the neutral case $\underline{A}=\underline{0}$, and the SLVC model reduces exactly to the Moran model at medium to long times, after rescaling the time. If selection is included, $\underline{A}$ is no longer equal to zero.
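Before considering the form that $\underline{A}$ takes under selection, it is perhaps helpful to see how compact the reduced neutral model is in practice. The short Python sketch below integrates Eq.~(\ref{SDE_reduced_Mallele}) with the correlator (\ref{reduced_correlator_Mallele}) using the Euler--Maruyama method; all parameter values are illustrative choices made for the purposes of the example, and are not taken from Ref.~\cite{constable_2017}.
\begin{verbatim}
import numpy as np

# Minimal sketch: Euler-Maruyama integration of the reduced M-allele SDE
# in the neutral case (A = 0).  Parameter values below are illustrative.
b0, d0, c0 = 1.0, 0.5, 0.1     # neutral birth, death, competition rates
V, M = 200.0, 3                # system size; M alleles -> M-1 variables
sigma2 = 2.0 * b0 * c0 / (b0 - d0)**2    # prefactor of the correlator

rng = np.random.default_rng(1)
dt, n_steps = 0.01, 5000
z = np.full(M - 1, 1.0 / M)    # start from equal allele frequencies

for _ in range(n_steps):
    # noise covariance sigma2 * (z_a delta_ab - z_a z_b), times dt
    C = sigma2 * (np.diag(z) - np.outer(z, z)) * dt
    z = z + rng.multivariate_normal(np.zeros(M - 1), C) / np.sqrt(V)
    z = np.clip(z, 0.0, 1.0)
    if z.sum() > 1.0:          # crude projection back onto the simplex
        z = z / z.sum()

print("final frequencies:", z, "and", 1.0 - z.sum(), "for allele M")
\end{verbatim}
Running many such realisations gives neutral fixation statistics against which the effects of selection, discussed next, can be compared.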
In order to aid the comparison with the Moran model, it is useful to introduce two quantities: \begin{equation} \hat{C}^{(a b)} \equiv \hat{c}^{(a b)} - \hat{c}^{(a M)} - \hat{c}^{(M b)} + \hat{c}^{(M M)}, \label{C_hat_defn} \end{equation} and \begin{equation} \Phi^{(\alpha)} \equiv \frac{b^{(0)} \hat{b}^{(\alpha)} - d^{(0)} \hat{d}^{(\alpha)}}{b^{(0)} - d^{(0)}}. \label{Phi_defn} \end{equation} Then $A^{(a)}(\underline{z})$ takes the form \begin{eqnarray} A^{(a)}(\underline{z}) &=& \epsilon z^{(a)} \left\{ \left[ \left( \Phi^{(a)} - \hat{c}^{(a M)} \right) - \left( \Phi^{(M)} - \hat{c}^{(M M)} \right) \right] \right. \nonumber \\ &-& \sum^{M-1}_{b=1} \left[ \left( \Phi^{(b)} - \hat{c}^{(b M)} \right) - \left( \Phi^{(M)} - \hat{c}^{(M M)} \right) \right] z^{(b)} \nonumber \\ &-& \left. \sum^{M-1}_{b = 1} \hat{C}^{(a b)} z^{(b)} + \sum^{M-1}_{b,c = 1} \hat{C}^{(b c)} z^{(b)} z^{(c)} \right\} + \mathcal{O}(\epsilon^2). \label{A_tilde_Mallele} \end{eqnarray} For now, we note that, just as in Eq.~(\ref{A_bar_Dislands}), $\underline{A}$ is cubic in the components of $\underline{z}$. This is an important point in mappings between reduced SLVC and Moran models, as we will now see. We begin our discussion with the $M$-allele Moran model in the case where the selection is frequency independent, that is, when the weight functions $W^{(\alpha)}$, analogous to those introduced in Eq.~(\ref{transition_rate_simple}), are independent of $\underline{n}$. Specifically we assume that $W^{(\alpha)}=1 + \rho^{(\alpha)}s$, where the $\rho^{(\alpha)}$ are constants. This is the case that we have examined so far in this paper. Then we find, after making the diffusion approximation, a model given by Eqs.~(\ref{SDE_reduced_Mallele}), (\ref{reduced_correlator_Mallele}) and (\ref{A_tilde_Mallele}), but only if $\hat{C}^{(a b)}=0$ for all $a$ and $b$~\cite{constable_2017}. If this condition holds, the reduced SLVC model and the Moran model with frequency independent selection match, provided that we make the identification $\rho^{(\alpha)} = \Phi^{(\alpha)} - \hat{c}^{(\alpha M)}$. We also need to match up the selection strength used in the SLVC model ($\epsilon$) to the one used in the Moran model ($s$). The relation between them is: $s=\epsilon (b^{(0)}-d^{(0)})/b^{(0)}$. Although some care has to be taken with making the identification between the two models~\cite{constable_2017}, one can note that the function $\underline{A}$ in the Moran model with frequency independent selection is quadratic, and Eq.~(\ref{A_tilde_Mallele}) is cubic in general, so the condition $\hat{C}^{(a b)}=0$ gives the possibility of a direct mapping between the two models. We have assumed that selection is frequency independent so far, since this is the usual supposition made by many population geneticists and historically was the standard assumption used. However, this may simply be a theoretical prejudice, since if one wishes to allow the fitness weightings $W^{(\alpha)}$ to depend on the composition of the population, one has to devise a model for this dependence, and so frequency independence is the simplest and most convenient choice. In addition, there are hints from experimental investigations that even if there are attempts to suppress factors that might lead to frequency dependent selection, it still seems to emerge~\cite{maddamsetti_2015}. Therefore it seems important to devise a natural way of including frequency dependence in modelling selection. Fortunately, there does exist a methodology to do this.
It is based on ideas from game theory, where each allele ``plays'' a game with every other allele in the population~\cite{nowak_2006}. In the way we choose to implement this~\cite{constable_2017}, the fitness weightings are taken to have the form \begin{equation} W^{(\alpha)}(\underline{n}) = 1 + s \left[ \sum_{b=1}^{M-1} g^{(\alpha b)} \frac{n^{(b)}}{N} + g^{(\alpha M)}\left( 1 - \sum_{b=1}^{M-1}\frac{n^{(b)}}{N} \right) \right] \,, \label{W_freq_dep} \end{equation} where $g^{(\alpha \beta)}$ is the payoff to allele $\alpha$ from interacting with type $\beta$. We can now make the diffusion approximation, just as in the frequency independent case, but now using $W^{(\alpha)}(\underline{n})$ given by Eq.~(\ref{W_freq_dep}), rather than the $\underline{n}$-independent form $W^{(\alpha)}=1 + \rho^{(\alpha)}s$. Clearly, the structure of $W^{(\alpha)}(\underline{n})$ in Eq.~(\ref{W_freq_dep}) can potentially lead to more complicated $\underline{z}$ dependence in $\underline{A}$, and indeed $A^{(a)}(\underline{z})$ is now found to be cubic and given by~\cite{constable_2017} \begin{equation} A^{(a)}(\underline{z}) = sz^{(a)} \left[ \mathcal{G}^{(aM)} + \sum^{M-1}_{b=1} G^{(ab)}z^{(b)} - \sum^{M-1}_{b=1} \mathcal{G}^{(bM)} z^{(b)} - \sum^{M-1}_{b,c=1} G^{(bc)} z^{(b)} z^{(c)} \right], \label{Replicator_A} \end{equation} which is of exactly the same form as that given by Eq.~(\ref{A_tilde_Mallele}). To get the precise correspondence between the two models one must take $G^{(a b)} = - \hat{C}^{(a b)}$ for all $a$ and $b$, where $G^{(a b)}$ has the same structure as is displayed for $\hat{C}^{(a b)}$ in Eq.~(\ref{C_hat_defn}), namely \begin{equation} \mathcal{G}^{(a \beta)} \equiv g^{(a \beta)} - g^{(M \beta)}; \quad G^{(a b)} \equiv \mathcal{G}^{(a b)} - \mathcal{G}^{(a M)}. \label{relative_fitnesses} \end{equation} In addition, the identification $g^{(\alpha M)} = \Phi^{(\alpha)} - \hat{c}^{(\alpha M)}$ has to be made~\cite{constable_2017}. The fact that it is only $\mathcal{G}^{(a M)}$ and $G^{(a b)}$, and not $g^{(\alpha \beta)}$ alone, that appear in the expression for $\underline{A}$ is interesting, since the quantity $\mathcal{G}^{(a \beta)}$ can be interpreted as a relative fitness, namely the payoff to allele $a$ against an opponent $\beta$ relative to the payoff to allele $M$ against the same opponent. Similarly, $G^{(a b)}$ is a relative relative fitness. Therefore, as one would expect, it is not the actual payoffs which are important, but their values relative to the other payoffs. In Sec.~\ref{sec:D_island_SLVC} we discussed how the existence of an interior fixed point, that is, one not on the boundaries, could lead to different fixation probabilities and mean times to fixation. To investigate the possible existence of such fixed points in the frequency dependent $M$-allele case, we set $A^{(a)}(\underline{z})$, given by Eq.~(\ref{Replicator_A}), to zero. Now summing this expression over $a$ gives \begin{equation} 0 = \left[ 1 - \sum^{M-1}_{a=1} z^{(a)} \right] \left\{ \sum^{M-1}_{b=1} \mathcal{G}^{(bM)} z^{(b)} + \sum^{M-1}_{b,c=1} G^{(bc)} z^{(b)} z^{(c)} \right\}. \label{fixed_point_proof} \end{equation} If the fixed point is not to be on the boundary, then $\sum^{M-1}_{a=1} z^{(a)} \neq 1$ and so the second bracket in Eq.~(\ref{fixed_point_proof}) must vanish. Substituting this condition into the expression (\ref{Replicator_A}), which is itself taken to be zero, gives the fixed point equation to be $\mathcal{G}^{(aM)} + \sum^{M-1}_{b=1} G^{(ab)}z^{(b)} = 0$, since $z^{(a)} \neq 0$ for internal fixed points.
Since this non-boundary fixed point equation is linear, there can generically be at most one fixed point. The position of this fixed point can therefore easily be found, and a determination made as to whether it lies in the SS and is therefore admissible. A similar analysis for the frequency independent case yields the condition $\rho^{(a)}=\rho^{(M)}$ for all $a$. However, if all the $\rho^{(\alpha)}$ are equal there is no selection, so in the case of frequency independent selection there are no interior fixed points. The finding that the more realistic SLVC model reduces to the Moran model with frequency dependent selection is another reason to use frequency dependence in the modelling of selection in the Moran model. Although, as we have already remarked, the resulting Moran model is still $(M-1)$-dimensional, and so difficult to analyse, some progress can still be made in some cases~\cite{constable_2017}. In this way the SLVC model may, in effect, be analysed. An example of such a situation is shown in Fig.~\ref{fig:SLVC_Malleles}. \begin{figure}[t] \setlength{\abovecaptionskip}{-2pt plus 3pt minus 2pt} \begin{center} \includegraphics[height=0.25\textwidth]{JSP_fig5a.pdf} \includegraphics[height=0.25\textwidth]{JSP_fig5b.pdf} \includegraphics[height=0.025\textwidth]{JSP_fig5lega.pdf} \includegraphics[height=0.025\textwidth]{JSP_fig5legb.pdf} \end{center} \caption{Plots of the unconditional mean time until the fixation of a single allele/species and the probability of the fixation of an allele/species for the Moran and SLVC models with frequency-independent selection in the case $M=6$ alleles/species. In these plots all alleles in the Moran model are under one of two selection pressures, while in the SLVC model all species have differing parameters that combine to give two selection pressures, making the system mappable to the Moran model presented. Analytical results are only available for the probability of fixation. Simulation results are the mean of $10^{3}$ stochastic simulations of the Moran and SLVC models. Parameters used are given in Ref.~\cite{constable_2017}, where the parameterisation of $\bm{x}^{(0)}$ in terms of $\kappa$ is also described. }\label{fig:SLVC_Malleles} \end{figure} \subsection{Diploid Moran model with sexual reproduction: The Hardy-Weinberg assumption from first principles}\label{sec:HW} In the Moran model discussed in Sec.~\ref{sec:example} and Sec.~\ref{sec:D_island_Moran}, individuals are haploid (they carry only a single allele) and reproduction occurs asexually (individuals simply duplicate themselves). While this is a relevant case for certain simple organisms, it is less so for many complex organisms which are diploid and reproduce sexually, such as animals. Suppose we want to model such a system, with diploid individuals and two possible alleles at a single locus, reproducing sexually with any other individual in the population (here, there are no sexes). A mechanistic approach might be to attempt to model the three possible genotypes in the population; here we will denote the homozygotes by A$^{(1)}$A$^{(1)}$ and A$^{(2)}$A$^{(2)}$, while the heterozygotes will be denoted by A$^{(1)}$A$^{(2)}$. If we fix the population size to be $N$, this leaves two free variables. As we have previously discussed, this makes obtaining analytic quantities in the model far more difficult than in the asexual haploid case, where the system was described by a single variable.
\hspace{0.2cm} \noindent \textbf{Classic Approach.} Classic studies in population genetics circumvented this complexity by building single-variable models that implicitly exploited a separation of timescales. It was noticed early on in theoretical population genetics that if there were no fitness differences between the genotypes (i.e. the system was neutral) the frequency of genotypes in such a diploid system would quickly relax to Hardy-Weinberg frequencies~\cite{hardy_1908,weinberg_1908}, where the number of each genotype could be described in terms of a single variable, the frequency of one of the alleles. In the terminology of our present paper, the system would quickly relax to a CM. Denoting the allele frequencies by $x^{(1)}=n^{\rm (A^{(1)})}/(2N)$ and $x^{(2)}=n^{\rm (A^{(2)}) }/(2N)=(1-x^{(1)})$, and the genotype frequencies by $y^{(1)}=n^{\rm (A^{(1)}A^{(1)}) }/N$, $y^{(2)}=n^{\rm (A^{(1)}A^{(2)}) }/N$, $y^{(3)}=n^{\rm (A^{(2)}A^{(2)})}/N=(1-y^{(1)}-y^{(2)})$, this is given by~\cite{ewens_2004} \begin{eqnarray} y^{(1)} = (x^{(1)})^{2} \,, \qquad y^{(2)} = 2 x^{(1)} (1-x^{(1)}) \,, \qquad y^{(3)} = (1-x^{(1)})^{2} \,. \label{eq_HW_freq} \end{eqnarray} Rather than model the dynamics of the diploid population, the dynamics of the \emph{alleles} were modelled with the assumption that they existed at Hardy-Weinberg frequencies. This was assumed to also hold when selection was sufficiently weak that the deviations from these ``equilibrium'' frequencies were not too great~\cite{moran_1957} (note the conceptual similarities with our approach). Genotypes A$^{(1)}$A$^{(1)}$ are assumed to be under selective pressure $(1 + s)$, genotypes A$^{(1)}$A$^{(2)}$ under $(1 + s h)$ and genotypes A$^{(2)}$A$^{(2)}$ under $1$~\cite{ewens_2004}. Note then that choosing $h>1$ corresponds to overdominance, while $h < 0$ corresponds to underdominance~\cite{ewens_2004}. The details are given in \aref{sec:app_HW}; however, upon applying the diffusion approximation one obtains an FPE (\ref{FPE_simple}) or SDE (\ref{SDE_simple}), with~\cite{ewens_2004} \begin{eqnarray} A( x^{(1)} ) &=& s x^{(1)} (1-x^{(1)} ) \left[ x^{(1)} + h ( 1 - 2 x^{(1)} ) \right] \,, \nonumber \\ B( x^{(1)} ) &=& 2 x^{(1)} ( 1 - x^{(1)} ) \,.\label{eq_HW_classic} \end{eqnarray} \noindent \textbf{Mechanistic Approach.} Whereas \eref{eq_HW_classic} was developed using an \textit{a priori} assumption that the system lay at Hardy-Weinberg equilibrium, we may now use the methods detailed in \sref{sec:example} to formally obtain an approximation for the dynamics on the CM. We note that a similar approach was taken recently in~\cite{hossjer_2016} and that this separation of timescales has been long noted and exploited~\cite{watterson_1964,ethier_1980,ethier_1988}. We begin by modelling the genotypes themselves; genotypes encounter each other at a rate proportional to their frequency in the population, weighted by a joint probability $W^{( \alpha \beta )}$ that genotype $\alpha$ successfully mates with genotype $\beta$. In this way we account for selection. In a similar fashion to \sref{sec:setup}, we assume selection is small and formalise this by setting \begin{eqnarray} W^{( \alpha \beta )} = 1 + s \alpha^{( \alpha \beta )} \,. \label{W_alpha_beta} \end{eqnarray} Expanding the master equation for small $s$ as in \sref{sec:setup}, we obtain a two-dimensional description of the dynamics. We now seek to understand how this two-dimensional description is related to the classic description.
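Before doing so, it is worth noting that single-variable descriptions of the form (\ref{eq_HW_classic}) are straightforward to analyse numerically. As a hedged illustration (the parameter values are our own choices for the example), the fixation probability of allele A$^{(1)}$ follows from the standard diffusion result $Q(x_0)=\int_0^{x_0}\psi(x)\,{\rm d}x \big/ \int_0^1 \psi(x)\,{\rm d}x$ with $\psi(x)=\exp\left(-2N\int_0^x A(y)/B(y)\,{\rm d}y\right)$, which for Eq.~(\ref{eq_HW_classic}) can be computed as follows.
\begin{verbatim}
import numpy as np

# Sketch: fixation probability of allele A^(1) for the classic diploid
# diffusion, Eq. (eq_HW_classic).  Since A/B = (s/2)[x + h(1 - 2x)], the
# weight function is psi(x) = exp(-N s (h x + (1 - 2h) x**2 / 2)).
N, s, h = 100, 0.02, 0.5       # illustrative values, not from the text

x = np.linspace(0.0, 1.0, 2001)
psi = np.exp(-N * s * (h * x + 0.5 * (1.0 - 2.0 * h) * x**2))
Q = np.cumsum(psi) / psi.sum() # crude quadrature on the uniform grid

print("Q(0.5) =", Q[np.searchsorted(x, 0.5)])  # -> 0.5 when s = 0
\end{verbatim}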
\begin{figure}[t] \begin{center} \includegraphics[width=0.33\textwidth]{JSP_fig6a.pdf} \includegraphics[width=0.57\textwidth]{JSP_fig6b.pdf} \end{center} \caption{Left panel: phase diagram of the dynamics for the mechanistic diploid Moran model described in \sref{sec:HW}. Non-neutral dynamics are depicted by grey arrows. The blue dashed line shows the form of the neutral CM described in \eref{eq_HW_freq}. The orange arrow shows a single deterministic trajectory in the neutral system. Since the population size is fixed, such that $y^{(1)}+y^{(2)}\leq 1$, the area above the dashed black line is outside the dynamical region. Right panel: fixation probabilities as a function of $z_0$, the initial condition on the CM, for the diploid Moran model. In solid colours the fitness of genotypes is parameterised using Eq.~(\ref{W_alpha_beta}). The dashed line gives the dynamics for the parameterisation $\alpha^{(11)}=\alpha^{(13)}=\alpha^{(23)}=1$, $\alpha^{(12)}=\alpha^{(22)}=-1$ and $\alpha^{(33)}=2$. In all cases, $N=100$. } \label{fig:HW} \end{figure} Having set up the model, we can proceed to the next stage of the analysis: identifying the fast and slow variables, as described in \sref{sec:first_stage}. We can obtain a CM by setting selection equal to zero ($s=0$). In this case the CM is given by \eref{eq_HW_freq}. We can now linearise the system about this CM to obtain the Jacobian, and use this matrix to obtain $u^{ \{ 1 \} }$ and $v^{ \{ 1 \} }$, the left and right eigenvectors of the Jacobian corresponding to the zero eigenvalue. Finally, using \eref{eq_general_ABar} and \eref{eq_general_BBar}, we can obtain an effective description for the system dynamics in terms of $z=y^{(1)}$ on the CM. However, in order to make a comparison between this effective theory and the classic theory, we must express the effective theory in terms of $x^{(1)}$, the frequency of A$^{(1)}$ alleles (see \eref{eq_HW_classic}). Since $y^{(1)}=(x^{(1)})^{2}$ on the CM, we must therefore make the transformation $x^{(1)}=\sqrt{z}$. The full calculation is given in \aref{sec:app_HW}. We note that while there are some subtle mathematical points that need to be attended to, these should not distract from the key points of the method. Our final result is that we can approximate the dynamics of the mechanistic model by an SDE of type \eref{SDE_simple} with terms \begin{eqnarray} & & \tilde{A}( x^{(1)} ) = s x^{(1)} ( 1 - x^{(1)} ) \left[ \alpha^{(23)} - \alpha^{(33)} + \left( \alpha^{(13)} + 2 \alpha^{(22)} - 6 \alpha^{(23)} \right. \right. \nonumber \\ &+& \left. 3 \alpha^{(33)} \right) x^{(1)} + 3 \left( \alpha^{(12)} - \alpha^{(13)} - 2 \alpha^{(22)} + 3 \alpha^{(23)} - \alpha^{(33)} \right) (x^{(1)})^{2} \nonumber \\ & & \left. + \left( \alpha^{(11)} - 4 \alpha^{(12)} + 2 \alpha^{(13)} + 4 \alpha^{(22)} - 4 \alpha^{(23)} + \alpha^{(33)} \right) (x^{(1)})^{3} \right] \,, \label{eq_HW_effective_A} \\ & & \tilde{B}(z) = z ( 1 - z ) \,. \label{eq_HW_effective_B} \end{eqnarray} While the form of \eref{eq_HW_effective_A} appears a little complicated, we can in fact show that this becomes identical to \eref{eq_HW_classic} under the assumption that each term $\alpha^{( \alpha \beta )}$ can be decomposed into the sum of contributions from each genotype in the pairing, \begin{eqnarray} \alpha^{( \alpha \beta )} = \alpha^{(\alpha)} + \alpha^{(\beta)} \,, \end{eqnarray} and that the $\alpha^{(\alpha)}$ take the precise values \begin{eqnarray} \alpha^{(1)} = 1 \,, \qquad \alpha^{(2)} = h \,, \qquad \alpha^{(3)} = 0 \,.
\end{eqnarray} Since these values give a precise mapping from Eqs.~(\ref{eq_HW_effective_A}) and (\ref{eq_HW_effective_B}) to \eref{eq_HW_classic}, we can now ask what consequences this has for $W^{( \alpha \beta )}$. This matrix now takes the form \begin{eqnarray} W = \left( \begin{array}{ccc} 1 + 2 s & 1 + s + s h & 1 + s \\ 1 + s + h s & 1 + 2 s h & 1 + s h \\ 1 + s & 1 + h s & 1 \\ \end{array} \right) \,. \label{eq_HW_param} \end{eqnarray} This is in fact \emph{exactly} the form of $W$ that we would expect at leading order in $s$ if \begin{eqnarray} W^{( \alpha \beta )} = W^{( \alpha )}W^{( \beta )} \,, \qquad W^{(1)} = 1 + s\,, \quad W^{(2)} = 1 + s h \,, \quad W^{(3)} = 1 \,, \end{eqnarray} i.e. if the fitness of each genotype is the same as that postulated in \eref{eq_HW_classic} and the success of genotype pairings is a multiplicative function of the individual genotype fitnesses. A similar mapping exists if we assume an additive interaction of the genotypes' fitnesses on the success of their pairing, or an averaging effect (see \aref{sec:app_HW}). It is a testament to the great physical intuition of the founders of population genetics that this structure captures their assumptions. While we have essentially recovered here a known result, it is worth noting that of course the approach we have taken is not without merit. In particular, it allows us to explore a far richer fitness landscape than the original model (see \fref{fig:HW}). \subsection{Stochastic epidemics on networks} \label{sec:epidemics} The networks which we have discussed so far in this paper have described the way in which islands interact with each other through migration. The islands have had populations of individuals which are already themselves large (in the sense that they are of order $N$, the large parameter in the system). These are metapopulation models, since the whole system is a population of populations~\cite{hanski_1999}. However, in many network models the nodes are composed of a single individual, rather than a population. It is not immediately obvious how to apply the diffusion approximation in this case, and hence the reduction method of Sec.~\ref{sec:example}. However, there are situations in which the programme we have outlined so far can be pursued, and we now describe such a case. The example is taken from stochastic models of infectious diseases, rather than population genetics, but it clearly illustrates the points we wish to make. We will follow Ref.~\cite{parrarojas_2016}, where more details can be found. The model is an SIR model, which means that the individuals are either susceptible to the disease (in which case they are in class $S$), infected with the disease (in which case they are in class $I$), or recovered from having the disease (in which case they are in class $R$). We assume that the disease is such that individuals recover, and do not die as a consequence of having the disease. Our interest will be in the properties of an epidemic which might occur, and we assume that this takes place over a much shorter timeframe than the demographic processes of birth and death. So the only processes which occur in the model are (i) infection, and (ii) recovery. The network structure enters because we assume that each individual has a fixed number of contacts from which the infection is acquired, even though the contacts themselves will change.
Therefore we may imagine all individuals located at the nodes of a network, where the degree of that node is equal to the number of contacts characteristic of that individual. The network is considered in the so-called dynamic limit, in which the network structure is assumed to evolve much more quickly than the epidemic, so that the only role of the network is to encode the number of connections a given individual has to other individuals. The variables at the microscale are therefore $S_k, I_k$ and $R_k$, where $k$ labels the degree of the node on which individuals of the type $S, I$ and $R$ are located. As is often done in SIR models, we assume that the population is closed, so that at time $t$, $S_k(t)+I_k(t)+R_k(t)=N_k$, where $N_k$ is independent of time. This implies that one set of individuals --- for example those which have recovered --- can be removed: $R_k(t)=N_k-S_k(t)-I_k(t)$. The large parameter in the model is the total number of individuals: $N=\sum^K_{k=1} N_k$, where $K$ is the maximum degree of the network. The specific network of interest in Ref.~\cite{parrarojas_2016} had a truncated Zipf degree distribution, in which the probability of an individual having degree $k$ is given by $d_k \propto k^\alpha$, with $-3 < \alpha < -2$ and $k=1,\ldots,K$, but the method we describe is also applicable to other distributions. We have stressed throughout this article that one needs to start with the microscopic description, at the level of single individuals, to avoid ambiguities. However, here, to avoid too much formalism, we will move directly to the mesoscopic model which can be derived from the microscopic description~\cite{parrarojas_2016}: \begin{align} \frac{ds_k}{d\tau} &= -\beta ks_k\sum^K_{l=1} li_l + \frac{\eta_1^{(k)}(\tau)}{\sqrt{N}},\label{eqn:sk}\\ \frac{di_k}{d\tau} &= \beta ks_k\sum^K_{l=1} li_l-\gamma i_k + \frac{\eta_2^{(k)}(\tau)}{\sqrt{N}},\label{eqn:ik} \end{align} where \begin{align} \left\langle \eta_\mu^{(k)}(\tau) \eta_\nu^{(l)}(\tau ')\right\rangle &= B^{(k)}_{\mu \nu}(\bm{x}) \delta_{k l} \delta(\tau - \tau '), \label{eqn:noise_corr_SIR_1} \end{align} and where $k=1,\ldots,K$ and $\bm{x}$ is a vector of all the $2K$ variables. Here $\beta$ and $\gamma$ are the infection and recovery rates respectively, and $N$ is assumed to be sufficiently large that $s_k = S_k/N$ and $i_k = I_k/N$ can be assumed to be continuous. We will not give the precise form of $B^{(k)}_{\mu \nu}(\bm{x})$, $\mu,\nu=1,2$, here, nor the form of the rescaled time $\tau$. The problem we face is clear from Eqs.~(\ref{eqn:sk}) and (\ref{eqn:ik}): this is a stochastic system with $2K$ variables, where in many cases of interest $K$ will not be small. This is then another case where we wish to reduce the number of variables, and where the reduction method described above may be useful. In fact, one can effect a partial reduction before applying the technique of Sec.~\ref{sec:example} by making the ansatz $s_k(\tau)=d_k\theta(\tau)^k$, which replaces the $K$ equations (\ref{eqn:sk}) by the single equation \begin{equation} \frac{d\theta}{d\tau} = - \beta \theta \sum^{K}_{l=1} l i_l + \frac{1}{\sqrt{N}}\xi(\tau), \label{theta_eqn} \end{equation} where $\xi(\tau)$ is a Gaussian noise with zero mean which is related to $\eta_1^{(k)}(\tau)$. After this initial manoeuvre we attempt to further reduce this $(K+1)$-dimensional system by searching for fast and slow modes.
Examination of the Jacobian~\cite{parrarojas_2016} reveals that there is one $K$-fold degenerate eigenvalue which may be significantly greater in magnitude than the remaining eigenvalue. If we denote the ratio of the magnitude of the latter to that of the former by $\epsilon$, then as long as $\epsilon$ is small there will be effectively $K$ fast modes and $1$ slow mode, so we would expect to be able to reduce the motion to a two-dimensional SS where one of the variables is $\theta$ and the other is the slow one just mentioned. It is found that this additional variable is $\lambda(\tau) = \sum^{K}_{l=1} l i_l(\tau)$~\cite{parrarojas_2016}, so that Eq.~(\ref{theta_eqn}) simply becomes $d\theta/d\tau = - \beta \theta \lambda + \xi(\tau)/\sqrt{N}$. The equation for $\lambda(\tau)$ is more complicated, involving as it does three different noise terms, and we do not give it here. Although the system has been reduced to a manageable two-dimensional model, it is always worthwhile to perform numerical investigations to see if it is possible to make further reductions, since it is still the case that two-dimensional stochastic systems are difficult to study analytically. In this case, it is found that typically the noise term in the $\theta$ equation is much smaller in magnitude than the noise terms in the $\lambda$ equation. Therefore we can try to omit the noise term in Eq.~(\ref{theta_eqn}) and combine the three noises in the $\lambda$ equation together to obtain the further simplification: \begin{eqnarray} \frac{d\theta}{d\tau} &=& - \beta \theta \lambda, \nonumber \\ \frac{d\lambda}{d\tau} &=& \lambda\left( \beta \phi(\theta) - \gamma \right) + \frac{1}{\sqrt{N}} \bar{\sigma} \zeta(\tau), \label{ultimate_reduction} \end{eqnarray} where $\phi(\theta)=\sum^K_{k=1} k^2 d_k \theta^k$, $\zeta(\tau)$ is a Gaussian noise with zero mean and $\langle \zeta(\tau) \zeta(\tau') \rangle = \delta(\tau - \tau')$, and $\bar{\sigma}$ is a function of $\theta$ and $\lambda$ which is not given here~\cite{parrarojas_2016}. The model defined by Eq.~(\ref{ultimate_reduction}) is a semi-deterministic mesoscopic model, in the sense that one of the dynamical equations is deterministic, having no noise term. In fact, it can be identified as the Cox-Ingersoll-Ross (CIR) model~\cite{CIR}, for which some analytic results are known. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{JSP_fig7.pdf} \end{center} \caption{The distribution of epidemic sizes for $N=20\,000$, $K=1000$ and $\alpha=-2.5$. The histogram gives the result for the full model, the solid blue line is the result for the reduced mesoscopic model and the red dashed line the result for the semi-deterministic mesoscopic model given by Eq.~(\ref{ultimate_reduction}).} \label{fig:r_infinity} \end{figure} As a test of the various reductions that have been made on this model, we will investigate how well they capture the distribution of epidemic sizes as given by $r_\infty$, the number of recovered individuals at the end of the epidemic~\cite{parrarojas_2016}. The results are shown for a particular case in Fig.~\ref{fig:r_infinity}, and indicate that even the very much simplified model given by Eq.~(\ref{ultimate_reduction}) gives a good representation of the result.
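To give a flavour of how compact the final description is, the Python sketch below integrates Eq.~(\ref{ultimate_reduction}) directly. Since the amplitude $\bar{\sigma}$ is not reproduced here, a hypothetical placeholder is used in its stead, so the sketch is purely illustrative; the full specification can be found in Ref.~\cite{parrarojas_2016}.
\begin{verbatim}
import numpy as np

# Sketch: Euler-Maruyama integration of the semi-deterministic reduced
# model, Eq. (ultimate_reduction).  sigma_bar is NOT given in the text;
# the placeholder below is an assumption made only to run the sketch.
K, alpha, beta, gamma, N = 100, -2.5, 0.3, 1.0, 20000
k = np.arange(1, K + 1)
d = k**float(alpha)
d = d / d.sum()                            # truncated Zipf degree dist.

def phi(theta):                            # phi = sum_k k^2 d_k theta^k
    return np.sum(k**2 * d * theta**k)

rng = np.random.default_rng(0)
dt, theta, lam = 1e-3, 1.0, 1e-3           # small initial infection

for _ in range(200000):
    if lam <= 0.0:                         # epidemic has died out
        break
    sigma_bar = np.sqrt(lam)               # placeholder amplitude
    theta += -beta * theta * lam * dt      # deterministic theta equation
    lam += (lam * (beta * phi(theta) - gamma) * dt
            + sigma_bar * np.sqrt(dt / N) * rng.standard_normal())
    lam = max(lam, 0.0)

r_inf = 1.0 - np.sum(d * theta**k)         # final epidemic size
print("epidemic size r_infinity ~", r_inf)
\end{verbatim}
Histogramming \texttt{r\_inf} over many realisations produces a distribution of the kind shown in Fig.~\ref{fig:r_infinity}.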
\section{ Revealing noise-induced selection via fast variable elimination } \label{sec:noise_induced} In \sref{sec:example}, we laid the groundwork for conducting fast-variable elimination in stochastic systems, while in \sref{sec:applications} the utility of this approach was demonstrated. We have essentially shown that, given a separation of timescales exists, the dynamics of a high-dimensional system can be approximated by the projection of these dynamics onto an SS. Thus far, this is true for both the deterministic and stochastic components of the dynamics (see Eqs.~(\ref{eq_general_ABar}) and (\ref{eq_general_BBar})). However, there is a further consideration that we have until now omitted: noise-induced dynamics in the slow subspace. While accounting for such dynamics is more technically involved than the method outlined in \sref{sec:example}, the resultant dynamical behaviour can be very interesting, and so it is worth describing the key aspects here. Noise-induced dynamics is the phenomenon whereby the projection of the deterministic dynamics onto the slow-subspace (see \eref{eq_general_ABar}) does not entirely describe the time-evolution of the average dynamics when demographic noise is accounted for. In the context of a system that features a CM, this means that rather than there being no average dynamics along the CM, a bias in fact emerges in some direction. Although this is a second order effect (it scales inversely with the population size and disappears in the infinite population size limit), it can in fact completely govern the dynamics along the CM, where the first order terms that control the collapse of the system to the CM disappear. If the system under consideration is one featuring competing organisms, this bias can be interpreted as noise-induced selection (or drift-induced selection in a population-genetics context). The origin of these bias terms can be interpreted in various ways. First, and perhaps most intuitively, noise-induced dynamics can be graphically understood as resulting from a bias in how fluctuations taking the system off the CM (or SS) return to the CM. Under certain scenarios, such as when the CM is strongly curved or the trajectories to the CM are divergent, fluctuations off the CM do not return, on average, to the point on the CM from which they originated (see \fref{fig:PG_projection}). This introduces a bias that stochastically `ratchets' the dynamics in a preferred direction. Second, these noise-induced dynamics can be understood as a mathematical consequence of making a non-linear change of variables into the slow-subspace of the stochastic system. As we have mentioned, the SDEs that we work with should be interpreted in the It\={o} sense. In this context, different rules of stochastic calculus apply, and accounting for the additional terms that arise in the analysis gives rise to additional terms in the reduced description on the CM. For readers not familiar with the analysis of SDEs, It\={o} calculus can in some sense be loosely understood as arising in the same way as Jensen's inequality: the average of a function of a random variable is not necessarily the same as the function evaluated at the average of that random variable. Finally, in a more biological context, noise-induced dynamics in systems of competing organisms can be understood as resulting from a selective pressure to reduce variance in reproductive output (see Gillespie's Criterion~\cite{gillespie_1974,hansen_2017}).
However, in contrast to Gillespie's original study, variance in the reproductive output of organisms can arise as a result of the dynamics of the organisms, rather than being assumed \textit{a priori}. Of course, the existence of these noise-induced dynamics is not in itself dependent on a system exhibiting a separation of timescales. We have already mentioned Gillespie's Criterion, which demonstrates that selection for a genotype with a lower variance in reproductive output takes place in a model with only a single variable (measuring the frequency of one of two genotypes). Further, in a similar single-variable population genetics context that assumes a well-mixed population of finite size, it has been shown that noise-induced selection favours genotypes that increase the population size~\cite{houch_2012,houch_2014}. However, while fast-variable elimination does not generate noise-induced dynamics itself, it does often give us a means of quantifying and analytically understanding the effect. As in the earlier sections, rather than transforming into the fast-slow basis of the problem, removing the fast variables and then transforming back into the original variables, we take a shortcut by using a non-linear projection of the dynamics onto the CM. The reason for this is two-fold. Firstly, it is more straightforward practically; it is in the original, biologically relevant variables that we can better understand the behaviour of the system. Secondly, as has long been recognised~\cite{coullet_1983,roberts_2015}, the non-linear transform that yields the slow-fast basis is not always obvious. In order to obtain the non-linear projection, second order perturbation techniques can be used to deduce the effective dynamics. The full calculation is too long to reproduce here; however, a clear and coherent explanation is given in Ref.~\cite{parsons_rogers_2015}. There it is shown that if a system has $M$ variables and a one-dimensional slow-subspace, then the equation for the reduced/effective dynamics can be expressed as \begin{eqnarray} \frac{ \mathrm{d} z }{ \mathrm{d} \tau} = \bar{ A }( z ) + \bar{ A }^{ \rm S }( z ) + \sqrt{ \frac{1}{N} } \zeta(\tau) \,. \label{eq_SDE_reduced_induced} \end{eqnarray} The term $\bar{ A }( z )$ and the white noise term $\zeta(\tau)$ retain the forms given in Eqs.~(\ref{eq_general_ABar})-(\ref{eq_general_BBar}) (that is, they are respectively components of the deterministic dynamics and noise projected onto the SS). Most interesting is the term $ \bar{ A }^{ \rm S }( z )$ as it is this that controls the noise-induced dynamics. It is of order $1/N$ (induced as it is by demographic noise) and takes the explicit form \begin{eqnarray} \bar{ A }^{ \rm S }( z )= \frac{1}{N} \sum_{i,j=1}^M & & \left[ \frac{ \mathrm{d} u^{\{1\}}_i }{ \mathrm{d} z } u^{\{1\}}_j + u^{\{1\}}_i \frac{ \mathrm{d} u^{\{1\}}_j }{ \mathrm{d} z } - 2 u^{\{1\}}_i u^{\{1\}}_j \sum_{k=1}^{M} \left( v^{\{1\}}_k \frac{ \mathrm{d} u^{\{1\}}_k }{ \mathrm{d} z } \right) \right. \nonumber \\ & & \left. - u^{\{1\}}_i u^{\{1\}}_j \sum_{k=1}^{M} \left( \frac{ \mathrm{d} v^{\{1\}}_k }{ \mathrm{d} z } u^{\{1\}}_k \right) \right] \left. B_{ij}(\bm{x}) \right|_{\mathrm{CM}} \,, \label{eq_general_ABar_S} \end{eqnarray} where we recall that $\bm{v}^{\{1\} }$ and $\bm{u}^{\{ 1\} }$ are the right and left eigenvectors of the neutral Jacobian on the CM corresponding to the zero eigenvalue (see \sref{sec:example}).
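In practice, given the null eigenvectors as functions of $z$ and the noise matrix evaluated on the CM, Eq.~(\ref{eq_general_ABar_S}) can be evaluated numerically without further analytical work. A minimal sketch follows; the callables $u$, $v$, $B$ and the CM parameterisation \texttt{xcm} are model-specific inputs to be supplied by the user.
\begin{verbatim}
import numpy as np

# Sketch: numerical evaluation of Eq. (eq_general_ABar_S).  u(z), v(z)
# return the left/right null eigenvectors along the CM, xcm(z) gives the
# point on the CM, and B(x) the noise matrix; derivatives are taken by
# central differences.
def A_bar_S(z, u, v, B, xcm, N, h=1e-6):
    uz, vz = u(z), v(z)
    du = (u(z + h) - u(z - h)) / (2.0 * h)   # d u^{1} / dz
    dv = (v(z + h) - v(z - h)) / (2.0 * h)   # d v^{1} / dz
    Bcm = B(xcm(z))                          # B_ij evaluated on the CM
    term = (np.outer(du, uz) + np.outer(uz, du)
            - 2.0 * np.outer(uz, uz) * np.dot(vz, du)
            - np.outer(uz, uz) * np.dot(dv, uz))
    return np.sum(term * Bcm) / N
\end{verbatim}
Such a check provides a useful complement to the analytical projections used in the examples below.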
The terms which feature $\mathrm{d} u^{\{ 1\} }_i / \mathrm{d} z $ capture how quickly the fast directions change as a function of $z$, and can thus be understood as the component of the noise-induced dynamics that arises from the non-linearity of trajectories to the CM. Meanwhile the term $\mathrm{d} v^{\{ 1\} }_k / \mathrm{d} z $ is the component of the noise-induced dynamics that arises from curvature of the CM itself~\cite{parsons_rogers_2015}. Note that if both $v^{\{ 1\} }_k$ and $u^{\{ 1\} }_k$ are independent of $z$ (that is, the CM is linear and the directions of fast trajectories to the SS do not vary along the SS to first order in the selection strength), then there can be no noise-induced dynamics at this order. This is the case in the models discussed in Sections \ref{sec:example} and \ref{sec:D_island_Moran}. Finally, there are also cases where, despite the CM being curved or the trajectories to the CM being divergent, there is still no noise-induced selection as the terms in \eref{eq_general_ABar_S} all cancel. This is the case in both Sections \ref{sec:SLVC} and \ref{sec:HW} for the parameterisations given in those sections. In what follows we will illustrate how the effective description provided by \eref{eq_SDE_reduced_induced} can be used to analytically tackle some particular problems of interest. In \sref{sec:LV_PG} we will address a two-species Lotka-Volterra model. Unlike in \sref{sec:SLVC}, however, we will allow the species to have distinct birth, death and competition rates at leading order. Calculation of the effective system will reveal that both slow-living species and species that increase the global carrying capacity are stochastically selected for. In \sref{sec:heterogamety} we will describe a population genetic model of transitions between modes of sex determination. Although this population genetic model is much more complicated than the haploid Moran model, the presence of a CM in the neutral limit allows us to analytically characterise a noise-induced bias favouring the substitution of dominant neutral mutations. \subsection{Two-species Lotka-Volterra type models}\label{sec:LV_PG} We begin with a two-species Lotka-Volterra model of a similar type to that described in \sref{sec:M_allele_SLVC} (see \eref{parameters_non_neut}), except we now make the restriction \begin{eqnarray} b^{(1)} - d^{(1)} = \tilde{b}( 1 + \epsilon ) \,, \quad c^{(11)}&=&c^{(21)}\equiv c^{(1)} \,, \nonumber \\ b^{(2)} - d^{(2)} = \tilde{b} \,, \quad \qquad \quad c^{(12)}&=&c^{(22)}\equiv c^{(2)} \,. \end{eqnarray} By taking the limit of large system size, we can apply the diffusion approximation, as described in \sref{sec:setup}. The system is now approximated by an SDE of the same form as \eref{SDE_simple}. However, since the system is in two variables, it is difficult to analyse. We next follow the approach taken in \sref{sec:first_stage} and identify a CM. A CM exists under the above parameterisation if $\epsilon $ is equal to zero. It then takes the form \begin{eqnarray} x^{(2)} = \frac{1}{c^{(2)}} \left( \tilde{b} - c^{(1)} x^{(1)} \right) \, . \label{eq_PG_CM} \end{eqnarray} Notice that in isolation, species $1$ exists at $x^{(1)}=\tilde{b} / c^{(1)}$ and species $2$ at $x^{(2)}=\tilde{b} / c^{(2)}$. Further, if we assume that $c^{(2)}>c^{(1)}$, then increasing the frequency of species $1$ in the population increases the joint carrying capacity of the species, as can be seen in \fref{fig:PG_projection}.
If additionally $\epsilon>0$, such that species $1$ reproduces at a lower rate than species $2$, then this can be interpreted as a model of public good production. Species $1$ pays a cost $\epsilon$ to its reproductive rate to increase the carrying capacity of both species. Species $2$ pays no cost, but still enjoys a reduced death rate in the presence of species $1$ as a result of the lower competition parameter of species $1$ (interpreted here as resulting from the production of a public good). However, in isolation, species $2$ exists at lower numbers than species $1$, interpreted here as resulting from the non-production of the public good. We can now briefly describe the deterministic dynamics. In the neutral limit, where $\epsilon=0$, the population grows to a point on the CM described by \eref{eq_PG_CM}. We now move away from the neutral limit, such that $1\gg \epsilon>0$; the system grows to a point on the SS (which is equal to the CM at leading order in $\epsilon$) after which the system moves along the SS until species $1$ is driven to extinction. We now use fast-variable elimination to characterise the dynamics when demographic noise is accounted for. As discussed in \sref{sec:first_stage}, our first task is to identify the left and right eigenvectors of the Jacobian of the system evaluated on the CM, \eref{eq_PG_CM}. Note that while the right-eigenvectors are not so important in \sref{sec:second_stage} (where there was no noise-induced selection), they become very important here, featuring as they do in \eref{eq_general_ABar_S}. We find \begin{align}\label{eigenvectors_pg_simple_left} \bm{u}^{\{ 1\} } = \frac{ 1 }{ \tilde{b} }\,\left( \begin{array}{c} \tilde{b} - c^{(1)} z \\ -c^{(2)} z \end{array} \right), \ \ \bm{u}^{\{ 2\} } = -\frac{ 1 }{ \tilde{b} }\,\left( \begin{array}{c} \tilde{b} - c^{(1)} z \\ \tilde{b} c^{(2)}/c^{(1)} + c^{(2)} z \end{array} \right). \end{align} \begin{align}\label{eigenvectors_pg_simple_right} \bm{v}^{\{ 1\} } =\,\left( \begin{array}{c} 1 \\ - c^{(1)}/c^{(2)} \end{array} \right), \ \ \bm{v}^{\{ 2\} } = -\frac{ c^{(1)} }{ \tilde{b} - c^{(1)} z }\,\left( \begin{array}{c} z \\ (\tilde{b} - c^{(1)} z)/c^{(2)} \end{array} \right). \end{align} We can use these expressions for the left and right eigenvectors to obtain an approximate description of the dynamics in the SS using \eref{eq_SDE_reduced_induced} with expressions for $\bar{A}( z )$, $\bar{A}^{ \rm S }( z ) $ and $ \bar{B}(z)$ directly from Eqs.~(\ref{eq_general_ABar}), (\ref{eq_general_ABar_S}) and (\ref{eq_general_BBar}), where $z$ is the value of $x^{(1)}$ on the SS: \begin{eqnarray} \bar{A}( z ) &=& - \epsilon z \left( \tilde{b} - c^{(1)} z\right) \,,\\ \bar{A}^{ \rm S }( z ) &=& \frac{1}{N} \frac{2}{ \tilde{b}^{2} } z \left( \tilde{b} - c^{(1)} z\right) \left( c^{(2)} (\tilde{b} + d^{(2)} ) - c^{(1)} (\tilde{b} + d^{(1)} ) \right) \,, \\ \bar{B}(z) &=& \frac{2}{ \tilde{b}^{2} } z \left( \tilde{b} - c^{(1)} z\right) \left[ z \left( c^{(2)} (\tilde{b} + d^{(2)} ) - c^{(1)} (\tilde{b} + d^{(1)} ) \right) + \tilde{b} ( d^{(1)} + \tilde{b})\right] \,. \nonumber \\ \end{eqnarray} \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth]{JSP_fig8a.pdf} \includegraphics[width=0.45\textwidth]{JSP_fig8b.pdf} \includegraphics[width=0.9\textwidth]{JSP_fig8c.pdf} \caption{Phase plots for the deterministic dynamics of the neutral Lotka-Volterra model described in \sref{sec:LV_PG} under three different parameter scenarios. In each, the CM is plotted as a blue dashed line.
The surface of the orange ellipses is indicative of the distribution of fluctuations arising from the centre of the ellipse. Arrows from the centre of the ellipse (going up, down, left and right) show possible fluctuations away from the CM that occur with equal probability. Arrows travelling from the end of these fluctuations back to the CM show the trajectories along which these fluctuations are quenched. Bias in the direction to which fluctuations are mapped is illustrated by green and purple lines. \textbf{Top left:} System with two identical species ($b^{(1)}=b^{(2)}$, $d^{(1)}=d^{(2)}$, $c^{(1)}=c^{(2)}$). Fluctuations down and right are projected back to the CM in the direction of $x^{(1)}$, but this is perfectly countered by fluctuations up and left which are projected back to the CM in the direction of $x^{(2)}$. \textbf{Top right:} System with two species, the second reproducing and dying at a faster rate ($b^{(1)}-d^{(1)}=b^{(2)}-d^{(2)}$, $b^{(2)}>b^{(1)}$, $d^{(2)}>d^{(1)}$, $c^{(1)}=c^{(2)}$). As a result of this asymmetry, species $2$ experiences greater demographic fluctuations (the ellipse is larger in direction $x^{(2)}$). Consequently, fluctuations off the CM are projected back to the CM with a bias that favours $x^{(1)}$. \textbf{Bottom:} System with two species, the first increasing the carrying capacity of the system ($b^{(1)}=b^{(2)}$, $d^{(1)}=d^{(2)}$, $c^{(2)}>c^{(1)}$). The asymmetry in the angle of the CM now induces a bias in the projection of fluctuations back to the CM that favours $x^{(1)}$. } \label{fig:PG_projection} \end{center} \end{figure} An analysis of these terms reveals a number of points. Firstly, and as we might expect, in the limit $c^{(1)}=c^{(2)}\equiv c^{(0)}$, $b^{(1)}=b^{(2)}\equiv b^{(0)}$ and $d^{(1)}=d^{(2)}\equiv d^{(0)}$ the noise-induced term $\bar{A}^{ \rm S }( z )$ vanishes and we are left with a system that takes the same form as that in \sref{sec:SLVC}. Secondly, we examine the limit in which $\epsilon=0$. Now $\bar{A}( z )=0$ as there are no dynamics along the CM in the infinite population size limit. However, if the population has a finite size, the noise-induced term $ \bar{A}^{ \rm S }( z ) $ is not zero. Thirdly, continuing with the system when $\epsilon=0$, and now additionally asking that $c^{(1)}=c^{(2)}\equiv c^{(0)}$ (i.e. that the carrying capacity is unchanged by the composition of the population), we find that $ \bar{A}^{ \rm S }( z ) $ is positive for all $z$ along the CM if $b^{(1)} < b^{(2)}$. The species with the lower birth rate (and death rate, since $\tilde{b}$ is fixed) is therefore selected for. This insight, made in Refs.~\cite{parsons_quince_2007_1,doering_2012,chotibut_2017}, is a result of the fact that phenotypes which are reproducing and dying more quickly are subject to greater population fluctuations (they have a larger rate of population turnover). Consequently, it is easier for the longer lived phenotype (lower birth/death rates) to invade and fixate. This phenomenon can be viewed as analogous to Gillespie's Criterion. Turning instead to the limit $\epsilon=0$, $b^{(1)}=b^{(2)}\equiv b^{(0)}$ and $d^{(1)}=d^{(2)}\equiv d^{(0)}$ reveals a similar, but biologically distinct, insight. In this case, though the rate of population turnover of the two species is identical, one species exists at greater numbers in isolation than the other (this species has a lower value of $c^{(\alpha)}$).
Again while $\bar{A}( z )=0$, the noise-induced term $ \bar{A}^{ \rm S }( z ) $ is non-zero and drives the dynamics in the finite system. We now find that $\bar{A}^{ \rm S }( z )>0$ for all $z$ on the CM if $c^{(2)}>c^{(1)}$. That is, the species that increases the \emph{joint} carrying capacity of the two species is selected for. One interpretation of this noise-induced term is that it is easier for a novel mutant species to invade a small population than a large one; thus species that increase the carrying capacity of the population receive a benefit by being more stochastically robust to invasion attempts~\cite{constable2016}. Note that in the above context, if $\epsilon>0$ but $N$ is finite, it is possible that $\bar{A}^{ \rm S }( z )> |\bar{A}( z )|$ along the length of the SS. Biologically, this provides a mechanism that allows for the evolution of public-good-producing behaviour despite the evolution of such behaviours being forbidden in the deterministic limit. This has been noted in more typical population genetic models~\cite{houch_2012,houch_2014} as well as in models of the form described here. If the population is not well mixed but exists in space, it has been shown that for weak migration the noise-induced selection for public good production is amplified both in metapopulation~\cite{constable2016,chotibut_2017} and continuous space models~\cite{hallatschek_2011}. In \cite{constable2016} it was also shown that this behaviour (whereby a species that increases the joint population carrying capacity is stochastically selected for) is generic and robust to the inclusion of a suite of environmental variables in the model that can modify population size. \subsection{Models of transitions between male and female heterogamety }\label{sec:heterogamety} \begin{figure}[th] \begin{center} \includegraphics[width=0.35\textwidth]{JSP_fig9a.pdf} \includegraphics[width=0.45\textwidth]{JSP_fig9b.pdf} \end{center} \caption{Left: Figure illustrating the neutral dynamics of the model of transitions from male to female heterogamety. Only the frequency of female genotypes is shown. The blue dashed line is the CM, defined by \eref{eq_CM_sex_det}, with male genotypes assumed here to be fixed at their value on the CM. Grey arrows indicate deterministic trajectories starting at various points and eventually reaching the CM. Right: Form of noise-induced selection on the SS (see \eref{eq_heterogamety_drift}).} \label{fig:heterogamety_traj} \end{figure} In this model, a diploid population is considered in which sex is genetically determined by a dominant mutation at a single locus. In mammals, sex is determined by the dominant Y chromosome, so that XY individuals are male while XX individuals are female. Such a system is termed male heterogamety. In birds the situation is somewhat reversed. Their sex determination system features a dominant feminising W chromosome, such that ZW individuals are female, while ZZ individuals are male. This scenario is termed female heterogamety. Intriguingly, while these systems are relatively static in both mammals and birds, transitions between male and female heterogamety can occur in reptiles and amphibians. In this section, we will discuss how fast-variable elimination can be exploited to understand the impact of genetic drift on these transitions. We consider a system of male heterogamety comprised of XX females and XY males. A mutation arises on the X chromosome, changing it to an X$^{\prime}$ and rendering it dominant to the Y, such that X$^{\prime}$Y individuals are female.
Genotypes X$^{\prime}$X can also be produced, which are female, as can YY genotypes, which are male. This yields a system of five genotypes. Along with the absorbing state in which the system is entirely XX and XY (male heterogamety), there is also an absorbing state in which the system is entirely X$^{\prime}$Y and YY (female heterogamety). Note that this state is analogous to that found in birds up to some relabelling of the chromosomes. We wish to understand the transitions between these states. We can construct a population genetic model of this process in a very similar way to the diploid Moran model (see \aref{sec:app_HW}), except with matings restricted to being between males and females. We assume a fixed population size $N$. Males and females of each genotype encounter each other in proportion to their frequencies in the population. They produce an offspring that inherits its chromosomes (alternatively, alleles at a single locus) in a Mendelian fashion from its parents. Simultaneously, in order to keep the population size fixed at $N$, another individual is picked to die. In order to account for selection against certain genotypes, the probability that each genotype is selected to die can be weighted. The detailed model set-up is given in \cite{veller2017}; here we will discuss only the main results. Let the frequencies of the XX, X$^{\prime}$X and X$^{\prime}$Y (female) and XY and YY (male) genotypes be respectively given by $\bm{x} = (x^{(1)},x^{(2)},x^{(3)},x^{(4)},x^{(5)})$. Due to the condition of fixed population size, we can then express $x^{(5)}$ as $x^{(5)}=1-x^{(1)}-x^{(2)}-x^{(3)}-x^{(4)}$. If all genotypes are equally fit, then this four-dimensional system exhibits a one-dimensional CM that connects the absorbing states, characterised in \cite{bull1977}. It is given by \begin{eqnarray} & & x^{(1)} = \frac{(1-z)^2}{2(1+z)^2},\quad x^{(2)} = \frac{z(1-z)}{(1+z)^2},\quad x^{(3)} = \frac{z}{1+z},\nonumber \\ & & x^{(4)} = \frac{1}{2}(1-z),\quad x^{(5)} = \frac{z}{2} \,, \label{eq_CM_sex_det} \end{eqnarray} and illustrated in \fref{fig:heterogamety_traj}. By definition there are no deterministic dynamics along this line, and so one might expect that transitions between the absorbing states occur with equal probability. However, employing the fast-variable elimination described in the introduction to this section, we find that in finite populations there is noise-induced (equivalently, drift-induced) selection along the line: \begin{eqnarray} \bar{A}( z ) &=& 0 \,,\nonumber\\ \bar{A}^{ \rm S }( z ) &=& \frac{ 1 }{ 4 N } \frac{ z (1 - z) (1 + z)^3 (1 + z^2) \{1 + (2 - z) z [4 + z (6 + z)]\} }{ [1 + z (2 + 3 z)]^3 } \,, \label{eq_heterogamety_drift} \\ \bar{B}(z) &=& \frac{1}{4} \frac{ z (1 + z )^{3} \{ 1 + z \left[ 1 + z (6 - z (6 + (3 - z) z)) \right] \} }{ \left[1 + z (2 + 3 z)\right]^2} \,,\nonumber \end{eqnarray} where $z$ is the value of $ 1 - 2 x^{(4)}$ on the CM (see \eref{eq_CM_sex_det}). We find that $ \bar{A}^{ \rm S }( z ) $ is positive along the entire length of the CM (see \fref{fig:heterogamety_traj}). Since $z=0$ corresponds to the absorbing state in which the population is entirely comprised of XX and XY genotypes, and $z=1$ corresponds to the absorbing state in which the population is entirely comprised of X$^{\prime}$Y and YY genotypes, we can conclude that the dominant mutation X$^{\prime}$ is selected for along the entire CM.
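As a quick consistency check on \eref{eq_CM_sex_det} (a direct substitution, spelled out here for concreteness), the endpoints of the CM do reproduce the two absorbing states: at $z=0$ we obtain $\bm{x}=(\tfrac{1}{2},0,0,\tfrac{1}{2},0)$, a population consisting solely of XX females and XY males, while at $z=1$ we obtain $\bm{x}=(0,0,\tfrac{1}{2},0,\tfrac{1}{2})$, solely X$^{\prime}$Y females and YY males. Note also that the numerator of $\bar{A}^{ \rm S }( z )$ in \eref{eq_heterogamety_drift} contains the factors $z$ and $(1-z)$, so the noise-induced selection vanishes at the two absorbing states, as it must; expanding for small $z$ gives $\bar{A}^{ \rm S }( z ) \approx z/(4N)$ to leading order.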
Therefore, if a dominant X$^{\prime}$ mutation occurs in a resident XX|XY population, it is more likely to invade and fixate than a recessive X mutation in a resident X$^{\prime}$Y|YY population. The above picture, whereby dominant sex-determining mutations are more likely to invade and fixate, is recapitulated for successive invasions. If the resident population is X$^{\prime}$Y|YY, a dominant mutation of the Y chromosome, Y$^{\prime}$, can occur, yielding X$^{\prime}$Y$^{\prime}$ males. Again, five genotypes can emerge following random mating, including X$^{\prime}$X$^{\prime}$ (female) and Y$^{\prime}$Y (male). This dominant mutation is once again more likely to invade, creating an emergent directionality to evolution with substitution rates of neutral dominant mutations being five times higher than those of recessive mutations~\cite{veller2017}. Intriguingly, the behaviour described here recapitulates the empirical observation that sex chromosomes typically evolve to include a cascade of inhibitory mutations~\cite{wilkins1995}, with more recent mutations at the top of the cascade. The ``bottom up'' hypothesis~\cite{vandoorn2014b} therefore pictures the sequential invasion of sex-determining genes, each one of which represses the action of the previous mutation. While it can be argued that the drift-induced phenomenon described here is not solely responsible for this pattern, and other deterministic explanations have been suggested~\cite{vandoorn2010}, it is nevertheless interesting to see the emergence of the pattern in a model with minimal assumptions. In \cite{veller2017}, the above analysis is extended to include biologically relevant selection. In addition, more complicated models of transitions between sex chromosomes are also explored via the same timescale-separation arguments. \section{Discussion} \label{sec:discussion} Our aim in this paper has been to describe and review an approach to the systematic modelling and analysis of a class of models in population genetics. Readers with a background in theoretical physics, and statistical physics in particular, will have found many of the ideas and techniques familiar. These include: the care taken in distinguishing between the form of the models at the microscale, the mesoscale and the macroscale; the idea that the model should be formulated and unambiguously defined at the microscale and that the corresponding mesoscopic and macroscopic models should be derived from the former by some form of systematic approximation procedure; the nature of these approximation procedures (the diffusion limit and fast mode elimination); the formalism of non-equilibrium statistical physics including master equations, FPEs and SDEs. While the techniques and philosophy that we use come from statistical physics, the motivation and the models are taken from population biology, and especially from population genetics. Due to the difficulty of performing analytical calculations with models that have many variables and parameters, researchers in these fields frequently create models for rather specific situations and to answer one very limited and definite question. These models may be restricted to one particular situation, but they at least have the possibility of being analysed. On the other hand, the danger with this way of doing things is that the subject becomes characterised by a series of models with conflicting assumptions and little overlap.
We have taken a different path: we have not held back from setting up general models, even though they typically contain very many variables and parameters, but have instead tried to systematically approximate these models to obtain ones which are more tractable. In this way, the variables that we focus on are those which are likely to be the most important in the medium to long term (the slow modes), and the parameters which are chosen as being important at late times are combinations of the initial parameters of the original model which would have been impossible to guess \textit{a priori}. There are also some modellers who would argue that to study more complex models, and especially those which feature individuals with more than one or two attributes and which have many parameters, one should turn to agent-based models. Here there is no attempt at mathematical analysis and a computer simulation is set up in which the agents (\textit{i.e.}~the individuals) are simply allowed to interact with each other according to a set of rules. These rules will have many parameters associated with them, which will not in general easily map onto the parameters of the type we defined here when formulating the microscopic models. While agent-based models might be useful in some situations, the difficulty with the whole approach is that one may have no idea what aspect of the model is responsible for a particular outcome that one is interested in. In fact, this is frequently put forward as a strength of the agent-based method: behaviour \textit{emerges} without having to have been programmed in. We believe that our approach occupies the middle ground between models that are simple and tailored to a particular situation, and agent-based models, which allow for complex systems but which may be unable to provide insight into the underlying reasons for particular outcomes. Of course, fast-variable elimination has a long history in theoretical biology and ecology. Most prominently, the form of the Michaelis-Menten function~\cite{briggs_1925} and Holling's type II functional response~\cite{dawes_2013} are derived based on arguments that assume a separation of timescales. However, these techniques tend to be employed most frequently in deterministic settings. The field of population genetics provides a notable exception to this rule, with the crucial role of genetic drift providing motivation for studies that employed fast-variable elimination in a stochastic setting. We have explicitly discussed the classical assumption of diploid populations at Hardy-Weinberg frequencies in \sref{sec:HW}, where a separation of timescales was assumed \textit{a priori}. However, there also exists a range of studies which apply more sophisticated techniques (see, for example, \cite{nagylaki1980SM,ethier_1980,ethier_1988,stephan_1999,hossjer_2016,newberry_2016}). The relatively frequent occurrence of stochastic fast-variable elimination in population genetics is the result of two factors. The first, as we have already mentioned, is that population genetics models are often inherently stochastic. The second is that a common assumption in many population genetic models (even when fast-variable elimination techniques are not required) is that selection is weak relative to the other processes.
If, then, the question we wish to ask is which of two alleles (or genotypes) fixes in a population, the stage is set for methods based on the elimination of fast variables to be useful: by definition, if there is no selection, there must be a CM connecting states in which either allele is fixed; if selective forces are weak, the CM is simply replaced by an SS; if there are only two alleles competing, regardless of the other variables, the SS is one-dimensional, allowing an analytic treatment of the resulting effective equation. Given the ubiquity of the above-listed features in models of population genetics, it is therefore perhaps surprising that there are in fact not many more studies employing fast-variable elimination. Perhaps some of the reasons for this stem from the apparent difficulty of dealing with these types of problem, with previous studies~\cite{stephan_1999} relying on the perturbative techniques described by Gardiner~\cite{gardiner_2009}. While certainly thorough, this approach is perhaps lacking in intuition, taking place as it does within the context of the FPE and involving operators. In contrast, we believe that there is much to be said for the approaches that we have outlined in this paper. As they are described within the context of SDEs (entirely equivalent to the FPE), and as the effective equation is constructed from physically meaningful quantities (such as the left and right eigenvectors), it is relatively straightforward to apply the methodology. We note that while other approaches to fast-variable elimination in the SDE setting are well practised (see~\cite{constable_2013}, which gives a more complete review of this area of the literature), these too have not cemented themselves as methods within the population genetics literature. The fact that, as described in \sref{sec:heterogamety}, the techniques we have presented are yielding novel insights into models first developed in the 1970s is testament to the untapped potential of the approach. In terms of applications of the method, we have tried to some extent to move away from using the Moran model, which has been a very popular starting point among some researchers over the last decade or two. Instead, we have introduced competition between individuals to control population growth. There are many other types of interaction between individuals that could be included, and this is an interesting question for the future. But even when only competition between individuals is included, some interesting points emerge. For example, as discussed in Sec.~\ref{sec:M_allele_SLVC}, the model with competition can be mapped into a Moran model, but only if the selection is frequency dependent, which suggests that the traditional use of frequency-independent selection may be too restrictive. In addition, the mapping gives a direct connection between the competition rates and the elements of the payoff matrix. Only relative payoffs of a certain type appear in the mapping between the two models, which again could be used to constrain the nature of the payoffs. This would be useful, since the lack of prescriptions available to guide the choice of payoffs is a significant weakness in the application of game theory to the subject. Of course, our methods are only valid when selection is weak, and moving beyond this regime, such as when strong mutualisms are accounted for, is known to invalidate this mapping between the SLVC and models of game theory~\cite{chotibut_2016}.
Extending such a study using the methods we have presented is also an interesting avenue for future work. Perhaps one of the most surprising insights that fast-variable elimination has provided is the occurrence of noise-induced dynamics in otherwise deterministically neutral systems. These dynamics, described in \sref{sec:noise_induced}, can be interpreted as drift-induced selection and can give rise to surprising behaviours, such as selection reversal. While historically such behaviour has been observed and characterised in simpler settings~\cite{gillespie_1974}, it has often been dismissed as a small second-order effect of little biological significance~\cite{hansen_2017}. This is because noise-induced selection is inversely proportional to the size of the system, and therefore, it is argued, likely to be swamped by other processes. However, this misses the fact that when selection is weak (a frequent assumption in population genetics) or when a population has a small effective population size (such as when a system is spatially distributed), noise-induced terms can in fact dominate the dynamical behaviour. Moreover, utilisation of fast-variable elimination techniques has led to an increasing awareness that noise-induced selection appears in many distinct evolutionary systems in ecology, epidemiology and population genetics. We have already discussed some of these in detail in \sref{sec:noise_induced}. However, more examples are arising with increasing frequency. In \cite{lin_mig_1}, stochastic Lotka-Volterra models for multiple islands (similar to the models described in \sref{sec:D_island_SLVC}) were used to show that demographic stochasticity induces a selective bias in favour of species that disperse at a higher rate between identical islands, even though deterministically there is no selection. Intriguingly, this bias persists even when the islands are inhomogeneous and a deterministic selective pressure exists for slower dispersal~\cite{lin_mig_2}. Meanwhile, in \cite{kogan_2014}, a model of viral evolution was investigated, showing that noise-induced dynamics select for viral strains with fast infection and recovery times in SIS and SIR models. It is clear that the methods we have described in this paper are of great utility, whether one is seeking to consolidate existing disparate models, add more biological detail to well-understood systems, or uncover and characterise entirely new dynamics. Beyond drawing attention to this fact, we hope that, as presented, this work will prove of value to readers with both biology and statistical physics backgrounds. For the biologists, this paper has aimed to show that these techniques are not as conceptually formidable as other studies may have suggested. For the statistical physicists, this paper has aimed to show that many interesting and relevant problems in biology still exist that are as yet untapped and amenable to analysis with their approaches. \section*{Acknowledgments} GWAC thanks the Finnish Center for Excellence in Biological Interactions and the Leverhulme Early Career Fellowship provided by the Leverhulme Trust for funding.
\section{Introduction} Fermat varieties have been touchstones for various conjectures in number theory and algebraic geometry. For example, they have provided evidence for the conjectures of Weil, Hasse-Weil, Hodge, Tate, Bloch, etc. Among the most fascinating is the Beilinson conjecture \cite{beilinson} on special values of the $L$-functions of motives over number fields. It integrates earlier conjectures of Tate, Birch-Swinnerton-Dyer, Bloch and Deligne, and gives us a beautiful perspective on the mysterious but strong connections between the analysis ($L$-function) and the geometry (cohomology) of motives. Unfortunately, we have only limited results on the conjecture and no general approach seems to be known. The aim of this paper is to study the conjecture for motives associated to Fermat curves. To an algebraic variety, or more generally to a motive $M$ over a number field, its $L$-function $L(M,s)$ is defined by a Dirichlet series convergent on a complex right half-plane. Conjecturally, it has an analytic continuation to the whole complex plane and satisfies a functional equation with respect to $s \leftrightarrow 1-s$. The Beilinson conjecture explains the behavior of the $L$-function at an integer in terms of the regulator map, that is, a canonical map $$r_\sD \colon H_\sM^\bullet(M,\Q(r))_\Z \lra H_\sD^\bullet(M_\R,\R(r))$$ from the integral part of the motivic cohomology to the real Deligne cohomology (see \S 4 for definitions). The conjecture asserts, under an assumption on $r$, firstly that $r_\sD \otimes_\Q \R$ is an isomorphism, which implies by the functional equation that $$\dim_\Q H_\sM^\bullet(M,\Q(r))_\Z = \ord_{s=1-r} L(M^\vee,s),$$ where $M^\vee$ is the dual motive. For an integer $n$, let $L^*(M,n)$ denote the first non-vanishing Taylor coefficient of $L(M,s)$ at $s=n$. Then the second assertion is that $$\det(r_\sD) = L^*(M^\vee,1-r)$$ in $\R^*/\Q^*$, where the determinant is taken with respect to a canonical $\Q$-structure on the Deligne cohomology. For example, if $M=M^\vee=\Spec k$, the spectrum of a number field, then $L(M,s)$ is the Dedekind zeta function $\z_k(s)$. The regulator map for $r=1$ is the classical regulator map $$\mathscr{O}_k^* \otimes_\Z \Q \lra \prod_{v | \infty }\R$$ given by the logarithms of the absolute values at the infinite places $v$. In fact, to obtain the isomorphism in this case, we need to add $\Q$ on the left-hand side, which maps diagonally. Then the conjecture reduces to the classical unit theorem of Dirichlet and the class number formula. The generalization to all $r \geq 1$ is due to Borel. Let $X_N$ be the Fermat curve of degree $N$ over a number field $k$ defined by the homogeneous equation $$x_0^N+y_0^N=z_0^N.$$ The regulator map we study is: $$r_\sD \colon H^2_{\sM}(X_N,\Q(2))_\Z \lra H^2_{\sD}(X_{N,\R},\R(2)).$$ According to the conjecture, the dimension of the motivic cohomology group is $[k:\Q]$ times $\mathrm{genus}(X_N)=(N-1)(N-2)/2$. An element of the motivic cohomology group is given by a Milnor $K_2$-symbol on the function field. In \cite{ross}, Ross showed that the regulator image of the element $$e_N=\{1-x,1-y\} \in H^2_{\sM}(X_N,\Q(2))_\Z,$$ where $x=x_0/z_0$, $y=y_0/z_0$ are the affine coordinates, is non-trivial. There are also relevant studies of Ross \cite{ross-cr} and Kimura \cite{k-kimura} (see \S 4.10, \S4.12). The corresponding $L$-function is $L(h^1(X_N),s)$ at $s=0$ (or at $s=2$ by the functional equation).
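To fix ideas with the smallest case (this is merely the above statements made explicit, not a new result): for $N=3$ the genus formula gives $\mathrm{genus}(X_3)=(3-1)(3-2)/2=1$, so that $X_3$ is an elliptic curve, and the conjecture predicts $$\dim_\Q H^2_{\sM}(X_3,\Q(2))_\Z = [k:\Q].$$ In particular, over $k=\Q$ a single element with non-vanishing regulator, such as $e_3=\{1-x,1-y\}$, would already span this $\Q$-vector space if the conjecture holds.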
Suppose for simplicity that $k$ contains $\mu_N$, the group of $N$-th roots of unity. As was proved by Weil \cite{weil-numbers}, \cite{weil-jacobi}, the $L$-function decomposes into the Jacobi-sum Hecke $L$-functions (see \S3.5) as $$L(h^1(X_N),s)=\prod_{(a,b) \in I_N} L(j_N^{a,b},s)$$ where $I_N=\{(a,b) \mid a,b \in \Z/N\Z, a,b,a+b \neq 0\}$, hence satisfies all the desired analytic properties. In the study of Fermat varieties, the decomposition in the category of motives with coefficients is extremely useful. The terminology ``Fermat motive'' appeared in Shioda's paper \cite{shioda}, although the idea goes back to the paper of Weil \cite{weil-jacobi} (see \S 3.7). When $\mu_N \subset k$, the group $\mu_N \times \mu_N$ acts on $X_N$ and, using this action, we have a decomposition $$h^1(X_N) = \bigoplus_{(a,b) \in I_N} X_N^{a,b}$$ in the category $\sM_{k,\Q(\mu_N)}$ of motives over $k$ with coefficients in $\Q(\mu_N)$ (see \S2.8), which corresponds to the above decomposition of the $L$-function (Theorem \ref{fermat-L}). By projecting $e_N$, we obtain an element $$e_N^{a,b} \in H_\sM^2(X_N^{a,b},\Q)_\Z$$ for each $(a,b) \in I_N$. Our main result, Theorem \ref{main-theorem}, calculates the image of $e_N^{a,b}$ under each $v$-component $r_{\sD,v}$ of $r_\sD$, where $v$ is an infinite place of $k$. Define a hypergeometric function of two variables by $$F(\alpha,\b; x, y) = \sum_{m,n \geq 0} \frac{(\alpha,m)(\b,n)}{(\alpha+\b+1,m+n)} x^m y^n$$ where $(\alpha,m) = \alpha(\alpha+1)\cdots (\alpha+m-1)$ is the Pochhammer symbol. This is a special case of Appell's $F_3$, one of his four hypergeometric functions of two variables \cite{appell}. Then the regulator is expressed by special values $$\wt{F}(\alpha,\b) := \frac{\vG(\alpha)\vG(\b)}{\vG(\alpha+\b+1)}F(\alpha,\b;1,1)$$ for $\alpha, \b \in \frac{1}{N}\Z$, at the point $(x,y)=(1,1)$, which lies on the boundary of the domain of convergence.
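The simplest instance of this decomposition may be worked out directly from the definition of $I_N$: for $N=3$, the pairs $(1,2)$ and $(2,1)$ are excluded by the condition $a+b \neq 0$ in $\Z/3\Z$, so that $$I_3=\{(1,1),(2,2)\}, \quad \sharp I_3 = 2 = 2\,\mathrm{genus}(X_3),$$ and the decompositions above read $L(h^1(X_3),s)=L(j_3^{1,1},s)\,L(j_3^{2,2},s)$ and $h^1(X_3)=X_3^{1,1}\oplus X_3^{2,2}$.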
Notice that the period of $X_N^{a,b}$ is essentially the Beta value $B(\tfrac{a}{N},\tfrac{b}{N})$, whose inverse is related to a value of Gauss' hypergeometric function as $$ F(\alpha,\b,\alpha+\b+1;1) = \frac{\alpha+\b}{\alpha\b} B(\alpha,\b)^{-1}.$$ The value $\wt{F}(\alpha,\b)$ can also be written in terms of the value at $x=1$ of Barnes' generalized hypergeometric function ${}_3F_2$ of one variable (see \S 4.10). We shall show the non-vanishing of $r_{\sD,v}(e_N^{a,b})$ by using the integral representation of $F_3$ (see \S 4.9). Since each target $H_\sD^2(X_{N,v}^{a,b},\R(2))$ is one-dimensional, we obtain the surjectivity of $r_{\sD,v}$ for each $X_N^{a,b}$, hence for $X_N$ (Corollary \ref{corollary-1}). For a general number field $k$ not necessarily containing $\mu_N$, we also have a motivic decomposition $$h^1(X_N) = \bigoplus_{[a,b]_k} X_N^{[a,b]_k},$$ where $[a,b]_k$ is the orbit of $(a,b)$ under the action of $\Gal(k(\mu_N)/k) \subset (\Z/N\Z)^*$. The study of $X_N^{[a,b]_k}$ is essentially the same as that of $X_N^{a,b}$, and we can derive parallel results for $X_N^{[a,b]_k}$. If $N=3$, $4$, $6$ and $k\subset\Q(\mu_N)$, then there is only one infinite place and we obtain the surjectivity of the whole $r_\sD$ (Corollary \ref{corollary-3}). In general, however, we do not have enough elements for the Beilinson conjecture. One approach is to use the action of the symmetric group of degree $3$ on $X_N$ by permutations of the coordinates. In this way, we obtain at most $3$ elements from each $e_N^{a,b}$, and we shall show that they are actually enough for the surjectivity of the whole $r_{\sD}$ if $N=5, 7$ and $k\subset\Q(\mu_N)$, with a restriction on $(a,b)$ when $N=7$ (Theorem \ref{N=5}, Proposition \ref{N=7}). This paper is organized as follows. In \S 2, we first recall briefly the necessary materials on motives and fix our notation. Then we define the motives $X_N^{a,b}$, $X_N^{[a,b]_k}$ associated to Fermat curves and study the relations among them. In \S 3, after recalling the definition of the $L$-function of a motive with coefficients, we calculate the $L$-functions of our motives and derive basic properties. At the end, we compare our $L$-functions with the Artin $L$-functions of Weil. Finally in \S 4, we first recall the Beilinson regulator and the Beilinson conjecture for motives with coefficients. Then we define elements in the motivic cohomology groups and study the Deligne cohomology groups of our motives. The main results are stated in \S 4.7 and proved in \S 4.8, \S4.9 after introducing Appell's hypergeometric function. In \S 4.10, we introduce Barnes' hypergeometric function and discuss some variants. At the end, we calculate the action of the symmetric group and give applications for $N=5$ and $7$. \medskip A part of this work was done when the author was visiting l'Institut de Math\'ematiques de Jussieu from 2004 to 2006, supported by the JSPS Postdoctoral Fellowships for Research Abroad. I would like to thank them for their hospitality and support. I would also like to sincerely thank Bruno Kahn for his hospitality and enlightening discussions.
Finally, I am grateful to Seidai Yasuda for valuable discussions. \section{Fermat Motives} \subsection{Motives} We recall briefly the definition of the category of pure motives modulo rational equivalences (``Chow motives''). For more details, see \cite{scholl} and its references. For a field $k$, let $\sV_k$ be the category of smooth projective $k$-schemes. For $X \in \sV_k$ and a non-negative integer $n$, let $\CH^n(X)$ (resp. $\CH_n(X)$) be the Chow group of codimension-$n$ (resp. dimension-$n$) algebraic cycles on $X$ modulo rational equivalences. For example, $\CH^1(X)$ is the Picard group. Recall that for a flat (resp. proper) morphism $f \colon X \ra Y$, we have the pull-back (resp. push-forward) map $$f^*\colon \CH^n(Y)\lra \CH^n(X),\quad f_* \colon \CH_n(X) \lra \CH_n(Y).$$ If $f$ is flat and finite of degree $d$, we have $f_* \circ f^* = d$. In particular, $f^*$ (resp. $f_*$) is injective (resp. surjective) modulo torsion. For $X$, $Y \in \sV_k$, the group of {\em correspondences} of degree $r$ from $X$ to $Y$ is defined by \begin{equation*}\begin{split} \Corr^r(X,Y) & = \bigoplus_i \Q \otimes_\Z \CH^{\dim X_i +r}(X_i \times Y)\\ & = \bigoplus_j \Q \otimes_\Z \CH_{\dim Y_j -r}(X \times Y_j) \end{split}\end{equation*} where $X_i$ (resp. $Y_j$) are the irreducible components of $X$ (resp. $Y$). For a morphism $f\colon Y \ra X$, let $\vG_f \subset X \times Y$ be the transpose of its graph. Then it defines an element of $\Corr^0(X,Y)$. The composition of correspondences $$\Corr^r(X,Y) \otimes \Corr^s(Y,Z) \lra \Corr^{r+s}(X,Z)$$ is defined by $$f \otimes g \longmapsto g \circ f = {\pr_{XZ}}_*(\pr_{XY}^*(f) \cdot \pr_{YZ}^*(g)),$$ where $\pr_{**}$ is the projection from $X \times Y \times Z$ to the indicated factors, and $\cdot$ is the intersection product. In particular, we have $$\vG_f \circ \vG_g = \vG_{g \circ f}.$$ The class of the diagonal $\Delta_X = \vG_{\id_X}$ is the identity for the composition. The category $\sM_k = \sM_{k,\Q}$ of {\em motives over $k$ with $\Q$-coefficients} is defined as follows. The objects are triples $(X,p,m)$ where $X \in \sV_k$, $m \in \Z$, and $p \in \Corr^0(X,X)$ is an idempotent, that is, $p \circ p = p$. The morphisms are defined by \begin{equation*} \Hom_{\sM_k}((X,p,m),(Y,q,n)) = q \circ \Corr^{n-m}(X,Y) \circ p. \end{equation*} We simply write $(X,p)$ instead of $(X,p,0)$, and $h(X)$ instead of $(X,\varDelta_X)$. Then, $h$ defines a {\em contravariant} functor \begin{equation*} h \colon \sV_k^\mathrm{opp} \lra \sM_k; \quad X \longmapsto h(X), \ f \longmapsto \vG_f. \end{equation*} For a field extension $E$ of $\Q$, the category $\sM_{k,E}$ of {\em motives with $E$-coefficients} is defined: it has the same objects as $\sM_k$, and the morphisms $$\Hom_{\sM_{k,E}}(M,N) = E \otimes_\Q \Hom_{\sM_k}(M,N).$$ When $E \subset E'$, we regard $\sM_{k,E}$ as a subcategory of $\sM_{k,E'}$. The category $\sM_{k,E}$ is an additive $E$-linear category with the direct sum $$(X,p,m) \oplus (Y,q,n) = (X \sqcup Y, p \oplus q, m+n),$$ and the zero object $\mathbf{0}=h(\phi)$. It is a pseudo-abelian category, that is, every projector (i.e. $f \in \End_{\sM_{k,E}}(M)$ such that $f \circ f =f$) has an image. For example, $$(X,p)=\Im(p\colon h(X) \ra h(X)).$$ On $\sM_{k,E}$, there exists a natural tensor product $\otimes$ such that $$h(X) \otimes h(Y) = h(X \times Y).$$ The identity for $\otimes$ is the {\em unit motive} $\mathbf{1} = h(\Spec k)$.
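As a simple illustration of how concrete these morphism groups are (a direct unwinding of the definitions above), for the unit motive one finds $$\End_{\sM_{k,E}}(\mathbf{1}) = E \otimes_\Z \CH^0(\Spec k) = E,$$ and, for an irreducible $X \in \sV_k$, $$\Hom_{\sM_{k,E}}(\mathbf{1},h(X)) = E \otimes_\Z \CH^0(X), \quad \Hom_{\sM_{k,E}}(h(X),\mathbf{1}) = E \otimes_\Z \CH_0(X),$$ so that morphism groups in $\sM_{k,E}$ are nothing but Chow groups with $E$-coefficients.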
The {\em Lefschetz motive} is defined by $\mathbf{L} = (\Spec k, \varDelta_{\Spec k},-1)$. Then we have $$(X,p,m) \otimes \mathbf{L}^{\otimes n} = (X,p,m-n).$$ For a motive $M=(X,p,m)$ with $\dim X =d$, its {\em dual motive} is defined by \begin{equation*} M^\vee = (X,{}^t\! p,d-m) \end{equation*} where ${}^t\! p$ is the transpose of $p$. For an integer $r$, the $r$-th {\em Tate twist} of $M$ is defined by $$M(r)= (X,p,m+r) = M \otimes \mathbf{L}^{\otimes(-r)}.$$ For a morphism $f \colon X \ra Y$, we have the pull-back \begin{equation*} f^* := \vG_f \colon h(Y) \lra h(X). \end{equation*} On the other hand, for irreducible $X$ and $Y$, we have the push-forward \begin{equation*} f_* := \,^t \vG_f\colon h(X) \lra h(Y)(\dim Y - \dim X). \end{equation*} Suppose that $X$, $X'$, $Y$, $Y' \in \sV_k$ are irreducible, and let $f \in \Corr^r(X,Y)$. Then, for morphisms $\alpha \colon X' \ra X$, $\b \colon Y' \ra Y$, we have \begin{equation}\label{corr-pull} \b^* \circ f \circ \alpha_* =(\alpha \times \b)^* f, \end{equation} and for morphisms $\alpha\colon X \ra X'$, $\b \colon Y \ra Y'$, we have \begin{equation}\label{corr-push} \b_* \circ f \circ \alpha^* =(\alpha \times \b)_* f. \end{equation} If $f\colon X \ra Y$ is a finite morphism of degree $d$, we have $f_* \colon h(X) \ra h(Y)$, and the formulae \eqref{corr-pull}, \eqref{corr-push} lead to: \begin{equation*} f^* \circ f_* = [X \times_Y X] \ \in \End(h(X)) \end{equation*} and \begin{equation*} f_* \circ f^* = d [\Delta_Y] \ \in \End(h(Y)). \end{equation*} In particular, $f^*$ (resp. $f_*$) is injective (resp. surjective). \subsection{Motives of curves} In the case of curves, we have the so-called Chow-K\"unneth decomposition, which is still conjectural in general. Let $X$ be a smooth irreducible projective curve with structure morphism $f \colon X \ra \Spec k$, and suppose for simplicity that it has a $k$-rational point $x$. Define correspondences $e^i \in \Corr^0(X,X)$ by \begin{equation*} e^0 = \{x\} \times X, \quad e^2 = X \times \{x\}, \quad e^1= \varDelta_X - e^0 -e^2. \end{equation*} One sees easily that $e^0$, $e^2$ are idempotents and that $e^0 \circ e^2 = e^2 \circ e^0 =0$, hence $e^1$ is also an idempotent orthogonal to $e^0$ and $e^2$. The $i$-th cohomological motive of $X$ is defined by \begin{equation*} h^i(X) =(X,e^i). \end{equation*} By definition, we have a decomposition (depending on the choice of $x$) $$h(X) \simeq h^{0}(X)\oplus h^{1}(X) \oplus h^{2}(X).$$ Since $e^0=f^* \circ x^*$, the map $f^*\colon \mathbf{1} \ra h(X)$ induces an isomorphism $\mathbf{1} \os{\simeq}{\lra} h^0(X)$ whose inverse is given by $x^*$. Similarly, since $e^2=x_* \circ f_*$, the map $f_*\colon h(X) \ra \mathbf{L}$ induces an isomorphism $h^2(X) \os{\simeq}{\lra} \mathbf{L}$ whose inverse is given by $x_*$. If $X = \P^1$, we have $h(\P^1) \simeq \mathbf{1} \oplus \mathbf{L}$. \subsection{Functorialities} Let $K/k$ be a field extension, and $$\varphi_{K/k}\colon \Spec K \lra \Spec k$$ be the structure morphism.
Then we have the ``scalar extension'' functor \begin{equation*} \sV_k \lra \sV_K; \quad X \longmapsto X_K := X \times_k K. \end{equation*} The pull-back on the Chow group $\CH^*(X \times_k Y) \ra \CH^*(X_K \times_K Y_K)$ defines a homomorphism $$\Corr^r(X,Y) \lra \Corr^r(X_K,Y_K); \quad f \longmapsto f_K,$$ which is injective. Therefore, the above functor extends to a faithful functor \begin{equation*} {\varphi_{K/k}^*}\colon \sM_{k,E} \lra \sM_{K,E}; \quad (X,p,m) \longmapsto (X_K, p_K,m). \end{equation*} On the other hand, for a finite separable extension $K/k$, we have Grothendieck's ``scalar restriction'' functor \begin{equation*} \sV_K \lra \sV_k \end{equation*} which sends $X \ra \Spec K$ to the composite $$X_{|k} := X \ra \Spec K \ra \Spec k.$$ The push-forward $\CH^*(X \times_K Y) \ra \CH^*(X_{|k} \times_k Y_{|k})$ defines a homomorphism $$\Corr^r(X,Y) \lra \Corr^r(X_{|k},Y_{|k}); \quad f \longmapsto f_{|k}$$ and induces a functor \begin{equation*} \varphi_{K/k *}\colon \sM_{K,E} \lra \sM_{k,E}; \quad (X,p,m) \longmapsto (X_{|k},p_{|k},m), \end{equation*} which is left and right adjoint to $\vphi_{K/k}^*$. \subsection{Fermat curves} Let $k$ be a field and $N$ be a positive integer prime to $\mathrm{char}(k)$. Let $X_N=X_{N,k}$ be the smooth projective curve over $k$ defined by the homogeneous equation \begin{equation}\label{equation-fermat} x_{0}^N+y_{0}^N=z_{0}^N. \end{equation} It has genus $(N-1)(N-2)/2$. Define a closed subscheme by $$Z_N = X_N \cap \{z_0=0\}$$ and let $U_N = X_N - Z_N$ be the open complement. The affine equation is written as \begin{equation*} x^{N}+y^{N}=1 \quad (x=x_{0}/z_{0}, y=y_{0}/z_{0}). \end{equation*} If $N'$ divides $N$, with $N=N'd$, we have a $k$-morphism \begin{equation*} \pi_{N/N',k} \colon X_{N} \lra X_{N'}; \quad (x_{0}:y_{0}:z_{0}) \longmapsto (x^{d}_{0}:y^{d}_{0}:z^{d}_{0}) \end{equation*} which is finite of degree $d^2$. It respects $Z_*$, $U_*$, and is \'etale over $U_{N'}$. On the other hand, for a field extension $K/k$, we have a canonical morphism $$\pi_{N,K/k} \colon X_{N,K} \lra X_{N,k}. $$ We denote the composition as $$\pi_{N/N',K/k} \colon X_{N,K} \lra X_{N',k}.$$ By the evident relation $\pi_{N'/N'', K/k} \circ \pi_{N/N',L/K} =\pi_{N/N'', L/k}$, the curves $X_{N,k}$ for various $N$ and $k$ form a projective system. \subsection{Group actions} Fix an algebraic closure $\ol k$ of $k$. For $N$ prime to $\mathrm{char}(k)$, put \begin{equation*} K_N=k(\mu_N). \end{equation*} For each $N$, fix a primitive $N$-th root of unity $\zeta_N \in K_N$ in such a way that $\z_{Nd}^d=\z_N$. Define finite groups by \begin{equation*} G_N=\Z/N\Z \oplus \Z/N\Z, \quad H_N=(\Z/N\Z)^*. \end{equation*} Then, $H_N$ acts (from the left) on $G_N$ by multiplication on both factors, and we put \begin{equation*} \vG_N=G_N \rtimes H_N. \end{equation*} We denote an element $(r,s) \in G_N$ also by $g_N^{r,s}$, and write the addition multiplicatively, i.e. $$g_N^{r,s} g_N^{r',s'} = g_N^{r+r',s+s'}.$$ Let $H_{N,k} \subset H_N$ be the image of the injective homomorphism $\Gal(K_N/k) \ra H_N$ which maps $\sigma$ to the unique element $h$ such that $\sigma(\zeta_N)=\zeta_N^h$. Finally, define a subgroup of $\vG_N$ by \begin{equation*} \vG_{N,k}= G_{N} \rtimes H_{N,k}. \end{equation*} Now we define an action of $\vG_{N,k}$ on $X_{N,K_N}$. Throughout this paper, we let groups act on schemes from the right, so that they induce right actions on rational points, homology groups, etc., and left actions on functions, differential forms, cohomology groups, etc.
First, let $G_N$ act on $X_{N,K_N}$ by \begin{equation*} g_N^{r,s}(x_0:y_0:z_0) = (\z_N^r x_0: \z_N^s y_0:z_0). \end{equation*} Secondly, the action of $H_{N,k}=\Gal(K_N/k)$ on $K_N$ induces an action on $\Spec K_N$, hence, by base-change, an action on $X_{N',K_N}$ for any $N'$. Finally, since the above actions satisfy $hg = h(g)h$ for any $g \in G_N$, $h \in H_{N,k}$, they extend to an action of $\vG_{N,k}$ on $X_{N,K_N}$. Summarizing, we have the following commutative diagram with the indicated automorphism groups: \begin{equation}\label{diagram-1} \xymatrix{ X_{N,K_N} \ar[rr]^{G_N} \ar[d]_{H_{N,k}} \ar[rrd]^{\vG_{N,k}} && X_{1,K_N} \ar[d]^{H_{N,k}}\\ X_{N,k}\ar[rr] && X_{1,k}. }\end{equation} For $N'|N$, the canonical surjective homomorphisms $$G_{N} \lra G_{N'} , \quad H_{N,k} \lra H_{N',k}, \quad \vG_{N,k} \lra \vG_{N',k}$$ are compatible with the morphisms $$\pi_{N/N',K_N}, \quad \pi_{N,K_{N}/K_{N'}}, \quad \pi_{N/N',K_{N}/K_{N'}},$$ respectively. We shall use the notation $$G_{N/N'} := \Ker(G_{N} \ra G_{N'}) =\Aut(X_{N,K_N}/X_{N',K_N}).$$ \subsection{Index sets} We say that an element $(a,b) \in G_N$ is {\em primitive} if $\gcd(N,a,b)=1$, and let $G_N^\prim \subset G_N$ be the subset of primitive elements. If we put $$d=\gcd(N,a,b), \quad N'=N/d, \quad a'=a/d, \quad b'=b/d, $$ then $(a',b') \in G_{N'}^\prim$. For $(a,b) \in G_{N}$, let $[a,b]_{k}$ denote its $H_{N,k}$-orbit. Then the map $$G_{N'} \lra G_N ; \quad (a',b') \mapsto (a'd,b'd)$$ induces bijections $$G_N \simeq \bigsqcup_{N' | N} G_{N'}^\prim, \quad H_{N,k}\backslash G_N \simeq \bigsqcup_{N' \mid N} H_{N',k}\backslash G_{N'}^\prim. $$ Since it induces a bijection $[a',b']_k \os{\simeq}{\ra} [a,b]_k$, we have $$\sharp[a,b]_{k}=\sharp[a',b']_{k}= \sharp H_{N',k}=[K_{N'}:k].$$ Define a subset of $G_N$ by $$I_N=\left\{(a,b) \in G_{N} \mid a, b, a+b \neq 0\right\} $$ and put $I_N^\prim = I_N \cap G_N^\prim$. Then, $I_N$ and $I_N^\prim$ are stable under the action of $H_{N,k}$. Note that $\sharp I_N = (N-1)(N-2)$, twice the genus of $X_N$. We have also bijections $$I_N \simeq \bigsqcup_{N' \mid N} I_{N'}^\prim, \quad H_{N,k}\backslash I_N \simeq \bigsqcup_{N' \mid N} H_{N',k}\backslash I_{N'}^\prim. $$ \subsection{Projectors} For an integer $N$, put \begin{equation*} E_N=\Q(\mu_N) \subset \ol\Q \end{equation*} and for each $N$, fix a primitive $N$-th root of unity $\xi_N \in \ol \Q$ in such a way that $\xi_{Nd}^d=\xi_N$. \begin{definition} For $(a,b) \in G_{N}$, let $\theta_N^{a,b} \colon G_N \ra E_N^*$ be the character defined by \begin{equation*} \theta_{N}^{a,b}(g_N^{r,s}) = \xi_{N}^{ar+bs}. \end{equation*} Any character of $G_N$ is uniquely written in this form. If $N=N'd$, $a=a'd$ and $b=b'd$, then $\theta_N^{a,b}$ is the pull-back of $\theta_{N'}^{a',b'}$ by the natural homomorphism $G_N \ra G_{N'}$. \end{definition} \begin{definition} For $(a,b) \in G_N$, define an element of the group ring $E_N[G_N]$ by \begin{equation*} p_N^{a,b} = \frac{1}{N^2} \sum_{g \in G_N} \theta_N^{a,b}(g^{-1}) g. \end{equation*} Evidently, \begin{equation}\label{property-p^{a,b}} \sum_{(a,b)\in G_{N}}p_{N}^{a,b} =1, \quad p_{N}^{a,b} p_{N}^{c,d} = \begin{cases} p_{N}^{a,b} & \text{if $(a,b)=(c,d)$,} \\ 0 & \text{otherwise.} \end{cases} \\ \end{equation} \end{definition} \begin{definition} For a class $[a,b]_k \in H_{N,k} \backslash G_N $, define an element of $E_N[G_{N}]$ by \begin{equation*} p_{N}^{[a,b]_k} = \sum_{(c,d) \in [a,b]_k} p_{N}^{c,d}.
\end{equation*} Then we have \begin{equation}\label{property-p^{[a,b]}} \sum_{[a,b]_k\in H_{N,k} \backslash G_N} p_{N}^{[a,b]_k} =1, \quad p_{N}^{[a,b]_k} p_{N}^{[c,d]_k} = \begin{cases} p_{N}^{[a,b]_k} & \text{if $[a,b]_k=[c,d]_k$,} \\ 0 & \text{otherwise.} \end{cases} \ \end{equation} \end{definition} It is easy to prove: \begin{lemma}\label{projector-level-change} Let $N=N'd$. Under the natural homomorphism $E_N[G_N] \ra E_N[G_{N'}]$, \begin{enumerate} \item $p_N^{a,b} \longmapsto\begin{cases} p_{N'}^{a',b'} & \text{if $(a,b)=(a'd,b'd)$ for some $(a',b') \in G_{N'}$},\\ 0 & \text{otherwise}, \end{cases} $ \item $p_N^{[a,b]_k} \longmapsto \begin{cases} p_{N'}^{[a',b']_k} & \text{if $[a,b]_k=[a'd,b'd]_k$ for some $[a',b']_k\in H_{N,k}\backslash G_{N'}$},\\ 0 & \text{otherwise}. \end{cases} $ \end{enumerate} \end{lemma} \begin{definition} Let $E_{N,k}$ be the subfield of $E_N$ fixed by $H_{N,k}$ viewed as a subgroup of $\Gal(E_N/\Q) \simeq H_N$. \end{definition} Extend by linearity the action of $H_{N,k}$ on $G_N$ to the group ring $E_N[G_N]$ (notice: $H_{N,k}$ does not act on $E_N$). \begin{lemma}\label{projector-smaller} For any $(a,b) \in G_N$ and $h \in H_{N,k}$, we have{\rm :} \begin{enumerate} \item $h(p_N^{a,b})=p_N^{h^{-1}a,h^{-1}b}$, \item $h(p_N^{[a,b]_k})=p_N^{[a,b]_k}$, \item $p_N^{[a,b]_k}\in E_{N,k}[G_N]$. \end{enumerate} \end{lemma} \begin{proof} (i) is easy and (ii) follows from (i). (iii) follows from $$ p_N^{[a,b]_k} = \frac{\sharp[a,b]_k}{\sharp H_{N,k}} \sum_{h \in H_{N,k}} p_N^{ha,hb} =\frac{\sharp[a,b]_k}{\sharp H_{N,k}} \frac{1}{N^2} \sum_{g \in G_N} \Tr_{E_N/E_{N,k}}(\theta_N^{a,b}(g^{-1})) g. $$ \end{proof} \begin{remark} In fact, $p_N^{[a,b]_k}\in E_{N',k}[G_N]$ where $N'=\mathrm{gcd}(N,a,b)$. \end{remark} \subsection{Fermat motives} As a base point, we choose $$x=(0 \colon 1 \colon 1) \ \in X_N(k)$$ so that it is compatible under the morphisms $\pi_{N/N',K/K'}$ for various degrees and base fields. The action of $G_N$ on $X_{N,K_N}$ induces an action (from the left) on $h(X_{N,K_N})$, and by linearity we obtain an $E_N$-algebra homomorphism $$E_N[G_N] \lra \End_{\sM_{K_N,E_N}}(h(X_{N,K_N})). $$ By abuse of notation, the image of an element of the group ring will be denoted by the same letter. For example, we simply write $g$ instead of $g^*$ or $\vG_g$. \begin{definition} For $(a,b) \in G_N$, define{\rm :} $$X_{N}^{a,b} = (X_{N,K_N}, p_{N}^{a,b})\ \in \sM_{K_N,E_N}.$$ Then, by \eqref{property-p^{a,b}}, we have a decomposition \begin{equation*} h(X_{N,K_N}) \simeq \bigoplus_{(a,b)\in G_{N}} X_{N}^{a,b}. \end{equation*} \end{definition} \begin{proposition}\label{decomposition-X_K} We have isomorphisms in $\sM_{K_N,E_N}${\rm :} \begin{enumerate} \item $X_N^{0,0} \simeq \mathbf{1} \oplus \mathbf{L}$, \item $X_N^{a,b} \simeq {\bf 0}$ \/ if only one of $a$, $b$, $a+b$ is \/ $0$, \item $h^1(X_{N,K_N}) \simeq \bigoplus_{(a,b) \in I_N} X_N^{a,b}$. \end{enumerate} \end{proposition} \begin{proof} Let $f \colon X_{N,K_N} \ra \Spec K_N$ be the structure morphism. For $g \in G_N$, we have $$e^0 \circ g = \vG_{x \circ f} \circ \vG_g = \vG_{g \circ x \circ f} = \vG_{g(x) \circ f}, $$ the graph of the constant morphism with value $g(x)$. Since $$g_N^{r,s}(x)=(0\colon \z_N^s\colon 1)=g_N^{0,s}(x),$$ we obtain \begin{equation}\label{e^0} e^0 \circ p_N^{a,b} = \frac{1}{N^2} \left(\sum_r \xi_N^{-ar}\right) \left(\sum_{s} \xi_N^{-bs} \vG_{g_N^{0,s}(x) \circ f}\right).
\end{equation} In particular, we have \begin{align*} e^0-e^0 \circ p_N^{0,0} = \frac{1}{N} \Bigl(N\vG_{x\circ f} - \sum_s \vG_{g_N^{0,s}(x) \circ f}\Bigr) = \frac{1}{N} \div\left(\frac{1-y_1}{x_1}\right)=0 \end{align*} where $(x_i,y_i)$ ($i=1,2$) are the affine coordinates of the $i$-th component of $X_{N,K_N} \times X_{N,K_N}$. Since $e^2 \circ g = e^2$ for any $g \in G_N$, we have \begin{equation}\label{e^2} e^2 \circ p_N^{a,b} = \frac{1}{N^2} \left(\sum_{r,s} \xi_N^{-(ar+bs)}\right) e^2. \end{equation} In particular, we have $e^2 \circ p_N^{0,0} = e^2$. Therefore, we have \begin{multline*} e^1 \circ p_N^{0,0} = (1 - e^0 - e^2) \circ p_N^{0,0} \\ = p_N^{0,0} - e^0 - e^2 = \frac{1}{N^2} \div\left(\frac{y_1^N-y_2^N}{(1-y_1)^N (1-y_2)^N}\right)=0, \end{multline*} hence (i) is proved. If $a \neq 0$ and $b=0$, then we have $$p_N^{a,0} = \frac{1}{N^2} \sum_r \xi_N^{-ar} \sum_s \vG_{g_N^{r,s}}.$$ Since $$\sum_s\vG_{g_N^{r,s}} - \sum_s\vG_{g_N^{r',s}} = \div\left(\frac{x_1-\z_N^rx_2}{x_1-\z_N^{r'}x_2}\right)=0, $$ $\sum_s \vG_{g_N^{r,s}}$ does not depend on $r$, hence we obtain $p_N^{a,0} =0$. The other cases of (ii) are similarly proved. Finally, if $(a,b) \in I_N$, then by \eqref{e^0} and \eqref{e^2}, we have $e^0\circ p_N^{a,b} = e^2 \circ p_N^{a,b} =0$, hence \begin{align*} \sum_{(a,b) \in I_N} p_N^{a,b} = e^1 \circ \sum_{(a,b) \in I_N} p_N^{a,b} = e^1 \circ \sum_{(a,b) \in G_N} p_N^{a,b} = e^1, \end{align*}and (iii) follows. \end{proof} Now, we define Fermat motives over $k$. For any $E$, an element of $E[G_N]$ fixed by the action of $H_{N,k}$ defines a cycle on $X_{N,K_N} \times_{K_N} X_{N,K_N}$ {\em defined over $k$}, i.e. in the image of $$ (\pi_N\times_{K_N}\pi_N)^*\colon E \otimes_\Z \CH^1(X_N \times_k X_N) \lra E \otimes_\Z \CH^1(X_{N,K_N} \times_{K_N} X_{N,K_N}), $$ where $\pi_N = \pi_{N,K_N/k}$. To make the situation clear, we denote the canonical morphisms as: \begin{equation}\label{morphism-product} \xymatrix{ X_{N,K_N} \times_{K_N} X_{N,K_N} \ar@{^{(}->}[r] \ar[rd]_{\pi_N\times_{K_N} \pi_N \ \ } & X_{N,K_N} \times_k X_{N,K_N}\ar[d]^{\pi_N\times_k\pi_N} \\ & X_N \times_k X_N . } \end{equation} Note that $\pi_N\times_{K_N} \pi_N$ and $\pi_N\times_{k} \pi_N$ are finite morphisms of degree $\sharp H_{N,k}$ and $(\sharp H_{N,k})^2$, respectively. In particular, $(\pi_N\times_{K_N}\pi_N)^*$ is injective and its left-inverse is given by $(\sharp H_{N,k})^{-1} (\pi_N\times_{K_N}\pi_N)_*$. Since the intersection product is compatible with the pull-back, we obtain an $E$-algebra homomorphism $$E[G_N]^{H_{N,k}} \lra E \otimes_\Z \CH^1(X_{N} \times_k X_N) = \End_{\sM_{k,E}}(h(X_N)).$$ By Lemma \ref{projector-smaller} (iii), $p_N^{[a,b]_k}$ defines an element of $\End_{\sM_{k,E_{N,k}}} (h(X_N))$, which we also denote by the same letter. Since $$(\pi_N\times_{K_N}\pi_N)^*(\pi_N\times_{K_N}\pi_N)_* p_N^{a,b}= \sum_{h\in H_{N,k}} p_N^{ha,hb},$$ we have \begin{equation}\label{definition-p_N^{[a,b]_k}} (\pi_N\times_{K_N}\pi_N)_* p_N^{a,b}= \frac{\sharp H_{N,k}}{\sharp[a,b]_k} \, p_N^{[a,b]_k}. \end{equation} \begin{definition} For $[a,b]_k \in H_{N,k} \backslash G_N$, define{\rm :} $$X_N^{[a,b]_k} = (X_{N}, p_{N}^{[a,b]_k}) \ \in \sM_{k, E_{N,k}}.$$ Then, by \eqref{property-p^{[a,b]}}, we have a decomposition \begin{equation*} h(X_{N}) \simeq \bigoplus_{[a,b]_k \in H_{N,k} \backslash G_N} X_{N}^{[a,b]_k}.
\end{equation*} \end{definition} \begin{proposition}\label{decomposition-X} We have isomorphisms in $\sM_{k,E_{N,k}}${\rm :} \begin{enumerate} \item $X_N^{[0,0]_k} \simeq \mathbf{1} \oplus \mathbf{L}$, \item $X_N^{[a,b]_k} \simeq {\bf 0}$ \ if only one of $a$, $b$, $a+b$ is $0$, \item $h^1(X_N) \simeq \bigoplus_{[a,b]_k \in H_{N,k}\backslash I_N} X_N^{[a,b]_k}$. \end{enumerate} \end{proposition} \begin{proof} Since $$\End_{\sM_{k,E_{N,k}}}(h(X_N)) \lra \End_{\sM_{K_N,E_N}}(h(X_{N,K_N}))$$ is injective, we can compute $e^i \circ p_N^{[a,b]_k}$ in the latter ring. Then the proof reduces to Proposition \ref{decomposition-X_K}. \end{proof} \begin{remark}\label{fermat-dual} Since $\,^tp_N^{a,b} = p_N^{-a,-b}$, we have \begin{equation*} (X_N^{a,b})^\vee = X_N^{-a,-b}(1), \quad (X_N^{[a,b]_k})^\vee = X_N^{[-a,-b]_k}(1). \end{equation*} \end{remark} \begin{remark}\label{C_N} We can also define similarly a motive $X_N^{[a,b]}$ for each $H_N$-orbit $[a,b]$ of $(a,b)\in G_N$. Then one shows that $X_N^{[a,b]} \in \sM_{k}$. If $\Gal(K_N/k)=H_N$, e.g. $k=\Q$, then $X_N^{[a,b]_k} =X_N^{[a,b]}$. For integers $0<a,b<N$, let $C_N^{a,b}$ be the smooth projective curve whose affine equation is given by \begin{equation*} v^N=u^a(1-u)^b \end{equation*} (this affine model may have singularities at $u=0,1,\infty$). There exists a morphism \begin{equation*} \psi \colon X_N \lra C_N^{a,b}; \quad (x,y) \longmapsto (u,v)=(x^N,x^ay^b). \end{equation*} Suppose that $N$ is a prime number and $(a,b) \in I_N$. Then one shows that $\psi$ induces an isomorphism in $\sM_{k}$: \begin{equation*} X_N^{[a,b]} \simeq h^1(C_N^{a,b}). \end{equation*} \end{remark} \subsection{Relations among Fermat motives} When $(a,b)$ is not primitive, our motives $X_N^{a,b}$, $X_N^{[a,b]_k}$ come from motives of lower degree. Let $N=N'd$ and use the following abbreviated notations: \begin{equation*} \xymatrix{ X_{N,K_N} \ar[dd]_{\pi_{N}} \ar[r]^{\pi_{K_N}} & X_{N',K_N} \ar[d]^{\pi_{K_N/K_{N'}}} \\ & X_{N',K_{N'}} \ar[d]^{\pi_{N'}}\\ X_{N} \ar[r]^{\pi_k} & X_{N'}. }\end{equation*} Consider the homomorphisms on Chow groups with coefficients in $E_N$ (resp. $E_{N,k}$) induced by the $K_N$-morphism (resp. $k$-morphism) $$ \pi_{K_N}\times\pi_{K_N} \colon X_{N,K_N} \times X_{N,K_N} \lra X_{N',K_N} \times X_{N',K_N}, $$ resp. $$ \pi_{k}\times\pi_{k} \colon X_{N} \times X_{N} \lra X_{N'} \times X_{N'}. $$ \begin{lemma}\label{projector-functoriality} Let the notation be as above. \begin{enumerate} \item For $(a,b) \in G_N$, we have $$ (\pi_{K_N}\times\pi_{K_N})_*p_{N}^{a,b} = \begin{cases} d^2 p_{N',K_N}^{a',b'} & \text{if $(a,b)=(a'd,b'd)$ for some $(a',b')\in G_{N'}$,}\\ 0 & \text{otherwise}. \end{cases} $$ \item For $(a',b') \in G_{N'}$, we have $(\pi_{K_N}\times\pi_{K_N})^*p_{N',K_N}^{a',b'} = d^2 p_N^{a'd,b'd}$. \item For $(a,b) \in G_N$, we have $$ (\pi_k\times\pi_k)_*p_{N}^{[a,b]_k} = \begin{cases} d^2 p_{N'}^{[a',b']_k} & \text{if $[a,b]_k=[a'd,b'd]_k$ for some $(a',b')\in G_{N'}$,}\\ 0 & \text{otherwise}. \end{cases} $$ \item For $(a',b') \in G_{N'}$, we have $(\pi_k\times\pi_k)^*p_{N'}^{[a',b']_k} = d^2 p_N^{[a'd,b'd]_k}$. \end{enumerate} \end{lemma} \begin{proof} Since the degree of $X_{N,K_N}$ over $X_{N',K_N}$ is $d^2$, we have $$(\pi_{K_N}\times\pi_{K_N})_*\vG_{g_N^{r,s}} = d^2 \vG_{g_{N'}^{r,s}},$$ and (i) follows from Lemma \ref{projector-level-change} (i). On the other hand, (ii) follows easily from $$(\pi_{K_N}\times\pi_{K_N})^* \vG_{g_{N'}^{r',s'}} = \sum_{g_N^{r,s} \mapsto g_{N'}^{r',s'}} \vG_{g_N^{r,s}}.$$ Put $m=\sharp H_{N,k} / \sharp[a,b]_k$.
Then, using notation similar to that of \eqref{morphism-product}, we have: \begin{align*} &(\pi_k\times\pi_k)_*p_{N}^{[a,b]_k} \\ &= m^{-1} (\pi_k\times\pi_k)_* (\pi_N\times_{K_N}\pi_N)_* p_{N}^{a,b} \\ &= m^{-1} (\pi_{N'}\times_{K_{N'}}\pi_{N'})_* (\pi_{K_N/K_{N'}}\times_{K_N}\pi_{K_N/K_{N'}})_* (\pi_{K_N}\times\pi_{K_N})_* p_{N}^{a,b} \\ & \us{{\rm (i)}}{=} \begin{cases} m^{-1}d^2 (\pi_{N'}\times_{K_{N'}}\pi_{N'})_* (\pi_{K_N/K_{N'}}\times_{K_N}\pi_{K_N/K_{N'}})_* p_{N',K_N}^{a',b'}, \\ 0. \end{cases} \end{align*} Using $(\pi_{K_N/K_{N'}}\times_{K_N}\pi_{K_N/K_{N'}})_* p_{N',K_N}^{a',b'} = [K_N:K_{N'}] p_{N'}^{a',b'}$, \eqref{definition-p_N^{[a,b]_k}} and $\sharp [a,b]_k = \sharp [a',b']_k$, we obtain (iii). Finally, (iv) follows from the injectivity of $(\pi_N\times_{K_N}\pi_N)^*$ and \begin{align*} &(\pi_N\times_{K_N}\pi_N)^*(\pi_k\times\pi_k)^*p_{N'}^{[a',b']_k} \\ &= (\pi_{K_N}\times\pi_{K_N})^*(\pi_{K_N/K_{N'}}\times_{K_N}\pi_{K_N/K_{N'}})^*(\pi_{N'}\times_{K_{N'}}\pi_{N'})^*p_{N'}^{[a',b']_k} \\ & = \sum_{(e',f') \in [a',b']_k} (\pi_{K_N}\times\pi_{K_N})^*p_{N',K_N}^{e',f'} \us{{\rm (ii)}}{=} \sum_{(e',f') \in [a',b']_k} d^2 p_N^{e'd,f'd} \\ & = d^2 \sum_{(e,f) \in [a'd,b'd]_k} p_N^{e,f} = d^2 (\pi_N\times_{K_N}\pi_N)^* p_N^{[a,b]_k}. \end{align*} \end{proof} \begin{proposition}\label{prop-level} Let $N=N'd$, $(a',b') \in G_{N'}$ and $(a,b) = (a'd, b'd) \in G_N$. Then we have{\rm :} \begin{enumerate} \item $X_N^{a,b} \simeq \varphi_{K_N/K_{N'}}^* X_{N'}^{a',b'}$ \ in $\sM_{K_N,E_N}$, \item $X_N^{[a,b]_k} \simeq X_{N'}^{[a',b']_k}$ \ in $\sM_{k,E_{N,k}}$. \end{enumerate} \end{proposition} \begin{proof} (i) By definition, $\varphi_{K_N/K_{N'}}^* X_{N'}^{a',b'} = (X_{N',K_N}, p_{N',K_N}^{a',b'})$. Consider the following commutative diagram with $\pi=\pi_{K_N}$: \begin{equation}\label{four-1} \xymatrix{ h(X_{N',K_N})\ar[r]^{\pi^*} \ar[d]_{p_{N',K_N}^{a',b'}} & h(X_{N,K_N}) \ar[r]^{\pi_*} \ar[d]_{p_N^{a,b}} & h(X_{N',K_N}) \ar[r]^{\pi^*} \ar[d]_{p_{N',K_N}^{a',b'}} & h(X_{N,K_N}) \ar[d]_{p_N^{a,b}} \\ h(X_{N',K_N})\ar[r]^{\pi^*} & h(X_{N,K_N}) \ar[r]^{\pi_*} & h(X_{N',K_N}) \ar[r]^{\pi^*} & h(X_{N,K_N}). }\end{equation} Let us show the commutativity of the first square. Since composition with $\pi_*$ on the right coincides with $(\pi \times \id)^*$ by \eqref{corr-pull}, and the latter is injective, it suffices to show the commutativity after applying it. First, we have $$\pi^*\circ p_{N',K_N}^{a',b'} \circ \pi_* =(\pi \times \pi)^* p_{N',K_N}^{a',b'} = d^2p_N^{a,b}$$ by Lemma \ref{projector-functoriality} (ii). On the other hand, using \eqref{corr-pull} and \eqref{corr-push}, we have $$p_N^{a,b} \circ \pi^* \circ \pi_* = (\pi \times \id)^*(\pi \times \id)_*p_N^{a,b} = p_N^{a,b} \circ \sum_{g\in G_{N/N'}} g .$$ Since $\theta_N^{a,b}(g)=1$ for $g \in G_{N/N'}$, we have $p_N^{a,b} \circ g= p_N^{a,b}$, hence the commutativity is proved. The commutativity of the second square is the ``transpose'' of the first one: $$\pi_* \circ p_N^{a,b}={}^t\!(p_N^{-a,-b} \circ \pi^* ) = {}^t\!(\pi^*\circ p_{N',K_N}^{-a',-b'}) =p_{N',K_N}^{a',b'} \circ \pi_*.$$ Therefore, $\pi^*$ maps $X_{N',K_N}^{a',b'}$ to $X_N^{a,b}$ and $\pi_*$ maps $X_N^{a,b}$ to $X_{N',K_N}^{a',b'}$. Recall that $\pi_* \circ \pi^* = d^2$. On the other hand, we have $$\pi^*\circ\pi_*\circ p_N^{a,b}= \pi^*\circ p_{N',K_N}^{a',b'}\circ \pi_* = d^2 p_N^{a,b}.$$ Therefore, $$\pi^*\colon X_{N',K_N}^{a',b'} \lra X_N^{a,b}, \quad d^{-2}\pi_*\colon X_N^{a,b} \lra X_{N',K_N}^{a',b'}$$ are isomorphisms inverse to each other.
(ii) Consider the commutative diagram with $\pi=\pi_{k}$: \begin{equation}\label{four-2} \xymatrix{ h(X_{N'})\ar[r]^{\pi^*} \ar[d]_{p_{N'}^{[a',b']_k}} & h(X_{N}) \ar[r]^{\pi_*} \ar[d]_{p_N^{[a,b]_k}} & h(X_{N'}) \ar[r]^{\pi^*} \ar[d]_{p_{N'}^{[a',b']_k}} & h(X_{N}) \ar[d]_{p_N^{[a,b]_k}} \\ h(X_{N'})\ar[r]^{\pi^*} & h(X_{N}) \ar[r]^{\pi_*} & h(X_{N'}) \ar[r]^{\pi^*} & h(X_{N}). }\end{equation} Similarly to the above, using Lemma \ref{projector-functoriality} (iv), the commutativity of the first square is reduced to showing \begin{equation*}\label{p-pi-pi} p_N^{[a,b]_k} \circ \pi^* \circ \pi_* = d^2 p_N^{[a,b]_k}. \end{equation*} We can compute the composition after applying the faithful functor $\vphi_{K_N/k}^*$. Then we are reduced to the calculation of (i). The second square is again the ``transpose'' of the first one. The rest of the proof is parallel to (i). \end{proof} Together with Propositions \ref{decomposition-X_K} and \ref{decomposition-X}, we obtain: \begin{corollary}\label{cor-level} We have isomorphisms{\rm :} \begin{enumerate} \item $h^1(X_{N,K_N}) \simeq \bigoplus_{N'|N} \bigoplus_{(a',b') \in I_{N'}^\prim} \varphi_{K_N/K_{N'}}^*X_{N'}^{a',b'}$ \ in $\sM_{K_N,E_N}$, \item $h^1(X_N) \simeq \bigoplus_{N'|N} \bigoplus_{[a',b']_k \in H_{N',k} \backslash I_{N'}^\prim} X_{N'}^{[a',b']_k}$ \ in $\sM_{k,E_{N,k}}$. \end{enumerate} \end{corollary} Next, the relation between $X_N^{a,b}$ and $X_{N}^{[a,b]_k}$ is as follows. \begin{proposition}\label{prop-field} \ \begin{enumerate} \item For $(a,b) \in G_N$, we have an isomorphism in $\sM_{K_N,E_N}${\rm :} $$\varphi_{K_N/k}^* X_N^{[a,b]_k} \simeq \bigoplus_{(c,d) \in[a,b]_k} X_N^{c,d}.$$ \item For $(a,b) \in G_N^\prim$, we have an isomorphism in $\sM_{k,E_N}${\rm :} $$\varphi_{K_N/k*}X_N^{a,b} \simeq X_N^{[a,b]_k}.$$ \end{enumerate} \end{proposition} \begin{proof} (i) is clear from the definitions. We prove (ii). Recall that $\vphi_{K_N/k *}X_N^{a,b} = ({X_{N,K_N}},{p_N^{a,b}})$, viewed as a scheme and a correspondence over $k$. Consider the following diagram in $\sM_{k,E_N}$ with $\pi=\pi_N$ viewed as a $k$-morphism: \begin{equation*}\label{four-3} \xymatrix{ h(X_N) \ar[d]_{p_N^{[a,b]_k}}\ar[r]^{p_N^{a,b}\circ\pi^*} & h(X_{N,K_N}) \ar[r]^{\pi_*\circ p_N^{a,b}} \ar[d]_{p_N^{a,b}} & h(X_N) \ar[r]^{p_N^{a,b}\circ\pi^*} \ar[d]_{p_N^{[a,b]_k}} & h(X_{N,K_N}) \ar[d]_{p_N^{a,b}} \\ h(X_N)\ar[r]^{p_N^{a,b}\circ \pi^*} & h(X_{N,K_N}) \ar[r]^{\pi_*\circ p_N^{a,b}} & h(X_N) \ar[r]^{p_N^{a,b}\circ\pi^*} & h(X_{N,K_N}) . }\end{equation*} It suffices to show the commutativity of the first square after composing with $\pi_*$ on the right. Then, using \eqref{corr-pull} and \eqref{corr-push}, we have \begin{align*} p_N^{a,b} \circ \pi^*\circ p_N^{[a,b]_k} \circ \pi_* = p_N^{a,b} \circ (\pi\times\pi)^* p_N^{[a,b]_k} = p_N^{a,b} \circ \sum_{h \in H_{N,k}} h \circ p_N^{a,b} \circ \sum_{h \in H_{N,k}} h \\ = \sum_{h \in H_{N,k}} (h \circ p_N^{ha,hb}) \circ p_N^{a,b} \circ \sum_{h \in H_{N,k}} h = p_N^{a,b} \circ \sum_{h \in H_{N,k}} h =p_N^{a,b} \circ p_N^{a,b} \circ \pi^*\circ \pi_* . \end{align*} The second square is again the ``transpose'' of the first one. Now, since $(a,b)$ is primitive, we have by definition \begin{equation}\label{pi-p-pi} \pi_* \circ p_N^{a,b} \circ\pi^* = (\pi \times \pi)_* p_N^{a,b} = p_N^{[a,b]_k}, \end{equation} hence $\pi_* \circ p_N^{a,b}\circ\pi^* \circ p_N^{[a,b]_k} = p_N^{[a,b]_k}$.
On the other hand, we have $$p_N^{a,b}\circ\pi^* \circ \pi_* \circ p_N^{a,b} = p_N^{a,b} \circ \sum_{h \in H_{N,k}} h \circ p_N^{a,b} = \sum_{h \in H_{N,k}} h \circ p_N^{ha,hb} \circ p_N^{a,b} = p_N^{a,b}.$$ Therefore, $$p_N^{a,b}\circ \pi^* \colon X_N^{[a,b]_k} \lra \vphi_{K_N/k *}X_N^{a,b}, \quad \pi_* \colon \vphi_{K_N/k *}X_N^{a,b} \lra X_N^{[a,b]_k}$$ are isomorphisms inverse to each other. \end{proof} Combining Corollary \ref{cor-level} (ii) and Proposition \ref{prop-field} (ii), we obtain: \begin{corollary} We have an isomorphism in $\sM_{k,E_N}${\rm :} $$h^1(X_N) \simeq \bigoplus_{N'|N} \bigoplus_{[a',b']_k \in H_{N',k}\backslash I_{N'}^\prim } \vphi_{K_{N'}/k *} X_{N'}^{a',b'},$$ where $(a',b')$ is any representative of $[a',b']_k$. \end{corollary} \section{$L$-functions of Fermat motives} \subsection{$\ell$-adic realization of motives} For a scheme $X$ and a prime number $\ell$ invertible on $X$, let $\mu_{\ell^n}$ be the \'etale sheaf of $\ell^n$-th roots of unity. For an integer $m$, we write $\Z/\ell^n\Z(m) =\mu_{\ell^n}^{\otimes m}$. The {\em $\ell$-adic \'etale cohomology group} is defined by $$ H_\mathrm{\acute{e}t}^i(X,\Ql(m)) = \Ql \otimes_{\Z_\ell} \varprojlim_{n} H_\mathrm{\acute{e}t}^i(X,\Z/\ell^n\Z(m)). $$ Let $k$ be a field with $\mathrm{char}(k) \neq \ell$, and $\ol k$ be an algebraic closure of $k$. If $X \in \sV_k$, then $$H_\ell^i(X)(m) := H^i_\mathrm{\acute{e}t}(X_{\ol k},\Ql(m))$$ is a finitely generated $\Ql$-module on which the absolute Galois group $\Gal(\ol k/k)$ acts continuously. If $X$ is a projective smooth curve over $k$, then $H^1_\ell(X)(1)$ is isomorphic to the $\ell$-adic Tate module of the Jacobian variety of $X$. Let $X$, $Y \in \sV_k$ with $d = \dim X$, $d'=\dim Y$. For a correspondence $f \in \Corr^r(X,Y) = \Q \otimes_\Z \CH^{d +r}(X \times Y)$, let $$[f] \in H_\ell^{2(d+r)}(X \times Y)(d+r)$$ denote the cycle class of $f$. Consider the composition: \begin{multline*} H_\ell^i(X)(m) \os{\mathrm{pr}_X^*}{\lra} H_\ell^i(X \times Y)(m) \\ \os{\cup [f]}{\lra} H_\ell^{i+2(d +r)}(X \times Y)(m+d+r) \os{\mathrm{pr}_{Y*} }{\lra} H_\ell^{i+2r}(Y)(m+r), \end{multline*} which we also denote by $f$. Here, $\cup$ is the cup product and the homomorphism $\mathrm{pr}_{Y*}$ is the dual of \begin{equation*} H_\ell^{2d'-i-2r}(Y)(d'-m-r) \os{\mathrm{pr}_Y^*}{\lra} H_\ell^{2d'-i-2r}(X \times Y)(d'-m-r) \end{equation*} via Poincar\'e duality. For a motive $M=(X,p,m) \in \sM_{k,E}$, its $\ell$-adic cohomology is defined by \begin{equation*} H_\ell^i(M) = p (E \otimes_\Q H_\ell^i(X)(m)). \end{equation*} Then, $H_\ell = \oplus_i H_\ell^i$ extends to the covariant {\em $\ell$-adic realization functor} \begin{equation*} H_\ell \colon \sM_{k,E} \lra \operatorname{\mathsf{Mod}}_{E_\ell[\Gal(\ol k/k)]} \end{equation*} to the category of modules over $$E_\ell :=E\otimes_\Q \Ql$$ with Galois action. If $X \in \sV_k$ is a curve, then we have $H_\ell(h^i(X)) = H^i_\ell(X)$. Note that $$H_\ell(M(r))= H_\ell(M)(r) := H_\ell(M) \otimes_\Ql \Ql(1)^{\otimes r}$$ where $\Ql(1)$ is a one-dimensional $\Ql$-vector space on which $\Gal(\ol k/k)$ acts via the $\ell$-adic cyclotomic character. For any (resp.
finite separable) extension $k'/k$ in $\ol k$, we have commutative diagrams of functors \begin{equation}\label{diagram-realization} \begin{split} \xymatrix{ \sM_{k,E} \ar[r]^(.4){H_\ell} \ar[d]_{\vphi_{k'/k}^*} & \operatorname{\mathsf{Mod}}_{{E_\ell}[\Gal(\ol k/k)]} \ar[d]^{\Res_{k'/k}} \\ \sM_{k',E} \ar[r]^(.4){H_\ell} & \operatorname{\mathsf{Mod}}_{{E_\ell}[\Gal(\ol k/k')]} } \quad \xymatrix{ \sM_{k',E} \ar[r]^(.4){H_\ell} \ar[d]_{\vphi_{k'/k,*}} & \operatorname{\mathsf{Mod}}_{{E_\ell}[\Gal(\ol k/k')]} \ar[d]^{\Ind_{k'/k}} \\ \sM_{k, E} \ar[r]^(.4){H_\ell} & \operatorname{\mathsf{Mod}}_{{E_\ell}[\Gal(\ol k/k)]} } \end{split}\end{equation} where $\Res_{k'/k}$ is the restriction of the Galois action, and $$\Ind_{k'/k}(V) = {E_\ell}[\Gal(\ol k/k)] \otimes_{{E_\ell}[\Gal(\ol k/k')]} V$$ is the induced Galois module. \subsection{$L$-functions of motives} Let $k$ be a number field and $\mathscr{O}_k$ be its integer ring. For a finite place $v$ of $k$, let $\mathscr{O}_v$, $k_v$ be the completions of $\mathscr{O}_k$, $k$, respectively, and $\F_v$ be the residue field; put $N(v)=\sharp \F_v$. Let $I_v \subset D_v \subset \Gal(\ol k/k)$ be the inertia and the decomposition subgroups at $v$, respectively, and $\mathrm{Fr}_v \in \Gal(\ol{\F_v}/\F_v) \simeq D_v/I_v$ be the geometric Frobenius of $\F_v$, i.e. the inverse of the $N(v)$-th power Frobenius map. Let $E$ be a number field. For a prime number $\ell$, we have a natural decomposition \begin{equation*} E_\ell = \prod_{\lambda | \ell} E_\lambda, \end{equation*} where $\lambda$ runs through the places of $E$ over $\ell$, and $E_\lambda$ is the completion. For an $E_\ell$-module $V$, let \begin{equation*} V = \bigoplus_{\lambda |\ell} V_\lambda, \quad V_\lambda := E_\lambda \otimes_{E_\ell} V \end{equation*} be the corresponding decomposition into $E_\lambda$-modules. Let $M=(X,p,m) \in \sM_{k,E}$ be a motive. For each finite place $v$ of $k$, choose $\ell \neq \mathrm{char}(\F_v)$ and a place $\lambda | \ell$ of $E$. Then, the {\em zeta polynomial} of $M$ at $v$ is defined by \begin{equation*} P_v(M,T) = \det\left(1-\mathrm{Fr}_vT; H_\lambda(M)^{I_v}\right) \ \in E_\lambda[T] \end{equation*} where we write $H_\lambda(M)= H_\ell(M)_\lambda$. \begin{conjecture}[$\ell$-independence]\label{l-independence} Let $M \in \sM_{k,E}$. For any finite place $v$ of $k$, we have $P_v(M,T) \in \mathscr{O}_E[T]$, and it is independent of the choice of $\ell$ and $\lambda$. \end{conjecture} Assume the conjecture for $M$. For each embedding $\s \colon E \hookrightarrow \C$, the {\em $L$-factor} at $v$ is defined by \begin{equation*} L_v(\s,M,s) = \s P_v(M,N(v)^{-s})^{-1}, \end{equation*} and the {\em $L$-function} is defined by \begin{equation*} L(\s,M,s) = \prod_{v} L_v(\s,M,s). \end{equation*} We denote by $L(M,s)$ the system $(L(\s,M,s))_\s$, which may be viewed as an $E_\C$-valued function, where \begin{equation*} E_\C := E \otimes_\Q \C = \prod_{\s\colon E \hookrightarrow \C} \C.
\end{equation*} Note the relation \begin{equation*} L(M(r),s)=L(M,s+r). \end{equation*} We also define the $L$-function $L(h^i(M),s)$ of the (conjectural) $i$-th motive to be the $L$-function associated to its ``realization" $H^i_\ell(M)$. We shall use the following proposition, which follows from \eqref{diagram-realization}. \begin{proposition}\label{induce-L} Let $k'/k$ be a finite extension of number fields and suppose that a motive $M \in \sM_{k',E}$ satisfies Conjecture \ref{l-independence}. Then, $\vphi_{k'/k *}M$ satisfies Conjecture \ref{l-independence} and we have $$L(\vphi_{k'/k *}M, s) = L(M,s).$$ \end{proposition} If $X \in \sV_k$, it has good reduction at almost all $v$, i.e. there exists a proper smooth model over $\mathscr{O}_v$ with generic fiber $X \times_k k_v$; denote the special fiber by $X_{\F_v}$. We say that $M=(X,p,m)$ has good reduction at $v$ if $X$ and all the components of a cycle representing $p$ have good reductions. For such $M$ and $v$, $H_\ell(M)$ is unramified at $v$, i.e. the action of $I_v$ is trivial, and there exists an isomorphism $$H_\ell(M) \simeq H_\ell(M_{\F_v})$$ of $\Gal(\ol{\F_v}/\F_v)$-modules (see \cite{sga4demi}). Therefore, \begin{equation}\label{zeta-zeta} P_v(M,T) = P(M_{\F_v},T):= \det(1-\mathrm{Fr}_{\F_v}T; H_\lambda(M_{\F_v})) \end{equation} and Conjecture \ref{l-independence} holds \cite{deligne-weil1}. Suppose that $H_\ell(M)=H^i_\ell(M)$. Then, by the Weil conjecture proved by Deligne \cite{deligne-weil1}, for good $v$, $P(M_{\F_v},T)$ is of pure weight $w=i-2m$, i.e. every complex conjugate of a reciprocal root has complex absolute value $N(v)^{w/2}$. Therefore, apart from the bad factors, $L(M,s)$ converges absolutely for $\Re(s)> w/2+1$ and has neither zeros nor poles in this region. Further, the weight-monodromy conjecture \cite{deligne-weil2} implies that the bad factors also have neither zeros nor poles in the same region (cf. \cite{schneider}). \subsection{Jacobi sums} Let $K$ be a finite field of characteristic $p$ with $q$ elements which contains the $N$-th roots of unity, i.e. $N \mid q-1$. Fix an isomorphism \begin{equation}\label{mumu} \mu_N(K) \os{\sim}{\lra} \mu_N(E_N). \end{equation} Composing the $\frac{q-1}{N}$-th power map $K^* \ra \mu_N(K)$ with \eqref{mumu}, we obtain a character \begin{equation*} \chi_N \colon K^* \lra \mu_N(E_N) \end{equation*} of exact order $N$. We extend $\chi_N^a$ to all of $K$ by setting $\chi_N^a(0)=0$. For $(a,b) \in G_N$, the {\em Jacobi sum} is defined by \begin{equation*} j(\chi_N^a,\chi_N^b) = - \sum_{x,y \in K, x+y=1} \chi_N^a(x) \chi_N^b(y) \ \in E_N. \end{equation*} Fix an algebraic closure $\ol K$ of $K$ and let $K_n/K$ be the subextension of degree $n$. The character $\chi_{N,n} \colon K_n^* \ra E_N^*$ is defined as above via $\mu_N(K_n) = \mu_N(K) \simeq \mu_N(E_N)$, or equivalently, $\chi_{N,n} =\chi_N \circ N_{K_n/K}$. Then, for any $(a,b) \in I_N$, we have the Davenport-Hasse relation \begin{equation}\label{davenport-hasse} j(\chi_{N,n}^a,\chi_{N,n}^b) = j(\chi_N^a,\chi_N^b)^n. \end{equation} \subsection{Zeta polynomials of Fermat motives} Let $k$ be a finite field of characteristic $p \nmid N$, $\ol k$ be an algebraic closure of $k$ and put $K=K_N=k(\mu_N)$. By choosing $\z_N \in K_N$ and $\xi_N \in E_N$, the motives $X_N^{a,b} \in \sM_{K_N,E_N}$ and $X_N^{[a,b]_k} \in \sM_{k,E_{N,k}}$ are defined. Note that $H_{N,k} \subset (\Z/N\Z)^*$ is the cyclic subgroup generated by $\sharp k$.
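\begin{remark} To illustrate the definitions of \S 3.3 in the smallest nontrivial case (a direct check, included only as an illustration and not used later), take $N=3$ and $K=\F_7$, and fix \eqref{mumu} by sending $2 \in \mu_3(\F_7)=\{1,2,4\}$ to $\xi_3$. Since $\frac{q-1}{N}=2$, the character $\chi_3$ sends $x$ to the image of $x^2$, so that $$\chi_3(1)=\chi_3(6)=1, \quad \chi_3(3)=\chi_3(4)=\xi_3, \quad \chi_3(2)=\chi_3(5)=\xi_3^2.$$ Summing over the five pairs $(x,y)=(x,1-x)$ with $x \neq 0,1$, we obtain $$j(\chi_3,\chi_3) = -\bigl(\chi_3(2)\chi_3(6)+\chi_3(3)\chi_3(5)+\chi_3(4)^2+\chi_3(5)\chi_3(3)+\chi_3(6)\chi_3(2)\bigr) = -2-3\xi_3^2,$$ which has complex absolute value $\sqrt 7 = q^{1/2}$ under either embedding of $E_3$, in accordance with purity of weight $1$. \end{remark}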
Fix the isomorphism \eqref{mumu} by assigning $\z_N$ to $\xi_N$. Then the character $\chi_N$ and the Jacobi sums are defined. We calculate the zeta polynomials of $X_N^{a,b}$ and $X_N^{[a,b]_k}$. We only consider the case $(a,b) \in I_N$; the other cases are obvious by Propositions \ref{decomposition-X_K} and \ref{decomposition-X}. The key is Grothendieck's fixed point formula (cf. \cite{sga4demi}): for an endomorphism $F$ of $X$ over $k$, we have \begin{equation}\label{fixed-point-formula} \sharp X(\ol k)^F = \sum_i (-1)^i \Tr(F; H^i_\ell(X)). \end{equation} \begin{theorem}\label{fermat-polynomial}\ \begin{enumerate} \item If $(a,b) \in I_N$, then $P(X_N^{a,b}, T) = 1- j(\chi_N^a,\chi_N^b) T$. \item If $(a,b) \in I^\prim_N$, then $P(X_N^{[a,b]_k}, T) = 1- j(\chi_N^a,\chi_N^b) T^{\sharp[a,b]_k}$. \end{enumerate} In particular, these do not depend on the choice of $\lambda$ and belong to $\mathscr{O}_{E_{N,k}}[T]$. \end{theorem} \begin{proof} (i) First, by taking the logarithm, we have \allowdisplaybreaks \begin{align*} &\log \det\bigl(1-\mathrm{Fr}_{K} T; H_\lambda^1(X_N^{a,b})\bigr) \\ &=\log \prod_{i=0,1,2} \det\bigl(1-\mathrm{Fr}_{K} T; H_\lambda^i(X_N^{a,b})\bigr)^{(-1)^{i+1}} \\ &= \log \prod_{i=0,1,2} \det\bigl(1-\mathrm{Fr}_{K} p_N^{a,b} T; H_\lambda^i(X_{N,K})\bigr)^{(-1)^{i+1}}\\ &= \sum_{i=0,1,2} (-1)^{i} \sum_{n \geq 1} \frac{1}{n} \Tr\bigl((\mathrm{Fr}_K p_N^{a,b})^n; H_\lambda^i(X_{N,K})\bigr) T^n\\ &= \sum_{i=0,1,2} (-1)^{i} \sum_{n \geq 1} \frac{1}{n} \Tr\bigl(\mathrm{Fr}_K^n p_N^{a,b}; H_\lambda^i(X_{N,K})\bigr) T^n\\ &= \sum_{n \geq 1} \frac{1}{n} \left(\frac{1}{N^2} \sum_{g \in G_N} \theta_N^{a,b}(g^{-1}) \sum_{i=0,1,2} (-1)^{i} \Tr\bigl(\mathrm{Fr}_K^n g; H_\lambda^i(X_{N,K})\bigr) \right) T^n . \end{align*} Then by \eqref{fixed-point-formula}, the alternating sum of the traces equals $$\vL(\mathrm{Fr}_K^n g) := \sharp\bigl\{P \in X_{N}(\ol k) \bigm| \mathrm{Fr}_K^ng(P)=P\bigr\}.$$ This decomposes as $\vL(\mathrm{Fr}_K^n g)=\vL_0(\mathrm{Fr}_K^n g)+\vL_1(\mathrm{Fr}_K^n g)$ with \begin{align*} &\vL_0(\mathrm{Fr}_K^n g) := \sharp\bigl\{P \in U_{N}(\ol k) \bigm| \mathrm{Fr}_K^ng(P)=P\bigr\}, \\ &\vL_1(\mathrm{Fr}_K^n g) := \sharp\bigl\{P \in Z_{N}(\ol k) \bigm| \mathrm{Fr}_K^ng(P)=P\bigr\}, \end{align*} where $U_N$, $Z_N \subset X_N$ are the subschemes defined in \S 2.4. Let $q = \sharp K$. If $g=g_N^{-r,-s}$, then we have \begin{align*} & \vL_0(\mathrm{Fr}_K^n g) \\ &= \sharp \bigl\{(x,y) \in \ol k^2 \bigm| x^N+y^N=1, x^{q^n} = \z_N^r x, y^{q^n}=\z_N^s y \bigr\} \\ &= \sum_{u+v=1} \sharp\bigl\{x \in \ol k \bigm| x^N=u, x^{q^n}=\z_N^r x\bigr\} \sharp\bigl\{y \in \ol k \bigm| y^N=v, y^{q^n}=\z_N^s y\bigr\}. \end{align*} Here, the sum is taken over $u$, $v \in K_n$, the extension of $K$ of degree $n$, since $u^{q^n}=x^{Nq^n}=(\z_N^rx)^N=x^N=u$. If $u \neq 0$, then $x^N=u$ and $x^{q^n}=\z_N^r x$ imply that $u^{\frac{q^n-1}{N}} =x^{q^n-1}=\z_N^r$. Conversely, if $u^{\frac{q^n-1}{N}} =\z_N^r$, then any solution of $x^N=u$ satisfies $x^{q^n}=\z_N^r x$. Therefore, we have \begin{equation*} \sharp\bigl\{x \in \ol k \bigm| x^N=u, x^{q^n}=\z_N^r x\bigr\} = \begin{cases} N \cdot \delta \bigl(u^{\frac{q^n-1}{N}}=\z_N^r\bigr) & \text{if $u \neq 0$} \\ 1 & \text{if $u=0$} \end{cases} \end{equation*} where $\delta (\mathrm{P})$ is $1$ (resp.
$0$) if the statement $\mathrm{P}$ is true (resp. false). It follows that \begin{multline*} \vL_0(\mathrm{Fr}_K^n g) = N^2 \sum_{u,v \in K_n^*, u+v=1} \delta \bigl(u^{\frac{q^n-1}{N}}=\z_N^r\bigr) \delta \bigl(v^{\frac{q^n-1}{N}}=\z_N^s\bigr) \\ + N \delta(r=0) + N \delta(s=0). \end{multline*} On the other hand, \begin{align*} & \vL_1(\mathrm{Fr}_K^n g)\\ & =\sharp \bigl\{(x_0 : y_0) \in \P^1(\ol k) \bigm| x_0^N+y_0^N=0, (x_0^{q^n}:y_0^{q^n}) = (\z_N^rx_0 : \z_N^s y_0) \bigr\}\\ &= \sharp \bigl\{w \in \ol k^* \bigm| w^N=-1, w^{q^n}=\z_N^{r-s}w \bigr\} \\ &= N \delta\bigl((-1)^{\frac{q^n-1}{N}}=\z_N^{r-s}\bigr). \end{align*} Now, by the definition of $\chi_{N,n}$, we have $\chi_{N,n}(u)=\xi_N^r$ for $r$ such that $u^{\frac{q^n-1}{N}}=\z_N^r$. Therefore, we have \begin{align*} &\sum_{r,s} \theta_N^{a,b}(g_N^{r,s}) \sum_{u,v \in K_n^*, u+v=1} \delta \bigl(u^{\frac{q^n-1}{N}}=\z_N^r\bigr) \delta \bigl(v^{\frac{q^n-1}{N}}=\z_N^s\bigr) \\ &=\sum_{u,v \in K_n^*, u+v=1} \chi^a_{N,n}(u)\chi^b_{N,n}(v) = -j(\chi_{N,n}^a,\chi_{N,n}^b)=-j(\chi_N^a,\chi_N^b)^n \end{align*} by \eqref{davenport-hasse}. On the other hand, since $(a,b) \in I_N$, \begin{multline*} \sum_{r,s} \theta_N^{a,b}(g_N^{r,s}) \left( \delta(r=0)+\delta(s=0)+ \delta\bigl((-1)^{\frac{q^n-1}{N}}=\z_N^{r-s}\bigr)\right) \\ = \sum_s \xi_N^{bs} + \sum_r \xi_N^{ar} + (-1)^{\frac{q^n-1}{N}a} \sum_s \xi_N^{(a+b)s} =0. \end{multline*} Thus we obtain $$\frac{1}{N^2} \sum_{g \in G_N} \theta_N^{a,b}(g^{-1})\vL(\mathrm{Fr}_K^n g) =-j(\chi_N^a,\chi_N^b)^n $$ and hence $$P(X_N^{a,b},T)= \exp \sum_{n \geq 1} \frac{-j(\chi_N^a,\chi_N^b)^n}{n} T^n = 1-j(\chi_N^a,\chi_N^b) T. $$ The statement (ii) follows from (i), Proposition \ref{prop-field} (ii) and the general fact $$P(\vphi_{k'/k*}M',T)=P(M',T^{[k':k]})$$ for finite fields $k'/k$ and $M' \in \sM_{k',E}$. The final assertion follows since $j(\chi_N^{ap},\chi_N^{bp}) = j(\chi_N^a,\chi_N^b)$. \end{proof} \begin{remark} For $(a,b) \in I_N$ not necessarily primitive, let $N=N'd$, $a=a'd$, $b=b'd$, with $(a',b') \in I^\prim_{N'}$. Then we have $$ P(X_N^{[a,b]_k},T)=P(X_{N'}^{[a',b']_k},T) =1-j(\chi_{N'}^{a'},\chi_{N'}^{b'})T^{\sharp[a,b]_k} $$ (recall that $\sharp[a,b]_k=\sharp[a',b']_k$). \end{remark} \begin{corollary} If $(a,b) \in I_N$, then $H_\ell(X_N^{a,b})$ is a free $E_{N,\ell}$-module of rank $1$ and $H_\ell(X_N^{[a,b]_k})$ is a free $(E_{N,k})_\ell$-module of rank $\sharp[a,b]_k$. \end{corollary} \begin{remark} This also follows from the corresponding result for the singular cohomology (cf. \cite{otsubo}) and the comparison theorem between \'etale and singular cohomology (Artin's theorem). \end{remark} \subsection{$L$-functions of Fermat motives} Now, let $k$ be a number field and $K_N =k(\mu_N) \subset \ol k$ as before. By choosing $\z_N \in \mu_N(K_N)$ and $\xi_N \in \mu_N(E_N)$, the motives $X_N^{a,b} \in \sM_{K_N,E_N}$ and $X_N^{[a,b]_k} \in \sM_{k,E_{N,k}}$ are defined. By Proposition \ref{decomposition-X_K}, the cases $(a,b) \not\in I_N$ are easy: $$L(X_N^{0,0},s) = \z_{K_N}(s)\z_{K_N}(s-1), \quad L(X_N^{[0,0]_k},s) = \z_{k}(s)\z_{k}(s-1), $$ and $L(X_N^{a,b},s) =L(X_N^{[a,b]_k},s) =1$ if exactly one of $a$, $b$, $a+b$ is $0$. For $(a,b) \in I_N$, we are reduced to the primitive case over $K_N$: if $(a,b)=(a'd,b'd)$ with $N=N'd$, $(a',b') \in I_{N'}^\prim$, then we have \begin{equation*} L(X_N^{a,b},s) = L(X_{N',K_N}^{a',b'},s), \quad L(X_N^{[a,b]_k},s) = L(X_{N'}^{[a',b']_k},s) = L(X_{N'}^{a',b'},s) \end{equation*} by Propositions \ref{prop-level}, \ref{prop-field} and \ref{induce-L}.
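\begin{remark} The zeta polynomials above are easy to test numerically. The following Python script (a sketch written only for this exposition; the function names are ours, and we restrict to a prime $q$ with $N \mid q-1$, so that $k=K_N=\F_q$) counts the points of the projective Fermat curve directly and compares the result with the prediction $$\sharp X_N(\F_q) = q+1-\sum_{(a,b) \in I_N} j(\chi_N^a,\chi_N^b),$$ obtained by combining Corollary \ref{cor-level} (i), Theorem \ref{fermat-polynomial} (i) and the fixed point formula \eqref{fixed-point-formula}. \begin{verbatim}
# Sanity check (not used in the proofs): for a prime q with N | q-1,
# compare a direct point count of the projective Fermat curve X_N
# over F_q with q + 1 - (sum of the Jacobi sums over I_N).
import cmath

def check(N, q):
    assert (q - 1) % N == 0          # q is assumed prime
    def order(x):                    # multiplicative order of x in F_q^*
        e, y = 1, x
        while y != 1:
            y, e = y * x % q, e + 1
        return e
    g = next(x for x in range(2, q) if order(x) == q - 1)
    dlog, x = {1: 0}, 1              # discrete logarithm table, base g
    for e in range(1, q - 1):
        x = x * g % q
        dlog[x] = e
    zeta = cmath.exp(2j * cmath.pi / N)
    def chi(a, x):                   # chi_N^a, extended by chi(0) = 0
        return 0 if x % q == 0 else zeta ** ((a * dlog[x % q]) % N)
    def jacobi(a, b):                # j(chi_N^a, chi_N^b), sign as in 3.3
        return -sum(chi(a, x) * chi(b, 1 - x) for x in range(q))
    I_N = [(a, b) for a in range(1, N) for b in range(1, N)
           if (a + b) % N != 0]
    rhs = q + 1 - sum(jacobi(a, b) for a, b in I_N)
    lhs = sum((pow(x, N, q) + pow(y, N, q)) % q == 1
              for x in range(q) for y in range(q))             # z = 1
    lhs += sum((pow(x, N, q) + 1) % q == 0 for x in range(q))  # z = 0
    assert abs(lhs - rhs) < 1e-6, (lhs, rhs)
    return lhs

# e.g. check(3, 7) == 9; also check(4, 13), check(5, 11), check(6, 7)
\end{verbatim} For instance, \texttt{check(3, 7)} returns $9$, consistent with the Jacobi sum $-2-3\xi_3^2$ computed in the remark of \S 3.4. \end{remark}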
If $v \nmid N$ is a place of $K_N$, we have a canonical isomorphism $$\mu_N(K_N) \os{\simeq}{\lra} \mu_N(\F_v).$$ By abuse of notation, we also denote the image of $\z_N$ by the same letter. Then, using $\z_N$ and $\xi_N$, the motive $X_{N,\F_v}^{a,b} \in \sM_{\F_v,E_N}$ is defined. The character $$\chi_{N,v}\colon \F_v^* \lra E_N^*$$ and the Jacobi sum \begin{equation*} j_{N}^{a,b}(v) := j(\chi_{N,v}^a,\chi_{N,v}^b) \end{equation*} are defined as in \S 3.3. At $v \nmid N$, the motive $X_N^{a,b}$ has good reduction: $(X_N^{a,b})_{\F_v}=X_{N,\F_v}^{a,b}$. By \eqref{zeta-zeta} and Theorem \ref{fermat-polynomial}, we have: \begin{proposition} Let $(a,b) \in I_N$ and $v$ be a finite place of $K_N$ not dividing $N$. Then we have $$P_v(X_N^{a,b},T)= 1-j_N^{a,b}(v) T.$$ \end{proposition} Now we determine the bad $L$-factors. \begin{proposition}\label{bad-realization} Let $(a,b) \in I_N^\prim$ and $v$ be a place of $K_N$ dividing $N$. Then we have $H_\ell(X_{N}^{a,b})^{I_v}=0$ for any $\ell\neq \mathrm{char}(\F_v)$. In particular, $P_v(X_N^{a,b},T)=1$. \end{proposition} \begin{proof} Let $X_{N,\F_v}$ denote the special fiber at $v$ of the model defined by the same equation as $X_N$. By an argument similar to that in \cite{deligne-weil2} (3.6), we have a canonical surjection \begin{equation*} \xymatrix{ H_\mathrm{\acute{e}t}^1(X_{N,\ol \F_v}, \Ql) \ar@{->>}[r] & H_\mathrm{\acute{e}t}^1(X_{N,\ol k}, \Ql)^{I_v} } \end{equation*} compatible with the action of $\mathrm{Fr}_v$. Let $p=\mathrm{char}(\F_v)$, $N=p^eN'$ with $e>0, p \nmid N'$ and consider the commutative diagram \begin{equation*} \xymatrix{ H_\mathrm{\acute{e}t}^1(X_{N',\ol \F_v}, \Ql) \ar[r]^\simeq \ar[d]_{\pi_{N/N',\ol \F_v}^*} & H_\mathrm{\acute{e}t}^1(X_{N',\ol k}, \Ql) \ar@{^{(}->}[d]_{\pi_{N/N',\ol k}^*} \phantom{^{I_v}. }\\ H_\mathrm{\acute{e}t}^1(X_{N,\ol \F_v}, \Ql) \ar@{->>}[r] & H_\mathrm{\acute{e}t}^1(X_{N,\ol k}, \Ql)^{I_v}. } \end{equation*} Since $X_{N',K_N}$ has good reduction at $v$, the upper horizontal map is an isomorphism. The right vertical map is injective by the norm argument: $\pi_{N/N' *} \circ \pi_{N/N'}^* = p^{2e}$. Consider a morphism $$f \colon X_{N',\F_v} \lra X_{N,\F_v}; \quad (x_0:y_0:z_0) \longmapsto (x_0:y_0:z_0),$$ which identifies $X_{N',\F_v}$ with the reduced scheme associated to $X_{N,\F_v}$. Since $f$ is defined by a nilpotent ideal, it induces an isomorphism on cohomology. The composite $f\circ \pi_{N/N',\ol \F_v}$ coincides with the base change of $(F_{(p)})^e$ where $F_{(p)}$ is the Frobenius endomorphism of $X_{N,\F_p}$ (\cite{sga4demi}, Rapport, \S 1). The action of $F_{(p)}$ on cohomology coincides with that of $\mathrm{Fr}_{\F_p}$, hence is an isomorphism. Therefore, $\pi_{N/N',\ol \F_v}^*$ is surjective and all the maps in the diagram are isomorphisms. By Proposition \ref{prop-level} (i), we obtain $$H_\ell^1(X_{N})^{I_v} = \bigoplus_{(a',b')\in I_{N'}} H_\ell(X_N^{a'p^e,b'p^e}).$$ Since $(a,b)$ is primitive and $e>0$, no summand on the right-hand side is indexed by $(a,b)$, hence $H_\ell(X_N^{a,b})^{I_v}=0$, which finishes the proof. \end{proof} We have proved: \begin{theorem}\label{fermat-L} \ \begin{enumerate} \item For any $(a,b) \in G_N$, Conjecture \ref{l-independence} is true for $X_N^{a,b}$ and $X_N^{[a,b]_k}$. \item For $(a,b) \in I_{N}^\prim$ and an embedding $\s\colon E_N \hookrightarrow \C$, we have \begin{align*} L(\s,X_N^{a,b},s)= L(\s_{|E_{N,k}},X_N^{[a,b]_k},s)= \prod_{v \nmid N} \Bigl(1-\s\bigl( j_{N}^{a,b}(v)\bigr) N(v)^{-s}\Bigr)^{-1}.
\end{align*} \end{enumerate} \end{theorem} \begin{remark}\label{sigma-independence} It follows that $L(X_N^{a,b},s)$ depends only on the class $[a,b]_k$ and is an $(E_{N,k})_\C$-valued function. In particular, if $H_{N,k} = H_N$ (e.g. $k=\Q$), then it is $\C$-valued, i.e. $L(\s,X_N^{a,b},s)$ does not depend on $\s$. \end{remark} \subsection{Functional equation} Let $k$ be a number field and for an infinite place $v$ of $k$, let $k_v$ be its completion. Then, for $X \in \sV_k$, we have $$X_\R := X \times_\Q \R = \bigsqcup_{v| \infty} X_v, \quad X_v :=X \times_k k_v$$ regarded as $\R$-schemes. More generally, for a motive $M \in \sM_{k,E}$, we have $$M_\R:=\vphi_{\R/\Q}^*\vphi_{k/\Q *}M = \bigoplus_{v | \infty} M_v, \quad M_v:=\vphi_{k_v/\R *}\vphi_{k_v/k}^*M$$ in $\sM_{\R,E}$. For a motive $M=(X,p,m) \in \sM_{\R,E}$ over $\R$, the $i$-th singular cohomology is defined by $$H^i(M(\C),\Q) := p\left(E \otimes_\Q H^i(X(\C),\Q(m))\right)$$ (we use such a notation although $M(\C)$ itself is not defined), where $$\Q(m):=(2\pi i)^m \Q.$$ It is a Hodge structure of pure weight $w=i-2m$, that is, there is a bigrading $$H^i(M(\C),\Q) \otimes_\Q \C \simeq \bigoplus_{p+q=w}H^{p,q}(M)$$ as an $E_\C$-module such that the complex conjugation $c_\infty$ (on the coefficients) exchanges $H^{p,q}$ and $H^{q,p}$. Since $X$ is defined over $\R$, the complex conjugation $F_\infty$, called the {\em infinite Frobenius}, acts on $X(\C)$ and hence on the cohomology, and also exchanges $H^{p,q}$ and $H^{q,p}$. The Hodge numbers are defined by $$h^{p,q}(M)=\rank_{E_\C} H^{p,q}(M), \quad h_\pm^{p,p}(M) = \rank_{E_\C} H_\pm^{p,p}(M),$$ where $H_\pm^{p,p}(M) := H^{p,p}(M)^{F_\infty = \pm (-1)^p}$. Using the standard notations \begin{equation*} \vG_\R(s) :=\pi^{-s/2}\vG(s/2),\quad \vG_\C(s) := \vG_\R(s)\vG_\R(s+1)=2(2\pi)^{-s}\vG(s), \end{equation*} we put \begin{equation*} \vG(M,s) = \prod_{p<q} \vG_\C(s-p)^{h^{p,q}(M)} \prod_{p} \vG_\R(s-p)^{h_+^{p,p}(M)} \vG_\R(s+1-p)^{h_-^{p,p}(M)} . \end{equation*} Now, for a motive $M \in \sM_{k,E}$ over a number field, assume Conjecture \ref{l-independence} and define the {\em completed $L$-function} by $$ \vL(M,s)=L(M,s)\vG(M_\R,s). $$ By the Poincar\'e duality $H_\ell^i(M)^\vee \simeq H_\ell^{2d-i}(M^\vee)$, Conjecture \ref{l-independence} also holds for $M^\vee$. \begin{conjecture}[Hasse-Weil]\label{hasse-weil} $L(M,s)$ is continued to a meromorphic function on the whole complex plane and satisfies a functional equation $$\vL(M,s) = \varepsilon(M,s) \vL(M^\vee,1-s)$$ where $\varepsilon(M,s)$ is the product of a constant and an exponential function (see \cite{deligne-valeurs}, \cite{serre-facteurs}). \end{conjecture} \begin{remark} If $M=h^i(X)$, then by the hard Lefschetz theorem $H_\ell^{2d-i}(X)(d) \simeq H_\ell^i(X)(i)$, the functional equation is also written as $$\vL(h^i(X),s) = \varepsilon(h^i(X),s) \vL(h^i(X), i+1-s).$$ \end{remark} Now, consider the Fermat motive $M=X_N^{a,b} \in \sM_{K_N,E_N}$, $(a,b) \in I_N$. Recall that $M^\vee = X_N^{-a,-b}(1)$ (Remark \ref{fermat-dual}). Weil \cite{weil-jacobi} proved that $j_N^{a,b}$ is a Hecke character of conductor dividing $N^2$, hence by Theorem \ref{fermat-L}, $L(X_N^{a,b},s)$ satisfies Conjecture \ref{hasse-weil}. As we shall see later (Remark \ref{remark-h-number}), for each infinite place $v$ of $K_N$, we have \begin{equation}\label{h-number} h^{0,1}(M_v)=h^{1,0}(M_v)=1 \end{equation} and the others are $0$.
Therefore, we have \begin{equation*} \vL(X_N^{a,b},s)= L(X_N^{a,b},s)\vG_\C(s)^{r_2} \end{equation*} where $r_2 = [K_N:\Q]/2$ is the number of complex places of $K_N$, so we obtain: \begin{corollary}\label{functional-equation} Let $(a,b) \in I_N$. \begin{enumerate} \item $L(X_N^{a,b},s)$ is analytically continued to an entire function on the whole complex plane. \item $\vL(X_N^{a,b},s) = \alpha\b^{s} \cdot \vL(X_N^{-a,-b}, 2-s)$ with some $\alpha, \b \in (E_N)_\C^*$. \item $L(X_N^{a,b},s)$ has a zero of order $r_2$ at each non-positive integer. \end{enumerate} \end{corollary} \begin{remark}\label{epsilon} If $N=p$ is a prime number and $K_p=\Q(\mu_p)$, then the $\varepsilon$-factor is classically known by Hasse \cite{hasse} (cf. \cite{gross-rohrlich}): $$\varepsilon(X_p^{a,b},s)=\pm (p^{p-2+f})^{s-1}$$ with $f=1$ or $2$, easily calculated from $(a,b)$. The sign (root number) is determined by Gross-Rohrlich \cite{gross-rohrlich}. \end{remark} \subsection{Artin $L$-functions} In \cite{weil-jacobi}, Weil interpreted the Jacobi-sum Hecke $L$-function as an Artin $L$-function. Our motivic $L$-function may be regarded as a rephrasing of it. Though not necessary in the sequel, we explain the relation between them. Although the representation $D_{a,b}$ of Weil (loc. cit.) is not written explicitly, our $\rho_N^{[a,b]_k}$ below should correspond to it. Let $\mathscr{X}$ be a scheme of finite type over $\Spec \Z$ and $|\mathscr{X}|$ be the set of its closed points. For $x \in |\mathscr{X}|$, let $\kappa(x)$ be its (finite) residue field and put $N(x) = \sharp \kappa(x)$. The {\em Hasse zeta function} of $\mathscr{X}$ is defined by \begin{equation*} \z(\mathscr{X},s) = \prod_{x \in |\mathscr{X}|} (1- N(x)^{-s})^{-1}. \end{equation*} If $\mathscr{X}$ is of Krull dimension $d$, $\z(\mathscr{X},s)$ converges absolutely for $\Re(s)>d$ and defines a holomorphic function in that region. Let $\mathscr{X} \ra \sY$ be a finite flat covering of schemes of finite type over $\Z$ which is generically \'etale and Galois with Galois group $G$. Let $\rho$ be a complex representation of $G$ and $\chi$ be its character. The {\em Artin $L$-function} $L(\mathscr{X}/\sY,\rho,s)$ is defined by \begin{equation*} \log L(\mathscr{X}/\sY,\rho,s) = \sum_{y \in |\sY|} \sum_{n=1}^\infty \frac{\chi(y^n) N(y)^{-ns}}{n} \end{equation*} (see \cite{serre-zeta}). If $\rho$ is the unit representation (resp. the regular representation), then it reduces to $\z(\sY,s)$ (resp. $\z(\mathscr{X},s)$). Now, let $\mathscr{X}_{N,k}$ be the Fermat scheme of degree $N$ over $\mathscr{O}_k$ defined by the same equation \eqref{equation-fermat}, and consider the diagram similar to \eqref{diagram-1}.
By the basic functorialities \cite{serre-zeta}, we have \begin{align*} \z(\mathscr{X}_{N,k},s) & = L(\mathscr{X}_{N,K_N}/\mathscr{X}_{N,k}, 1_{H_{N,k}},s)\\ & =L(\mathscr{X}_{N,K_N}/\mathscr{X}_{1,k}, \Ind_{H_{N,k}}^{\vG_{N,k}} 1_{H_{N,k}}, s). \end{align*} It is not difficult to determine the irreducible decomposition of $\Ind_{H_{N,k}}^{\vG_{N,k}} 1_{H_{N,k}}$. For $(a,b) \in G_N$, there is a unique $(a',b')\in G_{N'}^\prim$ with $N=N'd$ such that $(a,b)=(a'd,b'd)$. Put \begin{align*} \rho_N^{[a,b]_k} = \Res_{\vG_{N',k}}^{\vG_{N,k}} \Ind_{G_{N'}}^{\vG_{N',k}} \s\theta_{N'}^{a',b'}, \end{align*} where $$\s\theta_N^{a,b} \colon G_N \os{\theta_N^{a,b}}{\lra} E_N^* \os{\s}{\hookrightarrow} \C^*$$ is the composition. Then one shows that $\rho_N^{[a,b]_k}$ is irreducible and \begin{equation*} \Ind_{H_{N,k}}^{\vG_{N,k}} 1_{H_{N,k}} = \bigoplus_{[a,b]_k \in H_{N,k}\backslash G_N} \rho_N^{[a,b]_k}. \end{equation*} Therefore, we obtain \begin{equation*} \z(\mathscr{X}_{N,k},s)=\prod_{[a,b]_k \in H_{N,k}\backslash G_N} L(\mathscr{X}_{N,K_N}/\mathscr{X}_{1,k}, \rho_N^{[a,b]_k},s). \end{equation*} Further, we have \begin{align*} L(\mathscr{X}_{N,K_N}/\mathscr{X}_{1,k},\rho_N^{[a,b]_k},s) &= L(\mathscr{X}_{N',K_{N'}}/\mathscr{X}_{1,k}, \Ind_{G_{N'}}^{\vG_{N',k}} \s\theta_{N'}^{a',b'},s)\\ &=L(\mathscr{X}_{N',K_{N'}}/\mathscr{X}_{1,K_{N'}}, \s\theta_{N'}^{a',b'},s). \end{align*} \begin{proposition} For $(a,b) \in I_{N}^\prim$, we have \begin{equation*} L(\mathscr{X}_{N,K_{N}}/\mathscr{X}_{1,K_{N}}, \s\theta_{N}^{a,b},s) = L(\s,X_{N}^{a,b},s)^{-1}. \end{equation*} \end{proposition} \begin{proof} We prove it fiberwise; let $v$ be a finite place of $K_N$. Then, we have $$\log L(\mathscr{X}_{N,\F_v}/\mathscr{X}_{1,\F_v},\s\theta_N^{a,b},s) = \sum_{n=1}^\infty \frac{\nu_n T^n}{n}, \quad T=N(v)^{-s}$$ where $$\nu_n = \frac{1}{N^2} \sum_{g \in G_N} \s\theta_N^{a,b}(g^{-1}) \vL(\mathrm{Fr}_{\F_v}^ng)$$ (see \cite{serre-zeta}). If $v \nmid N$, it equals $\log P_v(X_N^{a,b},T)$ by the proof of Theorem \ref{fermat-polynomial}. If $v |N$, let $p=\mathrm{char}(\F_v)$ and $N'=N/p$. Then, the action of $g \in G_N$ on $\mathscr{X}_{N,\F_v}$ depends only on the image of $g$ in $G_{N'}$. Since $(a,b)$ is primitive, we have $\sum_{g \in G_{N/N'}} \theta_N^{a,b}(g^{-1})=0$, hence $\nu_n=0$, and the proof is completed by Proposition \ref{bad-realization}. \end{proof} If $N'=1$, i.e. $(a,b)=(0,0)$, then we are reduced to $$\z(\mathscr{X}_{1,k},s) = \z(\P^1_{\mathscr{O}_k},s) = \z_{k}(s)\z_{k}(s-1).$$ If exactly one of $a$, $b$, $a+b$ is $0$, one proves easily that $L(\mathscr{X}_{N,K_{N}}/\mathscr{X}_{1,K_{N}}, \s\theta_{N}^{a,b},s) =1$.
Summarizing, we obtain: \begin{proposition} $$\z(\mathscr{X}_{N,k},s)= \z_{k}(s)\z_{k}(s-1) \prod_{N'|N} \prod_{[a',b']_k \in H_{N',k}\backslash I_{N'}^\prim} L(\s,X_{N'}^{a',b'},s)^{-1}.$$ \end{proposition} \section{Regulators of Fermat motives} \subsection{Motivic cohomology} We briefly recall the definition of motivic cohomology of motives and its integral part. For more details, see \cite{nekovar}, \cite{schneider}, \cite{scholl-2}. For a noetherian scheme $X$, let $K_i(X)$ (resp. $K'_i(X)$) be the algebraic $K$-group of vector bundles (resp. coherent sheaves) \cite{quillen}. If $X$ is regular, the natural map $K_i(X) \ra K_i'(X)$ is an isomorphism. For a quasi-projective variety $X$ over a field, we define its {\em motivic cohomology group} by $$H^n_\sM(X,\Q(r)) = K_{2r-n}(X)_\Q^{(r)}, $$ the Adams eigenspace of weight $r$ \cite{soule}. Recall the Grothendieck Riemann-Roch theorem $\Q \otimes_\Z \CH^r(X) = K_0(X)_\Q^{(r)}$. For $X$, $Y \in \sV_k$ and $f \in \Corr^d(X,Y) = K_0(X\times Y)_\Q^{(\dim X + d)}$ (for $X$ irreducible), the composition \begin{multline*} K_i(X)_\Q \os{\pr_X^*}{\lra} K_i(X\times Y)_\Q \os{\cup f}{\lra} K_i(X \times Y)_\Q =K'_i(X \times Y)_\Q \\ \os{\pr_{Y *}}{\lra} K'_i(Y)_\Q=K_i(Y)_\Q \end{multline*} induces a homomorphism $H_\sM^n(X,\Q(r)) \ra H_\sM^{n+2d}(Y,\Q(r+d))$ by the Riemann-Roch theorem \cite{soule}, \cite{tamme}. For a motive $M=(X,p,m) \in \sM_{k,E}$, its motivic cohomology group is defined to be the $E$-module \begin{equation*} H^n_\sM(M,\Q(r)) = p(E \otimes_\Q H_\sM^n(X,\Q(r+m))). \end{equation*} For a fixed $r$, \begin{equation}\label{functor-motivic} H_\sM \colon \sM_{k,E} \lra \operatorname{\mathsf{Mod}}_E; \quad M \longmapsto \bigoplus_n H_\sM^n(M,\Q(r)) \end{equation} is a well-defined covariant additive functor. Let $k$ be a number field. There is a functorial way \cite{scholl-2} of defining a subspace $$H_\sM^n(M,\Q(r))_\Z \subset H_\sM^n(M,\Q(r))$$ called the {\em integral part}. Conjecturally, it is a finite-dimensional $E$-vector space. If $M=h(X)$ and if there exists a proper flat model $\mathscr{X}$ of $X$ over $\mathscr{O}_k$ which is {\em regular}, it coincides with the original definition of Beilinson: $$ H_\sM^n(X,\Q(r))_\Z = \Im\bigl(K_{2r-n}(\mathscr{X})_\Q \ra K_{2r-n}(X)_\Q^{(r)}\bigr), $$ which is independent of the choice of $\mathscr{X}$. The existence of such a model is known for curves. For a general motive $M=(X,p,m)$, we have by definition: \begin{equation}\label{integral-part} H^n_\sM(M,\Q(r))_\Z=\Im\bigl( E \otimes_\Q H^n_\sM(X,\Q(r+m))_\Z \ra H^n_\sM(M,\Q(r))\bigr). \end{equation} \subsection{Deligne cohomology} We briefly recall definitions and necessary facts on Deligne cohomology. For more details, see \cite{esnault-viehweg}, \cite{nekovar}, \cite{schneider}. For $X \in \sV_\C$, let $\Omega_X^\bullet$ be the complex of the sheaves of holomorphic differential forms on $X(\C)$.
For a subring $A \subset \R$ and an integer $r$, put $$A(r)= (2\pi i)^rA \subset \C$$ and define a complex $A(r)_\sD$ of sheaves on $X(\C)$ to be $$A(r) \lra \mathscr{O}_{X(\C)} \lra \Omega_X^1 \lra \cdots \lra \Omega_X^{r-1}$$ with $A(r)$ located in degree $0$. Then the {\em Deligne cohomology group} is defined by the hypercohomology group \begin{equation*} H_\sD^n(X,A(r)) = \mathbf{H}^n(X(\C),A(r)_\sD). \end{equation*} By the distinguished triangle $$\tau_{<r}\Omega_X^\bullet [-1] \lra A(r)_\sD \lra A(r) \os{+1}{\lra}$$ we obtain a long exact sequence \begin{multline*} \cdots \ra H^{n-1}(X(\C),A(r)) \lra H^{n-1}_\dR(X(\C))/F^r \\ \lra H_\sD^n(X,A(r)) \lra H^n(X(\C),A(r)) \lra \cdots \end{multline*} where $F^\bullet$ denotes the Hodge filtration. If $n<2r$, then the kernel of $$H^n(X(\C),A(r)) \lra H^n_\dR(X(\C))/F^r$$ is torsion (cf. \cite{schneider}). Hence, for $A=\R$, we obtain an exact sequence \begin{equation}\label{deligne-cohomology-exact} 0 \lra H^{n-1}(X(\C),\R(r)) \lra H^{n-1}_\dR(X(\C))/F^r \lra H_\sD^n(X,\R(r)) \lra 0. \end{equation} The de Rham isomorphism together with the projection $\C \ra \R(r-1)$ induces an exact sequence \begin{equation}\label{deligne-cohomology-exact-2} 0 \lra F^rH_\dR^{n-1}(X(\C)) \lra H^{n-1}(X(\C),\R(r-1)) \lra H_\sD^n(X,\R(r)) \lra 0. \end{equation} Now, let $X \in \sV_\R$. Then the {\em (real) Deligne cohomology group} of $X$ is defined by \begin{equation*} H^n_\sD(X, \R(r)) = H^n_\sD(X_\C,\R(r))^+, \end{equation*} where ${}^+$ denotes the subspace fixed by $F_\infty \otimes c_\infty$ (see \S 3.6). Under the GAGA isomorphism $H^n_\dR(X(\C)) \simeq H^n_\dR(X_\C/\C)$, $H^n_\dR(X(\C))^+$ corresponds to $H^n_\dR(X/\R)$, the algebraic de Rham cohomology of $X/\R$, on which the Hodge filtration is already defined. Therefore, if $n<2r$, then \eqref{deligne-cohomology-exact} and \eqref{deligne-cohomology-exact-2} induce the following exact sequences: \begin{equation}\label{r-deligne-1} 0 \lra H^{n-1}(X(\C),\R(r))^+ \lra H^{n-1}_\dR(X/\R)/F^r \lra H^{n}_\sD(X,\R(r)) \lra 0, \end{equation} \begin{equation}\label{r-deligne-2} 0 \lra F^rH^{n-1}_\dR(X/\R) \lra H^{n-1}(X(\C),\R(r-1))^+ \lra H^{n}_\sD(X,\R(r)) \lra 0. \end{equation} Since the Deligne cohomology and homology form a twisted Poincar\'e duality theory \cite{gillet} (cf. \cite{jannsen}), the above definitions extend to motives: for $M=(X,p,m) \in \sM_{\R,E}$, we define \begin{equation*} H_\sD^n(M,\R(r)) = p (E \otimes_\Q H_\sD^n(X,\R(r+m))) \end{equation*} and obtain a covariant functor \begin{equation}\label{functor-deligne} H_\sD \colon \sM_{\R,E} \lra \operatorname{\mathsf{Mod}}_{E_\R}; \quad M \longmapsto \bigoplus_n H^n_\sD(M,\R(r)). \end{equation} \subsection{Regulator} For $X \in \sV_\C$, the theory of Chern characters gives the canonical regulator map from motivic cohomology to Deligne cohomology \begin{equation*} r_\sD \colon H^n_\sM(X,\Q(r)) \lra H^n_\sD(X,\R(r)) \end{equation*} functorial in $X$ (see \cite{nekovar}, \cite{schneider}). If $X\in \sV_\R$, the image of the composite $$H_\sM^n(X,\Q(r)) \lra H_\sM^n(X_\C,\Q(r)) \lra H_\sD^n(X_\C,\R(r))$$ is contained in $H_\sD^n(X,\R(r))$.
All these constructions extend to motives: for $M \in \sM_{\R,E}$ we have an $E$-linear map \begin{equation*} r_\sD \colon H^n_\sM(M,\Q(r)) \lra H^n_\sD(M,\R(r)) \end{equation*} functorial in $M$, i.e. compatible with \eqref{functor-motivic} and \eqref{functor-deligne}. Consider the case $n=r=2$. For a field $k$ and $X \in \sV_k$, let $X^{(1)}$ be the set of points on $X$ of codimension one. Then we have (cf. \cite{nekovar}) \begin{equation*} H_\sM^2(X,\Q(2)) = \Ker\Bigl(K_2^{\mathrm{M}}(k(X))\otimes \Q \os{T \otimes \Q}{\lra} \bigoplus_{x \in X^{(1)}} k(x)^* \otimes \Q\Bigr). \end{equation*} Here, the Milnor $K$-group $K_2^\mathrm{M}(k(X))$ is the abelian group generated by symbols $\{f,g\} \in k(X)^* \otimes_\Z k(X)^*$, divided by the Steinberg relations $\{f,1-f\}=0\ (f \neq 0,1)$. The tame symbol $T=(T_x)$ is defined by \begin{equation*} T_x(\{f,g\}) = (-1)^{\ord_x(f)\ord_x(g)} \left(\frac{f^{\ord_x(g)}}{g^{\ord_x(f)}}\right)(x). \end{equation*} On the other hand, for $X \in \sV_\C$, we have by \eqref{deligne-cohomology-exact-2} $$H^2_\sD(X,\R(2)) \os{\sim}{\lra} H^1(X(\C),\R(1)) = \Hom(H_1(X(\C),\Z),\R(1)).$$ \begin{proposition}[\cite{beilinson-curve}, cf. \cite{ramakrishnan}]\label{formula-regulator} Let $X$ be a smooth projective curve over $\C$ and $e = \sum_i \{f_i,g_i\} \in H_\sM^2(X,\Q(2))$. Then, under the above identifications, we have \begin{equation*} r_\sD(e)(\g)= i \Im \sum_i \left( \int_\g \log f_i \, d\log g_i - \log |g_i(P)| \int_\g d\log f_i \right) \end{equation*} for a cycle $\g \in H_1(X(\C),\Z)$ with base point $P \in X(\C)$. \end{proposition} \subsection{The Beilinson conjecture} In the remainder of this paper, $k$ will always be a number field. Let $M=(X,p,m) \in \sM_{k,E}$ and assume Conjectures \ref{l-independence} and \ref{hasse-weil}. Recall that $L(h^i(M),s)$ is an $E_\C$-valued function. On the real axis, it takes values in \begin{equation}\label{E_R} E_\R = \prod_{w |\infty}\nolimits E_w = \Bigl[\prod_{\s\colon E \hookrightarrow \C}\nolimits \C \Bigr]^+, \end{equation} where ${}^+$ denotes the part fixed by the complex conjugation acting both on the set $\{\s\}$ and on each $\C$. For an integer $n$, define the {\em special value} $L^*(h^i(M),n) \in E_\R^*=\prod_w E_w^*$ by: \begin{equation*} L^*(\s,h^i(M),n) = \lim_{s \ra n} \frac{L(\s, h^i(M),s)}{(s-n)^{\ord_{s=n} L(\s,h^i(M),s)}}. \end{equation*} Note that the order of the zero does not depend on $\s$. Moreover, Conjecture \ref{hasse-weil} and \eqref{r-deligne-1} imply that \begin{equation*} \rank_{E_\R} H^{i+1}_\sD(M_\R,\R(r))= \ord_{s=1-r} L(h^i(M)^\vee,s) \end{equation*} if $w=i-2(m+r) \leq -3\ $ (cf. \cite{schneider}). By composing the natural map $H^n_\sM(M,\Q(r))_\Z \ra H^n_\sM(M_\R,\Q(r))$ with the regulator map for $M_\R$, we obtain the regulator map for $M$ \begin{equation*} r_\sD \colon H^n_\sM(M,\Q(r))_\Z \lra H^n_\sD(M_\R,\R(r)). \end{equation*} Let \begin{equation*} r_{\sD,v} \colon H^n_\sM(M,\Q(r))_\Z \lra H^n_\sD(M_v,\R(r)) \end{equation*} be its $v$-component. For an $E_\R$-module $H$, a {\em $\Q$-structure} is an $E$-submodule $H_0 \subset H$ such that $H_0 \otimes_\Q \R =H$. For a ring $R$ and a free $R$-module $H$ of rank $n$, its {\em determinant module} is defined by \begin{equation*} \det H = \wedge^n H, \end{equation*} the highest exterior power. Let $M \in \sM_{k,E}$ and consider the exact sequence \eqref{r-deligne-2} for $M_\R$.
The singular cohomology with $\R(r)$-coefficients has the natural $\Q$-structure. On the other hand, the de Rham cohomology has the $\Q$-structure $H^n_\dR(M/\Q)$, on which the Hodge filtration is already defined. Let \begin{equation*} \mathscr{B}(h^i(M)(r)) \subset \det H^{i+1}_\sD(M_\R,\R(r)) \end{equation*} be the $\Q$-structure induced by \eqref{r-deligne-2}. \begin{conjecture}[Beilinson \cite{beilinson}]\label{conj-beilinson} Suppose that $w =i-2(m+r)\leq -3$. \begin{enumerate} \item The regulator map tensored with $\R$ $$r_\sD \otimes_\Q \R \colon H^{i+1}_\sM(M,\Q(r))_\Z \otimes_\Q \R \lra H_\sD^{i+1}(M_\R,\R(r))$$ is an isomorphism. \item In $\det H_\sD^{i+1}(M_\R,\R(r))$, we have $$ r_\sD\bigl(\det H_\sM^{i+1}(M,\Q(r))_\Z\bigr)=L^*(h^i(M)^\vee, 1-r)\mathscr{B}(h^i(M)(r)).$$ \end{enumerate} \end{conjecture} \begin{remark} The finite generation of the integral part of the motivic cohomology and the injectivity of the regulator map are in general very difficult problems. A weaker version of the conjecture is to find an $E$-linear subspace of $H^{i+1}_\sM(M,\Q(r))_\Z$ for which the same statements hold. The conjecture is in fact formulated for $w <0$ (see \cite{beilinson}, \cite{d-s}, \cite{nekovar}, \cite{schneider}). \end{remark} In particular, if $X$ is a projective smooth curve over $k$, part (i) of the conjecture implies $$\dim_\Q H_\sM^2(X,\Q(2))_\Z =[k:\Q] \cdot \mathrm{genus}(X).$$ For our Fermat motives, by the description of the Deligne cohomology which will be given in \S 4.6, we should have $$\dim_{E_N} H_\sM^2(X_N^{a,b},\Q(2))_\Z =\dim_{E_{N,k}} H_\sM^2(X_N^{[a,b]_k},\Q(2))_\Z = [K_N:\Q]/2 $$ for $(a,b) \in I_N^{\prim}$. \subsection{Elements in motivic cohomology} Starting with Ross' element, we define elements in the motivic cohomology of Fermat motives and study their relations. Let $X_N$ be the Fermat curve over a number field $k$. As explained in \cite{ross}, p.228, we have $$H_\sM^2(X_N,\Q(2))_\Z = H_\sM^2(X_N,\Q(2)),$$ hence by \eqref{integral-part}, we have $$ H_\sM^2(X_N^{a,b},\Q(2))_\Z =H_\sM^2(X_N^{a,b},\Q(2)), \ H_\sM^2(X_N^{[a,b]_k},\Q(2))_\Z = H_\sM^2(X_N^{[a,b]_k},\Q(2)). $$ If we put \begin{equation*} e_N = \{1-x,1-y\} \in K^M_2(k(X_N)), \end{equation*} then the tame symbol $T(e_N)$ is torsion (\cite{ross}, Theorem 1), so $e_N$ defines an element of $H^2_\sM(X_N,\Q(2))_\Z$. \begin{remark} The divisors of $1-x$, $1-y$ and their $G_N$-translations are supported on torsion points of $X_N$, embedded in its Jacobian variety by choosing as a base point any point with $x_0y_0z_0=0$. \end{remark} \begin{definition} Define{\rm :} \begin{alignat*}{2} &e_N^{a,b} = p_N^{a,b} \pi_{K_N/k}^* e_N & & \in H_\sM^2(X_N^{a,b},\Q(2))_\Z, \\ &e_N^{[a,b]_k} =p_N^{[a,b]_k} e_N &&\in H_\sM^2(X_N^{[a,b]_k},\Q(2))_\Z. \end{alignat*} \end{definition} \begin{proposition}\label{e_N} If $N=N'd$, $(a,b) \in G_N$ and $(a',b')\in G_{N'}$, then we have{\rm :} \begin{enumerate} \item $\pi_{N/N',k *}e_N = e_{N'}$. \item $\pi_{N/N',K_N/K_{N'}}^*e_{N'}^{a',b'} = d^2 e_N^{a'd,b'd}$. \item $\pi_{N/N',K_N *} e_N^{a,b} = \begin{cases} \pi_{K_N/K_{N'}}^* e_{N'}^{a',b'} & \text{if $(a,b)=(a'd,b'd)$ for some $(a',b')\in G_{N'}$}, \\ 0 & \text{otherwise}. \end{cases}$ \item $\pi_{N/N',k}^* e_{N'}^{[a',b']_k} = d^2 e_N^{[a'd,b'd]_k}$. \item $\pi_{N/N',k *} e_N^{[a,b]_k} = \begin{cases} e_{N'}^{[a',b']_k} & \text{if $(a,b)=(a'd,b'd)$ for some $(a',b') \in G_{N'}$}, \\ 0 & \text{otherwise}.
\end{cases}$ \item $\pi_{N,K_N/k}^* e_N^{[a,b]_k} = \sum_{(c,d)\in[a,b]_k} e_N^{c,d}$. \item $\pi_{N,K_N/k *} e_N^{a,b} = \frac{[K_N:k]}{\sharp[a,b]_k}e_N^{[a,b]_k} \ (=e_N^{[a,b]_k} \ \text{if} \ (a,b)\in I_N^\prim)$. \end{enumerate} \end{proposition} \begin{proof} (i) \ Let $(x,y)$ (resp. $(x',y')$) be the affine coordinates of $X_N$ (resp. $X_{N'}$), so that $x'=x^d$, $y'=y^d$. Consider the intermediate curve $$X_{N,N'} \colon x^N+y'^{N'}=1,$$ with natural morphisms $X_N \os{\pi_1}{\lra} X_{N,N'} \os{\pi_2}{\lra} X_{N'}$. By the projection formula for the cup product in $K$-theory, we have: \begin{align*} &\pi_{N/N' *}\{1-x,1-y\} = \pi_{2 *} \pi_{1 *}\{\pi_1^*(1-x),1-y\}\\ & = \pi_{2 *} \{1-x,\pi_{1 *}(1-y)\} = \pi_{2 *}\{1-x,1-y'\} \\ & = \pi_{2 *} \{1-x, \pi_2^*(1-y')\} = \{\pi_{2 *}(1-x), 1-y'\} \\& = \{1-x',1-y'\}. \end{align*} (ii) \ Since $$\pi_{N/N',K_N/k}^*e_{N'} = \left(\sum_{g \in G_{N/N'}}\nolimits g\right) \pi_{N,K_N/k}^*e_N$$ and $p_N^{a'd,b'd}g=p_N^{a'd,b'd}$ for $g \in G_{N/N'}$, this follows from the commutativity of \eqref{four-1}. (iii) \ The first case follows from (i) and the commutativity of \eqref{four-1}. For the second case, $\pi_{N/N',K_N}^*$ is injective and we have \begin{align*} \pi_{N/N',K_N}^* \pi_{N/N',K_N *} p_N^{a,b} = \sum_{g \in G_{N/N'}} g p_N^{a,b} = \sum_{g \in G_{N/N'}} \theta_N^{a,b}(g) p_N^{a,b}=0. \end{align*} (iv) and (v) follow, similarly to (ii) and (iii), from the commutativity of \eqref{four-2}. (vi) is clear by definition. (vii) \ Using (ii), we are reduced to the primitive case, which follows from \eqref{pi-p-pi}. \end{proof} \subsection{Deligne cohomology of Fermat motives} We calculate the Deligne cohomology of $X_N^{a,b} \in \sM_{K_N,E_N}$. Note that both $K_N$ and $E_N$ are totally imaginary. Let $M \in \sM_{k,E}$ be a motive, and for a complex place $v$ of $k$, let $\tau, \ol\tau \colon k \hookrightarrow \C$ be the conjugate embeddings inducing $v$, and put $M_\tau = \vphi_{\C/k,\tau}^*M$. Since $F_\infty$ exchanges the components of $$H_\sD^n(M_{v,\C},\R(r)) = H_\sD^n(M_\tau,\R(r)) \times H_\sD^n(M_{\ol\tau},\R(r)), $$ we have canonically \begin{equation*} H_\sD^n(M_v,\R(r)) = H_\sD^n(M_\tau,\R(r)). \end{equation*} In particular, for each infinite place $v$ of $K_N$ and a choice of $\tau$, we have an identification of $E_{N,\R}$-modules \begin{equation}\label{identification} H^2_\sD(X_{N,v}^{a,b},\R(2)) = H^2_\sD(X_{N,\tau}^{a,b},\R(2)) = H^1(X_{N,\tau}^{a,b},\R(1)). \end{equation} The $\Q$-structure $\mathscr{B}$ splits as $$\mathscr{B}(h^1(X_N^{a,b})) = \bigoplus_{v | \infty} \mathscr{B}(h^1(X_{N,v}^{a,b}))$$ where $\mathscr{B}(h^1(X_{N,v}^{a,b}))$ corresponds via \eqref{identification} to $H^1(X_{N,\tau}^{a,b},\Q(1))$.
Similarly to the $\ell$-adic case, for an $E_\R$-module $V$, let $$V = \bigoplus_{w | \infty} V_w, \quad V_w=E_w \otimes_{E_\R} V$$ be the decomposition corresponding to \eqref{E_R}. If $w$ is a complex place and $\s,\ol\s \colon E \hookrightarrow \C$ are the embeddings inducing $w$, then we have $$V_w = \bigl[V_\s \oplus V_{\ol\s}\bigr]^+,$$ where we put $V_\s= \C \otimes_{E_\R, \s} V$, and $+$ denotes the part fixed by the complex conjugation acting both on the set $\{\s,\ol\s\}$ and on $\C$. Therefore we have a canonical isomorphism $V_w = V_\s$. For $v \in V$, let $v_\s \in V_\s$ denote its $\s$-component. Applying these to our situation, for each infinite place $v$ of $K_N$ and an embedding $\s\colon E_N \hookrightarrow \C$, we obtain an identification \begin{equation}\label{identification-sigma} H_\sD^2(X_{N,v}^{a,b},\R(2))_\s = H^1(X_{N,\tau}^{a,b}(\C),\R(1))_\s = \s(p_N^{a,b})H^1(X_{N,\tau}(\C),\C), \end{equation} the subspace on which $G_N$ acts by the $\C^*$-valued character $\s\theta_N^{a,b}$. Now, for the moment, let $X_N$ be the Fermat curve over $\C$. By choosing a primitive root of unity $\z_N \in \C$, $G_N$ acts on $X_N$. Let us recall the structure of the homology and cohomology groups of $X_N(\C)$. See \cite{otsubo}, \cite{rohrlich} for the details. \begin{definition} Define a path by $$\d_N\colon [0,1] \lra X_N(\C); \quad t \longmapsto (t^{\frac 1 N},(1-t)^{\frac 1 N})$$ where the branches are taken in $\R$. Then, $(1-g_N^{r,0})(1-g_N^{0,s})\d_N$ becomes a cycle and defines an element of $H_1(X_N(\C),\Q)$. Put $$\g_N = \frac{1}{N^2} \sum_{(r,s) \in G_N} (1-g_N^{r,0})(1-g_N^{0,s})\d_N.$$ It does not depend on the choice of $\z_N$. \end{definition} \begin{definition} For integers $a$, $b$, define a differential form on $X_N(\C)$ by \begin{equation*} \om_N^{a,b} = x^ay^{b-N}\frac{dx}{x} = -x^{a-N}y^b \frac{dy}{y}. \end{equation*} For $(a,b) \in G_N$, put $\om_N^{a,b} = \om_N^{\langle a \rangle, \langle b \rangle}$, where $\langle a \rangle \in \{1,2,\dots, N\}$ denotes the integer representing $a$. If $(a,b) \in I_N$, then $\om_N^{a,b}$ is of the second kind (i.e. has no residues), and so defines an element of $H^1(X_N(\C),\C)$, which we denote by the same letter. Moreover, $\om_N^{a,b}$ is of the first kind (i.e. holomorphic) if and only if $\langle a \rangle+\langle b \rangle<N$. \end{definition} \begin{proposition}\label{homology} \ \begin{enumerate} \item $H_1(X_N(\C),\Q)$ is a cyclic $\Q[G_N]$-module generated by $\g_N$. \item The set $\bigl\{\om_N^{a,b} \bigm| (a,b) \in I_N\bigr\}$ is a basis of $H^1(X_N(\C),\C)$. \item For $(a,b) \in I_N$, we have $$\int_{\g_N} \om_N^{a,b} = \frac{1}{N} B\bigl(\tfrac{\langle a \rangle}{N},\tfrac{\langle b \rangle}{N}\bigr),$$ where $B(\alpha,\b)$ is the Beta function. \end{enumerate} \end{proposition} Note that $\om_N^{a,b}$ is an eigenform for the $G_N$-action: $$g_N^{r,s}\om_N^{a,b} = \z_N^{ar+bs} \om_N^{a,b}.$$ We normalize $\om_N^{a,b}$ as \begin{equation}\label{omega-normalized} \wt\om_N^{a,b} := \Bigl(\frac 1 N B\bigl(\tfrac{\langle a \rangle}{N},\tfrac{\langle b \rangle}{N}\bigr)\Bigr)^{-1} \om_N^{a,b}. \end{equation} Then we have for any $g_N^{r,s} \in G_N$ \begin{equation}\label{period} \int_{g_N^{r,s}\g_N}\wt\om_N^{a,b}=\int_{\g_N}g_N^{r,s}\wt\om_N^{a,b}=\z_N^{ar+bs}. \end{equation} Hence we have \begin{equation*} c_\infty \wt\om_N^{a,b} =\wt\om_N^{-a,-b}.
\end{equation*} Let us return to the original situation over $K_N$, and for each embedding $\tau \colon K_N \hookrightarrow \C$, let \begin{equation*} \g_{N,\tau} \in H_1(X_{N,\tau}(\C),\Q), \quad \om_{N,\tau}^{a,b}, \wt\om_{N,\tau}^{a,b} \in H^1(X_{N,\tau}(\C),\C) \end{equation*} be the corresponding classes for $X_{N,\tau}(\C) = \Mor_{K_N,\tau}(\C,X_{N,K_N})$. By Proposition \ref{homology} (i), it follows that $$H_1(X_{N,\tau}^{a,b}(\C),\Q) := p_N^{a,b} (E_N \otimes_\Q H_1(X_{N,\tau}(\C),\Q))$$ is a one-dimensional $E_N$-vector space generated by $p_N^{a,b}\g_{N,\tau}$. \begin{definition} For each infinite place $v$ of $K_N$, choose $\tau$ inducing $v$. For $(a,b) \in I_N$, define $$\lambda_{N,v}^{a,b} \in H_\sD^2(X_{N,v}^{a,b},\R(2))$$ to be the element corresponding to $2\pi i \cdot(p_N^{a,b}\g_{N,\tau})^\vee$ under the identification \eqref{identification}. Only the sign depends on the choice of $\tau$. \end{definition} \begin{proposition}\label{basis-deligne} Let $(a,b) \in I_N$ and the notations be as above. Then, \begin{enumerate} \item $H_\sD^2(X_{N,v}^{a,b},\R(2))=E_{N,\R} \lambda_{N,v}^{a,b}$ and $\mathscr{B}(h^1(X_{N,v}^{a,b}))=E_N \lambda_{N,v}^{a,b}$. \item Under the identification \eqref{identification-sigma}, we have $$(\lambda_{N,v}^{a,b})_\s = 2 \pi i \cdot \wt\om_{N,\tau}^{ha,hb}$$ where $h \in H_N$ is the element such that $\tau(\z_N)^h=\s(\xi_N)$. \end{enumerate} \end{proposition} \begin{remark}\label{remark-h-number} The equality \eqref{h-number} follows since $$H^1(X_{N,v}^{a,b}(\C),\C) = H^1(X_{N,\tau}^{a,b}(\C),\C) \oplus H^1(X_{N,\ol\tau}^{a,b}(\C),\C)$$ and exactly one of $\om_{N,\tau}^{ha,hb}$, $\om_{N,\ol\tau}^{-ha,-hb}$ is holomorphic. \end{remark} \subsection{Main results} We state the main results of this paper.
For $\alpha \in \C - \{0,-1,-2, \dots\}$ and a non-negative integer $n$, let \begin{equation*} (\alpha,n)=\alpha(\alpha+1)(\alpha+2)\cdots (\alpha+n-1) = \frac{\vG(\alpha+n)}{\vG(\alpha)} \end{equation*} be the Pochhammer symbol, where $\vG(\alpha)$ is the Gamma function. \begin{definition}\label{def-F} Define a function of positive real numbers $\alpha$, $\b$, by $$\wt F(\alpha,\b)= \frac{\vG(\alpha)\vG(\b)}{\vG(\alpha+\b+1)} \sum_{m, n \geq 0} \frac{(\alpha,m)(\b,n)}{(\alpha+\b+1,m+n)},$$ which takes values in positive real numbers. Its convergence will be explained later. \end{definition} The main result of this paper is the following. \begin{theorem}\label{main-theorem} Let $(a,b) \in I_N$, and $v$ be an infinite place of $K_N$ induced by $\tau$. Consider the regulator map $$r_{\sD,v} \colon H_\sM^2(X_N^{a,b},\Q(2))_\Z \lra H_\sD^2(X_{N,v}^{a,b},\R(2)).$$ Then we have $$r_{\sD,v}(e_N^{a,b}) = \mathbf{c}_{N,v}^{a,b} \lambda^{a,b}_{N,v}$$ with $\mathbf{c}_{N,v}^{a,b} \in E_{N,\R}^*$. For any embedding $\s \colon E_N \hookrightarrow \C$, we have $$\s(\mathbf{c}_{N,v}^{a,b}) = -\frac{1}{4N^2\pi i} \Bigl(\wt F\bigl(\tfrac{\langle ha \rangle}{N},\tfrac{\langle hb \rangle}{N}\bigr)-\wt F\bigl(\tfrac{\langle -ha \rangle}{N},\tfrac{\langle -hb \rangle}{N}\bigr) \Bigr) \ \in \C^* $$ where $h \in H_N$ is the unique element satisfying $\tau(\z_N)^h=\s(\xi_N)$. In particular, $r_{\sD,v} \otimes_\Q \R$ is surjective. \end{theorem} The proof will be given in the subsequent subsections. First, we give several corollaries. \begin{corollary}\label{corollary-1} If $k$ contains all the $N$-th roots of unity, then the regulator map $$r_{\sD,v} \otimes_\Q \R \colon H_\sM^2(X_N,\Q(2))_\Z \otimes_\Q \R \lra H_\sD^2(X_{N,v},\R(2))$$ is surjective for any infinite place $v$ of $k$. \end{corollary} \begin{proof} After tensoring with $E_N$, both sides decompose into the contributions of the motives $X_N^{a,b}$, on which the regulator is surjective by Theorem \ref{main-theorem}. Hence the original map is surjective. \end{proof} \begin{corollary}\label{corollary-2} Let $(a,b) \in I_N$ and $v$ be an infinite place of $k$. Then the image of $e_N^{[a,b]_k}$ under the regulator map \begin{align*} r_{\sD,v} \colon H_\sM^2(X_N^{[a,b]_k},\Q(2))_\Z & \lra H_\sD^2(X_{N,v}^{[a,b]_k},\R(2)) \end{align*} is non-trivial.
\end{corollary} \begin{proof} By Proposition \ref{prop-level} (ii) and Proposition \ref{e_N} (iv), we can assume that $(a,b)$ is primitive. By Proposition \ref{prop-field} (ii), after taking $E_N \otimes_{E_{N,k}} -$, the regulator in question is identified with the product of the regulators of Theorem \ref{main-theorem} for the places of $K_N$ over $v$, under which $e_N^{[a,b]_k}$ corresponds to $e_N^{a,b}$ by Proposition \ref{e_N} (vii). \end{proof} \begin{corollary}\label{corollary-3} Suppose that $N=3$, $4$ or $6$, and $k\subset\Q(\z_N)$. Then the regulator map \begin{align*} r_\sD \otimes_\Q \R \colon H^2_\sM(X_N,\Q(2))_\Z \otimes_\Q \R \lra H^2_\sD(X_{N,\R},\R(2)) \end{align*} is surjective. \end{corollary} \begin{proof} Since $\Q(\mu_N)$ is imaginary quadratic, the surjectivity for $X_{N,\Q(\mu_N)}$ (resp. $X_{N,\Q}$) follows from Corollary \ref{corollary-1} (resp. Corollary \ref{corollary-2}). \end{proof} \begin{remark} If $N=3$, $4$ or $6$, then the motive $X_N^{[a,b]_\Q}$ is isomorphic to $h^1(E)$, where $E$ is an elliptic curve over $\Q$ with complex multiplication by the integer ring of $\Q(\mu_N)$. Therefore, the surjectivity was already known \cite{beilinson}, \cite{bloch}, \cite{deninger} (see also \cite{d-w}). \end{remark} \subsection{Calculation of the regulators} We calculate the regulator of $e_N^{a,b}$ and prove the formula of Theorem \ref{main-theorem}. First, since $$d\log (1-x) = - \sum_{m \geq 1} x^m \frac{dx}{x}, \quad \log (1-y) = - \sum_{n \geq 1} \frac{y^n}{n},$$ we have: $$d\log (1-x)\log (1-y) = \sum_{m,n \geq 1} \frac 1 n x^my^n \frac{dx}{x} = \sum_{m,n \geq 1} \frac 1 n \om_N^{m,n+N}. $$ \begin{lemma} For $a$, $b \in \Z$, we have modulo exact forms $$(\tfrac a N + \tfrac b N,i+j)\om_N^{a+Ni,b+Nj} \equiv (\tfrac a N,i)(\tfrac b N,j)\om_N^{a,b}.$$ \end{lemma} \begin{proof} First, since $$ d(x^ay^b) = ax^ay^b\frac{dx}{x} + bx^ay^b\frac{dy}{y} = ax^ay^b\frac{dx}{x} - bx^{a+N}y^{b-N}\frac{dx}{x}, $$ we have $a \om_N^{a,b+N} \equiv b \om_N^{a+N,b}$. On the other hand, \begin{align*} \om_N^{a+N,b} = x^a(1-y^N)y^{b-N}\frac{dx}{x} = \om_N^{a,b}-\om_N^{a,b+N}. \end{align*} From these we obtain $$(a+b)\om_N^{a+N,b} \equiv a \om_N^{a,b}, \quad (a+b)\om_N^{a,b+N} \equiv b \om_N^{a,b}.$$ By using these formulae repeatedly, we obtain the result. \end{proof} \begin{remark} This relation reflects, and is in fact equivalent to, the relation of Beta values: $$(\tfrac a N+ \tfrac b N,i+j)B(\tfrac a N+i,\tfrac b N+j) = (\tfrac a N,i)(\tfrac b N,j)B(\tfrac a N, \tfrac b N),$$ which follows from the well-known relations \begin{equation}\label{beta-gamma} B(\alpha,\b)= \frac{\vG(\alpha)\vG(\b)}{\vG(\alpha+\b)}, \quad \vG(\alpha+1)=\alpha\vG(\alpha).
\end{equation} \end{remark} Using the above lemma, we obtain: \begin{equation*}\begin{split} d\log&(1-x)\log(1-y) \\ &\equiv \sum_{m,n \geq 1} \frac{1}{m+n} \, \om_N^{m,n}\\ & = \sum_{1 \leq a,b \leq N} \sum_{i,j \geq 0} \frac{1}{a+Ni+b+Nj} \, \om_N^{a+Ni,b+Nj} \\ & \equiv \frac{1}{N} \sum_{1 \leq a,b \leq N} \sum_{i,j \geq 0} \frac{(\frac{a}{N},i)(\frac{b}{N},j)}{(\frac{a}{N}+\frac{b}{N}, i+j+1)} \, \om_N^{a,b}\\ &= \frac{1}{N^2} \sum_{1 \leq a,b \leq N} \frac{\vG(\frac a N)\vG(\frac b N)}{\vG(\frac{a}{N}+\frac{b}{N}+1)} \sum_{i,j \geq 0} \frac{(\frac{a}{N},i)(\frac{b}{N},j)}{(\frac{a}{N}+\frac{b}{N}+1, i+j)} \, \wt\om_N^{a,b}\\ & = \frac{1}{N^2} \sum_{1 \leq a,b \leq N} \wt F(\tfrac a N, \tfrac b N)\, \wt\om_N^{a,b} . \end{split}\end{equation*} We apply Proposition \ref{formula-regulator} for $f=1-y$, $g=1-x$; note that $e_N= -\{f,g\}$. We can start our cycles $(1-g_N^{r,0})(1-g_N^{0,s})\d_N$ from $P=(0,1)$, so that the second term of the formula vanishes. (More precisely, we modify the cycle slightly so that it is contained in the region $|y|<1$, and does not pass through the singularities of $f$ and $g$.) Now we calculate the first term of the formula. First, since $\om_N^{a,N}$, $\om_N^{N,b}$ are exact forms, they have trivial periods. Secondly, let $a+b=N$. Then $\om_N^{a,b}$, only having logarithmic singularities along $Z_N(\C)$, is a well-defined element of $H^1(U_N(\C),\C)$. Our cycles $g_N^{r,s}\g_N$ are already defined on $U_N(\C)$, and the formula \eqref{period} holds also in this case. Since $\wt F(\tfrac a N, \tfrac b N) = \wt F(\tfrac b N, \tfrac a N)$, and $$\int_{g_N^{r,s}\g_N} \bigl(\wt\om_N^{a,b}+\wt\om_N^{b,a}\bigr) = \z_N^{a(r-s)}+\z_N^{a(s-r)}$$ is a real number, these terms do not contribute to the regulator. Therefore, for a cycle $\g' \in H_1(X_{N,\tau}(\C),\Q)$, we obtain: \begin{equation}\label{regulator-step2} \begin{split} r_{\sD,v}(e_N)(\g') &= -\frac{i}{N^2} \Im \left(\sum_{(a,b)\in I_N} \wt F\bigl(\tfrac{\angle{a}}{N}, \tfrac{\angle{b}}{N}\bigr) \int_{\g'} \wt\om_{N,\tau}^{a,b} \right)\\ &= -\frac{1}{2N^2} \sum_{(a,b)\in I_N} \wt F\bigl(\tfrac{\angle{a}}{N}, \tfrac{\angle{b}}{N}\bigr) \int_{\g'} \bigl(\wt\om_{N,\tau}^{a,b}-\wt\om_{N,\tau}^{-a,-b}\bigr)\\ &= -\frac{1}{2N^2} \sum_{(a,b)\in I_N} \Bigl(\wt F\bigl(\tfrac{\angle{a}}{N}, \tfrac{\angle{b}}{N}\bigr) -\wt F\bigl(\tfrac{\angle{-a}}{N}, \tfrac{\angle{-b}}{N}\bigr)\Bigr) \int_{\g'} \wt\om_{N,\tau}^{a,b}. \end{split}\end{equation} Apply this to $\g' = p_N^{a,b}\g_{N,\tau}$. By the adjointness $$\int_{p_N^{a,b}\g_{N,\tau}}\wt\om_{N,\tau}^{c,d}=\int_{\g_{N,\tau}} p_N^{a,b}\wt\om_{N,\tau}^{c,d},$$ and Proposition \ref{basis-deligne}, we obtain the formula of Theorem \ref{main-theorem}. We are left to show that $\mathbf{c}_{N,v}^{a,b}$ is invertible, which will be done in the next subsection.
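The Pochhammer--Beta relation in the remark above is also easy to test numerically. The following sketch (Python with the \texttt{mpmath} library; the values of $N$, $a$, $b$, $i$, $j$ are arbitrary illustrations, and the function names are ours) checks it at a few sample points:

\begin{verbatim}
# Numerical sanity check of the Pochhammer--Beta relation
#   (a/N + b/N, i+j) B(a/N + i, b/N + j) = (a/N, i)(b/N, j) B(a/N, b/N).
# All parameter values below are arbitrary illustrations.
from mpmath import mp, beta, rf, almosteq   # rf(x, n) = (x, n)

mp.dps = 30                                 # working precision

N, a, b = 7, 2, 3
x, y = mp.mpf(a) / N, mp.mpf(b) / N
for i, j in [(0, 0), (1, 2), (3, 1), (4, 4)]:
    lhs = rf(x + y, i + j) * beta(x + i, y + j)
    rhs = rf(x, i) * rf(y, j) * beta(x, y)
    assert almosteq(lhs, rhs), (i, j)
print("Pochhammer-Beta relation verified at the sampled points")
\end{verbatim}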
\begin{corollary}\label{346} Let $N=3, 4$, or $6$, let $(a,b) \in I_N$, and assume the Beilinson conjecture (Conjecture \ref{conj-beilinson}) for $X_N^{[a,b]_\Q} \in \sM_\Q$. Then it follows that \begin{equation*} L(j_N^{a,b},2)\equiv \pi L^*(j_N^{a,b},0) \equiv \sin \tfrac{2\pi}{N} \Bigl(\wt F\bigl(\tfrac{\angle{a}}{N},\tfrac{\angle{b}}{N}\bigr)-\wt{F}\bigl(\tfrac{\angle{-a}}{N},\tfrac{\angle{-b}}{N}\bigr) \Bigr) \end{equation*} modulo $\Q^*$. \end{corollary} \begin{proof} The first equivalence follows from Remark \ref{sigma-independence}, Corollary \ref{functional-equation} and Remark \ref{epsilon}. We calculate the regulator of $e_N^{[a,b]_\Q}$. The target of the regulator is $$H^1(X_N(\C),\Q(1))^+ = \Hom\bigl(H_1(X_N(\C),\Q)^-,\Q(1)\bigr).$$ Since $F_\infty \d_N=\d_N$, we have $F_\infty g_N^{r,s}\d_N = g_N^{-r,-s}\d_N$, $F_\infty\g_N=\g_N$, and hence $F_\infty g_N^{r,s}\g_N=g_N^{-r,-s}\g_N$. By Proposition \ref{homology} (i), $H_1(X_N(\C),\Q)^-$ is generated by $$\bigl\{(g_N^{r,s}-g_N^{-r,-s})\g_N \bigm| (r,s) \in G_N\bigr\}.$$ Since the only non-primitive case is $N=6$, $[a,b]_\Q=[2,2]_\Q$, which reduces to $N=3$, $[a,b]_\Q=[1,1]_\Q$, we can assume that $(a,b)$ is primitive. Choose $(r,s)$ such that $ar+bs=1$. Then it follows that $H_1(X_{N}^{[a,b]}(\C),\Q)^-$ is a one-dimensional $\Q$-module generated by $$\g_N^{[a,b]_\Q}:= p_N^{[a,b]_\Q}(g_N^{r,s}-g_N^{-r,-s})\g_N = (\xi_N-\xi_N^{-1})(p_N^{a,b}-p_N^{-a,-b})\g_N.$$ Therefore, by \eqref{regulator-step2} we have $$r_{\sD}(e_N^{[a,b]_\Q})(\g_N^{[a,b]_\Q})=-\frac{1}{N^2}(\xi_N-\xi_N^{-1}) \Bigl(\wt F\bigl(\tfrac{\angle{a}}{N},\tfrac{\angle{b}}{N}\bigr)-\wt{F}\bigl(\tfrac{\angle{-a}}{N},\tfrac{\angle{-b}}{N}\bigr)\Bigr),$$ hence the second equivalence follows. \end{proof} \begin{remark} Some cases of the corollary are proved unconditionally (with the rational factor determined) in \cite{otsubo-comparison} by comparing our element $e_N^{[a,b]_\Q}$ with Bloch's element \cite{bloch} for an elliptic curve with complex multiplication. \end{remark} \subsection{Hypergeometric functions and the end of the proof} We introduce Appell's hypergeometric function $F_3$, and finish the proof of Theorem \ref{main-theorem}. First, let us recall some properties of the classical hypergeometric series of Gauss \begin{equation*} F(\alpha,\b,\g; x) = \sum_{n\geq 0}\frac{(\alpha,n)(\b,n)}{(\g,n)(1,n)} x^n, \end{equation*} where $\g \not\in\{0, -1, -2, \dots\}$. \begin{proposition}[cf. \cite{t-kimura}]\label{gauss} \ \begin{enumerate} \item $F(\alpha,\b,\g;x)$ converges absolutely for $|x|<1$.
\item If $|x|<1$ and $\Re(\g)>\Re(\alpha)>0$, then we have: \begin{equation*} F(\alpha,\b,\g;x) = \frac{\vG(\g)}{\vG(\alpha)\vG(\g-\alpha)} \int_0^1 u^{\alpha-1}(1-u)^{\g-\alpha-1}(1-xu)^{-\b} \, du, \end{equation*} where the integral is taken along the segment $0 \leq u \leq 1$, and the branches are determined by $\arg(u)=0$, $\arg(1-u)=0$ and $|\arg(1-xu)| \leq \pi/2$. \item If\, $\Re(\g-\alpha-\b)>0$, then $F(\alpha,\b,\g;x)$ converges absolutely for $|x|=1$, and we have \begin{equation*} F(\alpha,\b,\g;1)= \frac{\vG(\g)\vG(\g-\alpha-\b)}{\vG(\g-\alpha)\vG(\g-\b)}. \end{equation*} As a function of $\alpha$, $\b$ and $\g$, $F(\alpha,\b,\g;1)$ is holomorphic in the domain $\Re(\g-\alpha-\b)>0$, $\g \not\in\{0, -1, -2, \dots\}$. \end{enumerate} \end{proposition} Appell's hypergeometric series $F_3(\alpha,\alpha',\b,\b',\g;x,y)$ of two variables is defined for $\g \not\in\{0, -1, -2, \dots\}$ by \begin{equation*} F_3(\alpha,\alpha',\b,\b',\g;x,y) = \sum_{m,n \geq 0} \frac{(\alpha,m)(\alpha',n)(\b,m)(\b',n)}{(\g,m+n)(1,m)(1,n)} x^my^n. \end{equation*} This satisfies the following properties: \begin{proposition}\label{F_3} \ \begin{enumerate} \item $F_3(\alpha,\alpha',\b,\b',\g;x,y)$ converges absolutely for $|x|<1$, $|y|<1$.
\item If $\Re(\alpha)>0$, $\Re(\alpha')>0$, and $\Re(\g-\alpha-\alpha')>0$, then we have: \begin{multline*} F_{3}(\alpha,\alpha',\b,\b',\g; x,y) = \frac{\vG(\g)}{\vG(\alpha)\vG(\alpha')\vG(\g-\alpha-\alpha')} \\ \times \iint_{\Delta}u^{\alpha-1}(1-xu)^{-\b}v^{\alpha'-1}(1-yv)^{-\b'}(1-u-v)^{\g-\alpha-\alpha'-1} du \, dv, \end{multline*} where $\Delta = \{(u,v) \mid u, v, 1-u-v \geq 0\}$, and the branches of the integrands are chosen as above. \item Suppose that $\Re(\g-\alpha-\b)>0$ and $\Re(\g-\alpha'-\b')>0$. Then, $F_3(\alpha,\alpha',\b,\b',\g;x,y)$ converges absolutely for $|x|=|y|=1$. As a function of $\alpha$, $\alpha'$, $\b$, $\b'$ and $\g$, $F_3(\alpha,\alpha',\b,\b',\g;1,1)$ is holomorphic in the domain: $\Re(\g-\alpha-\b)>0$, $\Re(\g-\alpha'-\b')>0$, $\g \neq 0, -1, -2, \dots$. \end{enumerate} \end{proposition} \begin{proof} We only prove (iii). See \cite{appell}, \cite{t-kimura} for the other statements. First, we have $$F_3(\alpha,\alpha',\b,\b',\g;x,y) = \sum_{n \geq 0} \frac{(\alpha',n)(\b',n)}{(\g,n)(1,n)} F(\alpha,\b,\g+n;x) y^n. $$ Since $\Re(\g+n-\alpha-\b)>0$, $F(\alpha,\b,\g+n;x)$ converges absolutely for $|x|=1$ by Proposition \ref{gauss}.
Since $|\g+n|>|\g|$ for sufficiently large $n$, and hence $|\g+n+i|>|\g+i|$ for any $i$, the sum $$\sum_{m \geq 0} \left|\frac{(\alpha,m)(\b,m)}{(\g+n,m)(1,m)}\right|$$ is bounded independently of $n$, and the absolute convergence of $F_3$ follows from that of $F(\alpha',\b',\g;y)$ for $|y|=1$, which follows from the assumption and Proposition \ref{gauss}. The holomorphicity follows by an argument similar to the one-variable case, using the fact that $F_3$ is a Newton series with respect to $\alpha$, $\alpha'$, $\b$ and $\b'$, and a factorial series with respect to $\g$. \end{proof} Now, consider the special case \begin{equation*} F(\alpha,\b;x,y) := F_3(\alpha,\b,1,1,\alpha+\b+1;x,y) = \sum_{m,n \geq 0} \frac{(\alpha,m)(\b,n)}{(\alpha+\b+1,m+n)} x^my^n. \end{equation*} If $\Re(\alpha), \Re(\b) >0$, then by the above proposition, it converges absolutely for $|x|, |y| \leq 1$, and the integral representation takes the form $$F(\alpha,\b; x,y) = \frac{\vG(\alpha+\b+1)}{\vG(\alpha)\vG(\b)} \iint_\Delta u^{\alpha-1} (1-xu)^{-1} v^{\b-1} (1-yv)^{-1} du \, dv.$$ In particular, if we define (see Definition \ref{def-F}) \begin{equation*} \wt{F}(\alpha,\b) = \frac{\vG(\alpha)\vG(\b)}{\vG(\alpha+\b+1)} F(\alpha,\b;1,1), \end{equation*} then we have \begin{equation}\label{equation-integral} \wt{F}(\alpha,\b)=\iint_{\Delta} u^{\alpha-1}(1-u)^{-1}v^{\b-1}(1-v)^{-1} du \, dv. \end{equation} \begin{proposition}\label{decreasing} Consider $\wt{F}(\alpha,\b)$ as a function of positive real numbers $\alpha$, $\b$. Then{\rm :} \begin{enumerate} \item $\wt{F}(\alpha,\b)$ is monotonically decreasing with respect to each parameter. \item Suppose that $0 < \alpha, \b <1$.
Then, $\wt{F}(\alpha,\b) \neq \wt{F}(1-\alpha,1-\b)$ if and only if $\alpha+\b\neq 1$. \end{enumerate} \end{proposition} \begin{proof} (i) is immediate from \eqref{equation-integral}. To prove (ii), first assume that $\alpha+\b<1$. Then we have $$\wt{F}(\alpha,\b) = \wt{F}(\b,\alpha) > \wt{F}(1-\alpha,\alpha) >\wt{F}(1-\alpha,1-\b).$$ Similarly, if $\alpha+\b>1$, then $\wt{F}(\alpha,\b)<\wt{F}(1-\alpha,1-\b)$. Finally, if $\alpha+\b=1$, then we have $\wt{F}(\alpha,\b)=\wt{F}(\b,\alpha)=\wt{F}(1-\alpha,1-\b)$. \end{proof} Applying this proposition to $\alpha = \frac{\angle{ha}}{N}$, $\b = \frac{\angle{hb}}{N}$, the proof of Theorem \ref{main-theorem} is completed. \begin{remark} We could also use Deligne's description of the regulator (cf. \cite{hain} \cite{ramakrishnan}), which uses Chen's iterated integral. For a path $\g \colon [0,1] \ra X$ and differential forms $\omega$, $\eta$, we define $$\int_\g \om\eta := \int_0^1\left(\int_0^t \g^*\om(s)\right) \g^*\eta(t).$$ Roughly speaking, the regulator map sends $\{f,g\}$ to $$\g \longmapsto i \Im\left( \int_\g d\log f \, d\log g\right). $$ In this way, one finds more directly the integral \eqref{equation-integral}. Note that the iterated integral is a double integral over the region $0 \leq s, t, t-s \leq 1$, which is transformed to $\Delta$ by $u=s$, $v=1-t$. \end{remark} \subsection{Variants} We discuss some variants which involve special values of hypergeometric functions of one variable (cf. \cite{slater}) $${}_{p}F_{q}\left({{\alpha_1,\cdots, \alpha_{p}} \atop {\b_1,\cdots, \b_{q}}};x\right) = \sum_{n \geq 0} \frac{(\alpha_1,n)\cdots (\alpha_p,n)}{(\b_1,n)\cdots (\b_q,n)(1,n)}x^n. $$ It converges absolutely for $|x|<1$, and converges at $x=1$ if $$\sum_{j=1}^q \b_j - \sum_{i=1}^p \alpha_i >0.$$ The integral representation of ${}_3F_2$ (cf.
\cite{slater}, 4.1) is written as follows: \begin{multline*} {}_3F_2\left({{a,b,c} \atop {d,e}};x\right) = \frac{\vG(d)\vG(e)}{\vG(a)\vG(d-a)\vG(c)\vG(e-c)} \times \\ \iint_{\Delta} u^{a-1}(1-xu)^{-b}(1-u(1-v)^{-1})^{d-a-1}v^{e-c-1}(1-v)^{c-a-1} du \, dv. \end{multline*} By comparing with Proposition \ref{F_3} (ii), we obtain \begin{multline*} F_3(\alpha,\alpha',\b,\b',\alpha+\alpha'+1; x,1) \\ = \frac{\vG(\alpha+\alpha'+1)\vG(\alpha-\b'+1)}{\vG(\alpha+1)\vG(\alpha+\alpha'-\b'+1)} {}_3F_2\left({{\alpha,\b,\alpha-\b'+1} \atop {\alpha+1,\alpha+\alpha'-\b'+1}};x \right). \end{multline*} In particular, we have $$\wt{F}(\alpha,\b) = \frac{1}{\alpha} \frac{\vG(\alpha)\vG(\b)}{\vG(\alpha+\b)} {}_3F_2\left({{\alpha,\alpha,1} \atop {\alpha+1,\alpha+\b}}; 1\right) . $$ By using Dixon's formula (cf. \cite{slater}, 2.3.3) $${}_3F_2\left({{a,b,c} \atop {d,e}};1\right) = \frac{\vG(d)\vG(e)\vG(s)}{\vG(a)\vG(b+s)\vG(c+s)} {}_3F_2\left({{d-a,e-a,s} \atop {b+s,c+s}};1\right),$$ where $s=d+e-a-b-c$, repeatedly, we obtain three other expressions. In particular, we have \begin{equation}\label{Fand3F2} \wt{F}(\alpha,\b) = \left(\frac{\vG(\alpha)\vG(\b)}{\vG(\alpha+\b)}\right)^2 {}_3F_2\left({{\alpha,\b,\alpha+\b-1} \atop {\alpha+\b,\alpha+\b}};1\right), \end{equation} which is symmetric and has better convergence.
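Identities such as \eqref{Fand3F2} are convenient to check numerically. The sketch below (same conventions as before: Python with \texttt{mpmath}, function names ours) compares a crude truncation of the defining double series of $\wt F$ with the ${}_3F_2$ value; the truncated series reproduces only the leading digits, reflecting its slow convergence, while the right-hand side of \eqref{Fand3F2} can be evaluated to high precision:

\begin{verbatim}
# Compare the defining double series of F~(alpha, beta) with the
# closed form (Fand3F2).  The truncation order M is arbitrary; the
# double series converges slowly, so only rough agreement is expected.
from mpmath import mp, gamma, rf, hyp3f2

mp.dps = 25

def F_tilde_series(al, be, M=200):
    pref = gamma(al) * gamma(be) / gamma(al + be + 1)
    return pref * sum(rf(al, m) * rf(be, n) / rf(al + be + 1, m + n)
                      for m in range(M) for n in range(M))

def F_tilde(al, be):
    pref = (gamma(al) * gamma(be) / gamma(al + be)) ** 2
    return pref * hyp3f2(al, be, al + be - 1, al + be, al + be, 1)

al, be = mp.mpf(3) / 5, mp.mpf(4) / 5
print(F_tilde_series(al, be))   # crude truncation of Definition (def-F)
print(F_tilde(al, be))          # (Fand3F2): fast and high precision
\end{verbatim}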
On the other hand, Ross \cite{ross-cr} and Kimura \cite{k-kimura} also studied the element \begin{equation*} \{1-xy,x\} = -\{1-xy,y\} \ \in H_\sM^2(X_N,\Q(2))_\Z \end{equation*} (the tame symbols vanish). We explain that its study is in fact equivalent to the study of $e_N^{[1,1]_k}$, or equivalently, of $e_N^{1,1}$. Let the curve $C_N^{1,1}$ and the morphism of degree $N$ $$\psi\colon X_N \lra C_N^{1,1}$$ be as in Remark \ref{C_N}. The automorphism group (over $K_N$) of $X_N/C_N^{1,1}$ is $\bigl\{g_N^{r,-r} \bigm| r \in \Z/N\Z\bigr\}$. One sees easily $$\{1-xy,x\} = \frac 1 N \psi^*\{1-v,u\}.$$ As Yasuda pointed out to the author, we can prove that \begin{equation}\label{yasuda} \psi_* e_N = 3\{1-v,u\} = 3 \psi_*\{1-xy,x\} \end{equation} in $H_\sM^2(C_N^{1,1},\Q(2))_\Z$. Therefore, we have $$3\{1-xy,x\}= \frac{1}{N} \psi^*\psi_* e_N = \frac{1}{N} \left(\sum_{r\in\Z/N\Z}\nolimits g_N^{r,-r}\right) e_N.$$ Since $$ \sum_r g_N^{r,-r} p_N^{a,b} = \begin{cases} Np_N^{a,b} & \text{if $a=b$},\\ 0 & \text{otherwise}, \end{cases} $$ we obtain \begin{equation}\label{1-xy,x} 3\{1-xy,x\} = \left(\sum_{a\in \Z/N\Z}\nolimits p_N^{a,a} \right)e_N. \end{equation} In particular, the regulator of $\{1-xy,x\}$ is calculated by the regulators of $e_N^{a,a}$, which then reduces to the study of $e_{N'}^{1,1}$ for some $N'|N$. If we apply a calculation similar to that for $e_N$ to $\{1-xy,x\}$, we obtain analogues of Theorem \ref{main-theorem} and its corollaries for $X_N^{a,a}$ and $X_N^{[1,1]_k}$. Then we encounter another generalized hypergeometric function of one variable, namely, \begin{equation*} G(\alpha,\b,\g;x) := \sum_{n \geq 0} \frac{(\alpha,n)(\b,n)}{(\g,2n)} x^n \end{equation*} and its special value \begin{equation*} \wt G(\alpha,\b) := \frac{\vG(\alpha)\vG(\b)}{\vG(\alpha+\b+1)} G(\alpha,\b,\alpha+\b+1;1). \end{equation*} In fact, it is again a special case of ${}_3F_2$: \begin{equation}\label{Gand3F2} G(\alpha,\b,\g;x) = {}_3F_2\left({{\alpha,\b,1} \atop {\frac{\g}{2}, \frac{\g+1}{2}}}; \frac{x}{4}\right). \end{equation} We remark that it converges for $|x|<4$, so that $x=1$ is not on the boundary. By \eqref{1-xy,x} and the comparison of the regulators, we obtain: \begin{equation*} \wt F\bigl(\tfrac{\angle{a}}{N}, \tfrac{\angle{a}}{N}\bigr)-\wt F\bigl(\tfrac{\angle{-a}}{N}, \tfrac{\angle{-a}}{N}\bigr) = 3\left(\wt G\bigl(\tfrac{\angle{a}}{N}, \tfrac{\angle{a}}{N}\bigr)-\wt G\bigl(\tfrac{\angle{-a}}{N}, \tfrac{\angle{-a}}{N}\bigr)\right) \end{equation*} for any $a \neq 0$.
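Both sides of the comparison above reduce to ${}_3F_2$ values via \eqref{Fand3F2} and \eqref{Gand3F2}, so the relation can be tested to high precision. A sketch in the same style as before (here for $N=5$, $a=1$; any $a \neq 0$ would do, and the function names are again ours):

\begin{verbatim}
# Check  F~(<a>/N,<a>/N) - F~(<-a>/N,<-a>/N)
#          = 3 ( G~(<a>/N,<a>/N) - G~(<-a>/N,<-a>/N) )
# using the 3F2 forms (Fand3F2) and (Gand3F2).
from mpmath import mp, gamma, hyp3f2, almosteq

mp.dps = 30

def F_tilde(al, be):
    return (gamma(al) * gamma(be) / gamma(al + be)) ** 2 \
        * hyp3f2(al, be, al + be - 1, al + be, al + be, 1)

def G_tilde(al, be):
    g = al + be + 1
    return gamma(al) * gamma(be) / gamma(g) \
        * hyp3f2(al, be, 1, g / 2, (g + 1) / 2, mp.mpf(1) / 4)

N, a = 5, 1
x, y = mp.mpf(a) / N, mp.mpf(N - a) / N     # <a>/N and <-a>/N
lhs = F_tilde(x, x) - F_tilde(y, y)
rhs = 3 * (G_tilde(x, x) - G_tilde(y, y))
assert almosteq(lhs, rhs)
print(lhs, rhs)
\end{verbatim}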
It follows that $$\wt F(\alpha,\alpha)-\wt F(1-\alpha,1-\alpha) = 3 \left(\wt G(\alpha,\alpha)-\wt G(1-\alpha,1-\alpha)\right)$$ for any $\alpha \in \C$ with $0<\Re(\alpha)<1$, since both sides are holomorphic in $\alpha$. It seems that \begin{equation}\label{FandG} \wt F(\alpha,\alpha) = 3 \wt G(\alpha,\alpha) \end{equation} for any $\alpha\in\C$ with $\Re(\alpha)>0$, but $\wt F(\alpha,\b) \neq 3 \wt G(\alpha,\b)$ in general. By \eqref{Fand3F2} and \eqref{Gand3F2}, \eqref{FandG} is equivalent to: $$\frac{\vG(\alpha)^2}{\vG(2\alpha)} \, {}_3F_2\left({{\alpha,\alpha,2\alpha-1} \atop {2\alpha,2\alpha}};1\right) = \frac{3}{2\alpha}\, {}_3F_2\left({{\alpha,\alpha,1} \atop {\alpha+\frac{1}{2},\alpha+1}};\frac{1}{4}\right).$$ The author does not know if such a relation is known to the experts. \begin{remark} We could also study the element $\{1-x^ry^s,x\}$, though its tame symbols do not vanish in general.
Then, the hypergeometric function involved should be $$\sum_{n \geq 0} \frac{(\alpha,rn)(\b,sn)}{(\g,(r+s)n)}x^n, \quad \g=\alpha+\b+1,$$ which is also written as ${}_{p}F_{q}\left({{\alpha_1,\cdots, \alpha_{p}} \atop {\b_1,\cdots, \b_{q}}};\frac{x}{R}\right)$ with $p=q+1=r+s+1$, suitable $\alpha_i$, $\b_j$ and $R>1$. \end{remark} \subsection{Action of the symmetric group} The Fermat curve has another symmetry, namely, the action of the symmetric group. Using this, we construct more elements in motivic cohomology. Let us suppose for simplicity that $N$ is odd, so that the equation \eqref{equation-fermat} of $X_N$ is written as: $$x_0^N+y_0^N+(-z_0)^N=0.$$ The symmetric group $S_3$ of degree $3$ acts on $X_N$ as permutations on the set $\{x_0,y_0,-z_0\}$. Since $$(1\ 2)^*e_N = \{1-y,1-x\} = -e_N,$$ we do not get a new element from $(1\ 2)$. The quotient of $S_3$ by the subgroup generated by $(1 \ 2)$ is represented by $(1)$, $(1\ 3)$ and $(2\ 3)$. For $(a,b) \in G_N$, put \begin{equation*} c=-a-b \in \Z/N\Z, \end{equation*} and use the redundant notation with index $(a,b,c)$ instead of $(a,b)$, such as $p_N^{a,b,c} = p_N^{a,b}$, $e_N^{a,b,c}=e_N^{a,b}$. \begin{definition} Define elements of $H_\sM^2(X_N^{a,b},\Q(2))_\Z$ by $$e_{(1)}^{a,b,c}= e_N^{a,b,c}, \quad e_{(1\ 3)}^{a,b,c}= p_N^{a,b,c}(1\ 3)^*e_{N,K_N}, \quad e_{(2\ 3)}^{a,b,c}= p_N^{a,b,c}(2 \ 3)^*e_{N,K_N}, $$ where we put $e_{N,K_N}=\pi_{K_N/k}^*e_N$. \end{definition} \begin{lemma}\label{projector-twist} In $\End_{\sM_{K_N,E_N}}(X_{N,K_N})$, we have $$p_N^{a,b,c} \circ (1\ 3)^* = (1\ 3)^* \circ p_N^{c,b,a}, \quad p_N^{a,b,c} \circ (2\ 3)^* = (2\ 3)^* \circ p_N^{a,c,b}.$$ \end{lemma} \begin{proof} Since $ (1\ 3)\circ g_N^{r,s} = g_N^{-r,-r+s}\circ (1\ 3)$ in $\sV_k$, we have \begin{multline*} N^2 p_N^{a,b,c} \circ (1\ 3)^* = \sum_{r,s} \theta_N^{a,b}(g_N^{r,s})^{-1} g_N^{r,s} \circ (1\ 3)^* \\ = (1\ 3)^* \circ \sum_{r,s} \theta_N^{a,b}(g^{r,s})^{-1} g_N^{-r,-r+s} = (1\ 3)^* \circ \sum_{r',s'} \theta_N^{a,b}(g_N^{-r',-r'+s'})^{-1} g_N^{r',s'} \\ = (1\ 3)^* \circ \sum_{r',s'} \theta_N^{-a-b,b}(g^{r',s'})^{-1} g_N^{r',s'} = N^2 (1\ 3)^* \circ p_N^{c,b,a}. \end{multline*} The other one is parallel. \end{proof} Put $\eta_N=e^{\frac{2\pi i}{N}} \in \C^*$, and define the Laurent polynomial \begin{equation*} \Phi_N(T) = T^{\frac{N-1}{2}}-T^{\frac{1-N}{2}}. \end{equation*} Obviously, $\Phi_N(\eta_N^a) \in i\R$ and $\Phi_N(\eta_N^{-a})=-\Phi_N(\eta_N^a)$. \begin{lemma}\label{regulator-twist} Let $(a,b) \in I_N$ and $\wt\om_N^{a,b,c} = \wt\om_N^{a,b} \in H^1(X_N(\C),\C)$ be as defined in \eqref{omega-normalized}. Then we have{\rm :} \begin{equation*} (1\ 3)^* \wt\om_N^{a,b,c} = -\frac{\Phi_N(\eta_N^a)}{\Phi_N(\eta_N^c)}\, \wt\om_N^{c,b,a}, \quad (2\ 3)^* \wt\om_N^{a,b,c} = -\frac{\Phi_N(\eta_N^b)}{\Phi_N(\eta_N^c)}\, \wt\om_N^{a,c,b}. \end{equation*} \end{lemma} \begin{proof} We only prove the first one. First, assume that $\angle{a}+\angle{b} <N$, i.e. $\angle{a}+\angle{b}+\angle{c}=N$.
Then we have \begin{align*} (1\ 3)^*\om_N^{a,b} &= \left(\frac{1}{x}\right)^{\angle{a}} \left(-\frac{y}{x}\right)^{\angle{b}-N}\left(-\frac{dx}{x}\right) \\ & = (-1)^{\angle{b}} x^{N-\angle{a}-\angle{b}}y^{\angle{b}-N}\frac{dx}{x} = (-1)^{\angle{b}} \om_N^{c,b}. \end{align*} By \eqref{beta-gamma} and the well-known relation $$\vG(\alpha)\vG(1-\alpha) = \frac{\pi}{\sin \pi \alpha},$$ we have \begin{equation*} \frac{B\bigl(\tfrac{\angle{c}}{N}, \tfrac{\angle{b}}{N}\bigr)}{B\bigl(\tfrac{\angle{a}}{N}, \tfrac{\angle{b}}{N}\bigr)} = \frac{\vG\bigl(\frac{\angle{a}+\angle{b}}{N}\bigr)}{\vG\bigl(\frac{\angle{a}}{N}\bigr)\vG\bigl(\frac{\angle{b}}{N}\bigr)} \frac{\vG\bigl(\frac{\angle{c}}{N}\bigr)\vG\bigl(\frac{\angle{b}}{N}\bigr)} {\vG\bigl(\frac{\angle{c}+\angle{b}}{N}\bigr)} = \frac{\vG\bigl(1-\frac{\angle{c}}{N}\bigr)\vG\bigl(\frac{\angle{c}}{N}\bigr)}{\vG\bigl(\frac{\angle{a}}{N}\bigr)\vG\bigl(1-\frac{\angle{a}}{N}\bigr)} =\frac{\sin\frac{\angle{a}}{N}\pi}{\sin\frac{\angle{c}}{N}\pi}. \end{equation*} Since $(-1)^{\angle{a}}\sin\tfrac{\angle{a}}{N}\pi = - \Im \bigl(\eta_N^{\frac{N-1}{2}a}\bigr),$ we obtain the formula. The case $\angle{a}+\angle{b} >N$ is reduced to the first case using \begin{align*} c_\infty(1\ 3)^*\wt\om_N^{a,b} &= (1\ 3)^* c_\infty\wt\om_N^{a,b} = (1\ 3)^* \wt\om_N^{-a,-b}, \quad c_\infty\wt\om_N^{c,b}=\wt\om_N^{-c,-b}. \end{align*} \end{proof} \begin{definition} For $a$, $b \in \Z/N\Z$, and an infinite place $v$ of $K_N$, define $\mathbf{r}_{N,v}^{a/b} \in E_{N,\R}^*$ by $$ \s (\mathbf{r}_{N,v}^{a/b}) = - \Phi_N(\eta_N^{ha})/\Phi_N(\eta_N^{hb}) $$ for each $\s \colon E_N \hookrightarrow \C$, where $h \in H_N$ is such that $\tau(\z_N)^h=\s(\xi_N)$ for $\tau\colon K_N \hookrightarrow \C$ inducing $v$. It does not depend on the choice of $\tau$. \end{definition} \begin{proposition}\label{twist-regulator} Let $N$ be odd and the notations be as in Theorem \ref{main-theorem}. Then we have \begin{align*} r_{\sD,v} (e_{(1\ 3)}^{a,b,c}) = \mathbf{r}_{N,v}^{c/a} \mathbf{c}_{N,v}^{c,b,a} \lambda_{N,v}^{a,b,c}, \quad r_{\sD,v} (e_{(2\ 3)}^{a,b,c}) =\mathbf{r}_{N,v}^{c/b} \mathbf{c}_{N,v}^{a,c,b} \lambda_{N,v}^{a,b,c}. \end{align*} \end{proposition} \begin{proof} We only prove the first one. By Lemma \ref{projector-twist} and Theorem \ref{main-theorem}, we have \begin{multline*} r_{\sD,v}(p_N^{a,b,c}(1\ 3)^*e_{N,K_N})=r_{\sD,v}((1\ 3)^*p_N^{c,b,a}e_{N,K_N})\\ = (1\ 3)^* r_{\sD,v}(p_N^{c,b,a}e_{N,K_N}) = \mathbf{c}_{N,v}^{c,b,a} (1\ 3)^*\lambda_v^{c,b,a} = \mathbf{c}_{N,v}^{c,b,a} \mathbf{r}_{N,v}^{c/a} \lambda_v^{a,b,c}, \end{multline*} where the last equality follows from Proposition \ref{basis-deligne} and Lemma \ref{regulator-twist}. \end{proof} \subsection{Examples} Now we study two particular cases $N=5$ and $7$ with $k=\Q$. Then, for any $(a,b) \in I_N$, $$\dim_{E_N} H_\sM^2(X_N^{a,b},\Q(2))_\Z = \dim_\Q H_\sM^2(X_N^{[a,b]_\Q},\Q(2))_\Z$$ is conjectured to be $2$ and $3$, respectively.
For brevity, we put for $(a,b) \in I_N$ \begin{equation*} F_N^{a,b,c} =\wt{F}\bigl(\tfrac{\angle{a}}{N},\tfrac{\angle{b}}{N}\bigr)-\wt F\bigl(\tfrac{\angle{-a}}{N},\tfrac{\angle{-b}}{N}\bigr) \ \in \R. \end{equation*} Note that \begin{equation*}\label{F^{a,b,c}} F_N^{a,b,c}=-F_N^{-a,-b,-c}, \quad F_N^{a,b,c}=F_N^{b,a,c}. \end{equation*} By Proposition \ref{decreasing}, $F_N^{a,b,c}>0$ if and only if $\angle{a}+\angle{b} <N$. Moreover, $F_N^{a,b,c}$ is monotonically decreasing with respect to each of $\angle{a}$ and $\angle{b}$. \begin{theorem}\label{N=5} Suppose that $k\subset \Q(\mu_5)$. Then the regulator map \begin{align*} r_\sD \otimes_\Q \R\colon H_\sM^2(X_5,\Q(2))_\Z \otimes_\Q \R \lra H_\sD^2(X_{5,\R},\R(2)) \end{align*} is surjective. \end{theorem} \begin{proof} It suffices to prove the surjectivity for $X_5^{a,b}$ for any $(a,b) \in I_5$. Since the surjectivity depends only on the class $[a,b]_\Q$, it suffices to prove it for $(a,b)=(1,1)$, $(1,2)$, and $(2,1)$. For an embedding $\s \colon E_5 \hookrightarrow \C$, define $\tau_1, \tau_2 \colon K_5 \hookrightarrow \C$ by $\tau_1(\z_5)=\s(\xi_5), \tau_2(\z_5)^2=\s(\xi_5)$, and let $v_i$ be the infinite place of $K_5$ induced by $\tau_i$. By Theorem \ref{main-theorem} and Proposition \ref{twist-regulator}, we have $$ \begin{pmatrix} r_{\sD}(e_{(1)}^{a,b,c})_\s & r_\sD(e_{(1\ 3)}^{a,b,c})_\s \end{pmatrix} = -\frac{1}{4\cdot 5^2\pi i} \begin{pmatrix} \lambda_{5,v_1}^{a,b,c} & \lambda_{5,v_2}^{a,b,c} \end{pmatrix} A^{a,b,c} $$ with $$ A^{a,b,c}= \begin{pmatrix} F_5^{a,b,c} & -\Phi_5(\eta_5^c)\Phi_5(\eta_5^a)^{-1}F_5^{c,b,a} \\ F_5^{2a,2b,2c} & -\Phi_5(\eta_5^{2c}) \Phi_5(\eta_5^{2a})^{-1}F_5^{2c,2b,2a} \end{pmatrix}. $$ First, let $(a,b,c) = (1,1,3)$. Then one calculates \begin{align*} \det(A^{1,1,3}) = \frac{\eta_5^2-\eta_5^{-2}}{\eta_5-\eta_5^{-1}}F_5^{1,1,3}F_5^{1,2,2} + \frac{\eta_5-\eta_5^{-1}}{\eta_5^2-\eta_5^{-2}}F_5^{3,1,1}F_5^{2,2,1}. \end{align*} Since $F_5^{1,1,3}, F_5^{1,2,2}, F_5^{3,1,1}, F_5^{2,2,1} > 0$, it follows that $\det (A^{1,1,3}) > 0$. Secondly, if $(a,b,c)=(1,2,2)$, then \begin{align*} \det(A^{1,2,2}) &= - \frac{\eta_5^2-\eta_5^{-2}}{\eta_5-\eta_5^{-1}}F_5^{1,2,2}F_5^{4,4,2} - \frac{\eta_5-\eta_5^{-1}}{\eta_5^2-\eta_5^{-2}}F_5^{2,2,1}F_5^{2,4,4}\\ &= \frac{\eta_5^2-\eta_5^{-2}}{\eta_5-\eta_5^{-1}}F_5^{1,2,2}F_5^{1,1,3} + \frac{\eta_5-\eta_5^{-1}}{\eta_5^2-\eta_5^{-2}}F_5^{2,2,1}F_5^{3,1,1}\\ &= \det(A^{1,1,3}) >0. \end{align*} Finally, by the symmetry, the remaining case $(a,b,c) = (2,1,2)$ is proved by using $e_{(2\ 3)}^{a,b,c}$ instead of $e_{(1\ 3)}^{a,b,c}$. \end{proof} By a more precise argument similar to the proof of Corollary \ref{346}, we obtain: \begin{corollary}\label{cor-N=5} Let $(a,b) \in I_5$ and assume the Beilinson conjecture (Conjecture \ref{conj-beilinson}) for $X_5^{[a,b]_\Q} \in \sM_\Q$. Then it follows that \begin{align*} L(j_5^{a,b},2) \equiv \pi^2 L^*(j_5^{a,b},0) \equiv \frac{\sin\frac{4\pi}{5}}{\sin\frac{2\pi}{5}}F_5^{1,1,3}F_5^{1,2,2} + \frac{\sin\frac{2\pi}{5}}{\sin\frac{4\pi}{5}}F_5^{3,1,1}F_5^{2,2,1} \end{align*} modulo $\Q^*$.
\end{corollary} \begin{proof} Just note that a basis of $H_1(X^{[a,b]_\Q}_5(\C),\Q)^{-}$ is given by \begin{align*} &\bigl((\xi_5-\xi_5^{-1})(p_5^{a,b}-p_5^{-a,-b})+ (\xi_5^2-\xi_5^{-2})(p_5^{2a,2b}-p_5^{-2a,-2b})\bigr) \g_5, \\ &\bigl((\xi_5^2-\xi_5^{-2})(p_5^{a,b}-p_5^{-a,-b})-(\xi_5-\xi_5^{-1})(p_5^{2a,2b}-p_5^{-2a,-2b})\bigr) \g_5. \end{align*} \end{proof} \begin{remark} Kimura \cite{k-kimura} studies the curve $C_5^{1,1}$ (see Remark \ref{C_N}) over $\Q$, which is equivalent to the study of $X_5^{[1,1]} \in \sM_\Q$, or to $X_5^{1,1} \in \sM_{K_5,E_5}$. He computes numerically the determinant of the regulators of $$\alpha= \psi_* \{1-xy,x\}, \quad \b=\psi_*\left\{x+y,\frac{1-x}{y}\right\},$$ and shows that it is non-trivial. By \eqref{yasuda} and $$(1\ 3)^*e_N = (1\ 3)^*\left\{\frac{1-x}{y}, \frac{1-y}{x}\right\} = \left\{\frac{1-x}{y}, x+y\right\},$$ where the first equality follows from $\{1-x,x\} = \{y,1-y\} =0$ and $N^2\{x,y\} = \{x^N,y^N\}=0$, his study corresponds to the study of our $e_{(1)}^{1,1}$ and $e_{(1\ 3)}^{1,1}$. \end{remark} \begin{proposition}\label{N=7} Let $N=7$, $(a,b) \in I_7$ and suppose that $k\subset\Q(\mu_7)$. Then the regulator map \begin{align*} r_\sD \otimes_\Q \R\colon H_\sM^2(X_7^{[a,b]_\Q},\Q(2))_\Z \otimes_\Q \R \lra H_\sD^2(X_{7,\R}^{[a,b]_\Q},\R(2)) \end{align*} is surjective if $a$, $b$ and $c$ are pairwise distinct. Otherwise, the dimension of\, $\Im(r_{\sD})$ is at least $2$. \end{proposition} \begin{proof} The surjectivity is equivalent to that for $X_7^{a,b}$. The regulators of $e_{(1)}^{a,b,c}$, $e_{(1\ 3)}^{a,b,c}$ and $e_{(2\ 3)}^{a,b,c}$ are expressed by the matrix $$B^{a,b,c}= \begin{pmatrix} F_7^{a,b,c} & - \frac{\Phi_7(\eta_7^c)}{\Phi_7(\eta_7^a)}F_7^{c,b,a} & -\frac{\Phi_7(\eta_7^c)}{\Phi_7(\eta_7^b)}F_7^{a,c,b} \\ F_7^{2a,2b,2c} & -\frac{\Phi_7(\eta_7^{2c})}{\Phi_7(\eta_7^{2a})} F_7^{2c,2b,2a} & -\frac{\Phi_7(\eta_7^{2c})}{\Phi_7(\eta_7^{2b})}F_7^{2a,2c,2b}\\ F_7^{3a,3b,3c} & -\frac{\Phi_7(\eta_7^{3c})}{\Phi_7(\eta_7^{3a})}F_7^{3c,3b,3a} & -\frac{\Phi_7(\eta_7^{3c})}{\Phi_7(\eta_7^{3b})}F_7^{3a,3c,3b} \end{pmatrix}. $$ In the first case, we have $[a,b]_\Q=[1,2]_\Q$ or $[2,1]_\Q$, and by the symmetry, it suffices to treat $(a,b,c)=(1,2,4)$. Then one calculates: \begin{align*} \det(B^{1,2,4}) &= C (s^3+t^3+u^3-3stu)\\ &= \frac{C}{2} (s+t+u)\left\{(s-t)^2+(t-u)^2+(u-s)^2\right\} \end{align*} with \begin{equation*} s=\frac{iF_7^{1,2,4}}{\Phi_7(\eta_7^4)}, \quad t=\frac{iF_7^{2,4,1}}{\Phi_7(\eta_7)}, \quad u=\frac{iF_7^{3,6,5}}{\Phi_7(\eta_7^5)}, \quad C= i\Phi_7(\eta_7^4)\Phi_7(\eta_7)\Phi_7(\eta_7^5) . \end{equation*} Note that $F_7^{1,2,4}>F_7^{1,4,2}>F_7^{2,4,1} >0$. Since $s,u<0<t$, we have $(s-t)^2+(t-u)^2+(u-s)^2 \neq 0$. On the other hand, $s+t+u \neq 0$ since $$-s-u >\left(-\frac{i}{\Phi_7(\eta_7^4)} +\frac{i}{\Phi_7(\eta_7^5)}\right) F_7^{2,4,1} = t.$$ Hence we obtain $\det(B^{1,2,4})\neq 0$. In the second case, we are reduced to the case $(a,b,c)=(1,1,5)$. Then we have $r_{\sD}(e_{(1\ 3)}^{a,b})=r_{\sD}(e_{(2\ 3)}^{a,b})$.
However, the minor matrix of $B^{1,1,5}$ $$\begin{pmatrix} F_7^{1,1,5} & -\frac{\Phi_7(\eta_7^5)}{\Phi_7(\eta_7)} F_7^{5,1,1} \\ F_7^{2,2,3} & -\frac{\Phi_7(\eta_7^3)}{\Phi_7(\eta_7^2)}F_7^{3,2,2} \end{pmatrix} $$ has non-trivial determinant since $F_7^{1,1,5}, F_7^{5,1,1}, F_7^{2,2,3}, F_7^{3,2,2} >0$, and $$\Phi_7(\eta_7^3)\Phi_7(\eta_7^2)^{-1}<0< \Phi_7(\eta_7^5)\Phi_7(\eta_7)^{-1}.$$ \end{proof} As in Corollary \ref{346} and Corollary \ref{cor-N=5}, we obtain: \begin{corollary} Let $(a,b) \in I_7$ and assume that $a$, $b$ and $c$ are pairwise distinct. If the Beilinson conjecture (Conjecture \ref{conj-beilinson}) holds for $X_7^{[a,b]_\Q}$, then it follows that \begin{align*} L(j_7^{a,b},2)\equiv \pi^3 L^*(j_7^{a,b},0) \equiv s^3+t^3+u^3-3stu \end{align*} modulo $\Q^*$, where $$s= -\frac{F_7^{1,2,4}}{\sin \frac{4\pi}{7}}, \quad t= \frac{F_7^{2,4,1}}{\sin \frac{6\pi}{7}}, \quad u= -\frac{F_7^{4,1,2}}{\sin \frac{2\pi}{7}}.$$ \end{corollary} \begin{remark} As above, for any odd $N \geq 5$ and $(a,b)\in I_N$, we can always find two of $\bigl\{e_{(1)}^{a,b}, e_{(1\ 3)}^{a,b}, e_{(2\ 3)}^{a,b}\bigr\}$ whose regulators are linearly independent. \end{remark}
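As a closing numerical illustration (same conventions as the sketches above; note that $(\eta_5^2-\eta_5^{-2})/(\eta_5-\eta_5^{-1})=\sin\frac{4\pi}{5}/\sin\frac{2\pi}{5}$), one can evaluate the quantities $F_5^{a,b,c}$ directly and confirm the positivity of $\det(A^{1,1,3})$ used in the proof of Theorem \ref{N=5}:

\begin{verbatim}
# Evaluate F_N^{a,b,c} and the determinant det(A^{1,1,3}) from the
# proof of Theorem (N=5).  F_tilde is the 3F2 form (Fand3F2) as above.
from mpmath import mp, mpf, gamma, hyp3f2, sin, pi

mp.dps = 25

def F_tilde(al, be):
    return (gamma(al) * gamma(be) / gamma(al + be)) ** 2 \
        * hyp3f2(al, be, al + be - 1, al + be, al + be, 1)

def F(N, a, b):
    ang = lambda x: x % N                   # representative <x>
    return F_tilde(mpf(ang(a)) / N, mpf(ang(b)) / N) \
         - F_tilde(mpf(ang(-a)) / N, mpf(ang(-b)) / N)

# (eta^2 - eta^-2)/(eta - eta^-1) = sin(4 pi/5)/sin(2 pi/5), etc.
det = sin(4*pi/5) / sin(2*pi/5) * F(5, 1, 1) * F(5, 1, 2) \
    + sin(2*pi/5) / sin(4*pi/5) * F(5, 3, 1) * F(5, 2, 2)
print(det)      # positive, as required in the proof of Theorem (N=5)
\end{verbatim}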
\section{Introduction} The Nambu -- Jona-Lasinio (NJL) model \cite{Nambu} is a quantum field theory of fermions in which the only interaction is an attractive contact interaction between fermion and anti-fermion. Its Lagrangian density is \begin{equation} {\cal L}=\bar\psi_{\alpha p}(\partial{\!\!\! /}\,+m)\psi_{\alpha p} -{g^2\over{2N_f}}\left[(\bar\psi_{\alpha p}\psi_{\alpha p})^2- (\bar\psi_{\alpha p}\gamma_5\vec\tau_{pq}\psi_{\alpha q})^2\right], \label{eq:NJL} \end{equation} where $\vec\tau_{pq}$ are Pauli spin matrices with indices $p,q$ running over an SU(2) internal symmetry, and $\alpha$ runs over $N_f$ distinct fermion flavors. For bare mass $m=0$ there is an $\mbox{SU(2)}\otimes\mbox{SU(2)}$ chiral symmetry \begin{equation} \psi_L\mapsto U\psi_L\;\;\;;\;\;\;\psi_R\mapsto V\psi_R, \end{equation} where $U$ and $V$ are independent global SU(2) rotations. The original motivation for studying the model (\ref{eq:NJL}) is that for coupling $g^2$ larger than some critical $g_c^2$ it gives a qualitatively good description of dynamical chiral symmetry breaking in strong interaction physics, with order parameter $\Sigma\propto\langle\bar\psi\psi\rangle$. At the large coupling strengths required for chiral symmetry breaking, the fermions interact by exchange of composite scalars and pseudoscalars. In the chiral limit the pseudoscalars are Goldstone modes, and are associated with physical pions. More recently, NJL models have been proposed as a description of the Higgs sector of the standard model \cite{topmode}, in which the Higgs scalar appears as a fermion -- anti-fermion bound state, and the Goldstone modes are absorbed into longitudinal gauge boson degrees of freedom. In weak coupling perturbation theory (WCPT), the NJL model is non-renormalisable for $d>2$, being plagued by quadratic divergences in four dimensions. However, a perturbative expansion made about the critical point $g^2=g_c^2$ in powers of $1/N_f$ is known to be renormalisable for $2<d<4$ \cite{RWP}\cite{HKK2}, and in $d=4$ suffers from only logarithmic divergences, implying that a moderate separation between UV and IR scales can be made. Because the UV scale can never be completely eliminated from the theory by renormalisation without rendering the interaction strength vanishing, the model is {\em trivial\/} for $d=4$; it shares this property with the model more conventionally used to describe the Higgs sector, namely the O(4) linear sigma model, which contains elementary scalar fields. Both models are believed trivial, and hence provide effective field theories only for scales $\ll\Lambda$, the UV scale. At scales $\sim\Lambda$, logarithmic scaling violations will manifest themselves, which in turn necessitate the introduction of higher dimensional operators corresponding to new physical information. Taking these new operators into account, then in the sense that the number of undetermined parameters is the same for both NJL and sigma models, the two models have equal predictive power for the standard model \cite{Has}. If we formulate definite lattice (ie. bare) actions based on these models, however, an approach perhaps more familiar in condensed matter physics, then triviality may manifest itself in distinct ways. For both models the upper critical dimension is four, ie. the symmetry breaking transition is described by mean-field (Landau-Ginzburg) critical exponents with logarithmic corrections. 
In the sigma model, deviations from mean field behaviour are well-described by WCPT, since the renormalised coupling is bounded by the fixed point value $g_R=O(4-d)$ and is small as $d\to4$. Using the renormalisation group \cite{Brezin}\cite{Zinn}, the following prediction for the equation of state, relating the order parameter $\Sigma$ to the reduced coupling $t={1\over g^2}-{1\over g_c^2}$ and the symmetry-breaking field $m$ (we retain the notation of the NJL model) is derived: \begin{equation} m=At{\Sigma\over{\ln^{1\over2}\left({1\over\Sigma}\right)}}+ B{\Sigma^3\over{\ln\left({1\over\Sigma}\right)}}. \label{eq:eossigma} \end{equation} Note that $\Sigma$ is measured in units of the cutoff, so that the argument of the logarithms diverges in the continuum limit. By contrast, symmetry breaking in the NJL model occurs for large coupling strength, and WCPT is inapplicable. An alternative expansion parameter, $1/N_f$, can be used \cite{EgShz}; at leading order this yields a qualitatively different form for the equation of state \cite{KK} \begin{equation} m=At\Sigma+B\Sigma^3\ln\left({1\over\Sigma}\right). \label{eq:eosnjl} \end{equation} The different behaviour can be traced to the fact that in the sigma model the scalar excitations are elementary, and the associated wavefunction renormalisation constant $Z$ perturbatively close to one, whereas in the NJL model the scalars are composite, so that $Z$ vanishes in the continuum limit for $d<4$ where the model is renormalisable, and $Z\propto1/\ln({1\over\Sigma})$ for $d=4$ \cite{EgShz}\cite{KK}; hence we expect (\ref{eq:eosnjl}) to hold beyond leading order in $1/N_f$. The form (\ref{eq:eossigma}) in the sigma and related scalar models has been subject to extensive analysis via Monte Carlo simulation (two recent examples are refs.\cite{Kenna}\cite{Gockagain}), which appear to provide support, although logarithmic corrections are notoriously hard to isolate numerically. For the fermionic case, a simulation of the $d=4$ Gross-Neveu model, with discrete chiral symmetry, has shown support for the form (\ref{eq:eosnjl}) \cite{KKK}. The issue of which is the appropriate equation of state is important for several reasons; for instance, it informs the study of the inherently non-perturbative chiral symmetry breaking transition in lattice $\mbox{QED}_4$. Some studies \cite{Gock1}\cite{Gock2} have assumed a scenario of triviality for this model based on a form similar to (\ref{eq:eossigma}), which may not be appropriate. Another possibility is that different lattice implementations of the chirally symmetric four-fermi interaction, namely the ``dual site'' formulation \cite{CER} used in \cite{KKK} and in this study, and the ``link'' formulation used in a Monte Carlo study of the U(1) NJL model in \cite{AliK}, may lie in different universality classes with distinct patterns of logarithmic scaling violations. Studies of the corresponding lattice models in three spacetime dimensions (``dual site'' in \cite{HKK2} and ``link'' in \cite{DelDeb}), in which the two implementations yield radically different behaviour, support this scenario. In this paper we extend the work of \cite{KKK} with a numerical simulation of the $\mbox{SU(2)}\otimes\mbox{SU(2)}$ NJL model, for a small number of flavors $N_f=2$. The main motivation for this choice of four-fermi model, apart from the phenomenological reasons cited above, is that it has a maximal number of Goldstone excitations, all of which contribute to the $O(1/N_f)$ quantum corrections to the equation of state. 
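Both candidate equations of state, (\ref{eq:eossigma}) and (\ref{eq:eosnjl}), are linear in the amplitudes $A$ and $B$, so fitting either form to a set of $(m,t,\Sigma)$ points reduces to linear least squares. The following Python sketch illustrates the structure of such a fit (it is not the analysis code used below; \texttt{t}, \texttt{Sigma}, \texttt{m} stand for arrays of measured values in cutoff units):

\begin{verbatim}
# Linear least-squares fit of the two candidate equations of state.
# Both have the form  m = A * f1(t, Sigma) + B * f2(Sigma),  so the
# amplitudes follow from numpy.linalg.lstsq.  Illustrative sketch only;
# an error-weighted fit would divide each row by the uncertainty on m.
import numpy as np

def design_matrix(t, Sigma, form):
    L = np.log(1.0 / Sigma)        # Sigma in cutoff units, Sigma < 1
    if form == "sigma":            # eq. (eq:eossigma)
        return np.column_stack([t * Sigma / np.sqrt(L), Sigma**3 / L])
    if form == "njl":              # eq. (eq:eosnjl)
        return np.column_stack([t * Sigma, Sigma**3 * L])
    raise ValueError(form)

def fit(t, Sigma, m, form):
    X = design_matrix(np.asarray(t), np.asarray(Sigma), form)
    coeffs, _, _, _ = np.linalg.lstsq(X, np.asarray(m), rcond=None)
    chi2 = float(np.sum((X @ coeffs - np.asarray(m)) ** 2))
    return coeffs, chi2            # (A, B) and the summed residual
\end{verbatim}

Comparing the residuals of the two fits on the same data set is, in essence, the strategy pursued in Sec.~III below.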
We hope to discriminate between the two forms (\ref{eq:eossigma}) and (\ref{eq:eosnjl}), and in particular to see if (\ref{eq:eosnjl}) persists beyond leading order in $1/N_f$. In Sec. II we outline the lattice formulation of (\ref{eq:NJL}) using staggered fermions, and discuss its interpretation in terms of continuum four-component spinors. It turns out that in our simulation each lattice species describes eight continuum flavors, and hence the $N_f=2$ model requires $N={1\over4}$ lattice flavors. This necessitates the use of a hybrid molecular dynamics algorithm \cite{DuaneKog}\cite{GLTRS}. Extensive studies on small systems reveal the systematic error in the algorithm to be $O(N^2\Delta\tau^2)$, where $\Delta\tau$ is the timestep used in the evolution of the hybrid equations of motion; hence the systematic error can be kept under control. Of course, we are still left with an unquantified error due to the use of a fractional $N$, which implies a non-local action, but have no reason to suspect there is a problem. In Sec. III we confront data taken on a $16^4$ lattice with a range of bare mass values in the critical region $1/g^2\simeq0.5$, with five prototype equations of state reflecting various theoretical prejudices, including the simplest mean field ansatz, and a non-trivial continuum limit described by non-mean field critical indices, as well as (\ref{eq:eossigma}) and (\ref{eq:eosnjl}). No form satisfactorily fits all the data; this is because there is a finite scaling window where the correlation length is sufficiently large for a continuum description to apply. We show that the most successful equation of state ansatz, in the sense of small $\chi^2$, depends on which assumptions are made about the extent of the scaling window: if we include all mass values from a narrow region about the critical coupling then forms similar to (\ref{eq:eossigma}) are the most successful, whereas if only small mass values from a wide range of couplings are used, then (\ref{eq:eosnjl}) is the most appropriate form. In Sec. IV we examine data taken at vanishing bare mass, which is possible in four-fermi models formulated via the dual-site approach. Due to the masslessness of the pion in this limit the data is subject to a significant finite volume effect, which we attempt to account for using a formula first derived for the sigma model \cite{Gockagain}. We find that the variation of the finite volume correction with $g^2$ can best be explained by assuming an equation of state (\ref{eq:eosnjl}), together with a wavefunction renormalisation constant $Z\propto1/\ln\left({1\over\Sigma}\right)$: in other words, it is the finite volume corrections that seem most sensitive to the composite nature of the scalar excitations. The distinct triviality scenarios in NJL and sigma models \cite{KK}\cite{KKK} are rephrased in terms of the pion decay constant $f_\pi$. In Sec. V we present brief conclusions and directions for further work. 
\section{Lattice Formulation of the Model} The lattice action we have chosen to study is written \begin{equation} S_{bos}=\sum_{\alpha=1}^N\sum_{xy} \Psi_\alpha^\dagger(x)(M^\dagger M)^{-1}_{xy}\Psi_\alpha(y)+ {{2N}\over g^2}\sum_{\tilde x}\left(\sigma^2({\tilde x})+ \vec\pi({\tilde x}).\vec\pi({\tilde x})\right), \label{S_bos}\end{equation} where $\Psi_\alpha$ are bosonic pseudofermion fields defined in the fundamental representation of SU(2) on lattice sites $x$, the scalar field $\sigma$ and pseudoscalar triplet $\vec\pi$ are defined on the dual lattice sites $\tilde x$, and the index $\alpha$ runs over $N$ species of pseudofermion; the relation between $N$ and the number of continuum flavors $N_f$ will be made clear presently. The fermion kinetic matrix $M$ is the usual one for Gross-Neveu models with staggered lattice fermions \cite{CER}\cite{HKK2}, amended to incorporate an $\mbox{SU(2)}\otimes\mbox{SU(2)}$ chiral symmetry: \begin{equation} M_{xy}=\left({1\over2}\sum_\mu\eta_\mu(x)[\delta_{y,x+\hat\mu} -\delta_{y,x-\hat\mu}]+m\delta_{xy}\right)\delta_{\alpha\beta} \delta_{pq} +{1\over16}\delta_{xy}\delta_{\alpha\beta}\sum_{\langle\tilde x,x\rangle} \Biggl(\sigma(\tilde x)+i\varepsilon(x)\vec\pi(\tilde x).\vec\tau_{pq}\Biggr), \end{equation} where $m$ is the bare fermion mass, the SU(2) indices $p,q$ are shown explicitly, $\varepsilon(x)\equiv(-1)^{x_1+x_2+x_3+x_4}$, and the symbol $\langle\tilde x,x\rangle$ denotes the set of 16 dual sites $\tilde x$ adjacent to the direct lattice site $x$. The 3 Pauli matrices $\tau^i$ are normalised such that ${\rm tr}(\tau^i\tau^j)=2\delta^{ij}$. Integration over the pseudofermions yields the following path integral: \begin{equation} Z\propto\int D\sigma D\vec\pi\;{\rm det}^N(M^\dagger M)\exp\left( -{{2N}\over g^2}\sum_x(\sigma^2+\vec\pi.\vec\pi)\right). \label{hybrid}\end{equation} In terms of Grassmann fermion fields $\chi,\bar\chi,\zeta,\bar\zeta$, $Z$ can be derived from the equivalent action \begin{equation} S=\sum_\alpha\left(\bar\chi_\alpha M[\sigma,\vec\pi] \chi_\alpha+\bar\zeta_\alpha M^\dagger[\sigma,\vec\pi]\zeta_\alpha\right)+{{2N}\over g^2}\sum_x(\sigma^2+\vec\pi.\vec\pi). \label{S}\end{equation} The auxiliary fields $\sigma$ and $\vec\pi$ can then be integrated out to yield an action written completely in terms of fermion fields: \begin{eqnarray} S_{fer}&=&\sum_\alpha\sum_{xy}\bar\chi_\alpha(x)(\partial{\!\!\! /}\,+m)_{xy}\chi_\alpha(y)\nonumber\\ & &-{g^2\over{8N}}\sum_{\tilde x}\left[ \left({1\over16}\sum_\alpha\sum_{\langle x,\tilde x\rangle} \bar\chi_\alpha(x)\chi_\alpha(x)\right)^2-\sum_{i=1}^3 \left({1\over16}\sum_\alpha\sum_{\langle x,\tilde x\rangle} \bar\chi_\alpha(x)\varepsilon(x)\vec\tau\chi_\alpha(x)\right)^2\right] \label{Sfer}\\ & &+(\chi\mapsto\zeta;\;\bar\chi\mapsto\bar\zeta;\;\partial{\!\!\! /}\, \mapsto-\partial{\!\!\! /}\,),\nonumber \end{eqnarray} with $\partial{\!\!\! /}\,_{xy}$ a shorthand for the free kinetic term for staggered lattice fermions. So, we see that the action (\ref{S_bos}) describes $2N$ flavors of lattice fermion, but with a flavor symmetry group ${\rm U}(N)\otimes {\rm U}(N)$. The interaction term in (\ref{S},\ref{Sfer}) has been normalised so that the gap equation coming from the leading order $1/N$ approximation agrees with that derived for lattice Gross-Neveu models having ${\rm Z}_2$ \cite{HKK2} or U(1) \cite{HKimK} chiral symmetries. Next we identify the global SU(2)$\otimes$SU(2) chiral symmetry of the lattice model. This is most manifest in the form (\ref{S}). 
Let ${\cal P}_e(x),{\cal P}_o(x)$ be even and odd site projectors defined by \begin{equation} {\cal P}_{e/o}(x)={1\over2}(1\pm\varepsilon(x)). \end{equation} Then, noting that ${\cal P}_e\partial{\!\!\! /}\,{\cal P}_o={\cal P}_e\partial{\!\!\! /}\,$ etc., we find that (\ref{S}) is invariant, in the chiral limit $m\to0$, under the combined transformation: \begin{eqnarray} \chi\mapsto({\cal P}_eU+{\cal P}_oV)\chi\;&;&\;\bar\chi\mapsto\bar\chi ({\cal P}_eV^\dagger+{\cal P}_oU^\dagger)\nonumber\\ \zeta\mapsto({\cal P}_eV+{\cal P}_oU)\zeta\;&;&\;\bar\zeta\mapsto\bar\zeta ({\cal P}_eU^\dagger+{\cal P}_oV^\dagger)\nonumber\\ \Phi\equiv(\sigma1\kern-4.5pt1+i\vec\pi.\vec\tau)&\mapsto&V\Phi U^\dagger, \label{SU(2)}\end{eqnarray} where $U,V$ are independent SU(2) rotations. Note that (\ref{SU(2)}) is a symmetry because $\Phi$ is proportional to an SU(2) matrix, and the auxiliary potential is proportional to ${\rm tr}\Phi^\dagger\Phi$. This property does not generalise to larger unitary groups. The symmetry (\ref{SU(2)}) is broken explicitly by a bare fermion mass, and spontaneously by the condensates $\langle\bar\chi\chi\rangle, \langle\bar\zeta\zeta\rangle$. Now let us consider the continuum flavor interpretation. It is well known that four-dimensional staggered lattice fermions can be interpreted in terms of four flavors of Dirac fermion, which decouple at tree level in the long-wavelength limit. One way of seeing this is via a transformation to fields $q,\bar q$ defined on the sites $y$ of a lattice of spacing $2a$ \cite{Kluberg-Stern}: \begin{equation} q^{\alpha a}(y)={1\over8}\sum_A\Gamma_A^{\alpha a}\chi(A;y)\;\;; \;\;\bar q^{\beta b}(y)={1\over8}\sum_A\bar\chi(A;y)\Gamma_A^{*\beta b}, \label{q}\end{equation} where $\alpha,\beta=1,\ldots,4$ run over spin degrees of freedom, $a,b=1,\ldots,4$ over flavor, and $A$ is a four-vector with entries either 0 or 1 which identifies the corners of the elementary hypercube associated with site $y$; each site $x$ on the original lattice is thus mapped to a unique combination $(A;y)$. The $4\times4$ matrix $\Gamma_A\equiv\gamma_1^{A_1}\gamma_2^{A_2}\gamma_3^{A_3}\gamma_4^{A_4}$, where $\gamma_\mu$ are Dirac matrices. The action written in the form (\ref{S}) contains interaction terms of the form $\sigma\bar\chi\chi$, $\vec\pi.\bar\chi\vec\tau\chi$ etc. In the $q$-basis the $\sigma$ and $\pi$ fields are not all equivalent. If we label the dual site $(x_1+{1\over2},\ldots,x_4+{1\over2})$ by $(A;\tilde y)$, then for $A=(0,0,0,0)$ the interaction reads \begin{equation} S_{int}(0;\tilde y)=\sigma(0;\tilde y)\bar q(y)(1\kern-4.5pt1\otimes 1\kern-4.5pt1)q(y)+i\vec\pi(0;\tilde y).\bar q(y)\vec\tau(\gamma_5\otimes\gamma_5^*)q(y), \label{int0}\end{equation} where the first matrix in the direct product acts on spin degrees of freedom, and the second on flavor. Equation (\ref{int0}) resembles the continuum interaction between fermions and the auxiliary scalars, except that the pseudoscalar current is non-singlet (but diagonalisable) in flavor space.
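As a quick consistency check on the hypercube decomposition (\ref{q}), note that the sixteen matrices $\Gamma_A$ form an orthogonal basis of the $4\times4$ matrices, ${\rm tr}(\Gamma_A^\dagger\Gamma_B)=4\delta_{AB}$, which is what allows the spin and flavor content of the staggered field to be disentangled. A short numerical sketch (the Euclidean gamma matrix representation chosen here is our own arbitrary choice) confirms this:
\begin{verbatim}
# Sketch: verify tr(Gamma_A^dag Gamma_B) = 4 delta_AB for the Clifford
# basis Gamma_A = gamma_1^A1 gamma_2^A2 gamma_3^A3 gamma_4^A4.
import numpy as np
from itertools import product

I2 = np.eye(2)
s = [np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
# hermitian Euclidean gammas: {gamma_mu, gamma_nu} = 2 delta_{mu nu}
gamma = [np.kron(s[0], si) for si in s] + [np.kron(s[2], I2)]

def Gamma(A):
    G = np.eye(4, dtype=complex)
    for mu in range(4):
        if A[mu]:
            G = G @ gamma[mu]
    return G

Gs = [Gamma(A) for A in product((0, 1), repeat=4)]
T = np.array([[np.trace(Ga.conj().T @ Gb) for Gb in Gs] for Ga in Gs])
print(np.allclose(T, 4*np.eye(16)))    # True
\end{verbatim}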
However, for $A_\mu=1,A_{\nu\not=\mu}=0$, say, the interaction is more complicated: \begin{eqnarray} S_{int}(&\mu&;\tilde y)=\sigma(\mu;\tilde y)\left[ \bar q(y)q(y)+{1\over2}\Delta_\mu^+\left(\bar q(y)q(y)+ \bar q(y)(\gamma_\mu\gamma_5\otimes\gamma_5^*\gamma_\mu^*)q(y)\right)\right] \nonumber\\ +i\vec\pi(&\mu&;\tilde y).\left[\bar q(y)\vec\tau(\gamma_5\otimes\gamma_5^*)q(y)+{1\over2}\Delta_\mu^+ \left(\bar q(y)\vec\tau(\gamma_5\otimes\gamma_5^*)q(y)+ \bar q(y)\vec\tau(\gamma_\mu\otimes\gamma_\mu^*)q(y)\right)\right], \end{eqnarray} where $\Delta_\mu^+$ is the forward difference operator on the $y$-lattice. Hence in addition to continuum-like interactions there are extra momentum-dependent terms coupling the auxiliary fields to bilinears which do not respect Lorentz or flavor covariance. For $A$ vectors with more non-zero entries there are still more complicated interactions containing additional derivative terms. One might naively claim that the terms in $\Delta_\mu\sigma,\Delta_\mu\pi$ are $O(a)$ and hence irrelevant in the continuum limit. This ignores the fact that $\sigma$ and $\pi$ are auxiliary fields which have no kinetic terms to suppress high momentum modes in the functional integral. The non-covariant terms will thus survive integration over $\sigma,\vec\pi$ to manifest themselves in (\ref{Sfer}). A similar phenomenon happens in lattice formulations of the NJL model in which the interaction term is localised on a single link \cite{AliK}\cite{Gock}. The effects of the non-covariant terms in fact depend on the details of the model's dynamics. If long-range correlations develop among the auxiliary fields, i.e. if an effective kinetic term is generated by radiative corrections, then the $\Delta_\mu\Phi$ term may become irrelevant -- this appears to be the case in two- \cite{CER}\cite{Joli} and three-dimensional \cite{HKK2} models, where in each case there is a renormalisable expansion available. In four dimensions the NJL model is trivial: hence no interacting continuum limit exists. Even in this case, though, we expect correlations to develop between the auxiliary fields in the vicinity of the continuous chiral symmetry breaking transition. In this case the unwanted interactions would manifest themselves as scaling violations if an interacting effective theory is to be described \cite{Has}. From the arguments following (\ref{Sfer}) and (\ref{q}) we now state the relation between the parameter $N$ and the number of continuum flavors $N_f$: \begin{equation} N_f=8N. \end{equation} The extra factor of 2 over the usual relation for four-dimensional gauge theories arises from the impossibility of even/odd partitioning in the dual site approach to lattice four-fermi models: in other words the matrix $M$ contains diagonal terms which are not multiples of the unit matrix. In this paper we wish to study a small number of continuum flavors $N_f=2$, which necessitates a fractional $N=0.25$. This can be achieved using a hybrid algorithm, which produces a sequence of configurations weighted according to the action (\ref{hybrid}) by Hamiltonian evolution in a fictitious time $\tau$ \cite{DuaneKog}, the fields' conjugate momenta being stochastically refreshed at intervals. The advantage of this approach is that in the form (\ref{hybrid}) the variable $N$ can readily be set to a non-integer value (though there is no longer a transformation to a local action of the form (\ref{Sfer})); the cost is that the simulation must be run with a discrete timestep $\Delta\tau$.
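Schematically, a single hybrid trajectory is a leapfrog integration of the equations of motion followed by a momentum refreshment; since there is no accept/reject step, a finite timestep leaves a systematic bias. The sketch below is a minimal illustration with a Gaussian toy action (our own choice; in production the force would be $-\partial S_{eff}/\partial\sigma$ with $S_{eff}=-N\,{\rm tr}\ln(M^\dagger M)+(2N/g^2)\sum(\sigma^2+\vec\pi.\vec\pi)$, and trajectory lengths are drawn from a Poisson distribution rather than fixed):
\begin{verbatim}
# Schematic hybrid trajectory (leapfrog, no accept/reject); the Gaussian
# toy action S = sum(phi^2)/2 (force = -phi) stands in for the fermionic
# effective action, whose force is evaluated stochastically in practice.
import numpy as np

def hybrid_trajectory(phi, force, dtau, length, rng):
    p = rng.normal(size=phi.shape)          # momentum refreshment
    nsteps = max(1, int(round(length/dtau)))
    p = p + 0.5*dtau*force(phi)             # initial half-step
    for _ in range(nsteps - 1):
        phi = phi + dtau*p
        p = p + dtau*force(phi)
    phi = phi + dtau*p
    p = p + 0.5*dtau*force(phi)             # final half-step
    return phi

rng = np.random.default_rng(0)
phi, samples = np.zeros(10), []
for _ in range(5000):
    phi = hybrid_trajectory(phi, lambda f: -f, 0.05, 1.0, rng)
    samples.append(phi.copy())
print(np.var(samples[100:]))   # close to 1, up to O(dtau^2) systematics
\end{verbatim}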
In principle several values of $\Delta\tau$ must be explored, then the limit $\Delta\tau\to0$ taken. We have chosen to implement the ``R-algorithm'' of Gottlieb {\em et al\/} \cite{GLTRS}, for which the systematic error is claimed to be $O(\Delta\tau^2)$. We have tested the R-algorithm by extensive runs on a $6^4$ lattice with parameters $1/g^2=0.56$ and mass $m=0.01$, the coupling being chosen to lie well into the broken symmetry phase according to the leading order gap equation (see below). We tested models with $N=0.25,1,3$ and 6, with timesteps $\Delta\tau=0.4$ (for $N\leq1$), 0.2, 0.1 and 0.05. For integer $N$ we were also able to run directly in the $\Delta\tau=0$ limit using an exact hybrid Monte Carlo algorithm \cite{DKPR}\cite{HKK2}. We ran for either 20000 ($N=3,6$) or 40000 ($N=0.25,1$) Hamiltonian trajectories, with momenta refreshed between trajectories and trajectory lengths drawn from a Poisson distribution with mean 1.0. For illustration we present results for two local quantities, the expectation value of the scalar field $\langle\sigma\rangle\equiv\Sigma$, and the energy density $\epsilon$ given by \begin{equation} \epsilon={1\over{2V}}\left\langle\sum_x \left(M^{-1}_{x,x+\hat4}-M^{-1}_{x,x-\hat4}\right)\right\rangle. \end{equation} The results are shown in Figs. \ref{fig:sigvsDt} and \ref{fig:epsvsDt}, together with quadratic fits of the form \begin{equation} \Sigma(N;\Delta\tau)=\Sigma_0(N)+A(N)\Delta\tau^2\;\;\;;\;\;\; \epsilon(N;\Delta\tau)=\epsilon_0(N)+B(N)\Delta\tau^2. \end{equation} Acceptable fits were found for all datasets except for the $\Sigma$ data at $N=6$, where the fit was restricted to $\Delta\tau\leq0.1$. We find good evidence that the systematic error is indeed $O(\Delta\tau^2)$. Moreover, the coefficients $A(N)$ and $B(N)$ are themselves adequately fitted by the forms $A(N)=aN^2$, $B(N)=bN^2$ as displayed in Fig. \ref{fig:ABvsN}. This behaviour is expected from inspection of the hybrid equations of motion \cite{DuaneKog}\cite{GLTRS}. Since fluctuations in the system are suppressed by powers of $1/N$, we see that for $N=0.25$ systematic errors are likely to be dwarfed by statistical ones for reasonable simulation runs. In the work presented in the rest of this paper we took $\Delta\tau=0.05$ to be conservative. In fact, for very small or vanishing bare mass $m$ near the critical point where the order parameter is very small, we also did production runs with $\Delta\tau=0.025$ to check explicitly that the systematic errors were smaller than the statistical errors. The bulk of the results of this paper were generated on a $16^4$ lattice with $N=0.25$, using masses $m$ ranging from 0.05 down to 0.0025. We have focussed our attention on the order parameter $\Sigma$, which differs from zero in the chiral limit only when chiral symmetry is spontaneously broken, and is simply related to the chiral condensate: $\Sigma={g^2\over2}\langle\bar\chi\chi\rangle$. Our results for $\Sigma$ as a function of bare mass $m$ and inverse coupling $1/g^2$ are presented in Tab. \ref{tab:data}. Because the kinetic matrix $M$ contains numerically large diagonal terms, this model is relatively cheap to simulate compared to, say, non-compact QED. This has enabled us to accumulate a large dataset, reflected in the relatively small statistical errors in Tab. \ref{tab:data}. Typical runs which resulted in each of the entries in that table were between 15,000 and 30,000 $\tau$ units long.
Such statistics are an order of magnitude better than present-day state-of-the-art lattice QCD simulations. The statistical error bars in the table were obtained by binning methods which are particularly reliable when applied to such substantial data sets. In fact, because the interactions appear on the diagonal, $M$ is sufficiently well-conditioned in the broken phase to permit simulations directly in the chiral limit $m=0$. Of course, since spontaneous symmetry breaking cannot occur in a finite volume $V$, $\Sigma$ is strictly zero in these simulations; defining \begin{equation} \bar\Phi={1\over V}\sum_x\Phi(x), \end{equation} then the next best thing to measure is \begin{equation} \vert\Phi\vert=\left\langle\sqrt{ \textstyle{1\over2}\mbox{tr}\bar\Phi^\dagger\bar\Phi}\right\rangle. \label{eq:Phi} \end{equation} This quantity is numerically very close to $\Sigma$ extrapolated to the chiral limit, but exceeds it in a finite volume. We will discuss the finite volume correction in more depth in Sec. \ref{sec:fv}. Our results for $\vert\Phi\vert$ are given in Tab. \ref{tab:datam0}. To monitor finite volume effects directly we did a small number of runs on $12^4$ and $20^4$ lattices, and tabulate the results in Tabs. \ref{tab:datam0} and \ref{tab:datafv}. Finally in this section we give details of the model's gap equation. In the large-$N$ limit, for sufficiently strong coupling $g^2$, the scalar auxiliary $\sigma$ develops a spontaneous vacuum expectation value $\Sigma$ even in the chiral limit $m\to0$. To leading order in $1/N$ the relation between $\Sigma$, the bare mass $m$ and $g^2$ is given by the lattice tadpole or gap equation \cite{HKK2}: \begin{eqnarray} {1\over g^2}&=&16{{m+\Sigma}\over{\Sigma}} \int_{-\pi/2}^{\pi/2}{{d^4k}\over(2\pi)^4}{1\over{\sin^2k_\mu+ (m+\Sigma)^2}}\nonumber\\&=& {{m+\Sigma}\over{\Sigma}}\int_0^\infty d\alpha\,\exp\left(-\alpha\left(2+(m+\Sigma)^2\right)\right)I_0^4\left({\alpha\over2}\right), \end{eqnarray} with $I_0$ a modified Bessel function. In Fig. \ref{fig:gapeqn} we show predictions for $\Sigma$ as a function of $g^2$ for $m$ values 0.0 and 0.01. The $m=0$ line shows a continuous transition at $1/g_c^2\simeq0.62$ between a symmetric phase at weak coupling and a broken phase at strong coupling -- the curve having the $(g^2-g_c^2)^{1\over2}$ shape characteristic of mean field theory. For $m>0$, $\Sigma\not=0$ for all values of $g^2$. Also shown are simulation results from a $6^4$ lattice with $m=0.01$ for various $N$, showing: \begin{enumerate} \item[(i)] that increasing fluctuations, expected to be $\propto1/N$, cause a suppression of $\Sigma$. \item[(ii)] that the simulated values of $\Sigma$ exceed the gap equation predictions for $N>3$: this is in marked contrast to the case in three dimensions, where the gap equation gives an upper bound for all $N$ \cite{HKK2}. Perhaps the lack of numerical accuracy of the gap equation is because the $1/N$ expansion is not renormalisable in four dimensions. \end{enumerate} \section{Numerical Fits to the Equation of State} In this section we report on fits to various trial forms for the equation of state $m=f(\Sigma,1/g^2)$ using data for the order parameter tabulated in Tab. \ref{tab:data}. We used the numerical package {\sc minuit} to perform least squares fits, and in each case quote a standard error for the fitted parameters, and the $\chi^2$ per degree of freedom, which gives a measure of the quality of the fit.
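Before listing the trial forms, we note in passing that the Bessel-function form of the gap equation above is elementary to evaluate; the following Python sketch solves it for $\Sigma$ in the chiral limit (the coupling $1/g^2=0.5$ and root bracket are illustrative choices):
\begin{verbatim}
# Sketch: solve the leading-order gap equation for Sigma at fixed 1/g^2,
# chiral limit m = 0. ive(0, x) = exp(-x) I_0(x) avoids overflow, using
# exp(-a(2+M^2)) I_0(a/2)^4 = exp(-a M^2) [exp(-a/2) I_0(a/2)]^4.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import ive

def inv_g2(Sigma, m=0.0):
    M2 = (m + Sigma)**2
    val, _ = quad(lambda a: np.exp(-a*M2)*ive(0, a/2)**4, 0.0, np.inf)
    return (m + Sigma)/Sigma*val

# inv_g2(Sigma -> 0) ~ 0.62 reproduces the critical coupling quoted above
print(brentq(lambda S: inv_g2(S) - 0.5, 1e-3, 2.0))
\end{verbatim}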
First we list the functional forms we have examined, in increasing order of sophistication: \begin{enumerate} \item[I] Mean Field (3 parameters): \begin{equation} m=A\left({1\over g^2}-{1\over g_c^2}\right)\Sigma+B\Sigma^3. \label{eq:fit1} \end{equation} This is the simplest form, arising from a mean field treatment neglecting fluctuations in the order parameter field. \item[II] Power Law (5 parameters): \begin{equation} m=A\left({1\over g^2}-{1\over g_c^2}\right)\Sigma^p+B\Sigma^\delta. \label{eq:fit2} \end{equation} This is a more general version of I, derivable by renormalisation group arguments from the assumption that a fixed point exists at the transition corresponding to a diverging ratio between cutoff and physical scales. A useful discussion is found in \cite{Zinn}. The index $\delta$ is the standard critical exponent, and $p=\delta-1/\beta$, where $\beta$ is another standard exponent. \item[III] Ladder Approximation (4 parameters): \begin{equation} m=A\left({1\over g^2}-{1\over g_c^2}\right)\Sigma+B\Sigma^\delta. \label{eq:fit3} \end{equation} This is a restricted form of II based on the relation $\gamma\equiv1\Rightarrow\delta-1/\beta\equiv1$, inspired by the solution of the quenched gauged U(1) NJL model in ladder approximation \cite{KHKD}. \end{enumerate} The last three fits all assume that the leading behaviour is described by the exponents of mean field theory (i.e. $\delta=3$, $p=1$), but with logarithmic scaling corrections. Since these corrections introduce a dependence on some ultraviolet scale, they imply that the renormalised theory near the fixed point can never be made independent of the cutoff, and hence that the continuum limit is described by a free field theory -- this is the phenomenon of triviality. The standard scenario is derived using renormalised perturbation theory in the context of $\mbox{O}(n)$ scalar field theory \cite{Brezin}\cite{Zinn}. This gives rise to: \begin{enumerate} \item[IV] Sigma Model Logarithmic Corrections (4 parameters): \begin{equation} m=A\left({1\over g^2}-{1\over g_c^2}\right) {\Sigma\over{\ln^{1\over2}\left({C\over\Sigma}\right)}} +B{\Sigma^3\over{\ln\left({C\over\Sigma}\right)}}. \label{eq:fit4} \end{equation} Here the powers of the logarithmic terms are derived on the assumption that the effective field theory at the fixed point is the O(4) linear sigma model, which has the same pattern of symmetry breaking as our SU(2)$\otimes$SU(2) model. Note that we include $C$, the UV scale of the logarithm, as a free parameter. Since the condensate data are measured in lattice units, we expect a plausible fit should have $C$ of $O(1)$. \item[V] Modified Sigma Model Logarithmic Corrections (5 parameters): \begin{equation} m=A\left({1\over g^2}-{1\over g_c^2}\right) {\Sigma\over{\ln^{q_1}\left({1\over\Sigma}\right)}} +B{\Sigma^3\over{\ln^{q_2}\left({1\over\Sigma}\right)}}. \label{eq:fit5} \end{equation} Here we allow the powers of the logarithms to vary. This phenomenological form has been used extensively in fits of the equation of state of both non-compact lattice QED \cite{Gock1}\cite{Gock2} and the U(1) NJL model \cite{AliK}. Note that we have set the UV scale in the logarithm to 1; fits in which both $p$ and $C$ are allowed to vary are inherently unstable, since $\ln(C/\Sigma)=\ln C+\ln\left({1\over\Sigma}\right)$, so that as $\Sigma\to0$ the functional forms become so similar that the covariance matrix becomes singular.
\item[VI] Large-$N$ Logarithmic Corrections (4 parameters): \begin{equation} m=A\left({1\over g^2}-{1\over g_c^2}\right)\Sigma +B\Sigma^3\ln\left({C\over\Sigma}\right). \label{eq:fit6} \end{equation} This form is predicted for the NJL model in the large-$N$ limit \cite{KK}, and was used to fit data from the four dimensional $\mbox{Z}_2$ NJL model in \cite{KKK}. There are important qualitative differences between form VI and the previous forms IV, V, due to the different role of the logarithmic term -- i.e. the ``effective'' value of the exponent $\delta$ measured at the critical coupling is larger than the mean field value 3 for IV and V, but smaller than 3 for fit VI. \end{enumerate} We also tried to fit slightly modified forms of IV -- VI, e.g. allowing separate scales in the logarithms, but the results were either unstable, or not sufficiently distinct from the six forms presented here to be worth reporting. As we shall see, the main issue to arise when assessing the various forms (\ref{eq:fit1}-\ref{eq:fit6}) is how much of the data to include in the fit. We expect any fit to be successful only in some finite scaling region around the transition. To see this, consider fits using all 84 data points, shown in Tab. \ref{tab:fit1}. No fit gives acceptable results as judged by the large $\chi^2$, although the two 5 parameter forms II and V are both noticeably better. To proceed, we must make some assumptions about the size and shape of the scaling window in the $(1/g^2,m)$ plane, and truncate the dataset accordingly. In Tab. \ref{tab:fit2} we exclude extremal inverse coupling values and fit to $1/g^2\in[0.52,0.55]$, which still includes over half the data in the set. The resulting fits fall into two camps. Forms I, III and VI all yield similar large $\chi^2$ and $1/g_c^2$, and are all in effect forced very close to the mean field form $\delta=3$, $p=1$; in the case of VI by having the UV scale so large that the logarithmic term is effectively constant. These fits all fail because the data in this window do not favour $p=1$. Forms II, V and to a lesser extent IV all allow the effective $p$ to increase from 1, e.g. for form V $p_{eff}=1+q_1/\ln({1\over\Sigma})>1$. This enables better fits, which are also characterised by a smaller value for $1/g_c^2$; to see why consider a Fisher plot of the data, in which $\Sigma^2$ is plotted against $m/\Sigma$. The Fisher plot is designed so that the mean field equation of state (\ref{eq:fit1}) yields curves of constant $1/g^2$ which are straight lines of uniform slope, intercepting the vertical axis as $m\to0$ in the broken phase, the horizontal axis as $m\to0$ in the symmetric phase, and passing through the origin as $m\to0$ for $g^2=g_c^2$. Any departures from mean field behaviour show up as curvature in, and variation in the spacing between, lines of uniformly spaced values of $1/g^2$. In Fig. \ref{fig:fish3narrow} we plot lines of constant $1/g^2$ using III, showing that the lines are straight and do a poor job of passing through the data points. The plots from I and IV are almost indistinguishable. In Fig.~\ref{fig:fish2narrow} we show the same plot for II, and in Fig. \ref{fig:fish5narrow} the plot for V. These last two fits are relatively successful because they accommodate lines of constant $1/g^2$ whose curvature changes sign according to which phase one is in. Outside the fitted window form II copes slightly better in the broken phase.
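To make the fitting procedure concrete, the sketch below carries out a weighted least-squares fit of form II (\ref{eq:fit2}); {\sc minuit} is replaced by scipy, the $(1/g^2,m,\Sigma)$ triples are placeholders rather than the entries of Tab.~\ref{tab:data}, and the errors are treated, for simplicity, as uncertainties on $m$:
\begin{verbatim}
# Sketch of a form-II equation-of-state fit,
#   m = A (1/g^2 - 1/g_c^2) Sigma^p + B Sigma^delta,
# with placeholder data; beta denotes 1/g^2.
import numpy as np
from scipy.optimize import least_squares

data = [  # (1/g^2, m, Sigma, err): illustrative values only
    (0.52, 0.005, 0.160, 0.002), (0.52, 0.010, 0.190, 0.002),
    (0.52, 0.020, 0.230, 0.002), (0.53, 0.010, 0.170, 0.002),
    (0.53, 0.020, 0.210, 0.002), (0.54, 0.010, 0.150, 0.002),
    (0.54, 0.020, 0.190, 0.002), (0.55, 0.010, 0.130, 0.002),
    (0.55, 0.020, 0.175, 0.002),
]
beta, m, S, err = map(np.array, zip(*data))

def residuals(par):
    A, beta_c, p, B, delta = par
    return (m - (A*(beta - beta_c)*S**p + B*S**delta))/err

fit = least_squares(residuals, x0=[1.0, 0.57, 1.0, 1.0, 3.0])
dof = len(m) - len(fit.x)
print(fit.x, 2.0*fit.cost/dof)    # parameters and chi^2 per dof
\end{verbatim}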
It is worth noting that forms II and V gave fits of similar quality when applied to simulation results of both the U(1) NJL model \cite{AliK} and non-compact QED \cite{Gock2}; clearly the logarithmic correction of (\ref{eq:fit5}) can equally well be modelled by an effective $\delta>3$. We have checked that fits II and V were stable under exclusion of $1/g^2=0.52,0.55$, and also under exclusion of the low mass points $m=0.005,0.0025$. A study of Figs. \ref{fig:fish2narrow} and \ref{fig:fish5narrow}, however, indicates that fits II and V are least satisfactory for the low mass points closest to the origin. This has led us to explore an alternative scaling window, in which all $1/g^2$ values are kept, but high mass points are excluded. Results from just keeping the 29 points corresponding to $m=0.0025,0.005$ are given in Tab. \ref{tab:fit3}, and those for the 40 points obtained by also including $m=0.01$ in Tab. \ref{tab:fit4}. The picture changes dramatically: the $\chi^2$ values from fits III and VI are now much smaller and comparable to those from fits II and V, though for $m\leq0.005$ (Tab. \ref{tab:fit3}) the two 5 parameter fits still yield the lowest values. Once $m=0.01$ points are included, fits II, III, V and VI are virtually indistinguishable in quality. What has happened is that the data from this window prefer an effective value of $p$ much closer to 1, and an effective $\delta$ now less than 3. This results, e.g., in the value of $q_1$ in fit V being very small, almost consistent with zero. The most impressive effect is that the logarithmic correction of form VI is now acting in the correct way to fit the data, moreover with a reasonable value for the UV scale $C$: in fit V this manifests itself in a relatively large {\em negative\/} value for the exponent $q_2$. The form IV, which assigns a positive value to $q_2$, is unstable. In Figs. \ref{fig:fish2wide} and \ref{fig:fish6wide} we show Fisher plots for the forms II and VI respectively, using the fit parameters of Tab. \ref{tab:fit4}, showing that the fits are indeed successful over a wide range of $1/g^2$ at the expense of missing the higher mass points. The even spacing of the lines of constant $1/g^2$ in the broken phase is responsible for forcing the effective value of $p$ to one. The values of $\chi^2$ per degree of freedom for the four successful fits are slightly too high for us to claim that they are the last word on understanding the model's equation of state. Even though the fits reproduce the spacing of the data in $1/g^2$, close inspection of Figs. \ref{fig:fish2wide} and \ref{fig:fish6wide} suggests that the variation of the data with $m$ in the broken phase is not so well-described -- clearly more low mass data will be needed to settle the issue, which in turn will require a quantitative understanding of finite volume corrections. We can claim, however, that there appears to be a dramatic switch in the preferred form of the equation of state according to whether the scaling window is chosen long and thin or short and fat in the $(1/g^2,m)$ plane. In the next section we will consider data taken directly in the chiral limit, and find indirect evidence to support the short fat option. \section{Finite Volume Corrections in the Chiral Limit}\label{sec:fv} In the previous section we made no attempt to correct for finite volume effects in the measurements of $\Sigma$. From test runs for $m=0.005$ at $1/g^2=0.50$ on different-sized lattices, shown in Tab.
\ref{tab:datafv}, finite volume effects are clearly present on a $12^4$ system, but the difference between $16^4$ and $20^4$, though statistically significant, is less than 2\%. We hope, therefore, that finite volume corrections would have no impact on the qualitative conclusions of the previous section. Finite volume effects are numerically important, however, when considering measurements of the quantity $\vert\Phi\vert$ made with $m=0$, defined in (\ref{eq:Phi}) and shown in Tab. \ref{tab:datam0}. Naively, we expect that in a finite system $\vert\Phi\vert$ differs from the true order parameter $\Sigma$ extrapolated to the chiral limit, because in the absence of a symmetry breaking term it is impossible to disentangle fluctuations of the order parameter field from those of the Goldstone modes; including the effects of the latter will give $\vert\Phi\vert>\Sigma_0$, where $\Sigma_0$ denotes the value of $\Sigma$ in the chiral limit. Only in the thermodynamic limit $V\to\infty$ do we expect the Goldstone modes to average to zero and the two quantities to coincide. This phenomenon also occurs in numerical studies of scalar field theory, and has been analysed for the O(4) sigma model in \cite{Gockagain}. Formally, $\vert\Phi\vert$ corresponds to the minimum of a ``constraint effective potential'' in a finite volume $V$; for the O(4) sigma model this can be calculated using renormalised perturbation theory. There is indeed a correction due to the Goldstone modes which is intrinsically $O(V^{-1})$. Numerically, however, a far more significant correction to the calculation arises from the need to renormalise the model in order to relate lattice parameters to measured quantities. To one-loop order this results in a relation \cite{Gockagain} \begin{equation} \vert\Phi\vert=\Sigma_0\left(1+B(m_RL){Z\over{\Sigma_0^2L^2}} +O(g_R^2)\right), \label{eq:Gock} \end{equation} where $m_R$, $g_R$ are the renormalised scalar mass and coupling strength, $L$ is the linear size of the system, and $Z$ is the wavefunction renormalisation constant, defined as the coefficient of $1/p^2$ in the unrenormalised pion propagator in the limit $p^2\to0$. The factor $B$ is a slowly varying function of $m_RL$ which accounts for the difference between one-loop integrals evaluated in finite and infinite volumes (for further background see \cite{HasLeut}). A fit to (\ref{eq:Gock}) for magnetisation data in the sigma model is given in Fig. 2 of \cite{Gockagain}. For the NJL model there is no reason for renormalised perturbation theory to apply; however since the effective theory in the broken phase is similar to the sigma model, in the sense that it has the same light scalar degrees of freedom, we expect a result similar to (\ref{eq:Gock}) to hold. We will therefore attempt to account for the finite volume correction $\Delta=\vert\Phi\vert-\Sigma_0$ with the formula \begin{equation} \Delta={{BZ}\over{\Sigma_0L^2}}, \label{eq:Delta} \end{equation} where the $B$ factor has been set constant, and higher order effects are ignored. The first prediction of (\ref{eq:Delta}) is that $\Delta\propto L^{-2}$. To test this, we can compare $\vert\Phi\vert$ data taken at $1/g^2=0.48$ from three different volumes against our best estimate for $\Sigma_0$. From the data of Tab. \ref{tab:datam0} we see that $\vert\Phi\vert$ decreases with $L$ as predicted. 
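The $L$-dependence can be checked in a few lines; in the sketch below the $\vert\Phi\vert$ values are placeholders for the entries of Tab.~\ref{tab:datam0}, and the value of $\Sigma_0$ anticipates the form-VI chiral-limit extrapolation discussed next:
\begin{verbatim}
# Sketch: test Delta = |Phi| - Sigma_0 ~ 1/L^2 at 1/g^2 = 0.48. The
# |Phi|(L) values are illustrative placeholders; Sigma_0 is an assumed
# chiral-limit extrapolation (see the following paragraph).
import numpy as np

L      = np.array([12.0, 16.0, 20.0])
Phi    = np.array([0.3336, 0.3202, 0.3140])   # hypothetical |Phi|(L)
Sigma0 = 0.3030

Delta = Phi - Sigma0
for n in (1, 2, 3):
    print(n, (L/12.0)**n * Delta)   # only n = 2 is roughly constant
\end{verbatim}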
To evaluate $\Sigma_0$, we use the two most successful fits to the equation of state from the previous section, each characteristic of one of the two hypotheses about the shape of the scaling window. The narrow scaling window $0.52\leq1/g^2\leq0.55$ was best fitted by form V (\ref{eq:fit5}) using the parameters of Tab. \ref{tab:fit2} (the corresponding Fisher plot is shown in Fig. \ref{fig:fish5narrow}); in the chiral limit at $1/g^2=0.48$ this form predicts $\Sigma_0=0.3341$ (we implicitly assume here that the equation of state fits apply to the thermodynamic limit). This is already equal to the value of $\vert\Phi\vert$ on a $12^4$ lattice, and lies above $\vert\Phi\vert$ on larger systems, making it difficult to see how (\ref{eq:Delta}) applies. The second estimate comes from assuming a wide scaling window in $1/g^2$ but excluding higher mass data. This was best described using form VI (\ref{eq:fit6}) using the parameters of Tab. \ref{tab:fit4} (Fig. \ref{fig:fish6wide}); the chiral limit prediction now is $\Sigma_0=0.3030$. In Fig. \ref{fig:Delvs1onL} we plot $({L\over12})^n\Delta$ vs. $1/L$ for $n=1$, 2 and 3 and $L=12$, 16 and 20, assuming $\Sigma_0=0.3030$. We find that $L^2\Delta$ is approximately constant, which supports the hypothesis (\ref{eq:Delta}). Next we examine the dependence of $\Delta$ on $1/g^2$, and hence $\Sigma_0$, using data from the broken phase of the model taken from a constant volume of $16^4$. The data of Tab. \ref{tab:datam0} were used in conjunction with the same two fits for $\Sigma_0$ to produce the two sets of values for $\Delta$ given in Tab. \ref{tab:Delta}. The quoted errors are the statistical errors in the measurement of $\vert\Phi\vert$, and take no account of errors in extrapolating $\Sigma$ to the chiral limit. The values $\Delta_{\mbox{V}}$ obtained using form V actually change sign over the range of $1/g^2$ explored, and clearly cannot be fitted by any relation of the form (\ref{eq:Delta}). The values $\Delta_{\mbox{VI}}$ obtained using form VI are plotted against $1/g^2$ in Fig. \ref{fig:Delvsbeta}. The most interesting trend is that $\Delta_{\mbox{VI}}$ is approximately constant for $0.45\leq1/g^2\leq0.52$, whereas $\Sigma_0$ itself falls from 0.4115 to 0.1332 over the same range. How do we reconcile this observation with (\ref{eq:Delta}), which apparently predicts $\Delta\propto1/\Sigma_0$? The answer lies in the field dependence of the wavefunction renormalisation constant $Z$, which differs markedly between the ferromagnetic O(4) sigma model and the fermionic NJL model \cite{EgShz}\cite{KK}\cite{KKK}. In the sigma model, $Z$ is perturbatively close to 1 (as borne out by the data of \cite{Gockagain}), and hence $\Delta\propto1/\Sigma_0$. In the NJL model, on the other hand, the large-$N_f$ approximation predicts $Z\propto1/\ln(1/\Sigma_0)$ \cite{EgShz}, and hence \begin{equation} \Delta\propto{1\over{\Sigma_0\ln\left({1\over\Sigma_0}\right)}}. \label{eq:Delta2} \end{equation} We can see this using another argument. In both sigma and NJL models, a Ward identity plus the assumption that both conserved current and transverse field couple principally to the Goldstone mode (in particle physics this mode is the pion, and the assumption is PCAC), leads to the relation \cite{HasLeut} \begin{equation} Zf_\pi^2=\Sigma_0^2, \end{equation} where $f_\pi$ is the coupling to the axial current, or in the context of hadronic physics the pion decay constant. We thus have \begin{equation} \Delta\propto{\Sigma_0\over{(f_\pi L)^2}}.
\end{equation} Now, in the large-$N_f$ approximation (see Appendix for an explicit calculation), \begin{equation} f_\pi={{\sqrt{N_f}\Sigma_0}\over\pi}\ln^{1\over2} \left({1\over\Sigma_0}\right), \end{equation} yielding once again the relation (\ref{eq:Delta2}). Note that both scenarios predict triviality of the resulting theory. In the NJL model, in the large-$N_f$ treatment $\Sigma_0$ is a physical scale related to the renormalised fermion and scalar masses. The pion decay constant in physical units is thus $f_\pi/\Sigma_0$, which diverges as $\ln^{1\over2}(1/\Sigma_0)$ in the continuum limit $\Sigma_0\to0$. In the sigma model, $f_\pi/\Sigma_0=Z^{1\over2}$ which is constant in the continuum limit; however the physical scale is now the scalar mass $m_\sigma\propto\surd{g_R}\Sigma_0$. Hence in physical units \begin{equation} {{f_\pi}\over{m_\sigma}}\propto{1\over\surd{g_R}}\propto {\ln^{1\over2}\left({1\over\Sigma_0}\right)}, \end{equation} and again diverges in the continuum limit. We tested the hypothesis (\ref{eq:Delta2}) with a two parameter fit of the data of Tab. \ref{tab:Delta} to the form \begin{equation} \Delta={A\over{16^2\Sigma_0\ln\left({B\over\Sigma_0}\right)}}. \label{eq:fitDelta} \end{equation} With all points included we find $A=1.86(13)$, $B=1.10(11)$ with $\chi^2/\mbox{dof}$ of 4.3; however if the $1/g^2=0.53$ point is left out the fit is much more satisfactory: $A=1.40(10)$, $B=0.82(6)$ with $\chi^2/\mbox{dof}$ of 0.81. This fit is plotted in Fig. \ref{fig:Delvsbeta}. In either case it is reassuring that the UV scale of the logarithm is close to unity. It is also interesting that the fit accounts for the slight curvature in the plot, though this aspect is not tightly constrained by the current error bars. The fitted curve also rises sharply to pass close to the excluded point. To conclude, we can account for our finite volume effects in the chiral limit under three assumptions: \begin{enumerate} \item[(i)] that the equation of state fits to the $\Sigma$ data from the $16^4$ lattice in the previous section, when extrapolated to the chiral limit, accurately describe the thermodynamic limit. \item[(ii)] that the most appropriate form to extrapolate the equation of state is form VI (\ref{eq:fit6}), using the parameters of Tab. \ref{tab:fit4}. \item[(iii)] that the finite volume correction has the form (\ref{eq:Delta2}). \end{enumerate} The latter two assumptions are consistent with logarithmic corrections of the form advocated in \cite{KK}, which differ qualitatively from those used to describe triviality in scalar field theory. \section{Discussion} It would appear that for the lattice sizes and bare masses used in this study, which are close to state-of-the-art (though recall that dual-site four-fermi models are relatively cheap to simulate), it is still difficult to distinguish different patterns of triviality, or indeed distinguish a trivial from a non-trivial fixed point, purely on the basis of fits to model equations of state; different assumptions about the size and shape of the scaling region result in different best fits. Only once the analysis of Sec. IV is applied to the data taken at zero bare mass does the form (\ref{eq:fit6}), associated with composite scalar degrees of freedom, appear preferred. Our principal conclusion is thus that the form (\ref{eq:eosnjl}), (\ref{eq:fit6}) is qualitatively correct beyond leading order in the $1/N_f$ expansion. 
The sensitivity of the preferred form of the equation of state to the shape of the assumed scaling window has implications for similar studies in related fermionic models, notably the U(1) NJL model studied using the ``link'' formulation \cite{AliK}, and non-compact $\mbox{QED}_4$ \cite{Gock2}; the two recent studies cited attempt fits based on the power law form (\ref{eq:fit2}) and the modified sigma model form (\ref{eq:fit5}), and find the two forms difficult to distinguish on the basis of $\chi^2$ alone (in other words, a value of $\delta>3$ is difficult to distinguish from a logarithmic correction with $q_2>0$). In Tab. \ref{tab:qs} the values of $q_1$ and $q_2$ are shown for the two scaling windows used in this study, together with the quoted fits for the two other models from \cite{AliK} and \cite{Gock2}, as well as theoretical values for the O(4) and O(2) sigma models \cite{Brezin}\cite{Zinn}, corresponding respectively to the broken $\mbox{SU(2)}\otimes\mbox{SU(2)}$ symmetry in this study, and the broken U(1) chiral symmetry of the models of \cite{AliK} and \cite{Gock2}. Using the data of \cite{Gock2} one can examine the stability of the fitted $q_1$ and $q_2$ with respect to exclusion of higher mass data; the quoted value for $q_1$ appears quite stable, that for $q_2$ less so. An outstanding issue is whether the disparity in the fitted values of the $q$ exponents in Tab. \ref{tab:qs} means that the different lattice models lie in different universality classes. The analysis presented in this paper suggests that current lattice simulations are unable to decide. Unfortunately, for the two models considered in \cite{AliK}\cite{Gock2}, there is no realisable way of simulating directly in the chiral limit, and hence the analysis of Sec. IV is unavailable. A useful direction to explore may be a simulation of the U(1) NJL model with the dual site formulation. \section*{Acknowledgements} SJH is supported by a PPARC Advanced fellowship and would like to thank Tristram Payne and Luigi Del Debbio for their help. JBK is partially supported by the National Science Foundation under grant NSF-PHY92-00148.
\section{Introduction} \label{sec:intro} The class of L\'evy processes with paths whose graphs have convex hulls in the plane with smooth boundary almost surely has recently been characterised in~\cite{SmoothCM}. In fact, as explained in~\cite{SmoothCM}, to understand whether the boundary is smooth at a point with tangent of a given slope, it suffices to analyse whether the right-derivative $C'=(C'_t)_{t\in(0,T)}$ of the convex minorant $C=(C_t)_{t\in[0,T]}$ of a L\'evy process $X=(X_t)_{t\in[0,T]}$ is continuous as it attains that slope (recall that $C$ is the pointwise largest convex function satisfying $C_t\leq X_t$ for all $t\in[0,T]$). The main objective of this paper is to quantify the smoothness of the boundary of the convex hull of $X$ by quantifying the modulus of continuity of $C'$ via its lower and upper functions. In the case of times $0$ and $T$, we quantify the degree of smoothness of the boundary of the convex hull by analysing the rate at which $|C'_t|\to\infty$ as $t$ approaches either $0$ or $T$ (see \href{https://youtu.be/9uCge3eMHQg}{YouTube}~\cite{Presentation_AM} for a short presentation of our results). It is known that $C$ is a piecewise linear convex function~\cite{MR2978134,fluctuation_levy} and the image of the right-derivative $C'$ over the open intervals of linearity of $C$ is a countable random set $\mathcal{S}$ with a.s. deterministic limit points that do not depend on the time horizon $T$, see~\cite[Thm~1.1]{SmoothCM}. These limit points of $\mathcal{S}$ determine the continuity of $C'$ on $(0,T)$ outside of the open intervals of constancy of $C'$, see~\cite[App.~A]{SmoothCM}. Indeed, the \textit{vertex time process}~$\tau=(\tau_s)_{s\in\R}$, given by $\tau_s\coloneqq \inf\{t\in(0,T):C'_t>s\}\wedge T$ (where $a\wedge b\coloneqq\min\{a,b\}$ and $\inf\emptyset\coloneqq\infty$), is the right-inverse of the non-decreasing process $C'$. The process $\tau$ finds the times in $[0,T]$ of the vertices of the convex minorant $C$ (see~\cite[Sec.~2.3]{fluctuation_levy}), so the only possible discontinuities of $C'$ lie in the range of $\tau$. Clearly, it suffices to analyse only the times $\tau_s$ for which $C'$ is non-constant on the interval $[\tau_s,\tau_s+\varepsilon)$ for every $\varepsilon>0$ (otherwise, $\tau_s$ is the time of a vertex isolated from the right). At such a time, the continuity of $C'$ can be described in terms of a limit set of $\mathcal{S}$. In the present paper we analyse the quality of the right-continuity of $C'$ at such points. By time reversal, analogous results apply for the left-continuity of $t\mapsto C'_t$ on $(0,T)$ (i.e., as $t\ua\tau_s$ for $s\in\R$) and for the explosion of $C'_t$ as $t\ua T$. Throughout the paper, the variable $s\in\R$ will be reserved for \emph{slope}, indexing the vertex time process $\tau$. \subsection{Contributions} We describe the small-time fluctuations of the derivative of the boundary of the convex hull of $X$ at its points of smoothness. This requires studying the local growth of $C'$ in two regimes: at \emph{finite slope} (FS) $s$ in the deterministic set $\mathcal{L}^+(\mathcal{S})\subset\R$ of right-limit points\footnote{A point $x$ is a right-limit point of $A\subset\R$, denoted $x\in\mathcal{L}^+(A)$ if $A\cap(x,x+\varepsilon)\ne\emptyset$ for all $\varepsilon>0$ (see also~\cite[App.~A]{SmoothCM}).} of the set of slopes~$\mathcal{S}$ and at \emph{infinite slope} (IS) for L\'evy processes of infinite variation, see Figure~\ref{fig:CM_0_and_postmin} below. 
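Pictures such as Figure~\ref{fig:CM_0_and_postmin} are easy to reproduce numerically: one discretises the path and takes the lower convex hull of its graph, whose faces have slopes equal to the values taken by $C'$. A minimal Python sketch follows (the Cauchy example, grid size and scipy sampler are illustrative choices, and the discretisation only approximates the convex minorant of the continuous path):
\begin{verbatim}
# Sketch: convex minorant C and right-derivative C' of a discretised
# Cauchy path via a monotone-chain scan for the lower convex hull.
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(0)
T, n = 1.0, 10**4
t = np.linspace(0.0, T, n + 1)
# Cauchy increments over mesh T/n (the Cauchy process scales linearly)
X = np.concatenate([[0.0],
    np.cumsum(cauchy.rvs(scale=T/n, size=n, random_state=rng))])

hull = [0]                        # indices of the vertices of C
for i in range(1, n + 1):
    while len(hull) >= 2:
        j, k = hull[-2], hull[-1]
        # pop k if it does not lie strictly below the chord from j to i
        if (X[k]-X[j])*(t[i]-t[j]) >= (X[i]-X[j])*(t[k]-t[j]):
            hull.pop()
        else:
            break
    hull.append(i)

slopes = np.diff(X[hull])/np.diff(t[hull])        # the values of C'
print(len(slopes), np.all(np.diff(slopes) >= 0))  # faces; slopes increase
\end{verbatim}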
In terms of times, regime (FS) with $s\in\mathcal{L}^+(\mathcal{S})$ analyses how $C'$ leaves the slope~$s$ at vertex time $\tau_s$ in $[0,T)$ and regime (IS) analyses how $C'$ enters from $-\infty$ at time $0=\lim_{u\da-\infty}\tau_u$. At all other times $t\in(0,T)\setminus\{\tau_s\,:\,s\in\mathcal{L}^+(\mathcal{S})\}$, the derivative $C'$ is constant on $[t,t+\varepsilon)$ for some sufficiently small $\varepsilon>0$. In particular, in what follows we exclude all L\'evy processes that are compound Poisson with drift, since $C'$ only takes finitely many values in that case. \begin{figure}[ht] \centering \includegraphics[width=.49\textwidth]{CM_Growth_large.pdf} \includegraphics[width=.49\textwidth]{CM_Growth_small.pdf} \caption{\small The picture on the left shows the path of an $\alpha$-stable L\'evy process $X$ with $\alpha\in (1,2)$ and its convex minorant $C$ starting at time $0$. The picture on the right shows the post-minimum process $(X_{t+\tau_0}-X_{\tau_0})_{t\in[0,T-\tau_0]}$ of an $\alpha$-stable process with $\alpha \in (0,1)$ and its corresponding convex minorant $(C_{t+\tau_0}-C_{\tau_0})_{t\in[0,T-\tau_0]}$. Note that, in the case $\alpha\in(0,1)$, the derivative $C'$ is continuous only at $\tau_0$, i.e. at $t=0$ in the graph, and at no other contact point between the path and its convex minorant.} \label{fig:CM_0_and_postmin} \end{figure} {\textbf{Regime (FS): $C'$ immediately after $\tau_s$.}} Given a slope $s\in\R$, we have $s\notin\mathcal{S}$ a.s. by~\cite[Thm~3.1]{fluctuation_levy} since the law of $X$ is diffuse. By~\cite[Thm~1.1]{SmoothCM}, $s\in\mathcal{L}^+(\mathcal{S})$ if and only if the derivative $C'$ attains level $s$ at a unique time $\tau_s\in(0,T)$ (i.e. $C'_{\tau_s}=s$) and is not constant on any interval $[\tau_s,\tau_s+\varepsilon)$, $\varepsilon>0$, a.s. Moreover, $s\in\mathcal{L}^+(\mathcal{S})$ if and only if $\int_0^1 \p(X_t/t\in(s,s+\varepsilon))t^{-1}\mathrm{d} t=\infty$ for all $\varepsilon>0$. The regime (FS) includes an infinite variation process $X$ if it is strongly eroded (implying $\mathcal{L}^+(\mathcal{S})=\R$) or, more generally, if $(X_t-st)_{t\ge 0}$ is eroded (implying $s\in\mathcal{L}^+(\mathcal{S})$), see~\cite{SmoothCM}. Moreover, regime (FS) includes a finite variation process $X$ at slope $s\in\mathcal{L}^+(\mathcal{S})$ if and only if the natural drift $\gamma_0=\lim_{t\da 0}X_t/t$ equals $s$ and $\int_0^1 \p(X_t>\gamma_0t)t^{-1}\mathrm{d} t=\infty$ or, equivalently, if the positive half-line is regular for $(X_t-\gamma_0 t)_{t\ge 0}$ (see~\cite[Cor.~1.4]{SmoothCM} for a characterisation in terms of the L\'evy measure of $X$ or its characteristic exponent). Our results in regime (FS) are summarised as follows. For any process with $s\in\mathcal{L}^+(\mathcal{S})$, Theorem~\ref{thm:post-min-lower} establishes general sufficient conditions identifying when $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ is either $0$ a.s. or $\infty$ a.s. In particular, we show that $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ cannot take a positive finite value if $X$ has jumps of both signs and is $\alpha$-stable with $\alpha\in(0,1]$ (recall that, if $\alpha>1$, then $\mathcal{L}^+(\mathcal{S})=\emptyset$ by~\cite[Prop.~1.6]{SmoothCM}).
For processes $X$ in the small-time domain of attraction of an $\alpha$-stable process with $\alpha \in (0,1)$ (see Subsection~\ref{subsec:upper-post-min} below for definition), Theorem~\ref{thm:upper_fun_C'_post_min} finds a parametric family of functions $f$ that essentially determine the upper fluctuations of $C'_{t+\tau_{s}}-s$ up to sublogarithmic factors. In particular, Theorem~\ref{thm:upper_fun_C'_post_min} determines when $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ equals $0$ a.s. or $\infty$ a.s., essentially characterising the right-modulus of continuity\footnote{We say that a non-decreasing function $\varphi:[0,\infty)\to[0,\infty)$ is a right-modulus of continuity of a right-continuous function $g$ at $x\in\R$ if $\limsup_{y\da x}|g(y)-g(x)|/\varphi(y-x)<\infty$.} of $C'$ at $\tau_s$. The family of functions $f$ is given in terms of the regularly varying normalising function of $X$. {\textbf{Regime (IS): $C'$ immediately after $0$.}} The boundary of the convex hull of $X$ is smooth at the origin if and only if $\lim_{t\da 0}C'_t=-\infty$ a.s., which is equivalent to $X$ being of infinite variation (see~\cite[Prop.~1.5~\&~Sec.~1.1.2]{SmoothCM}). If $X$ has finite variation, then $C'$ is bounded (see~\cite[Prop.~1.3]{SmoothCM}). In this case, $C'$ has positive probability of being non-constant on the interval $[0,\varepsilon)$ for every $\varepsilon>0$ if and only if the negative half-line is not regular. Moreover, if this event occurs, then $C'_t$ approaches the natural drift $\gamma_0$ as $t\da 0$ by~\cite[Prop.~1.3(b)]{SmoothCM} and the local behaviour of $C'$ at $0$ would be described by the results of regime (FS). Thus, in regime (IS) we only consider L\'evy processes of infinite variation. Our results in regime (IS) are summarised as follows. For any infinite variation process $X$, Theorem~\ref{thm:C'_limsup} establishes general sufficient conditions for $\limsup_{t \da 0}|C'_t|f(t)$ to equal either $0$ a.s. or $\infty$ a.s. In particular, we show that $\limsup_{t\da 0}|C'_t|f(t)$ cannot take a positive finite value if $X$ is $\alpha$-stable with $\alpha\in[1,2)$ and has (at least some) negative jumps. If the L\'evy process lies in the domain of attraction of an $\alpha$-stable process, with $\alpha \in (1,2]$, Theorem~\ref{thm:lower_fun_C'} finds a parametric family of functions $f$ that essentially determine the lower fluctuations of $C'$ up to sublogarithmic functions. The function $f$ is given in terms of the regularly varying normalising function of $X$. Again, these results describe the right-modulus of continuity of the derivative of the boundary of the convex hull of $X$ (as a closed curve in $\R^2$) at the origin. In this case, for a sufficiently small $\varepsilon>0$, we may locally parametrise the curve $((t,C_t);t\in[0,\varepsilon])$, as $((\varsigma(t),t);t\in[C_\varepsilon,0])$, using a local inverse $\varsigma(t)$ of $C_t$ with left-derivative $\varsigma'(t)=1/C'_{\varsigma(t)}$ that vanishes at $0$ (since $\lim_{t\da 0}1/|C'_t|=0$ a.s.). Thus, the left-modulus of continuity of $\varsigma$ at $0$ is described by the upper and lower limits of $(|C'_t|f(t))^{-1}$ as $t\da 0$, the main focus of our results in this regime. {\textbf{Consequences for the path of a L\'evy process and its meander.}} In Subsection~\ref{subsec:applications} we present some implications the results in this paper have for the path of $X$. 
We find that, under certain conditions, the local fluctuations of $X$ can be described in terms of those of $C'$, yielding novel results for the local growth of the post-minimum process of $X$ and the corresponding L\'evy meander (see Lemma~\ref{lem:upper_fun_Lev_path_post_slope} and Corollaries~\ref{cor:post_tau_s_Levy_path} and~\ref{cor:post-tau_s-Levy-path-attraction} below). \subsection{Strategy and ideas behind the proofs} An overview of the proofs of our results is as follows. First we show that, under our assumptions, the local properties of $C'$ do not depend on the time horizon $T$. This reduces the problem to the case where the time horizon $T$ is independent of $X$ and exponentially distributed (the corresponding right-derivative is denoted $\wh C'$). Second, we translate the problem of studying the local behaviour of $\wh C'$ to the problem of studying the local behaviour of its inverse: the vertex time process $\wh\tau$. Third, we exploit the fact that, since the time horizon $T$ is an independent exponential random variable with mean $1/\lambda$, the vertex time process $\wh\tau$ is a time-inhomogeneous non-decreasing additive process (i.e., a process with independent but non-stationary increments) and its Laplace exponent is given by (see~\cite[Thm~2.9]{fluctuation_levy}): \begin{equation} \label{eq:cf_tau0} \E[e^{-w\wh\tau_u}] =e^{-\Phi_u(w)}, \quad \text{where} \quad \Phi_u(w)\coloneqq \int_0^\infty (1-e^{-w t})e^{-\lambda t}\p(X_t\le ut)\frac{\mathrm{d} t}{t}, \quad\text{for $w\ge 0$, $u\in\R$.} \end{equation} These three observations reduce the problem to the analysis of the fluctuations of the additive process~$\wh\tau$. The local properties of $C'$ are entirely driven by the small jumps of $X$. However, different facets of the small-jump activity of $X$ dominate in each regime, resulting in related but distinct results and criteria. Indeed, regime (FS) corresponds to the short-term behaviour of $\wh\tau_{s+u}-\wh\tau_{s}$ as $u\da 0$ while regime (IS) corresponds to the long-term behaviour of $\wh\tau_u$ as $u\to -\infty$ (note that, when $X$ is of infinite variation, $\tau_u>0$ for $u\in\R$ and $\lim_{u\to -\infty}\wh\tau_u=0$ a.s.). This is borne out by a difference in the behaviour of the Laplace exponent $\Phi$ of $\wh\tau$ at either bounded or unbounded slopes and leads to an interesting diagonal connection in behaviour that we now explain. Our main tool is the novel description of the upper and lower fluctuations of a non-decreasing time-inhomogeneous additive process $Y$ started at $Y_0=0$, in terms of its time-dependent L\'evy measure and Laplace exponent. In our applications, the process $Y$ is given by $(\wh\tau_{u+s}-\wh\tau_s)_{u\ge 0}$ in regime (FS) and $(\wh\tau_{-1/u})_{u\ge 0}$ (with conventions $-1/0=-\infty$ and $\wh\tau_{-\infty}=0$) in regime (IS). Then our main technical tools, Theorems~\ref{thm:limsup_L} and~\ref{thm:Y_limsup} of Section~\ref{sec:additive} below, describing the upper and lower fluctuations of $Y$, also serve to describe the lower and upper fluctuations, respectively, of the right-inverse $L$ of $Y$. Since, in regime (FS), we have $\wh C'_{t+\tau_s}-s=L_t$ but, in regime (IS), we have $\wh C'_t=-1/L_t$, the lower (resp. upper) fluctuations of $\wh C'$ in regime (FS) will have a similar structure to the upper (resp. lower) fluctuations of $\wh C'$ in regime (IS). This diagonal connection is \emph{a priori} surprising as the processes considered by either regime need not have a clear connection to each other.
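As a concrete illustration of \eqref{eq:cf_tau0}: for standard Brownian motion one has $\p(X_t\le ut)=F(u\sqrt{t})$, with $F$ the standard normal distribution function, and $\Phi_u(w)$ can be evaluated by quadrature. A Python sketch follows (the horizon rate $\lambda=1$ and the slopes printed are illustrative choices):
\begin{verbatim}
# Sketch: Laplace exponent Phi_u(w) of the vertex time hat-tau_u for
# standard Brownian motion, where P(X_t <= u t) = F(u sqrt(t)).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def Phi(u, w, lam=1.0):
    f = lambda t: (1 - np.exp(-w*t))*np.exp(-lam*t)*norm.cdf(u*np.sqrt(t))/t
    return quad(f, 0.0, np.inf)[0]

# E[exp(-hat-tau_u)] increases to 1 as u -> -infinity (hat-tau_u -> 0)
for u in (-8.0, -2.0, 0.0, 2.0):
    print(u, np.exp(-Phi(u, 1.0)))
\end{verbatim}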
Indeed, the two regimes cover rather different classes of processes: regime (FS) considers most finite variation processes and only some infinite variation processes, while regime (IS) considers exclusively infinite variation processes. This diagonal connection is reminiscent of the duality between a stable process with stability index $\alpha\in(1,2]$ and a corresponding stable process with stability index $1/\alpha\in[1/2,1)$ arising in the famous time-space inversion first observed by Zolotarev for the marginals and later studied by Fourati~\cite{MR2218871} for the ascending ladder process (see also~\cite{MR4237257} for further extensions of this duality). The lower and upper fluctuations of the corresponding process $Y$ require varying degrees of control on its Laplace exponent $\Phi$ in~\eqref{eq:cf_tau0}. The assumptions of Theorem~\ref{thm:limsup_L} require tight two-sided estimates of $\Phi$, not needed in Theorem~\ref{thm:Y_limsup}. When applying Theorem~\ref{thm:limsup_L}, we are compelled to assume $X$ lies in the domain of attraction of an $\alpha$-stable process. In regime (FS) this assumption yields sharp estimates on the density of $X_t$ as $t\da 0$, which in turn allows us to control the term $\p(0<X_t-st\le ut)$ for small $t>0$ in the Laplace exponent $\Phi_{s+u}-\Phi_s$ of $\wh \tau_{u+s}-\wh \tau_s$ as $u\da 0$, cf.~\eqref{eq:cf_tau0} above. The growth rate of the density of $X_t$ as $t\da 0$ is controlled by \emph{lower} estimates on the small-jump activity of $X$ given in Lemma~\ref{lem:generalized_Picard} below, a refinement of the results in~\cite{picard_1997} for processes attracted to a stable process. In regime (IS) we require control over the negative tail probabilities $\p(X_t\le ut)$ for small $t>0$ appearing in the Laplace exponent $\Phi_u$ of $\wh \tau_u$ as $u\to-\infty$, cf.~\eqref{eq:cf_tau0}. The behaviour of these tails is controlled by \emph{upper} estimates of the small-jump activity of $X$, which are generally easier to obtain. In this case, moment bounds for the small-jump component of the L\'evy process and the convergence in Kolmogorov distance implied by the attraction to the stable law give sufficient control over these tail probabilities. \subsection{Connections with the literature} In~\cite{MR1747095}, Bertoin finds the law of the convex minorant of the Cauchy process on $[0,1]$ and obtains the exact asymptotic behaviour (in the form of a law of iterated logarithm with a positive finite limit) for the derivative $C'$ at times $0$, $1$ and any $\tau_s$, $s\in\R$. The methods in~\cite{MR1747095} are specific to the Cauchy process with its linear scaling property, making the approach hard to generalise. In fact, the results in~\cite{MR1747095} are a direct consequence of the fact that the vertex time process $\wh \tau$ has a Laplace exponent $\Phi$ in~\eqref{eq:cf_tau0} that factorises as $\Phi_u(w)=\p(X_1\le u)\Phi_{\infty}(w)$, making $\wh \tau$ a gamma subordinator under the deterministic time-change $u\mapsto\p(X_1\le u)$, cf.~Example~\ref{ex:Cauchy} below. Paul L\'evy showed that the boundary of the convex hull of a planar Brownian motion has no corners at any point, see~\cite{MR0029120}, motivating~\cite{MR972777} to characterise the modulus of continuity of the derivative of that boundary. Given the recent characterisation of the smoothness of the convex hull of a L\'evy path~\cite{SmoothCM}, the results in the present paper are likewise motivated by the study of the modulus of continuity of the derivative of the boundary in this context.
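The Cauchy factorisation can also be simulated directly: with an exponential horizon of rate $\lambda$, a Frullani integral gives $\Phi_\infty(w)=\int_0^\infty(1-e^{-wt})e^{-\lambda t}t^{-1}\mathrm{d} t=\log(1+w/\lambda)$, so $\wh\tau$ is a gamma subordinator with rate $\lambda$ evaluated at $\theta(u)=\p(X_1\le u)$. A sketch (the slope grid and $\lambda=1$ are illustrative choices):
\begin{verbatim}
# Sketch: the Cauchy vertex time process as a gamma subordinator of
# rate lam run at speed theta(u) = P(X_1 <= u).
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(2)
lam = 1.0
u = np.linspace(-20.0, 20.0, 401)              # grid of slopes
theta = cauchy.cdf(u)                          # deterministic time change
shapes = np.diff(np.concatenate([[0.0], theta]))
tau = np.cumsum(rng.gamma(shape=shapes, scale=1.0/lam))
# tau is non-decreasing and tau_u -> T ~ Exp(lam) as u -> infinity
print(tau[0], tau[-1])
\end{verbatim}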
The literature on the growth rate of the path of a L\'evy process $X$ is vast, particularly for subordinators, see e.g.~\cite{MR0002054,MR1746300,MR292163,MR210190,MR968135,MR2480786,MR2591911}. The authors in~\cite{MR292163,MR210190} study the growth rate of a subordinator at $0$ and $\infty$. In~\cite{MR210190} (see also~\cite[Prop~4.4]{MR1746300}) Fristedt fully characterises the upper fluctuations of a subordinator in terms of its L\'evy measure, a result we generalise in Theorem~\ref{thm:Y_limsup} to processes that need not have stationary increments. In~\cite[Thm~4.1]{MR1746300} (see also~\cite[Thm~1]{MR292163}), a function essentially characterising the exact lower fluctuations of a subordinator is constructed in terms of its Laplace exponent. These methods are not easily generalised to the time-inhomogeneous case since the Laplace exponent is now bivariate and there is neither a one-parameter lower function to propose nor a clear extension to the proofs. In~\cite{MR1113220}, Sato establishes results for time-inhomogeneous non-decreasing additive processes similar to our result in Section~\ref{sec:additive}. The assumptions in~\cite{MR1113220} are given in terms of the transition probabilities of the additive process, which are generally intractable, particularly for the processes $(\wh\tau_{-1/u})_{u>0}$ and $(\wh\tau_{u+s}-\wh\tau_s)_{u\ge 0}$, considered here. Our results are also easier to apply in other situations, for example to fractional Poisson processes (see definition in~\cite{MR3943682}). The upper fluctuations of a L\'evy process at zero have been the topic of numerous studies, see~\cite{MR2370602,MR2591911} for the one-sided problem and~\cite{MR0002054,MR968135,MR2480786} for the two-sided problem. Similar questions have been considered for more general time-homogeneous Markov processes~\cite{Franziska_Kuhn,SoobinKimLeeLIL}. The time-homogeneity again plays an important role in these results. The lower fluctuations of a stochastic process are only qualitatively different from the upper fluctuations if the process is positive. This is why the problem has mostly been addressed only for subordinators (see the references above) and for the running supremum of a L\'evy process, see e.g.~\cite{MR3019488}. We stress that the results in the present paper, while related in spirit to this literature, are fundamentally different in two ways. First, we study the \emph{derivative} of the convex minorant of a L\'evy path on $[0,T]$, which (unlike e.g. the running supremum) cannot be constructed locally from the restriction of the path of the L\'evy process to any short interval. Second, the convex minorant and its derivative are neither Markovian nor time-homogeneous. In fact, the only result in our context prior to our work is in the Cauchy case~\cite{MR1747095}, where the derivative of the convex minorant is an explicit gamma process under a deterministic time-change, cf.~Example~\ref{ex:Cauchy} below. \subsection{Organisation of the article} In Section~\ref{sec:small-time-derivative} we present the main results of this article. We split the section into four, according to regimes (FS) and (IS) and whether the upper or lower fluctuations of $C'$ are being described. The implications of the results in Section~\ref{sec:small-time-derivative} for the L\'evy process and meander are covered in Subsection~\ref{subsec:applications}. In Section~\ref{sec:additive}, technical results for general time-inhomogeneous non-decreasing additive processes are established.
Section~\ref{sec:proofs} recalls from~\cite{fluctuation_levy} the definition and law of the vertex time process $\tau$ and provides the proofs of the results stated in Section~\ref{sec:small-time-derivative}. Section~\ref{sec:concluding_rem} concludes the paper. \section{Growth rate of the derivative of the convex minorant} \label{sec:small-time-derivative} Let $X=(X_t)_{t \ge 0}$ be an infinite activity L\'evy process (see~\cite[Def.~1.6, Ch.~1]{MR3185174}). Let $C=(C_t)_{t \in [0,T]}$ be the convex minorant of $X$ on $[0,T]$ for some $T>0$. Put differently, $C$ is the largest convex function that is pointwise smaller than the path of $X$ (see~\cite[Sec.~3, p.~8]{fluctuation_levy}). In this section we analyse the growth rate of the right derivative of $C$, denoted by $C'=(C_t')_{t\in(0,T)}$, near time $0$ and at the vertex time $\tau_s=\inf\{t>0:C'_t>s\}\wedge T$ of the slope $s\in \R$ (i.e., the first time $C'$ exceeds slope $s$). More specifically, we give sufficient conditions to identify the values of the possibly infinite limits (for appropriate increasing functions $f$ with $f(0)=0$): $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ \& $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ in the finite slope (FS) regime and $\limsup_{t\da 0}|C'_t|f(t)$ \& $\liminf_{t\da 0}|C'_t|f(t)$ in the infinite slope (IS) regime. The values of these limits are constants in $[0,\infty]$ a.s. by Corollary~\ref{cor:trivial} below. We note that these limits are invariant under certain modifications of the law of $X$, which we describe in the following remark. \begin{remark} \label{rem:modify_nu}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[{\nf(a)}] Let $\p$ be the probability measure on the space where $X$ is defined. If the limits $\limsup_{t\da0}|C'_t|f(t)$, $\liminf_{t\da0}|C'_t|f(t)$, $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ and $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ are $\p$-a.s. constant, then they are also $\p'$-a.s. constant with the same value for any probability measure $\p'$ absolutely continuous with respect to $\p$. In particular, we may modify the L\'evy measure of $X$ on the complement of any neighborhood of $0$ without affecting these limits (see e.g.~\cite[Thm~33.1--33.2]{MR3185174}). \item[{\nf(b)}] We may add a drift process to $X$ without affecting the limits at $0$ since such a drift would only shift $|C'_t|$ by a constant value and $f(t)\to 0$ as $t \downarrow0$. Similarly, for the limits of $(C'_{t+\tau_s}-s)/f(t)$ as $t\da 0$, it suffices to analyse the post-minimum process (i.e., the path after the vertex time $\tau_0$) of the process $(X_t-st)_{t\ge 0}$. For ease of reference, our results are stated for a general slope $s$.\qedhere \end{itemize} \end{remark} \subsection{Regime (FS): lower functions at time \texorpdfstring{$\tau_s$}{tau}} \label{subsec:lower-post-min} The following theorem describes the lower fluctuations of $C'_{t+\tau_s}-s$ as $t \da 0$. Recall that $\mathcal{L}^+(\mathcal{S})$ is the a.s. deterministic set of right-limit points of the set of slopes $\mathcal{S}$. \begin{theorem}\label{thm:post-min-lower} Let $s\in\mathcal{L}^+(\mathcal{S})$ and $f$ be continuous and increasing, satisfying $f(t)\le 1=f(1)$ for $t\in(0,1]$ and $f(0)=0=\lim_{c\da0}\limsup_{t\da0}f(ct)/f(t)$.
Let $c>0$ and consider the following conditions: \begin{gather} \label{eq:post-min-Pi-large} \int_0^1 \p(0<(X_t-st)/t \le f(t/c))\frac{\mathrm{d} t}{t}<\infty,\\ \label{eq:post-min-Pi-var} \int_0^1\E\left[\frac{t}{f^{-1}((X_t-st)/t)^2}\1_{\{f(t/2)<(X_t-st)/t\le 1\}}\right]\mathrm{d} t<\infty,\\ \label{eq:post-min-Pi-mean} 2^n\int_0^{2^{-n}} \p(f(t/2)<(X_t-st)/t\le f(2^{-n}))\mathrm{d} t\to 0, \quad\text{as }n\to\infty. \end{gather} Then the following statements hold. \begin{itemize}[leftmargin=2.5em, nosep] \item[{\nf{(i)}}] If \eqref{eq:post-min-Pi-large}--\eqref{eq:post-min-Pi-mean} hold for $c=1$, then $\liminf_{t\da0}(C'_{t+\tau_s}-s)/f(t)=\infty$ a.s. \item[{\nf(ii)}] If~\eqref{eq:post-min-Pi-large} fails for every $c>0$, then $\liminf_{t\da0}(C'_{t+\tau_s}-s)/f(t)=0$ a.s. \item[\nf{(iii)}] If $\liminf_{t\da0}(C'_{t+\tau_s}-s)/f(t)>1$ a.s., then \eqref{eq:post-min-Pi-large} holds for any $c>1$. \end{itemize} \end{theorem} Some remarks are in order. \begin{remark} \label{rem:limsup_cond_post_min}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[(a)] Any continuous regularly varying function $f$ of index $r>0$ satisfies the assumption in the theorem: $\lim_{c\da 0}\lim_{t\da 0}f(ct)/f(t)=\lim_{c\da 0}c^r=0$. Moreover, the assumption $f(t)\le 1=f(1)$ for $t\in(0,1]$ is not necessary but makes conditions~\eqref{eq:post-min-Pi-large}--\eqref{eq:post-min-Pi-mean} take a simpler form. \item[(b)] The proof of Theorem~\ref{thm:post-min-lower} is based on the analysis of the upper fluctuations of $\tau$ at slope $s$. Condition~\eqref{eq:post-min-Pi-large} ensures $(\tau_{u+s}-\tau_{s})_{u\ge 0}$ jumps finitely many times over the boundary $u\mapsto f^{-1}(u)$, condition~\eqref{eq:post-min-Pi-mean} makes the small-jump component of $(\tau_{u+s}-\tau_{s})_{u\ge 0}$ (i.e. the sum of the jumps at times $v\in[s,u+s]$ of size at most $f^{-1}(v)$) have a mean that tends to $0$ as $u\da 0$ and condition~\eqref{eq:post-min-Pi-var} controls the deviations of $(\tau_{u+s}-\tau_{s})_{u\ge 0}$ away from its mean. \item[{(c)}] Note that~\eqref{eq:post-min-Pi-mean} holds if $\int_0^1\p(f(2^{-n}t/2)<(X_{2^{-n}t}-s2^{-n}t)/(2^{-n}t)\le f(2^{-n}))\mathrm{d} t\to 0$ as $n\to\infty$, which, by the dominated convergence theorem, holds if $\p(f(u/2)<(X_{u}-su)/u\le f(u/t))\to 0$ as $u\da 0$ for a.e. $t\in(0,1)$. \item[{(d)}] Condition~\eqref{eq:post-min-Pi-var} in Theorem~\ref{thm:post-min-lower} requires access to the inverse $f^{-1}$ of the function $f$. In the special case when the function $f$ is concave, this assumption can be replaced with an assumption given in terms of $f$ (cf. Proposition~\ref{prop:Y_limsup} and Corollary~\ref{cor:L_liminf}). However, it is important to consider non-concave functions $f$, see Corollary~\ref{cor:power_func_liminf_post_min} below.\qedhere \end{itemize} \end{remark} \subsubsection{Simple sufficient conditions for the assumptions of Theorem~\ref{thm:post-min-lower} }\label{subsec:simp_suff_cond_tau_s} Let $f$ be as in Theorem~\ref{thm:post-min-lower}. By Theorem~\ref{thm:Y_limsup}(c) below (with the measure $\Pi(\mathrm{d} x,\mathrm{d} t)=\p((X_t-st)/t \in \mathrm{d} x)t^{-1}\mathrm{d} t$), the following condition implies~\eqref{eq:post-min-Pi-var}--\eqref{eq:post-min-Pi-mean}: \begin{equation} \label{eq:suff_low_post_min} \int_0^1 \E\left[\frac{1}{f^{-1}((X_t-st)/t)}\1_{\{f(t/2)< (X_t-st)/t\le 1\}}\right]\mathrm{d} t <\infty. 
\end{equation} If estimates on the density of $X_t$ are available (e.g., via assumptions on the generating triplet of $X$), \eqref{eq:suff_low_post_min} can be simplified further, see Corollary~\ref{cor:power_func_liminf_post_min} below. Throughout, we denote by $(\gamma,\sigma^2,\nu)$ the generating triplet of $X$ (corresponding to the cutoff function $x\mapsto\1_{(-1,1)}(x)$, see~\cite[Def.~8.2]{MR3185174}), where $\gamma \in \R$ is the drift parameter, $\sigma^2 \ge 0$ is the Gaussian coefficient and $\nu$ is the L\'evy measure of $X$ on $\R$. We also define the functions \[ \ov\sigma^2(\varepsilon)\coloneqq \sigma^2+\ov\sigma^2_+(\varepsilon)+\ov\sigma^2_-(\varepsilon), \quad \ov\sigma^2_+(\varepsilon)\coloneqq\int_{(0,\varepsilon)}x^2\nu(\mathrm{d} x),\quad \ov\sigma^2_-(\varepsilon)\coloneqq\int_{(-\varepsilon,0)}x^2\nu(\mathrm{d} x),\quad \text{for $\varepsilon>0$.} \] Recall that, in regime (FS), we have $\sigma^2=0$ (see~\cite[Prop.~1.6]{SmoothCM}). Given two positive functions $g_1$ and $g_2$, we say $g_1(\varepsilon)=\Oh(g_2(\varepsilon))$ as $\varepsilon\da 0$ if $\limsup_{\varepsilon\da 0}g_1(\varepsilon)/g_2(\varepsilon)<\infty$. Similarly, we write $g_1(\varepsilon)\approx g_2(\varepsilon)$ as $\varepsilon\da 0$ if $g_1(\varepsilon)=\Oh(g_2(\varepsilon))$ and $g_2(\varepsilon)=\Oh(g_1(\varepsilon))$. \begin{corollary} \label{cor:power_func_liminf_post_min} Fix $\beta\in(0,1]$ and let $s\in\mathcal{L}^+(\mathcal{S})$ and $f$ be as in Theorem~\ref{thm:post-min-lower}. \begin{itemize}[leftmargin=2em, nosep] \item[{\nf{(a)}}] If $\liminf_{\varepsilon \da 0}\varepsilon^{\beta-2}\ov\sigma^2(\varepsilon)>0$, $f$ is differentiable with positive derivative $f'>0$ and the integrals $\int_0^1\int_{t/2}^1(f'(y)/y)t^{1-1/\beta}\mathrm{d} y\mathrm{d} t$ and $\int_0^1t^{-1/\beta}f(t)\mathrm{d} t$ are finite, then $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=\infty$ a.s. \item[{\nf{(b)}}] Assume $\int_0^1 ((t^{-1/\beta}f(t))\wedge t^{-1})\mathrm{d} t=\infty$ and either of the following hold: \begin{itemize}[leftmargin=2em, nosep] \item[{\nf{(i)}}] $\ov\sigma^2(\varepsilon)\approx\varepsilon$ and $|\int_{(-1,1)\setminus(-\varepsilon,\varepsilon)}x\nu(\mathrm{d} x)|=\Oh(1)$ as $\varepsilon\da 0$, \item[{\nf{(ii)}}] $\beta\in(0,1)$ and $\ov\sigma^2_\pm(\varepsilon)\approx\varepsilon^{2-\beta}$ as $\varepsilon\da 0$ for both signs of $\pm$, \end{itemize} then $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=0$ a.s. \end{itemize} \end{corollary} We stress that the sufficient conditions in Corollary~\ref{cor:power_func_liminf_post_min} are all in terms of the characteristics of the L\'evy process $X$ and the function $f$. \begin{remark} \label{rem:power_func_liminf_post_min}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[{{(a)}}] The assumptions in Corollary~\ref{cor:power_func_liminf_post_min} are satisfied by most processes in the class $\mathcal{Z}_{\alpha,\rho}$ of L\'evy processes in the small-time domain of attraction of an $\alpha$-stable distribution, see Subsection~\ref{subsec:upper-post-min} below (cf.~\cite[Eq.~(8)]{MR3784492}). Thus, the assumptions of part (a) in Corollary~\ref{cor:power_func_liminf_post_min} hold for any $X\in\mathcal{Z}_{\alpha,\rho}$ and $\beta<\alpha$ (by Karamata's theorem~\cite[Thm~1.5.11]{MR1015093}, we can take $\beta=\alpha$ if the normalising function $g$ of $X$ satisfies $\liminf_{t\da0}t^{-1/\alpha}g(t)>0$). Moreover, the assumptions of cases (b-i) and (b-ii) hold for processes in the domain of normal attraction (i.e. 
if the normalising function equals $g(t)=t^{1/\alpha}$ for all $t>0$) with $\rho\in(0,1)$ and $\beta=\alpha\in(0,1]$, see~\cite[Thm~2]{MR3784492}. In particular, these assumptions are satisfied by stable processes with $\alpha\in(0,1]$ and $\rho\in(0,1)$. \item[{{(b)}}] Both integrals in part (a) of Corollary~\ref{cor:power_func_liminf_post_min} are finite or infinite simultaneously whenever $f'$ is regularly varying at $0$ with nonzero index by Karamata's theorem~\cite[Thm~1.5.11]{MR1015093}. Thus, in that case, under the conditions of either (b-i) or (b-ii), the limit $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ equals $0$ or $\infty$ according to whether $\int_0^1t^{-1/\beta}f(t)\mathrm{d} t$ is infinite or finite, respectively. \item[{{(c)}}] The case $\beta>1$ is not considered in Corollary~\ref{cor:power_func_liminf_post_min}(a) and (b-ii) since in this case we would have $\mathcal{L}^+(\mathcal{S})=\emptyset$ by~\cite[Prop.~1.6]{SmoothCM}. \qedhere \end{itemize} \end{remark} \begin{proof}[Proof of Corollary~\ref{cor:power_func_liminf_post_min}] Assume without loss of generality that $s=0\in\mathcal{L}^+(\mathcal{S})$ (equivalently, we consider the process $(X_t-st)_{t\ge 0}$ for $s\in\mathcal{L}^+(\mathcal{S})$). (a) Our assumptions and~\cite[Thm~3.1]{picard_1997} show that the density $x\mapsto p_X(t,x)$ of $X_t$ exists for $t>0$ and moreover $\sup_{x\in\R}p_X(t,x)\le Ct^{-1/\beta}$ for some $C>0$ and all $t\in(0,1]$. Thus,~\eqref{eq:suff_low_post_min} is implied by \begin{equation} \label{eq:simple_suff_cond_post_min_density} \int_0^1 \int_{tf(t/2)}^t\frac{1}{f^{-1}(x/t)}t^{-1/\beta} \mathrm{d} x \mathrm{d} t =\int_0^1 \int_{t/2}^1\frac{f'(y)}{y}t^{1-1/\beta}\mathrm{d} y \mathrm{d} t<\infty, \end{equation} where we have used the change of variable $x=tf(y)$. Similarly, the bound on the density of $X_t$ shows that condition~\eqref{eq:post-min-Pi-large} holds if $\int_0^1 t^{-1/\beta}f(t)\mathrm{d} t<\infty$. Thus, the result follows from Theorem~\ref{thm:post-min-lower}. (b) In either case (i) or (ii), our assumptions and~\cite[Thm~4.3]{picard_1997} show that $Ct^{-1/\beta}\le p_X(t,x)$ for some $C>0$, all $t\in(0,1]$ and all $|x|\le t^{1/\beta}$. Thus $\p(0<X_t\le tf(t/c))\ge ((tf(t/c))\wedge t^{1/\beta})Ct^{-1/\beta}$, implying that~\eqref{eq:post-min-Pi-large} fails for some $c>0$ whenever $\int_0^1 ((t^{-1/\beta}f(t/c))\wedge t^{-1})\mathrm{d} t=\infty$. A simple change of variables shows that this integral is either finite for all $c>0$ or infinite for all $c>0$. The result then follows from Theorem~\ref{thm:post-min-lower}(ii). \end{proof} The following is another simple corollary of Theorem~\ref{thm:post-min-lower}. This result can also be established using arguments similar to those used in~\cite[Cor.~3]{MR1747095}, see the discussion following the proof of~\cite[Cor.~3]{MR1747095}. \begin{corollary} \label{cor:stable_liminf_post_min} Let $X$ be a Cauchy process, $f$ be as in Theorem~\ref{thm:post-min-lower} and pick $s\in\R$. Then the limit $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ equals $0$ (resp. $\infty$) a.s. if $\int_0^1 t^{-1}f(t)\mathrm{d} t$ is infinite (resp. finite). \end{corollary} \begin{proof} Assume without loss of generality that $s=0$. Then the law of $X_t/t$ does not depend on $t>0$ and hence the integral in~\eqref{eq:suff_low_post_min} equals \[ \int_0^1\E\bigg[\frac{\1_{\{t/2<f^{-1}(X_1)\le 1\}}}{f^{-1}(X_1)}\bigg]\mathrm{d} t =\E\bigg[\int_0^1\frac{\1_{\{t/2<f^{-1}(X_1)\le 1\}}}{f^{-1}(X_1)}\mathrm{d} t\bigg] \le 2\p(X_1\in(0,1])<\infty.
\] Moreover, condition~\eqref{eq:post-min-Pi-large} simplifies to $\int_0^1\p(0<X_1\le f(t/c))t^{-1}\mathrm{d} t<\infty$, which is equivalent to the integral $\int_0^1t^{-1}f(t/c)\mathrm{d} t$ being finite since $X_1$ has a bounded density that is bounded away from zero on $[0,1]$. The change of variables $t'=t/c$ shows that this integral is either finite for all $c>0$ or infinite for all $c>0$. Thus, Theorem~\ref{thm:post-min-lower} gives the result. \end{proof}
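The dichotomy in Corollary~\ref{cor:stable_liminf_post_min} is straightforward to evaluate for concrete scaling functions; the following elementary check is included for illustration only. The power functions $f(t)=t^q$ with $q>0$ satisfy the assumptions of Theorem~\ref{thm:post-min-lower} (indeed, $\lim_{c\da0}\limsup_{t\da0}f(ct)/f(t)=\lim_{c\da0}c^q=0$) and
\[
\int_0^1 t^{-1}f(t)\,\mathrm{d} t=\int_0^1 t^{q-1}\,\mathrm{d} t=\frac{1}{q}<\infty,
\]
so $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/t^q=\infty$ a.s. for every $q>0$: for Cauchy process, the derivative $C'$ leaves the slope $s$ at a rate slower than any positive power of $t$, consistent with the logarithmic scale in the law of the iterated logarithm of~\cite[p.~54]{MR1747095} discussed in Remark~\ref{rem:exclusions-tau}(b) below.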
\subsection{Regime (FS): upper functions at time \texorpdfstring{$\tau_s$}{tau}} \label{subsec:upper-post-min} The upper fluctuations of $C'_{t+\tau_s}-s$ are harder to describe than the lower fluctuations studied in Subsection~\ref{subsec:lower-post-min} above.
The main reason for this is that in Theorem~\ref{thm:upper_fun_C'_post_min} below the $\limsup$ of $C'$ at a vertex time $\tau_s$ can be expressed in terms of the $\liminf$ of the vertex time process $\tau$, which requires strong two-sided control on the Laplace exponent $\Phi_{u+s}(w)-\Phi_{s}(w)$, defined in~\eqref{eq:cf_tau0}, of the variable $\tau_{u+s}-\tau_s$ as $w\to\infty$ and $u\da 0$. (In the proof of Theorem~\ref{thm:post-min-lower}, the $\limsup$ of the vertex time process $\tau$ is needed instead, which is easier to control.) In turn, by~\eqref{eq:cf_tau0}, this requires sharp two-sided estimates on the probability $\p(0<X_t-st\le ut)$ as a function of $(u,t)$ for small $u,t>0$. In particular, it is important to have strong control on the density of $X_t$ for small $t>0$ on the ``pizza slice'' $\{(t,x):s<x/t\le u+s\}$ as $u\da 0$. We establish these estimates for processes in the domain of attraction of an $\alpha$-stable process, leading to Theorem~\ref{thm:upper_fun_C'_post_min} below. We denote by $\mathcal{Z}_{\alpha,\rho}$ the class of L\'evy processes in the small-time domain of attraction of an $\alpha$-stable process with positivity parameter $\rho\in[0,1]$ (see~\cite[Eq.~(8)]{MR3784492}). In the case $\alpha<1$, relevant in the regime (FS) at slope $s$ equal to the natural drift $\gamma_0$, for each L\'evy process $X\in\mathcal{Z}_{\alpha,\rho}$ there exists a normalising function $g$ that is regularly varying at $0$ with index $1/\alpha$ and an $\alpha$-stable process $(Z_u)_{u\in[0,T]}$ with $\rho=\p(Z_1>0)\in[0,1]$ such that the weak convergence $((X_{ut}-\gamma_0ut)/g(t))_{u\in[0,T]}\cid(Z_u)_{u\in[0,T]}$ holds as $t\da 0$. Given $X\in\mathcal{Z}_{\alpha,\rho}$ with normalising function $g$, we define $G(t)\coloneqq t/g(t)$ for $t\in(0,\infty)$. \begin{theorem}\label{thm:upper_fun_C'_post_min} Suppose $X\in\mathcal{Z}_{\alpha,\rho}$ for some $\alpha\in(0,1)$ and $\rho\in(0,1]$. Define $f: (0,1) \to (0,\infty)$ through $f(t)\coloneqq 1/G(t\log^p (1/t))$, $t\in(0,1)$, for some $p\in\R$. Then the following hold for $s=\gamma_0$: \begin{itemize}[leftmargin=2.5em, nosep] \item[\nf(i)] if $p>1/\rho$, then $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=0$ a.s., \item[\nf(ii)] if $p<1/\rho$, then $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=\infty$ a.s. \end{itemize} \end{theorem} The class $\mathcal{Z}_{\alpha,\rho}$ is quite large and the assumption $X\in\mathcal{Z}_{\alpha,\rho}$ essentially reduces to the L\'evy measure of $X$ being regularly varying at $0$, see~\cite[\S4]{MR3784492} for a full characterisation of this class. In particular, $\alpha$ agrees with the Blumenthal--Getoor index $\beta_\mathrm{BG}$ defined in~\eqref{eq:beta} below. Moreover, for $\alpha<1$ and $\rho \in (0,1]$, the assumption $X\in\mathcal{Z}_{\alpha,\rho}$ implies that $X$ is of finite variation with $\p(X_t-\gamma_0t>0)\to\rho$ as $t\da 0$, implying $\mathcal{L}^+(\mathcal{S})=\{\gamma_0\}$ by~\cite[Prop.~1.3 \& Cor.~1.4]{SmoothCM}. Note that the function $f$ in Theorem~\ref{thm:upper_fun_C'_post_min} is regularly varying at $0$ with index $1/\alpha-1$. The appearance of the positivity parameter $\rho$, a nontrivial function of the L\'evy measure of $X$, in Theorem~\ref{thm:upper_fun_C'_post_min} suggests that the upper fluctuations of $C'$ at time $\tau_s$ (for $s=\gamma_0$) are more delicate than the lower fluctuations described in Theorem~\ref{thm:post-min-lower}. Indeed, if $X\in\mathcal{Z}_{\alpha,\rho}$ is in the domain of normal attraction (i.e.
$g(t)=t^{1/\alpha}$) and $\rho\in(0,1)$, then the lower fluctuations of $C'$ at vertex time $\tau_s$, characterised by Corollary~\ref{cor:power_func_liminf_post_min}(a) \& (b-ii) (with $\beta=\alpha$) and Remark~\ref{rem:power_func_liminf_post_min}(a), do not involve the parameter $\rho$. In particular, by Theorem~\ref{thm:upper_fun_C'_post_min} and Corollary~\ref{cor:power_func_liminf_post_min}(b-ii), we have $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=0$ and $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=\infty$ a.s. for $f(t)=t^{1/\alpha-1}\log^{q}(1/t)$ and any $q\in[-1,(1/\alpha-1)/\rho)$, demonstrating the gap between the lower and upper fluctuations of $C'$ at vertex time $\tau_s$. \begin{remark} \label{rem:exclusions-tau}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[(a)] The conclusions of Theorem~\ref{thm:upper_fun_C'_post_min} are expected to remain valid, for the same functions $f$, in the case where $X$ is attracted to Cauchy process with $\alpha=1$. For such $X\in\mathcal{Z}_{1,\rho}$, a multitude of cases arise including $X$ having (i) less activity (e.g., $X$ is of finite variation), (ii) a similar amount of activity (i.e., $X$ is in the domain of normal attraction) or (iii) more activity than Cauchy process (see, e.g.~\cite[Ex.~2.1--2.2]{SmoothCM}). In terms of the normalising function $g$ of $X$, these cases correspond to the limit $\lim_{t\da 0}t^{-1/\alpha}g(t)$ being equal to: (i) zero, (ii) a finite and positive constant or (iii) infinity. (Recall that in cases (ii) and (iii) $X$ is strongly eroded with $\mathcal{L}^+(\mathcal{S})=\R$, see~\cite[Ex.~2.1--2.2]{SmoothCM}, and in case (i) $X$ may be strongly eroded, by~\cite[Thm~1.8]{SmoothCM}, or of finite variation with $\mathcal{L}^+(\mathcal{S})=\{\gamma_0\}$ by~\cite[Prop.~1.3]{SmoothCM} and the fact that $\lim_{t\da0}\p(X_t>0)=\rho\in(0,1)$.) However, we stress that our methodology can be used to obtain a description of the upper fluctuations of $C'$ at $\tau_s$ in cases (i), (ii) and (iii). This would require an application of Theorem~\ref{thm:limsup_L} along with two-sided estimates of the Laplace exponent $\Phi$ of the vertex time process in~\eqref{eq:cf_tau0}, generalising Lemma~\ref{lem:asymp_equiv_Psi_domstable_post_min} to the case $\alpha=1$. In the interest of brevity we do not give the details of this extension. \item[(b)] The boundary case $p=1/\rho$ can be analysed along similar lines. In fact, our methods can be used to get increasingly sharper results, determining the value of $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ for functions $f$ containing powers of iterated logarithms, when stronger control over the densities of the marginals of $X$ is available. Such refinements are possible when $X$ is a stable process, cf.~Section~\ref{sec:concluding_rem}. In particular, we may prove the following law of the iterated logarithm given in~\cite[p.~54]{MR1747095} for a Cauchy process $X$ with density $x\mapsto p_X(t,x)$ at time $t>0$: for any $s\in\R$ and the function $f(t)=(\log\log\log(1/t))/\log(1/t)$, we have $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=1/p_X(1,s)$ a.s. \qedhere \end{itemize} \end{remark} \subsection{Regime (IS): upper functions at time {\nf{0}}} \label{subsec:upper_fun_C} Throughout this subsection we assume $X$ has infinite variation, which is equivalent to $\liminf_{t\da0}C'_t=-\infty$ a.s.~\cite[Sec.~1.1.2]{SmoothCM}. The following theorem describes the upper fluctuations of $C'_t$ as $t\da 0$.
\begin{theorem} \label{thm:C'_limsup} Let $f$ be continuous and increasing with $f(0)=0=\lim_{c\da0}\limsup_{t\da0}f(ct)/f(t)$ and $f(t)\le 1=f(1)$ for $t\in(0,1]$. Let $c>0$, denote $F(t)\coloneqq t/f(t)$ for $t>0$ and consider the conditions \begin{gather} \label{eq:C'_large} \int_0^1 \p(X_t\le -cF(t))\frac{\mathrm{d} t}{t}<\infty,\\ \label{eq:C'_var} \int_0^1 \E[(X_t/F(t))^2\1_{\{-2F(t)<X_t\le -t\}}]\frac{\mathrm{d} t}{t}<\infty,\\ \label{eq:C'_mean} 2^n\int_0^{2^{-n}} \p(-t/f(2^{-n})\ge X_t>-2F(t/2))\mathrm{d} t \to 0, \quad \text{ as }n \to \infty. \end{gather} Then the following statements hold. \begin{itemize}[leftmargin=2.5em, nosep] \item[{\nf(i)}] If~\eqref{eq:C'_large}--\eqref{eq:C'_mean} hold for $c=1$ and $f$ is concave, then $\limsup_{t \da 0} |C_t'|f(t)=0$ a.s. \item[{\nf(ii)}] If~\eqref{eq:C'_large} fails for all $c>0$, then $\limsup_{t\da0} |C_t'|f(t)=\infty$ a.s. \item[{\nf(iii)}] If $\limsup_{t \da 0} |C_t'|f(t)<1$ a.s., then~\eqref{eq:C'_large} holds for any $c>1$. \end{itemize} \end{theorem} Some remarks are in order. \begin{remark} \label{rem:limsup_cond_3}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[{(a)}] Any continuous regularly varying function $f$ of index $r>0$ satisfies the assumption in the theorem, see Remark~\ref{rem:limsup_cond_post_min}(a) above. \item[{(b)}] The proof of Theorem~\ref{thm:C'_limsup} is based on the analysis of the upper fluctuations of the vertex time $\tau_{-1/u}$ as $u\da0$. The interpretation and purpose of conditions~\eqref{eq:C'_large}--\eqref{eq:C'_mean} are analogous to those of conditions~\eqref{eq:post-min-Pi-large}--\eqref{eq:post-min-Pi-mean}, respectively, see Remark~\ref{rem:limsup_cond_post_min}(b) above. \item[{(c)}] Note that~\eqref{eq:C'_mean} holds if $\int_0^1\p(-2F(2^{-n}t/2)< X_{2^{-n}t}\le -tF(2^{-n}))\mathrm{d} t\to 0$ as $n\to\infty$, which, by the dominated convergence theorem, is the case if $\p(-2F(u/2)<X_{u}\le -tF(u/t))\to 0$ as $u\da 0$ for a.e. $t\in(0,1)$. \item[{(d)}] The assumed concavity of $f$ in part (i) can be dropped by modifying assumption~\eqref{eq:C'_var} into a condition involving the inverse of $f$ (cf. Corollary~\ref{cor:L_liminf} and Proposition~\ref{prop:Y_limsup}). We do not make this explicit in the statement of Theorem~\ref{thm:C'_limsup} because the functions of interest in this context are typically concave.\qedhere \end{itemize} \end{remark} \subsubsection{Simple sufficient conditions for the assumptions of Theorem~\ref{thm:C'_limsup}} \label{subsec:simple_suff_cond_time_0} The tail probabilities of $X_t$ appearing in the assumptions of Theorem~\ref{thm:C'_limsup} are not analytically available in general.
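As a simple illustration of how these tails become tractable under additional structure (a sketch of the stable case only, anticipating the proof of Corollary~\ref{cor:stable_limsup} below), if $X$ is $\alpha$-stable, then self-similarity yields
\[
\p(X_t\le -cF(t))=\p\big(X_1\le -ct^{-1/\alpha}F(t)\big),\qquad\text{for all }c,t>0,
\]
reducing conditions~\eqref{eq:C'_large}--\eqref{eq:C'_mean} to one-dimensional estimates on the distribution function of $X_1$.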
In this subsection we present sufficient conditions, in terms of the generating triplet $(\gamma,\sigma^2,\nu)$ of $X$, implying the assumptions in~\eqref{eq:C'_large}--\eqref{eq:C'_mean} of Theorem~\ref{thm:C'_limsup}. Recall that $\ov\sigma^2(\varepsilon) =\sigma^2 + \int_{(-\varepsilon,\varepsilon)}x^2\nu(\mathrm{d} x)$ for $\varepsilon>0$, and define: \begin{equation} \label{eq:ov_functions} \ov \gamma(\varepsilon) \coloneqq \gamma-\int_{(-1,1)\setminus(-\varepsilon,\varepsilon)}x\nu(\mathrm{d} x), \qquad \ov\nu(\varepsilon) \coloneqq \nu(\R\setminus(-\varepsilon,\varepsilon)), \quad\text{for all }\varepsilon>0. \end{equation} Let $f$ and $F$ be as in Theorem~\ref{thm:C'_limsup} and note that $F(t)\in(0,1]$ since $f$ is concave with $f(1)=1$. The inequalities in Lemma~\ref{lem:upper_tail_bound} (with $p=2$, $\varepsilon=F(t)\in(0,1]$ and $K=cF(t)$), applied to $\p(|X_t|\ge cF(t))$ and $\E[\min\{X_t^2,4F(t)^2\}]\ge\E[X_t^2\1_{\{|X_t|\le 2F(t)\}}]$, show that the condition \begin{equation} \label{eq:C'_suff_var} \int_0^1 \big[F(t)^{-2}\big(\ov \gamma^2(F(t))t+\ov\sigma^2(F(t))\big) + \ov\nu(F(t))\big]\mathrm{d} t <\infty, \end{equation} implies~\eqref{eq:C'_large}--\eqref{eq:C'_var}. Similarly, by Remark~\ref{rem:limsup_cond_3}(c) and Lemma~\ref{lem:upper_tail_bound}, the following condition implies~\eqref{eq:C'_mean}: \begin{equation} \label{eq:C'_suff_mean} \big[F(t)^{-2}\big(\ov \gamma^2(F(t))t+\ov\sigma^2(F(t))\big) + \ov\nu(F(t))\big]t \to 0, \qquad\text{as }t\da 0. \end{equation} These simplifications lead to the following corollary. \begin{corollary} \label{cor:power_func_limsup} Suppose $\ov\nu(\varepsilon)+\varepsilon^{-2}\ov\sigma^2(\varepsilon)+\varepsilon^{-1}|\ov\gamma(\varepsilon)|=\Oh(\varepsilon^{-\beta})$ as $\varepsilon\da 0$ for some $\beta\in[1,2]$ and, as before, let $F(t)=t/f(t)$. If we have $F(t)^{-\beta}t\to 0$ as $t\to 0$ and $\int_0^1F(t)^{-\beta}\mathrm{d} t<\infty$, then $\limsup_{t\da 0}|C'_t|f(t)=0$ a.s. \end{corollary} \begin{proof} By virtue of Theorem~\ref{thm:C'_limsup}(i), it suffices to verify~\eqref{eq:C'_suff_var} and~\eqref{eq:C'_suff_mean}. By assumption, we have $[F(t)^{-2}\ov\sigma^2(F(t))+\ov\nu(F(t))]t=\Oh(F(t)^{-\beta}t)$ and $F(t)^{-2}\ov\gamma(F(t))^2t^2=\Oh((F(t)^{-\beta}t)^2)$, which tend to $0$ as $t\da 0$, implying~\eqref{eq:C'_suff_mean}. Condition~\eqref{eq:C'_suff_var} follows similarly, completing the proof. \end{proof} Define the Blumenthal--Getoor index $\beta_\mathrm{BG}\in[0,2]$ of $X$~\cite{MR0123362} as follows: \begin{equation} \label{eq:beta} \beta_\mathrm{BG} \coloneqq \inf\{q\in[0,2]\,:\,I_q<\infty\}, \qquad\text{where}\qquad I_q\coloneqq \int_{(-1,1)\setminus\{0\}}|x|^q\nu(\mathrm{d} x), \quad q>0. \end{equation} Note that, in our setting, $X$ has infinite variation and hence $\beta_\mathrm{BG}\ge 1$. Since $I_\beta<\infty$ for any $\beta>\beta_\mathrm{BG}$, \cite[Lem.~1]{LevySupSim} shows that $\beta$ satisfies the assumptions of Corollary~\ref{cor:power_func_limsup}. Hence $\limsup_{t\da0}|C'_t|t^{p}=0$ a.s. for any $p>1-1/\beta_\mathrm{BG}\in[0,1/2]$ by Corollary~\ref{cor:power_func_limsup}. Stronger results are possible when stronger conditions are imposed on the law of $X$. For instance, for stable processes we have the following consequence of Theorem~\ref{thm:C'_limsup}. \begin{corollary} \label{cor:stable_limsup} Let $X$ be an $\alpha$-stable process with $\alpha\in[1,2)$. Then the following statements hold. 
\begin{itemize}[leftmargin=2em, nosep] \item[\nf{(a)}] If $t\mapsto t^{-1/\alpha}F(t)$ is bounded as $t\da 0$, then $\limsup_{t\da 0}|C'_t|f(t)=\infty$ a.s. \item[\nf{(b)}] If $t^{-1/\alpha}F(t)\to\infty$ as $t\da0$ and $X$ is not spectrally positive, then the limit $\limsup_{t\da 0}|C'_t|f(t)$ is equal to $\infty$ (resp. $0$) a.s. if the integral $\int_0^1F(t)^{-\alpha}\mathrm{d} t$ is infinite (resp. finite). \end{itemize} \end{corollary} \begin{proof} The scaling property of $X$ gives $\p(X_t\le -cF(t))=\p(X_1\le -ct^{-1/\alpha}F(t))$ for any $c,t>0$. If $t\mapsto t^{-1/\alpha}F(t)$ is bounded, then $\liminf_{t\da 0}\p(X_t\le -cF(t))>0$, making~\eqref{eq:C'_large} fail for all $c>0$. In that case, we have $\limsup_{t\da 0}|C'_t|f(t)=\infty$ a.s. by Theorem~\ref{thm:C'_limsup}(ii), proving part (a). To prove part (b), suppose $X$ is not spectrally positive and let $t^{-1/\alpha}F(t)\to\infty$ as $t\da 0$. Then $x^\alpha\p(X_1\le -x)$ converges to a positive constant as $x\to\infty$, implying the following equivalence: $\int_0^1t^{-1}\p(X_t\le -ct^{-1/\alpha}F(t))\mathrm{d} t<\infty$ if and only if $\int_0^1 F(t)^{-\alpha}\mathrm{d} t<\infty$, where we note that the last integral does not depend on $c>0$. If $\int_0^1 F(t)^{-\alpha}\mathrm{d} t<\infty$, then~\eqref{eq:C'_suff_var}--\eqref{eq:C'_suff_mean} hold and Theorem~\ref{thm:C'_limsup}(i) gives $\limsup_{t\da 0}|C'_t|f(t)=0$ a.s. If instead $\int_0^1 F(t)^{-\alpha}\mathrm{d} t=\infty$, then $\int_0^1t^{-1}\p(X_t\le -ct^{-1/\alpha}F(t))\mathrm{d} t=\infty$ for all $c>0$, so Theorem~\ref{thm:C'_limsup}(ii) implies that $\limsup_{t\da 0}|C'_t|f(t)=\infty$ a.s., completing the proof. \end{proof} For Cauchy process (i.e. $\alpha=1$), Corollary~\ref{cor:stable_limsup} recovers the dichotomy in~\cite[Cor.~3]{MR1747095} for the upper functions of $C'$ at time $0$. We note here that results analogous to Corollary~\ref{cor:stable_limsup} can be derived for a spectrally positive stable process $X$ (and for Brownian motion), using the exponential (instead of polynomial) decay of the probability $\p(X_1\le x)$ in $x$ as $x\to-\infty$, see~\cite[Thm~4.7.1]{MR1745764}. \subsection{Regime (IS): lower functions at time {\nf{0}}} \label{subsec:lower_fun_C'} As explained before, obtaining fine conditions for the lower fluctuations of $C'$ at $0$ is more delicate than for the upper fluctuations. The main reason is that the proof of Theorem~\ref{thm:lower_fun_C'} requires strong control on the Laplace exponent $\Phi_u(w)$ of $\tau_u$, defined in~\eqref{eq:cf_tau0}, as $w\to\infty$ and $u\to-\infty$. This in turn requires sharp two-sided estimates on the negative tail probability $\p(X_t\le ut)$ as a function of $(u,t)$ as $u\to-\infty$ and $t\da 0$ jointly. Due to the necessity of such strong control, in the following result we assume $X\in\mathcal{Z}_{\alpha,\rho}$ for some $\alpha>1$. In other words, we assume there exist a normalising function $g$, regularly varying at $0$ with index $1/\alpha$, and an $\alpha$-stable process $(Z_s)_{s \in [0,T]}$ with $\rho=\p(Z_1>0)\in(0,1)$ such that $(X_{ut}/g(t))_{u \in [0,T]} \cid (Z_u)_{u \in [0,T]}$ as $t\da 0$. Recall that $G(t)=t/g(t)$ for $t>0$. \begin{theorem}\label{thm:lower_fun_C'} Let $X\in\mathcal{Z}_{\alpha,\rho}$ for some $\alpha\in(1,2]$ (and hence $\rho\in(0,1)$). Let $f: (0,1) \to (0,\infty)$ be given by $f(t)\coloneqq G(t\log^p(1/t))$, for some $p\in\R$ and all $t\in(0,1)$.
Then the following statements hold: \begin{itemize}[leftmargin=2.5em, nosep] \item[\nf(i)] if $p>1/(1-\rho)$, then $\liminf_{t\da 0}|C'_t|f(t)=\infty$ a.s., \item[\nf(ii)] if $p<1/(1-\rho)$, then $\liminf_{t\da 0}|C'_t|f(t)=0$ a.s. \end{itemize} \end{theorem} \begin{remark} \label{rem:exclusions-0}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[(a)] The assumption $X\in\mathcal{Z}_{\alpha,\rho}$ for some $\alpha>1$ implies that $X$ is of infinite variation. Note that the function $f$ in Theorem~\ref{thm:lower_fun_C'} is regularly varying at $0$ with index $1-1/\alpha$. The `negativity' parameter $1-\rho=\lim_{t\da 0}\p(X_t<0)\in(0,1)$ is a nontrivial function of the L\'evy measure of $X$. The fact that $1-\rho$ features as a boundary point in the power of the logarithmic term in Theorem~\ref{thm:lower_fun_C'} indicates that the lower fluctuations of $C'$ at time $0$ depend in a subtle way on the characteristics of $X$. Such dependence is, for instance, not present for the upper fluctuations of $C'$ at time $0$ when $X$ is $\alpha$-stable, see Corollary~\ref{cor:stable_limsup} above. Indeed, for an $\alpha$-stable process $X$, Theorem~\ref{thm:lower_fun_C'} and Corollary~\ref{cor:stable_limsup}(b) show that $\liminf_{t\da 0}|C'_{t}|f(t)=0$ and $\limsup_{t\da 0}|C'_{t}|f(t)=\infty$ a.s. for $f(t)=t^{1-1/\alpha}\log^{q}(1/t)$ and any $q\in[-1/\alpha,(1-1/\alpha)/(1-\rho))$, demonstrating the gap between the lower and upper fluctuations of $C'$ at time $0$. \item[(b)] The conclusions of Theorem~\ref{thm:lower_fun_C'} are expected to remain valid, for the same functions $f$, in the case where $X$ is attracted to Cauchy process with $\alpha=1$. As explained in Remark~\ref{rem:exclusions-tau}(a) above, many cases arise, with even some abrupt processes being attracted to Cauchy process (see~\cite[Ex.~2.2]{SmoothCM}). We again stress that, in this case, our methodology can be used to obtain a description of the upper fluctuations of $C'$ at time~$0$ via Theorem~\ref{thm:Y_limsup} and two-sided estimates, analogous to Lemma~\ref{lem:asymp_equiv_Phi_domstable}, of the Laplace exponent $\Phi$ in~\eqref{eq:cf_tau0} of the vertex time process. In the interest of brevity, we omit the details of such extensions. \item[(c)] As with Theorem~\ref{thm:upper_fun_C'_post_min} above (see Remark~\ref{rem:exclusions-tau}(b)), the boundary case $p=1/(1-\rho)$ in Theorem~\ref{thm:lower_fun_C'} can be analysed along similar lines. In fact, our methods can be used to get increasingly sharper results for the lower fluctuations of $C'$ at time $0$ when stronger control over the negative tail probabilities of the marginals of $X$ is available. Such improvements are possible, for instance, when $X$ is $\alpha$-stable. We decided to leave such results for future work in the interest of brevity. For completeness, however, we mention that the following law of the iterated logarithm proved in~\cite[Cor.~3]{MR1747095} can also be proved using our methods (see Example~\ref{ex:Cauchy} below): $\liminf_{t\da 0}|C'_t|f(t)=p_X(1,0)$ a.s., where $x\mapsto p_X(t,x)$ is the density of $X_t$.\qedhere \end{itemize} \end{remark} \subsection{Upper and lower functions of the L\'evy path at vertex times} \label{subsec:applications} In this subsection we establish consequences for the lower (resp. upper) fluctuations of the L\'evy path at vertex time $\tau_s$ (resp. time $0$) in terms of those of $C'$.
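Before stating the results, let us sketch the elementary pathwise mechanism behind them (included for orientation only; the proofs are given in Section~\ref{sec:proofs}). Since $C$ is convex with right derivative $C'$ and lies below the path of $X$, for $t\in[0,T-\tau_s]$ we have
\[
X_{t+\tau_s}\ge C_{t+\tau_s}=C_{\tau_s}+st+\int_0^t\big(C'_{u+\tau_s}-s\big)\,\mathrm{d} u.
\]
Thus, lower bounds of the form $C'_{u+\tau_s}-s\ge Mf(u)$ for small $u>0$ integrate to lower bounds of size $M\int_0^t f(u)\,\mathrm{d} u$ on $X_{t+\tau_s}-C_{\tau_s}-st$; since the path touches its convex minorant at vertex times, $C_{\tau_s}$ equals the quantity $m_s$ defined below, yielding the mechanism behind Lemma~\ref{lem:upper_fun_Lev_path_post_slope}.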
Recall $X_{t-}\coloneqq \lim_{u \ua t} X_u$ for $t > 0$ (and $X_{0-}\coloneqq X_0$) and define $m_s\coloneqq \min\{X_{\tau_s},X_{\tau_s-}\}$ for $s\in\mathcal{L}^+(\mathcal{S})$. \begin{lemma}\label{lem:upper_fun_Lev_path_post_slope} Suppose $s \in \mathcal{L}^+(\mathcal{S})$. Let the function $f:[0,\infty) \to [0,\infty)$ be continuous and increasing and define the function $\tilde f(t)\coloneqq\int_0^t f(u)\mathrm{d} u$, $t\ge 0$. Then the following statements hold for any $M>0$. \begin{itemize}[leftmargin=2.5em, nosep] \item[\nf(i)] If $\liminf_{t\da 0} (C'_{t+\tau_s}-s)/f(t)>M$ a.s., then $\liminf_{t \da 0} (X_{t+\tau_s}-m_s-st)/\tilde f(t)\ge M$ a.s. \item[\nf(ii)] If $\limsup_{t\da 0} (C'_{t+\tau_s}-s)/f(t)<M$ a.s., then $\liminf_{t \da 0} (X_{t+\tau_s}-m_s-st)/\tilde f(t)\le M$ a.s. \end{itemize} \end{lemma} The proof of Lemma~\ref{lem:upper_fun_Lev_path_post_slope} is pathwise. The lemma yields the following implications: \begin{itemize}[leftmargin=2.5em, nosep] \item[\nf(i)] $\liminf_{t\da 0} (C'_{t+\tau_s}-s)/f(t)=\infty\implies\liminf_{t \da 0} (X_{t+\tau_s}-m_s-st)/\tilde f(t)=\infty$, \item[\nf(ii)] $\limsup_{t\da 0} (C'_{t+\tau_s}-s)/f(t)=0\implies\liminf_{t \da 0} (X_{t+\tau_s}-m_s-st)/\tilde f(t)=0$. \end{itemize} The upper fluctuations of $X$ at vertex time $\tau_s$ cannot be controlled via the fluctuations of $C'$ since the process may have large excursions away from its convex minorant between contact points. Moreover, the limits $\liminf_{t\da 0} (C'_{t+\tau_s}-s)/f(t)=0$ or $\limsup_{t\da 0} (C'_{t+\tau_s}-s)/f(t)=\infty$ do not provide sufficient information to ascertain the value of the lower limit $\liminf_{t\da 0}(X_{t+\tau_s}-m_s-st)/\tilde f(t)$, since this limit may not be attained along the contact points between the path and its convex minorant. Theorems~\ref{thm:post-min-lower} and~\ref{thm:upper_fun_C'_post_min} give sufficient conditions, in terms of the law of $X$, for the assumptions in Lemma~\ref{lem:upper_fun_Lev_path_post_slope} to hold. This leads to the following corollaries. \begin{corollary} \label{cor:post_tau_s_Levy_path} Let $s \in \mathcal{L}^+(\mathcal{S})$ and let $f$ be a continuous and increasing function with $f(0)=0=\lim_{c\da0}\limsup_{t\da0}f(ct)/f(t)$, $f(1)=1$ and $f(t)\le 1$ for $t\in(0,1]$. If conditions~\eqref{eq:post-min-Pi-large}--\eqref{eq:post-min-Pi-mean} hold for $c=1$, then $\liminf_{t \da 0} (X_{t+\tau_s}-m_s-st)/\tilde f(t)=\infty$ a.s., where we denote $\wt f(t)\coloneqq \int_0^t f(u) \mathrm{d} u$. \end{corollary} Denote by $\varpi(t)\coloneqq t^{-1/\alpha}g(t)$ the slowly varying (at $0$) component of the normalising function $g$ of a process in the class $\mathcal{Z}_{\alpha,\rho}$. Recall that $G(t)=t/g(t)$ for $t>0$. \begin{corollary} \label{cor:post-tau_s-Levy-path-attraction} Let $X\in\mathcal{Z}_{\alpha,\rho}$ for some $\alpha\in(0,1)$ and $\rho\in(0,1]$. Given $p\in\R$, denote $\tilde f(t)\coloneqq\int_0^t G(u\log^p(u^{-1}))^{-1}\mathrm{d} u$ for $t> 0$. Then the following statements hold for $s=\gamma_0$. \begin{itemize}[leftmargin=2.5em, nosep] \item[\nf(i)] If $p>1/\rho$, then $\liminf_{t \da 0} (X_{t+\tau_{s}}-m_{s}-st)/\tilde f(t)=0$ a.s. \item[\nf(ii)] If $\alpha\in (1/2,1)$, $p<-\alpha/(1-\alpha)$ and $(\varpi(c/t)/\varpi(1/t)-1)\log\log(1/t) \to 0$ as $t \da 0$ for some $c\in(0,1)$, then $\liminf_{t \da 0} (X_{t+\tau_{s}}-m_{s}-st)/\tilde f(t)=\infty$ a.s. \item[\nf{(iii)}] If $\alpha \in (0,1/2]$, then $\liminf_{t \da 0} (X_{t+\tau_{s}}-m_{s}-st)/ t^q=\infty$ a.s. for any $q>1/\alpha\ge 2$.
\end{itemize} \end{corollary} \begin{remark} \phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[(a)] The function $\tilde f$ is regularly varying at $0$ with index $1/\alpha$. This makes the conditions in Corollary~\ref{cor:post-tau_s-Levy-path-attraction} nearly optimal in the following sense: the polynomial rate in all three cases is either $1/\alpha$ (cases (i) and (ii) in Corollary~\ref{cor:post-tau_s-Levy-path-attraction}) or arbitrarily close to it (case (iii) in Corollary~\ref{cor:post-tau_s-Levy-path-attraction}). If $\alpha>1/2$, then the gap is in the power of the logarithm in the definition of $\tilde f$. \item[(b)] When the natural drift $\gamma_0=0$, Corollary~\ref{cor:post-tau_s-Levy-path-attraction} describes the lower fluctuations (at time $0$) of the post-minimum process $X^\ra=(X^\ra_t)_{t\in[0,T-\tau_0]}$ given by $ X^\ra_t\coloneqq X_{t+\tau_0}-m_0$ (note that $m_0=\inf_{t\in[0,T]}X_t$). The closest result in this vein is~\cite[Prop.~3.6]{MR1947963}, where Vigon shows that, for any infinite variation L\'evy process $X$ and $r>0$, we have $\liminf_{t\da 0}X^\ra_t/t\ge r$ a.s. if and only if $\int_0^1\p(X_t/t\in[0,r])t^{-1}\mathrm{d} t<\infty$. Our result considers non-linear functions and a large class of finite variation processes. \item[(c)] By~\cite[Thm~2]{MR3160578}, the assumption $X\in\mathcal{Z}_{\alpha,\rho}$ and $\gamma_0=0$ implies that the post-minimum process, conditionally given $\tau_0$, is a L\'evy meander. Hence, Corollary~\ref{cor:post-tau_s-Levy-path-attraction} also describes the lower functions of the meanders of L\'evy processes in $\mathcal{Z}_{\alpha,\rho}$. A similar remark applies to the results in Corollary~\ref{cor:post_tau_s_Levy_path}.\qedhere \end{itemize} \end{remark} When $X$ has infinite variation, the processes $X$ and $C$ touch each other infinitely often in any neighborhood of $0$ (see~\cite{SmoothCM}), leading to the following connection in small time between the paths of $X$ and its convex minorant $C$. \begin{lemma}\label{lem:upper_fun_Lev_path} Let the function $f:[0,\infty) \to [0,\infty)$ be continuous and increasing with $f(0)=0$ and finite $\tilde f(t)\coloneqq\int_0^t f(u)^{-1}\mathrm{d} u$, $t\ge 0$. Then the following statements hold for any $M>0$. \begin{itemize}[leftmargin=2.5em, nosep] \item[\nf(i)] If $\limsup_{t\da 0} |C_t'|f(t)<M$ a.s., then $\limsup_{t \da 0} (-X_t)/\tilde f(t)\le M$ a.s. \item[\nf(ii)] If $\liminf_{t\da 0} |C_t'|f(t)>M$ a.s., then $\limsup_{t \da 0} (-X_t)/\tilde f(t)\ge M$ a.s. \end{itemize} \end{lemma} Theorem~\ref{thm:C'_limsup} and the corollaries thereafter give explicit sufficient conditions for the assumption in Lemma~\ref{lem:upper_fun_Lev_path}(i) to hold. Similarly, Theorem~\ref{thm:lower_fun_C'} gives a fine class of functions $f$ satisfying the assumption in Lemma~\ref{lem:upper_fun_Lev_path}(ii) for a large class of processes. Such conclusions on the fluctuations of the L\'evy path of $X$ would not be new, as the fluctuations of $X$ at $0$ are already known, see~\cite{SoobinKimLeeLIL,MR2480786,MR2591911}. In particular, the upper functions of $X$ and $-X$ at time $0$ were completely characterised in~\cite{MR2591911} in terms of the generating triplet of $X$. Let us comment on some two-way implications between our results, the literature and Lemma~\ref{lem:upper_fun_Lev_path}.
\begin{remark} \label{rem:upper_fun_Lev_path}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[(a)] By~\cite{MR0002054}, the assumption in Theorem~\ref{thm:C'_limsup}(ii) implies that $\limsup_{t\da0}|X_t|/F(t)=\infty$ a.s. where we recall that $F(t)=t/f(t)$. Similarly, by~\cite{MR0002054}, if $\limsup_{t\da0}|X_t|/F(t)=\infty$ a.s. then the assumption in Theorem~\ref{thm:C'_limsup}(ii) must hold for either $X$ or $-X$, which, by time reversal, implies that at least one of the limits $\limsup_{t\da0} |C_t'|f(t)$ or $\limsup_{t\da0} |C_{T-t}'|f(t)$ is infinite a.s. This conclusion is similar to that of Lemma~\ref{lem:upper_fun_Lev_path}, the main difference being the use of either $\tilde f$ or $F$. Note, however, that if $f$ is regularly varying with index different from $1$, then~\cite[Thm~1.5.11]{MR1015093} implies $\lim_{t\da 0}\tilde f(t)/F(t)\in(0,\infty)$. \item[(b)] The contrapositive statements of Lemma~\ref{lem:upper_fun_Lev_path} give information on $C'$ in terms of $-X$. Indeed, if we have $\limsup_{t \da 0}(-X_t)/\tilde f(t)>0$, then $\limsup_{t\da 0} |C_t'|f(t)>0$. Similarly, if $\limsup_{t \da 0} (-X_t)/\tilde f(t)<\infty$, then $\liminf_{t\da 0} |C_t'|f(t)<\infty$.\qedhere \end{itemize} \end{remark} The connections between the fluctuations of $X$ and those of $C'$ at time $0$ are intricate. Although the one-sided fluctuations of $X$ at $0$ were essentially characterised in~\cite[Thm~3.1]{MR2591911}, its combination with Lemma~\ref{lem:upper_fun_Lev_path} is not sufficiently strong to obtain conditions for any of the following statements: $\limsup_{t\da0}|C'_t|f(t)=0$, $\limsup_{t\da0}|C'_t|f(t)>0$, $\liminf_{t\da0}|C'_t|f(t)<\infty$ or $\liminf_{t\da0}|C'_t|f(t)=\infty$ a.s.
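For instance (an elementary check, included only to illustrate Remark~\ref{rem:upper_fun_Lev_path}(a)), if $f(t)=t^r$ with $r\in(0,1)$, then
\[
\tilde f(t)=\int_0^t u^{-r}\,\mathrm{d} u=\frac{t^{1-r}}{1-r}\qquad\text{while}\qquad F(t)=\frac{t}{f(t)}=t^{1-r},
\]
so $\tilde f(t)/F(t)=1/(1-r)\in(0,\infty)$ for all $t\in(0,1)$, in line with the Karamata-based comparison of $\tilde f$ and $F$ above.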
\section{Small-time fluctuations of non-decreasing additive processes} \label{sec:additive} Consider a pure-jump right-continuous non-decreasing additive (i.e. with independent and possibly non-stationary increments) process $Y=(Y_t)_{t\ge 0}$ with $Y_0=0$ a.s. and its mean jump measure $\Pi(\mathrm{d} t,\mathrm{d} x)$ for $(t,x)\in[0,\infty)\times(0,\infty)$, see~\cite[Thm~15.4]{MR1876169}. Then, by Campbell's formula~\cite[Lem.~12.2]{MR1876169}, its Laplace transform satisfies \begin{equation} \label{eq:defn_Psi_additive_process} \E\big[e^{-uY_t}\big] =e^{-\Psi_{t}(u)}, \quad\text{where}\quad \Psi_{t}(u)\coloneqq \int_{(0,\infty)}(1-e^{-ux})\Pi((0,t],\mathrm{d} x), \quad \text{for any }u\ge 0. \end{equation} Let $L_t\coloneqq \inf\{u>0: Y_u>t\}$ for $t\ge0$ (with convention $\inf\emptyset=\infty$) denote the right-continuous inverse of $Y$. Our main objective in this section is to describe the upper and lower fluctuations of $L$, extending known results for the case where $Y$ has stationary increments (making $Y$ a subordinator), in which case $\Pi(\mathrm{d} t,\mathrm{d} x)=\Pi((0,1],\mathrm{d} x)\mathrm{d} t$ for all $(t,x)\in[0,\infty)\times(0,\infty)$ (see e.g.~\cite[Thm~4.1]{MR1746300}). \subsection{Upper functions of {\nf{\emph{L}}}} \label{subsec:upper_func_gen} The following theorem is the main result of this subsection. \begin{theorem}\label{thm:limsup_L} Let $f:(0,1)\to(0,\infty)$ be increasing with $\lim_{t \downarrow 0}f(t)=0$ and $\phi:(0,\infty) \to (0,\infty)$ be decreasing with $\lim_{u \to \infty} \phi(u)=0$. Let the positive increasing sequence $(\theta_n)_{n\in\N}$ satisfy $\lim_{n\to \infty}\theta_n=\infty$ and define the associated non-increasing sequence $(t_n)_{n\in\N}$ by $t_n\coloneqq \phi(\theta_n)$ for $n \in \N$.\\ {\nf{(a)}} If $\sum_{n=1}^\infty \exp(\theta_n t_n-\Psi_{f(t_n)}(\theta_n))<\infty$, then $\limsup_{t \da 0}L_t/f(t)\leq \limsup_{n \to \infty}f(t_n)/f(t_{n+1})$ a.s.\\ {\nf{(b)}} If $\lim_{u\to\infty}\phi(u)u=\infty$, $\sum_{n=1}^\infty [\exp(-\Psi_{f(t_n)}(\theta_n))-\exp(-\theta_n t_n)]=\infty$ and $\sum_{n=1}^\infty \Psi_{f(t_{n+1})}(\theta_n)<\infty$, then $\limsup_{t \downarrow 0} L_t/f(t)\geq 1$ a.s. \end{theorem} \begin{remark} \label{rem:limsup_L}\phantom{empty} \begin{itemize}[leftmargin=2em, nosep] \item[(a)] Theorem~\ref{thm:limsup_L} plays a key role in the proofs of Theorems~\ref{thm:upper_fun_C'_post_min} and~\ref{thm:lower_fun_C'}. Before applying Theorem~\ref{thm:limsup_L}, one needs to find appropriate choices of the free infinite-dimensional parameters $\phi$ and $(\theta_n)_{n\in\N}$. This makes the application of Theorem~\ref{thm:limsup_L} hard in general and is why, in Theorems~\ref{thm:upper_fun_C'_post_min} and~\ref{thm:lower_fun_C'}, we are required to assume that $X$ lies in the domain of attraction of an $\alpha$-stable process.
\item[(b)] If $Y$ has stationary increments (making $Y$ a subordinator), the proof of~\cite[Thm~4.1]{MR1746300} follows from Theorem~\ref{thm:limsup_L} by finding an appropriate function $f$ and sequence $(\theta_n)_{n\in\N}$ (done in~\cite[Lem.~4.2 \& 4.3]{MR1746300}) satisfying the assumptions of Theorem~\ref{thm:limsup_L}. In this case, the function $f$ is given in terms of the single-parameter Laplace exponent $\Psi_1$, see details in~\cite[Thm~4.1]{MR1746300}.\qedhere \end{itemize} \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:limsup_L}] (a) Since $L$ is the right-inverse of $Y$, we have $\{L_{t_n} > f(t_n)\} = \{t_n \ge Y_{f(t_n)}\}$ for $n \in \N$. Using Chernoff's bound (Markov's inequality), we obtain \[ \p\big(t_n \ge Y_{f(t_n)}\big) \le e^{\theta_n t_n}\E\big[\exp\big(-\theta_n Y_{f(t_n)}\big)\big] =\exp(\theta_n t_n -\Psi_{f(t_n)}(\theta_n)), \quad \text{ for all }n \ge 1. \] The assumption $\sum_{n=1}^\infty \exp(\theta_n t_n-\Psi_{f(t_n)}(\theta_n))<\infty$ thus implies $\sum_{n=1}^\infty \p (L_{t_n}>f(t_n))<\infty$. Hence, the Borel--Cantelli lemma yields $\limsup_{n \to \infty} L_{t_n}/f(t_n)\le 1$ a.s. Since $L$ is non-decreasing and $(t_n)_{n\in \N}$ is non-increasing with limit zero, we have \begin{equation*} \limsup_{t \da 0} \frac{L_t}{f(t)} \le\limsup_{n \to\infty}\sup_{t\in[t_{n+1},t_n]} \frac{L_{t_n}}{f(t)} \le \limsup_{n \to \infty} \frac{L_{t_n}}{f(t_n)}\cdot\limsup_{n \to\infty} \frac{f(t_n)}{f(t_{n+1})} \le \limsup_{n \to\infty} \frac{f(t_n)}{f(t_{n+1})} \quad \text{ a.s.,} \end{equation*} which gives (a). (b) It suffices to establish that the following limits hold: $\liminf_{n\to\infty}(Y_{f(t_n)}-Y_{f(t_{n+1})})/t_n\le 1$ a.s. and $\limsup_{n\to\infty}Y_{f(t_{n+1})}/t_n\le \delta$ a.s. for any $\delta>0$. Indeed, by taking $\delta\da 0$ along a countable sequence, the second limit gives $\limsup_{n\to\infty}Y_{f(t_{n+1})}/t_n=0$ a.s. and hence $\liminf_{n \to \infty} Y_{f(t_n)}/ t_n\le 1$ a.s. For any $t>0$ with $Y_{f(t)}\le t$ we have $L_t\ge f(t)$. Since the former holds for arbitrarily small values of $t>0$ a.s., we obtain $\limsup_{t \da 0}L_t/f(t)\ge 1$ a.s. We will prove that $\liminf_{n\to\infty}(Y_{f(t_n)}-Y_{f(t_{n+1})})/t_n\le 1$ a.s. and $\limsup_{n\to\infty}Y_{f(t_{n+1})}/t_n\le \delta$ a.s. for any $\delta>0$, using the Borel--Cantelli lemmas. Applying Markov's inequality, we obtain the upper bound $\p(Y_t> s)\le(1-e^{-\theta s})^{-1}\E[1-e^{-\theta Y_t}]$ for all $t,s,\theta>0$, implying \begin{equation*} \p\big(Y_{f(t_n)}\le t_n\big) \ge \frac{\exp(-\Psi_{f(t_n)}(\theta_n))-\exp(-\theta_n t_n)} {1-\exp(-\theta_n t_n)}, \qquad \text{ for all }n \ge 1. \end{equation*} Since $\theta_nt_n=\theta_n\phi(\theta_n)\to \infty$ as $n \to \infty$, the denominator of the lower bound in the display above tends to $1$ as $n \to \infty$, and hence the assumption $\sum_{n=1}^\infty [\exp(-\Psi_{f(t_n)}(\theta_n))-\exp(-\theta_n t_n)]=\infty$ implies $\sum_{n=1}^\infty \p (Y_{f(t_n)}\le t_n)=\infty$. Since $Y$ has non-negative independent increments and \begin{equation*} \sum_{n=1}^\infty \p(Y_{f(t_n)}-Y_{f(t_{n+1})}\le t_n) \ge\sum_{n=1}^\infty\p(Y_{f(t_n)}\le t_n)=\infty, \end{equation*} the second Borel--Cantelli lemma yields $\liminf_{n\to\infty}(Y_{f(t_n)}-Y_{f(t_{n+1})})/ t_n \le 1$ a.s.
To prove the second bound, use Markov's inequality and the elementary bound $1-e^{-x}\le x$ to get
\begin{equation*}
\p\big(Y_{f(t_{n+1})}> \delta t_n\big) \le \frac{\E[1-\exp(-\theta_n Y_{ f( t_{n+1})})]}{1-\exp(-\delta\theta_n t_n)} = \frac{1-\exp(-\Psi_{ f(t_{n+1})}(\theta_n))}{1-\exp(-\delta\theta_n t_n)} \le \frac{\Psi_{ f(t_{n+1})}(\theta_n)}{1-\exp(-\delta\theta_n t_n)},
\end{equation*}
for all $n \in \N$. Again, the denominator tends to $1$ as $n\to\infty$ and the assumption $\sum_{n = 1}^\infty\Psi_{f(t_{n+1})}(\theta_n)<\infty$ implies $\sum_{n=1}^\infty\p(Y_{ f(t_{n+1})}> \delta t_n)<\infty$. The Borel--Cantelli lemma implies $\limsup_{n\to\infty}Y_{f(t_{n+1})}/t_n\le \delta$ a.s. and completes the proof.
\end{proof}
\subsection{Lower functions of {\nf{\emph{L}}}}
\label{subsec:lower_func_gen}
To describe the lower fluctuations of $L$, it suffices to describe the upper fluctuations of $Y$. The following result extends known results for subordinators (see, e.g.~\cite[Thm 1]{MR210190}). Given a continuous increasing function $h$ with $h(0)=0$ and $h(1)=1$, consider the following statements, used in the next result to describe the upper fluctuations of~$Y$:
\begin{gather}
\label{eq:h_limsup}
\limsup_{t\da0}Y_t/h(t)=0,\quad \text{a.s.},\\
\label{eq:h_leq_io}
\limsup_{t\da0}Y_t/h(t)<1, \quad \text{a.s.},\\
\label{eq:Pi_large}
\Pi(\{(t,x)\,:\,t\in(0,1],\,x\ge h(t)\})<\infty,\\
\label{eq:Pi_var}
\int_{(0,1]\times(0,1)}\frac{x^2}{h(t)^2}\1_{\{2h(t)>x\}} \Pi(\mathrm{d} t,\mathrm{d} x)<\infty,\\
\label{eq:Pi_mean}
2^n\int_{(0,h^{-1}(2^{-n})]\times (0,2^{-n})} x\1_{\{2h(t)>x\}} \Pi(\mathrm{d} t,\mathrm{d} x) \da 0, \quad\text{ as }n\to\infty, \quad \text{ and }\\
\label{eq:Pi_mean_var}
\int_{(0,1]\times(0,1)}\frac{x}{h(t)}\1_{\{2h(t)>x\}} \Pi(\mathrm{d} t,\mathrm{d} x)<\infty.
\end{gather}
\begin{theorem}\label{thm:Y_limsup}
Let $h$ be continuous and increasing with $h(0)=0$ and $h(1)=1$. Then the following implications hold:
\begin{multicols}{3}
\begin{itemize}[leftmargin=2em, nosep]
\item[\nf{(a)}] \eqref{eq:h_limsup}$\implies$\eqref{eq:h_leq_io}$\implies$\eqref{eq:Pi_large},
\item[\nf{(b)}] \eqref{eq:Pi_large}--\eqref{eq:Pi_mean}$\implies$\eqref{eq:h_limsup},
\item[\nf{(c)}] \eqref{eq:Pi_mean_var}$\implies$\eqref{eq:Pi_var}--\eqref{eq:Pi_mean}.
\end{itemize}
\end{multicols}
\end{theorem}
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1cm and 3.4cm]
\node[rectangle,draw] (a) {\eqref{eq:h_limsup}};
\node[rectangle,draw] (b)[below=of a]{\eqref{eq:h_leq_io}};
\node[rectangle,draw] (c)[right=of a]{\eqref{eq:Pi_mean}};
\node[rectangle,draw] (d)[below=of c]{\eqref{eq:Pi_var}};
\node[rectangle,draw] (e)[below=of d]{\eqref{eq:Pi_large}};
\node[rectangle,draw] (f)[right=of c]{\eqref{eq:Pi_mean_var}};
\node[rectangle,draw] (g)[right=of e]{\eqref{eq:Pi_var_inv}};
\node[rectangle,draw] (h)[right=of g]{\eqref{eq:Pi_mean_inv}};
\node[rectangle,draw] (FS)[right=of f]{\eqref{eq:Pi_mean_var_inv}};
\draw [decorate, decoration = {brace, raise=10pt, amplitude=10pt, mirror},thick] (4.15,0.2) -- (4.15,-3.2);
\draw[-implies,double equal sign distance] (a) -- (b);
\draw[implies-,double equal sign distance] (c) -- (f);
\draw[implies-,double equal sign distance] (d) -- (f);
\draw[-implies,double equal sign distance] (b) .. controls +(down:20mm) and +(left:20mm) ..
(e); \draw[implies-,double equal sign distance] (a)--(3.45,-1.5); \draw[implies-,double equal sign distance] (d) -- (g) node[pos=0.5,above,sloped]{$h^{-1}$ concave}; \draw [decorate, decoration = {brace, raise=10pt, amplitude=10pt, mirror},thick] (4,-3.15) -- (13.1,-3.15); \draw[-implies,double equal sign distance] (FS)--(g)node[pos=0.5,above,sloped]{$h^{-1}$ concave}; \draw[-implies,double equal sign distance] (FS)--(h)node[pos=0.5,above,sloped]{$h^{-1}$ concave}; \draw[implies-,double equal sign distance] (a) .. controls (-1.6,-3.9) .. (8.55,-3.9)node[pos=0.8,below,sloped]{$h^{-1}$ concave}; \draw[-,thick] (3.45,-1.5) -- (d); \draw[-,thick] (8.55,-3.85) -- (g);
\end{tikzpicture}
\caption{A graphical representation of the implications in Theorem~\ref{thm:Y_limsup} and Proposition~\ref{prop:Y_limsup}.}
\label{fig:overview_technical_results}
\end{figure}
\begin{remark}\label{rem:Y_limsup}\nf
If $h$ is as in Theorem~\ref{thm:Y_limsup} and $\Pi(\{(t,x)\,:\,t\in(0,1],\,x\ge ch(t)\})=\infty$ for all $c>0$, then it follows from the contrapositive of Theorem~\ref{thm:Y_limsup}(a) that $\limsup_{t\da0}Y_t/h(t)=\infty$ a.s.
\end{remark}
In the description of the lower fluctuations of $L$, we are typically given the function $h^{-1}$ directly instead of $h$. In those cases, the conditions in Theorem~\ref{thm:Y_limsup} may be hard to verify directly (see e.g. the proof of Theorem~\ref{thm:C'_limsup}(i)). To alleviate this issue, we introduce alternative conditions describing the upper fluctuations of $Y$ in terms of the function $h^{-1}$. However, this requires the additional assumption that $h^{-1}$ is concave, see Proposition~\ref{prop:Y_limsup} below. Consider the following conditions on $h^{-1}$:
\begin{gather}
\label{eq:Pi_var_inv}
\int_{(0,1]\times(0,1)}\frac{h^{-1}(x)^2}{t^2}\1_{\{2t\ge h^{-1}(x)\}}\Pi(\mathrm{d} t, \mathrm{d} x)<\infty,\\
\label{eq:Pi_mean_inv}
2^n\int_{(0,2^{-n}]\times(0,h(2^{-n}))}h^{-1}(x)\1_{\{2t\ge h^{-1}(x)\}}\Pi(\mathrm{d} t, \mathrm{d} x)\da 0, \quad\text{as }n\to\infty, \quad \text{ and }\\
\label{eq:Pi_mean_var_inv}
\int_{(0,1]\times(0,1)}\frac{h^{-1}(x)}{t}\1_{\{2t\ge h^{-1}(x)\}}\Pi(\mathrm{d} t,\mathrm{d} x)<\infty.
\end{gather}
\begin{proposition}
\label{prop:Y_limsup}
Let $h$ be convex and increasing with $h(0)=0$ and $h(1)=1$. Then the following implications hold:
\begin{multicols}{3}
\begin{itemize}[leftmargin=2em, nosep]
\item[\nf{(a)}] \eqref{eq:Pi_var_inv}$\implies$\eqref{eq:Pi_var},
\item[\nf{(b)}] \eqref{eq:Pi_mean_var_inv}$\implies$\eqref{eq:Pi_var_inv}--\eqref{eq:Pi_mean_inv},
\item[\nf{(c)}] \eqref{eq:Pi_large} and \eqref{eq:Pi_var_inv}--\eqref{eq:Pi_mean_inv}$\implies$\eqref{eq:h_limsup}.
\end{itemize}
\end{multicols}
\end{proposition}
The relation between the assumptions of Theorem~\ref{thm:Y_limsup} and Proposition~\ref{prop:Y_limsup} (concerning $h$ and $h^{-1}$) is described in Figure~\ref{fig:overview_technical_results}. The following elementary result explains how the upper fluctuations of $Y$ (described by Theorem~\ref{thm:Y_limsup}) are related to the lower fluctuations of $L$.
\begin{lemma}
\label{lem:Y_vs_L}
Let $h$ be a continuous increasing function with $h(0)=0$ and denote by $h^{-1}$ its inverse. Then the following implications hold for any $c>0$:
\begin{itemize}[leftmargin=2em, nosep]
\item[\nf{(a)}] $\liminf_{t\da0}L_t/h^{-1}(t/c)>1 \implies\limsup_{t\da0}Y_t/h(t)\le c$,
\item[\nf{(b)}] $\limsup_{t\da0}Y_t/h(t)< c \implies\liminf_{t\da0}L_t/h^{-1}(t/c)\ge 1$.
\end{itemize}
\end{lemma}
\begin{proof}
The result follows from the implications $L_u>t\implies u\ge Y_t\implies L_u\ge t$ for any $t,u>0$. Indeed, if $\liminf_{u\da0}L_u/h^{-1}(u/c)>1$ then $L_u>h^{-1}(u/c)$ for all sufficiently small $u>0$, implying that $Y_t\le ch(t)$ for all sufficiently small $t>0$ and hence $\limsup_{t\da 0}Y_t/h(t)\le c$. This establishes part (a). Part (b) follows along similar lines.
\end{proof}
A combination of Lemma~\ref{lem:Y_vs_L}, Theorem~\ref{thm:Y_limsup}, Proposition~\ref{prop:Y_limsup} and Remark~\ref{rem:Y_limsup} yields the following corollary.
\begin{corollary}
\label{cor:L_liminf}
Let $h$ be a continuous and increasing function with $h(0)=0$ and $h(1)=1$ such that $\lim_{c\da0}\limsup_{t\da0}h^{-1}(ct)/h^{-1}(t)=0$. Then the following results hold:
\begin{itemize}[leftmargin=2.5em, nosep]
\item[{\nf(i)}] If $\liminf_{t\da0}L_t/h^{-1}(t/c)> 1$ a.s. for some $c\in(0,1)$ then~\eqref{eq:Pi_large} holds.
\item[{\nf(ii)}] If conditions~\eqref{eq:Pi_large}--\eqref{eq:Pi_mean} hold, then $\liminf_{t\da0}L_t/h^{-1}(t)=\infty$ a.s.
\item[{\nf(ii')}] If $h$ is convex and conditions~\eqref{eq:Pi_large} and~\eqref{eq:Pi_var_inv}--\eqref{eq:Pi_mean_inv} hold, then $\liminf_{t\da0}L_t/h^{-1}(t)=\infty$ a.s.
\item[{\nf(iii)}] If $\Pi(\{(t,x)\,:\,t\in(0,1],\,x\ge ch(t)\})=\infty$ for all $c>0$ then $\liminf_{t\da0}L_t/h^{-1}(t)=0$ a.s.
\end{itemize}
\end{corollary}
\begin{comment}
\begin{proof}
(FS) Note by the duality before the statement of Corollary~\ref{cor:L_liminf} above, that $\liminf_{t \da 0}L_t/h^{-1}(t)=\infty$ a.s. if and only if $\limsup_{t \da 0}Y_t/h(t)=0$ a.s., which by Theorem~\ref{thm:Y_limsup} implies \eqref{eq:Pi_large}. (IS) Theorem~\ref{thm:Y_limsup} implies that $\limsup_{t \da 0} Y_t/h(t)=0$ a.s. which as above hold if and only if $\liminf_{t \da 0}L_t/h^{-1}(t)=\infty$ a.s. (ii') Proposition~\ref{prop:Y_limsup} and the above duality yields this result. (iii) Follows by Remark~\ref{rem:Y_limsup} and the above duality.
\end{proof}
\end{comment}
To prove Theorem~\ref{thm:Y_limsup} we require the following lemma. For all $t\ge 0$ denote by $\Delta_t\coloneqq Y_t-Y_{t-}$ the jump of $Y$ at time $t$, so that $Y_t=\sum_{u\le t}\Delta_u$ since $Y$ is a pure-jump additive process. We also let $N$ denote the Poisson jump measure of $Y$, given by $N(A)\coloneqq |\{t:(t,\Delta_t)\in A\}|$ for $A\subset[0,\infty)\times (0,\infty)$ and note that its mean measure is $\Pi(\mathrm{d} t,\mathrm{d} x)$.
\begin{lemma}\label{lem:upper_func_Y}
Let $h$ be continuous and increasing with $h(0)=0$ and $h(1)=1$. If \eqref{eq:Pi_large}--\eqref{eq:Pi_mean} hold, then $\limsup_{t \da 0}Y_t/h(t) =\limsup_{t \da 0}Y_{h^{-1}(t)}/t=0$ a.s.
\end{lemma}
\begin{proof}
For all $n \in \N$, we let $B_n\coloneqq [2^{-n},\infty)$ and set $C_n\coloneqq h^{-1}((2^{-n-1},2^{-n}])\times B_n$. Then we have
\begin{equation*}
\sum_{n \in \N}\p(N(C_n)\geq 1) = \sum_{n \in \N}\big(1-e^{-\Pi(C_n)}\big) \le \sum_{n \in \N} \Pi(C_n),
\end{equation*}
by the definition of $N$ and the inequality $1-e^{-x}\le x$. Note that $\sum_{n \in \N} \Pi(C_n)<\infty$ by~\eqref{eq:Pi_large}, since
\[
\sum_{n\in\N} \Pi(C_n) \le\Pi(\{(t,x)\,:\, t\in(0,1],\, x\ge h(t)\})<\infty.
\]
By the Borel--Cantelli lemma, a.s. there exists some $n_0\in\N$ with $N(h^{-1}((2^{-n-1},2^{-n}])\times B_n)=0$ for all $n \geq n_0$.
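In other words, a.s. every jump of $Y$ occurring at a time in the set $h^{-1}((2^{-m-1},2^{-m}])$ is of size smaller than $2^{-m}$ for all $m\ge n_0$; it is this observation that justifies truncating the jump sizes at $2^{-m}$ in the integrals $\xi_m$ introduced below.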
By the mapping theorem, the random measure defined by $N_h(A\times B)\coloneqq N(h^{-1}(A)\times B)$ for measurable $A,B\subset[0,\infty)$ is a Poisson random measure with mean measure $\Pi_h(A\times B)\coloneqq\Pi(h^{-1}(A)\times B)$. Note that $Y_{h^{-1}(t)} =\int_{(0,h^{-1}(t)]\times (0,\infty)}xN(\mathrm{d} u,\mathrm{d} x) =\int_{(0,t]\times (0,\infty)}xN_h(\mathrm{d} u,\mathrm{d} x)$ for $t\ge 0$ and, for any $n \geq n_0$ and $t \in (2^{-n-1},2^{-n}]$, we have $|Y_{h^{-1}(t)}/t|\le\zeta_n\coloneqq 2^{n+1}\sum_{m=n}^\infty\xi_m$, where
\begin{equation*}
\xi_m\coloneqq \int_{(2^{-m-1},2^{-m}]\times (0,2^{-m})} x N_h(\mathrm{d} u,\mathrm{d} x), \qquad m\in\N.
\end{equation*}
To complete the proof, it suffices to show that $\zeta_n\da 0$ a.s. as $n\to\infty$. Fubini's theorem yields
\begin{align*}
2^{-n-1}\E[\zeta_n] &=\sum_{m=n}^\infty\int_{(2^{-m-1},2^{-m}]\times (0,2^{-m})} x\Pi_h(\mathrm{d} u,\mathrm{d} x)\\ &=\int_{(0,2^{-n}]\times (0,2^{-n})} x\sum_{m=n}^\infty \1_{\{x<2^{-m}\}}\1_{\{u\le 2^{-m}<2u\}} \Pi_h(\mathrm{d} u,\mathrm{d} x)\\ &\le \int_{(0,2^{-n}]\times (0,2^{-n})} x\1_{\{2u>x\}} \Pi_h(\mathrm{d} u,\mathrm{d} x) =\int_{(0,h^{-1}(2^{-n})]\times (0,2^{-n})} x\1_{\{2h(v)>x\}} \Pi(\mathrm{d} v,\mathrm{d} x).
\end{align*}
By assumption~\eqref{eq:Pi_mean}, we deduce that $\E[\zeta_n]\da0$ as $n\to\infty$. Similarly, note that
\begin{equation*}
\Var(\zeta_n) = 4^{n+1}\sum_{m= n}^\infty\int_{(2^{-m-1},2^{-m}]\times (0,2^{-m})} x^2 \Pi_h(\mathrm{d} u,\mathrm{d} x),
\end{equation*}
and hence, by Fubini's theorem and assumption~\eqref{eq:Pi_var}, we have
\begin{align*}
\sum_{n=1}^\infty\Var(\zeta_n) &= \sum_{m=1}^\infty\sum_{n=1}^m 4^{n+1}\int_{(2^{-m-1},2^{-m}]\times (0,2^{-m})} x^2 \Pi_h(\mathrm{d} u,\mathrm{d} x)\\ &\le\sum_{m=1}^\infty 4^{m+2}\int_{(2^{-m-1},2^{-m}]\times (0,2^{-m})} x^2 \Pi_h(\mathrm{d} u,\mathrm{d} x)\\ &=\int_{(0,1]\times (0,1)}x^2\sum_{m=1}^\infty 4^{m+2}\1_{\{x<2^{-m}\}}\1_{\{u<2^{-m}<2u\}} \Pi_h(\mathrm{d} u,\mathrm{d} x)\\ &\le 4^2\int_{(0,1]\times(0,1)}\frac{x^2}{u^2}\1_{\{2u>x\}}\Pi_h(\mathrm{d} u,\mathrm{d} x)\\ &=4^2\int_{(0,h^{-1}(1)]\times(0,1)}\frac{x^2}{h(v)^2}\1_{\{2h(v)>x\}}\Pi(\mathrm{d} v,\mathrm{d} x)<\infty.
\end{align*}
Thus, we find that the sum $\sum_{n=1}^\infty(\zeta_n-\E[\zeta_n])^2$ has finite mean equal to $\sum_{n=1}^\infty\Var(\zeta_n)<\infty$ and is thus finite a.s. Hence, the summands must tend to $0$ a.s. and, since $\E[\zeta_n]\to 0$, we deduce that $\zeta_n \da 0$ a.s. as $n \to \infty$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Y_limsup}]
It is obvious that~\eqref{eq:h_limsup} implies~\eqref{eq:h_leq_io}. If~\eqref{eq:h_leq_io} holds, then $Y_t<h(t)$ for all sufficiently small $t$. Thus, the path bound $Y_t\ge\Delta_t$ implies $\p(N(\{(t,x)\,:\,t \in (0,1],\,x>h(t)\})<\infty)=1$ and hence~\eqref{eq:Pi_large}. By Lemma~\ref{lem:upper_func_Y}, conditions~\eqref{eq:Pi_large}--\eqref{eq:Pi_mean} imply~\eqref{eq:h_limsup}, so it remains to show that~\eqref{eq:Pi_mean_var} implies~\eqref{eq:Pi_var} and~\eqref{eq:Pi_mean}. It is easy to see that~\eqref{eq:Pi_mean_var} implies~\eqref{eq:Pi_var}.
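Indeed, on the set $\{(t,x)\,:\,2h(t)>x\}$ we have $x/h(t)<2$ and hence
\[
\frac{x^2}{h(t)^2}\1_{\{2h(t)>x\}}\le 2\,\frac{x}{h(t)}\1_{\{2h(t)>x\}},
\]
so the integral in~\eqref{eq:Pi_var} is at most twice the integral in~\eqref{eq:Pi_mean_var}.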
Moreover, if~\eqref{eq:Pi_mean_var} holds, then
\begin{multline*}
2^n\int_{(0,h^{-1}(2^{-n})]\times (0,2^{-n})} x\1_{\{2h(t)>x\}} \Pi(\mathrm{d} t,\mathrm{d} x)\\ \le \int_{(0,h^{-1}(1)]\times (0,1)} \frac{x}{h(t)}\1_{\{2h(t)>x\}} \1_{{(0,h^{-1}(2^{-n})]\times (0,2^{-n})}}(t,x) \Pi(\mathrm{d} t,\mathrm{d} x),
\end{multline*}
where the upper bound is finite for all $n\in\N$ and tends to $0$ as $n\to\infty$ by the monotone convergence theorem, implying~\eqref{eq:Pi_mean}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:Y_limsup}]
Since $h^{-1}$ is concave with $h^{-1}(0)=0$, the map $x\mapsto h^{-1}(x)/x$ is decreasing. Hence the condition $h(t)>x/2$ implies $(x/2)/h(t)\le h^{-1}(x/2)/t\le h^{-1}(x)/t$, while the inequality $h^{-1}(x)/x\le h^{-1}(x/2)/(x/2)$ gives $h^{-1}(x)\le 2h^{-1}(x/2)\le 2t$ whenever $h(t)>x/2$. In particular, $\{(t,x)\,:\,2h(t)>x\}\subset\{(t,x)\,:\,2t\ge h^{-1}(x)\}$ and the integrand in~\eqref{eq:Pi_var} is bounded by four times the integrand in~\eqref{eq:Pi_var_inv}, proving the first claim: \eqref{eq:Pi_var_inv} implies~\eqref{eq:Pi_var}. Since $h^{-1}$ is concave with $h^{-1}(0)=0$, it is subadditive, implying
\[
\zeta_t \coloneqq \sum_{u\le t}h^{-1}(\Delta_u)\ge h^{-1}(Y_t).
\]
Since $\limsup_{t\da 0}\zeta_t/t\le c$ implies $\limsup_{t\da 0}Y_t/h(ct)\le 1$ for $c>0$ and $h$ is a convex function, it suffices to show that $\limsup_{t\da 0}\zeta_t/t=0$ a.s. Note that $\zeta$ is an additive process with jump measure $\Pi(\mathrm{d} t, h(\mathrm{d} x))$. Applying Theorem~\ref{thm:Y_limsup}(b)~\&~(c) to $\zeta$ and the identity function (for which the conditions~\eqref{eq:Pi_var}, \eqref{eq:Pi_mean} and~\eqref{eq:Pi_mean_var} for $\zeta$ coincide with the conditions~\eqref{eq:Pi_var_inv}, \eqref{eq:Pi_mean_inv} and~\eqref{eq:Pi_mean_var_inv} for $Y$) yields parts (c) and (b) of the proposition, respectively, completing the proof.
\end{proof}
\begin{remark}\label{rem:stationary_increments}
We now show that, when the increments of $Y$ are stationary (making $Y$ a subordinator), Theorem~\ref{thm:Y_limsup} gives a complete characterisation of the upper functions of $Y$, recovering~\cite[Thm~1]{MR210190} (see also~\cite[Prop.~4.4]{MR1746300}). This is done in two steps. Suppose $h$ is convex and $Y$ has stationary increments with mean jump measure $\Pi(\mathrm{d} t, \mathrm{d} x)=\Pi((0,1],\mathrm{d} x)\mathrm{d} t$. Then $h^{-1}$ is concave and the additive process $\wt Y_t\coloneqq \sum_{s\le t}h^{-1}(\Delta_s)\ge h^{-1}(Y_t)$ has mean jump measure $\Pi(\mathrm{d} t,h(\mathrm{d} x))$, making it a subordinator. When Theorem~\ref{thm:Y_limsup} is applied to $\wt Y$ and the identity function, all of the conditions~\eqref{eq:Pi_large}--\eqref{eq:Pi_mean} become equivalent to $\int_{(0,1)}h^{-1}(x)\Pi((0,1],\mathrm{d} x)<\infty$ and therefore, by Theorem~\ref{thm:Y_limsup}, also equivalent to the condition $\limsup_{t\da0}\wt Y_t/t=0$ a.s. Note that condition~\eqref{eq:Pi_large} for $\wt Y$ and the identity function coincides with condition~\eqref{eq:Pi_large} for $Y$ and $h$. This equivalence, together with the fact that the limit $\limsup_{t\da 0}\wt Y_t/t=0$ implies $\limsup_{t\da 0}Y_t/h(t)=0$, shows that the two limits are a.s. either both equal to $0$ or both positive. Thus, $\limsup_{t\da0}Y_t/h(t)=0$ a.s. if and only if $\int_{(0,1)}h^{-1}(x)\Pi((0,1],\mathrm{d} x)<\infty$ and, if the latter condition fails, then $\limsup_{t\da0} Y_t/h(t)=\infty$ a.s. by Remark~\ref{rem:Y_limsup}. This is precisely the criterion given in~\cite[Thm~1]{MR210190} (see also~\cite[Prop.~4.4]{MR1746300}).
\end{remark}
Remark~\ref{rem:stationary_increments} shows that condition~\eqref{eq:Pi_large} perfectly describes the upper fluctuations of $Y$ when $Y$ has stationary increments, making conditions~\eqref{eq:Pi_var} \&~\eqref{eq:Pi_mean} appear superfluous. These conditions are, however, not superfluous since~\eqref{eq:Pi_large} by itself cannot fully characterise the upper fluctuations of $Y$, as the following example shows.
\begin{example}\label{ex:breakequivalence}
Let $\Pi(\mathrm{d} t,\mathrm{d} x) = \sum_{n\in\N}n^{-1}2^n\delta_{(2^{-n},2^{-n}/n)}(\mathrm{d} t,\mathrm{d} x)$, where $\delta_x$ denotes the Dirac measure at $x$, and consider the corresponding additive process $Y$ (whose existence is ensured by~\cite[Thm~15.4]{MR1876169}). Since $\p(\xi\ge\mu)\ge 1/5$ for every Poisson random variable $\xi$ with mean $\mu\ge 2$~\cite[Eq.~(6)]{pelekis2017lower}, we get $\sum_{n\in\N}\p(N(\{(2^{-n},2^{-n}/n)\})\ge 2^n/n)=\infty$. Since the singletons $\{(2^{-n},2^{-n}/n)\}$, $n\in\N$, are disjoint, the random variables $N(\{(2^{-n},2^{-n}/n)\})$ are independent, and the second Borel--Cantelli lemma shows that, a.s., $N(\{(2^{-n},2^{-n}/n)\})\ge 2^n/n$ for infinitely many $n$; on this event the jump of $Y$ at time $2^{-n}$ satisfies $\Delta_{2^{-n}}\ge (2^n/n)\cdot(2^{-n}/n)=1/n^2$. Thus, $Y_{2^{-n}}/2^{-n}\ge 2^{n}\Delta_{2^{-n}}\ge 2^n/n^2$ i.o., implying $\limsup_{t\da0}Y_t/t=\infty$ a.s. even though condition~\eqref{eq:Pi_large} holds for $h(t)=t$; in fact, $\Pi(\{(t,x)\,:\,t\in(0,1],\,x\ge ct\})<\infty$ for all $c>0$.
\end{example}
\section{The vertex time process and the proofs of the results in Section~\ref{sec:small-time-derivative}}
\label{sec:proofs}
We first recall basic facts about the vertex time process $\tau=(\tau_s)_{s \in \R}$. Fix a deterministic time horizon $T>0$, let $C$ be the convex minorant of $X$ on $[0,T]$ with right-derivative $C'$ and recall the definition $\tau_s=\inf\{t>0:C'_t>s\}$ for any slope $s\in\R$. By the convexity of $C$, the right-derivative $C'$ is non-decreasing and right-continuous, making $\tau$ a non-decreasing right-continuous process with $\lim_{s\to-\infty}\tau_s=0$ and $\lim_{s\to\infty}\tau_s=T$. Intuitively, the process $\tau$ records the times in $[0,T]$ at which the slope increases as we traverse the graph of the convex minorant $t\mapsto C_t$ chronologically. We remark that the vertex time process can be constructed directly from $X$ without any reference to the convex minorant $C$, as follows (cf.~\cite[Thm~11.1.2]{MR1739699}): for each slope $s\in\R$ and time epoch $t\ge 0$, define $X^{(s)}_t\coloneqq X_t-st$, $\un X^{(s)}_t\coloneqq \inf_{u\in[0,t]}X_u^{(s)}$ and note $\tau_s= \sup\big\{t\in[0,T]\,:\, X_{t-}^{(s)}\wedge X_t^{(s)}=\un X_T^{(s)}\big\}$, where $X_{u-}^{(s)}\coloneqq \lim_{v\uparrow u}X_{v}^{(s)}$ for $u>0$ and $X_{0-}^{(s)}\coloneqq X_{0}^{(s)}=0$. Put differently, subtracting a constant drift~$s$ from the L\'evy process $X$ ``rotates'' the convex hull so that the vertex time $\tau_s$ becomes the last time at which the minimum of $X^{(s)}$ over the time interval $[0,T]$ is attained.
\subsection{The vertex time process over exponential times}
\label{subsec:tau_exp}
Fix any $\lambda>0$ and let $E$ be an independent exponential random variable with unit mean. Let $\wh C\coloneqq(\wh C_t)_{t \in [0,E/\lambda]}$ be the convex minorant of $X$ over the exponential time-horizon $[0,E/\lambda]$ and denote by $\wh\tau$ the right-continuous inverse of $\wh C'$, i.e. $\wh\tau_s\coloneqq \inf\{u\in [0,E/\lambda]:\wh C'_u>s\}$ for $s\in\R$. Hence, in the remainder of the paper, the processes with (resp. without) a `hat' will refer to the processes whose definition is based on the path of $X$ on $[0,E/\lambda]$ (resp. $[0,T]$), where $E$ is an exponential random variable with unit mean independent of $X$ and $T>0$ is fixed and deterministic. It is more convenient to consider the vertex time processes over an independent exponential time horizon rather than the fixed time horizon $T$, as this does not affect the small-time behaviour of the process (see Corollary~\ref{cor:trivial} below), while making its law more tractable.
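As a simple illustration of this tractability (not used in the sequel), suppose $X$ is a standard Brownian motion: then $\p(X_t\le st)=\mathsf{N}(s\sqrt{t})$, where $\mathsf{N}$ denotes the standard normal distribution function, so that the Laplace exponent of $\wh\tau$ in~\eqref{eq:cf_tau} below takes the fully explicit form
\[
\Phi_s(u)=\int_0^\infty (1-e^{-ut})e^{-\lambda t}\,\mathsf{N}(s\sqrt{t})\,\frac{\mathrm{d} t}{t},
\qquad u\ge 0,\ s\in\R.
\]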
Moreover, as we will see, to analyse the fluctuations of $\wh C'$ over short intervals, it suffices to study those of $\wh\tau$. By~\cite[Cor.~3.2]{fluctuation_levy}, the process $\wh\tau$ has independent but non-stationary increments and its Laplace exponent is given by
\begin{equation}
\label{eq:cf_tau}
\E[e^{-u \wh \tau_s}] =e^{-\Phi_s(u)}, \quad \text{ where } \quad \Phi_s(u)\coloneqq \int_0^\infty (1-e^{-u t})e^{-\lambda t}\p(X_t\le st)\frac{\mathrm{d} t}{t},
\end{equation}
for all $u \ge 0$ and $s\in\R$. The following lemma states that, after a vertex time, the convex minorants $C$ and $\wh C$ must agree for a positive amount of time, see Figure~\ref{fig:CM_agree} for a pictorial description.
\begin{lemma}
\label{lem:CM_agree}
For any $s\in\mathcal{L}^+(\mathcal{S})$, on the event $\{\tau_s<E/\lambda\le T\}$, we have $\tau_s=\wh\tau_s$ and the convex minorants $C$ and $\wh C$ agree on an interval $[0,\tau_s+m]$ for some random $m>0$. If $X$ is of infinite variation, the functions $C$ and $\wh C$ agree on an interval $[0,m]$ for a random variable $m$ satisfying $0<m\le\min\{T,E/\lambda\}$ a.s.
\end{lemma}
Since the L\'evy process $X$ and the exponential time $E$ are independent, $\p(\tau_s<E/\lambda\le T)>0$.
\begin{proof}
The proof follows directly from the definition of the convex minorant of a function as the greatest convex function dominated by its path over the corresponding interval. Let $f$ be a measurable function on $[0,t]$ with piecewise linear convex minorant $M^{(t)}$. Then, for any vertex time $v\in(0,t)$ of $M^{(t)}$ and any $u\in(v,t]$, the convex minorant $M^{(u)}$ of $f$ on $[0,u]$ equals $M^{(t)}$ over the interval $[0,v]$. The result then follows since the condition $s\in\mathcal{L}^+(\mathcal{S})$ (resp. $X$ has infinite variation) implies that there are infinitely many vertex times immediately after $\tau_s$ (resp. $0$).
\end{proof}
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{CM_agree.png}
\caption{The picture shows a path of $X$ (black) and its convex minorants $C$ (red) on $[0,T]$ and $\wh C$ (blue) on $[0,E/\lambda]$. Both convex minorants agree until time $m$, after which they may behave very differently.}
\label{fig:CM_agree}
\end{figure}
The following result shows that local properties of $C$ agree with those of $\wh C$. Multiple extensions are possible, but we opt for the following version as it is simple and sufficient for our purpose.
\begin{corollary}\label{cor:trivial}
Fix any measurable function $f:(0,\infty) \to (0,\infty)$.\\ {\nf{(a)}} If $s\in\mathcal{L}^+(\mathcal{S})$, then the following limits are a.s. constants with values in $[0,\infty]$:
\[
\limsup_{t\da 0}\frac{C'_{t+\tau_s}-s}{f(t)}=\limsup_{t\da 0}\frac{\wh C'_{t+\wh \tau_s}-s}{f(t)} \quad \text{and}\quad \liminf_{t\da 0}\frac{C'_{t +\tau_s}-s}{f(t)}=\liminf_{t\da 0}\frac{\wh C'_{t+\wh \tau_s}-s}{f(t)}.
\]
{\nf{(b)}} If $X$ is of infinite variation, then the following limits are a.s. constants with values in $[0,\infty]$:
\[
\limsup_{t\da 0}C'_t/f(t)=\limsup_{t\da 0}\wh C'_t/f(t) \qquad\text{and}\qquad \liminf_{t\da 0}C'_t/f(t)=\liminf_{t\da 0}\wh C'_t/f(t).
\]
\end{corollary}
\begin{proof}
We will prove part (a) for $\liminf$, with the remaining proofs being analogous. First note that the assumption $s\in\mathcal{L}^+(\mathcal{S})$ implies that $(\tau_{u+s}-\tau_s)_{u\ge 0}$ and the additive process $(\wh\tau_{u+s}-\wh\tau_s)_{u\ge 0}$ have infinite activity as $u\da 0$.
Thus, an application of Blumenthal's 0--1 law~\cite[Cor.~19.18]{MR1876169} to $(\wh\tau_{u+s}-\wh\tau_s)_{u\ge 0}$ (together with the fact that $\wh C'_{\wh\tau_s}=s$ a.s.) implies that $\liminf_{t\da 0}(\wh C'_{t+\wh\tau_s}-s)/f(t)$ is a.s. equal to some constant $\mu$ in $[0,\infty]$. Moreover, by the independence of the increments of $\wh\tau$, this limit holds even when conditioning on the value of $\wh\tau_s$. Recall further that $\wh\tau_s=\tau_s$ on the event $\{\tau_s<E/\lambda\le T\}$ by Lemma~\ref{lem:CM_agree}. By Lemma~\ref{lem:CM_agree} and the independence of $E$ and $X$, we a.s. have
\begin{align*}
0<\p(\tau_s<E/\lambda\le T\,|\,\tau_s) &=\p\Big(\liminf_{t\da 0}(\wh C'_{t+\wh\tau_s}-s)/f(t)=\mu, \,\tau_s<E/\lambda\le T\,\Big|\,\tau_s\Big)\\ &=\p\Big(\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=\mu, \,\tau_s<E/\lambda\le T\,\Big|\,\tau_s\Big)\\ &=\p\Big(\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=\mu\,\Big|\,\tau_s\Big) \p(\tau_s<E/\lambda\le T\,|\,\tau_s),
\end{align*}
implying that $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)=\mu$ a.s.
\end{proof}
By virtue of Corollary~\ref{cor:trivial} it suffices to prove all the results in Section~\ref{sec:small-time-derivative} for $\wh C$ instead of $C$. This allows us to use the independent increment structure of the right inverse $\wh\tau$ of the right-derivative $\wh C'$.
\begin{example}[Cauchy process]
\label{ex:Cauchy}
If $X$ is a Cauchy process, then the Laplace exponent of $\wh\tau_u$ factorises as $\Phi_u(w)=\p(X_1\le u)\int_0^\infty(1-e^{-wt})e^{-\lambda t}t^{-1}\mathrm{d} t$ for any $u\in\R$ and $w\ge 0$. This implies that $\wh\tau$ has the same law as a gamma subordinator time-changed by the distribution function $u\mapsto\p(X_1\le u)=\frac{1}{2}+\frac{1}{\pi}\arctan(cu+\mu)$ for some $c>0$ and $\mu=\tan(\pi(\frac{1}{2}-\rho))$. This result can be used as an alternative to~\cite[Thm~2]{MR1747095}, in conjunction with classical results on the fluctuations of a gamma process (see, e.g.~\cite[Ch.~4]{MR1746300}), to establish~\cite[Cor.~3]{MR1747095} and all the other results in~\cite{MR1747095}.
\end{example}
The proofs of the results in Section~\ref{sec:small-time-derivative} are based on the results of Section~\ref{sec:additive}: we will construct a non-decreasing additive process $Y=(Y_t)_{t\ge 0}$, started at $0$, in terms of $\wh\tau$ and apply the results in Section~\ref{sec:additive} to $Y$ and its inverse $L=(L_u)_{u\ge 0}$. These proofs are given in the following subsections.
\subsection{Upper and lower functions at time \texorpdfstring{$\tau_s$}{tau} - proofs}
\label{subsec:proofs_c'_tau_s}
Let $s\in\mathcal{L}^+(\mathcal{S})$. Fix any $\lambda>0$ and let $Y_u\coloneqq\wh\tau_{u+s}-\wh \tau_s$, $u\ge 0$. Then the right-inverse $L_u\coloneqq\inf\{t>0\,:\,Y_t>u\}$ of $Y$ equals $L_u=\wh C'_{u+\wh\tau_s}-s$ for $u\ge 0$. Note that $Y$ has independent increments and~\eqref{eq:cf_tau} implies
\begin{equation}
\label{eq:psi_defn_post_min}
\Psi_u(w) \coloneqq-\log \E[e^{-wY_u}] =\int_0^\infty (1-e^{-wt})\Pi((0,u],\mathrm{d} t), \quad\text{for all $w,u \ge 0$,}
\end{equation}
where $\Pi(\mathrm{d} u,\mathrm{d} t)=e^{-\lambda t}\p((X_t-st)/t \in \mathrm{d} u)t^{-1}\mathrm{d} t$ is the mean jump measure of $Y$.
\begin{proof}[Proof of Theorem~\ref{thm:post-min-lower}]
Since $s\in\mathcal{L}^+(\mathcal{S})$, all three parts of the result follow from a direct application of Proposition~\ref{prop:Y_limsup} and Corollary~\ref{cor:L_liminf} to the processes $Y$ and $L$ defined above.
\end{proof}
To prove Theorem~\ref{thm:upper_fun_C'_post_min}, we require the following two lemmas. The first lemma establishes regularity, uniform in $t$, for the densities of (normalisations of) $X_t$, and the second provides sharp asymptotic control of the function $\Psi_s(u)$ as $s\da 0$ and $u\to\infty$. Recall that, when $X$ is of finite variation, $\gamma_0=\lim_{t\da 0}X_t/t$ denotes the natural drift of $X$.
\begin{lemma}\label{lem:generalized_Picard}
Let $X \in \mathcal{Z}_{\alpha,\rho}$ for some $\alpha \in (0,1)$ and $\rho \in (0,1]$ and denote by $g$ its normalising function.\\ {\nf{(a)}} Define $Q_t\coloneqq (X_t-\gamma_0t)/g(t)$; then $Q_t$ has an infinitely differentiable density $p_t$ such that $p_t$ and each of its derivatives $p_t^{(k)}$ are uniformly bounded: $\sup_{t\in(0,1]}\sup_{x\in\R}|p_t^{(k)}(x)|<\infty$ for any $k\in\N\cup\{0\}$.\\ {\nf{(b)}} Define $\wt Q_t\coloneqq X_t/\sqrt{t}$; then $\wt Q_t$ has an infinitely differentiable density $\tilde p_t$ such that $\tilde p_t$ and each of its derivatives $\tilde p_t^{(k)}$ are uniformly bounded: $\sup_{t\in[1,\infty)}\sup_{x\in\R}|\tilde p_t^{(k)}(x)|<\infty$ for any $k\in\N\cup\{0\}$.
\end{lemma}
For two functions $f_1,f_2:(0,\infty)\to(0,\infty)$ we say $f_1(t)\sim f_2(t)$ as $t\da 0$ if $\lim_{t\da 0}f_1(t)/f_2(t)=1$.
\begin{proof}[Proof of Lemma~\ref{lem:generalized_Picard}]
Part (a). We assume without loss of generality that $g(t)\le 1$ for $t\in(0,1]$, and note that $Q_t$ is infinitely divisible. Denote by $\nu_{Q_t}$ the L\'evy measure of $Q_t$, and note for $A \subset \R$ that $\nu_{Q_t}(A)=t\nu(g(t)A)$ and
\[
\ov \sigma^2_{Q_t}(u) \coloneqq \int_{(-u,u)}x^2 \nu_{Q_t}(\mathrm{d} x) =\frac{t}{g(t)^2}\int_{(-ug(t),ug(t))}x^2 \nu(\mathrm{d} x)=\frac{t}{g(t)^2}\ov\sigma^2(ug(t)),
\]
for $t \in (0,1]$ and $u \in \R\setminus \{0\}$. The regular variation of $\ov\nu$ (see~\cite[Thm~2]{MR3784492}), Fubini's theorem and Karamata's theorem~\cite[Thm~1.5.11(ii)]{MR1015093} imply that, as $u\da0$,
\begin{align*}
\ov\sigma^2(u) &=-\int_0^u x^2\ov\nu(\mathrm{d} x) =-\int_0^u 2\int_0^x z\mathrm{d} z\ov\nu(\mathrm{d} x) =-\int_0^u\int_z^u 2 z\ov\nu(\mathrm{d} x)\mathrm{d} z\\ &=\int_0^u 2 z(\ov\nu(z)-\ov\nu(u)) \mathrm{d} z =\int_0^u 2 z\ov\nu(z) \mathrm{d} z-u^2\ov\nu(u) \sim \frac{\alpha}{2-\alpha}u^2\ov\nu(u).
\end{align*}
Since $X\in\mathcal{Z}_{\alpha,\rho}$, \cite[Thm~2]{MR3784492} implies that $g^{-1}(u)u^{-2}\ov\sigma^2(u)\to c_0$ for some $c_0>0$ as $u\da 0$. Thus,
\begin{equation*}
0<\inf_{z\in(0,1]}\frac{g^{-1}(z)}{z^2}\ov\sigma^2(z) \le \inf_{u,t\in(0,1]}\frac{g^{-1}(ug(t))}{u^2g(t)^2}\ov\sigma^2(ug(t)).
\end{equation*}
Since $g$ is regularly varying with index $1/\alpha$, we may write $g(t)=t^{1/\alpha}\varpi(t)$ for a slowly varying function $\varpi$. Fix $\beta\in(0,\alpha)$ and set $\delta\coloneqq 1/\beta-1/\alpha>0$. Then Potter's bounds~\cite[Thm~1.5.6]{MR1015093} imply that, for some constant $c>1$, we have $\varpi(t)/\varpi(tu^\beta)\le cu^{-\beta\delta}$ for all $t,u\in(0,1]$. Hence, we obtain $ug(t)\le cg(tu^\beta)$ and moreover $g^{-1}(ug(t))\le c^{\beta} tu^\beta$ for all $t\in(0,1]$ and $u\in(0,1/c]$. Multiplying the rightmost term on the display above (before taking infimum) by $tu^\beta/g^{-1}(ug(t))$ gives
\begin{equation}
\label{eq:picard_condition}
\inf_{t\in(0,1]}\inf_{u\in(0,1/c]} u^{\beta-2}\ov\sigma_{Q_t}^2(u) =\inf_{t\in(0,1]}\inf_{u\in(0,1/c]}\frac{tu^{\beta}} {u^2g(t)^2}\ov\sigma^2(ug(t)) >0.
\end{equation}
Hence,~\cite[Lem.~2.3]{picard_1997} gives the desired result.
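As a sanity check of condition~\eqref{eq:picard_condition} (a purely illustrative aside, not needed in the proof), consider the strictly $\alpha$-stable case, where one may take $g(t)=t^{1/\alpha}$: then $\ov\sigma^2(u)$ equals a constant multiple of $u^{2-\alpha}$, so $\ov\sigma^2_{Q_t}(u)=tg(t)^{-2}\ov\sigma^2(ug(t))$ is the same constant multiple of $u^{2-\alpha}$ for every $t>0$, and $u^{\beta-2}\ov\sigma^2_{Q_t}(u)\asymp u^{\beta-\alpha}$ is indeed bounded away from zero for $u\in(0,1]$ whenever $\beta\in(0,\alpha]$.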
Part (b). As before, we see that $\ov\sigma^2_{\wt Q_t}(u)=\ov\sigma^2(u\sqrt{t})$. Hence, the left side of~\eqref{eq:picard_condition} gives
\[
\inf_{t\in[1,\infty)}\inf_{u\in(0,1]}u^{\beta-2}\ov\sigma^2_{\wt Q_t}(u) =\inf_{u\in(0,1]}u^{\beta-2}\ov\sigma^2(u)>0,
\]
for any $\beta\in(0,\alpha)$. Thus,~\cite[Lem.~2.3]{picard_1997} gives the desired result.
\end{proof}
\begin{lemma}
\label{lem:asymp_equiv_Psi_domstable_post_min}
Let $X\in\mathcal{Z}_{\alpha,\rho}$ for some $\alpha\in(0,1)$ and $\rho\in(0,1]$, denote by $g$ its normalising function and define $G(t)=t/g(t)$ for $t>0$. The following statements hold for any sequences $(u_n)_{n \in \N} \subset (0,\infty)$ and $(s_n)_{n \in \N}\subset (0,\infty)$ such that $u_n\to\infty$ and $s_n\da 0$ as $n \to \infty$:
\begin{itemize}[leftmargin=2.5em, nosep]
\item[\nf(i)]~if $u_nG^{-1}(s_n^{-1})\to \infty$, then $\Psi_{s_n}(u_n) \sim \rho\log(u_nG^{-1}(s_n^{-1}))$,
\item[\nf(ii)]~if $u_nG^{-1}(s_n^{-1})\to 0$, then $\Psi_{s_n}(u_n)=\Oh([u_nG^{-1}(s_n^{-1})]^q+s_n)$ for any $q\in (0,1]$ with $q<1/\alpha-1$.
\end{itemize}
\end{lemma}
\begin{proof}
Part (i). Define $Q_t\coloneqq (X_t-\gamma_0t)/g(t)$ and note that
\[
\Psi_{s_n}(u_n) =\int_0^\infty (1-e^{-tu_n})e^{-\lambda t} \p\big(0<Q_t \leq s_nG(t) \big)\frac{\mathrm{d} t}{t}, \quad \text{for all }n \in \N.
\]
Fix $\delta\in(0,\rho/3)$, let $\kappa_n\coloneqq G^{-1}(\delta/s_n)$ and note that $\kappa_n\da 0$ as $n\to\infty$. We will now split the integral in the previous display at $\kappa_n$ and $1$ and find the asymptotic behaviour of each of the resulting integrals. The integral on $[1,\infty)$ is bounded as $n\to\infty$:
\begin{align*}
\int_1^\infty (1-e^{-tu_n})e^{-\lambda t} \p\big(0<Q_t \leq s_nG(t) \big)\frac{\mathrm{d} t}{t} \le \int_1^\infty e^{-\lambda t}\frac{\mathrm{d} t}{t}<\infty.
\end{align*}
Next, we consider the integral on $[\kappa_n,1)$. By Lemma~\ref{lem:generalized_Picard}(a), there exists a uniform upper bound $C>0$ on the densities of $Q_t$, $t\in(0,1]$. An application of~\cite[Thm~1.5.11(i)]{MR1015093} gives, as $n\to\infty$,
\begin{align*}
\int_{\kappa_n}^1 (1-e^{-u_nt})e^{-\lambda t} \p\big(0<Q_{t} \le s_n G(t)\big)\frac{\mathrm{d} t}{t} &\le C\int_{\kappa_n}^1 s_n G(t) \frac{\mathrm{d} t}{t} \sim \frac{\alpha C}{1-\alpha} s_nG(\kappa_n) = \frac{\delta\alpha C}{1-\alpha}<\infty.
\end{align*}
Since we will prove that $\Psi_{s_n}(u_n)\to\infty$ as $n\to\infty$, the asymptotic behaviour of $\Psi_{s_n}(u_n)$ will be driven by the asymptotic behaviour of the integral on $(0,\kappa_n)$:
\begin{equation}
\label{eq:J_tn}
J_n^0 \coloneqq \int_0^1 (1-e^{-u_n\kappa_nt})e^{-\lambda \kappa_nt} \p\big(0<Q_{\kappa_n t} \leq s_nG(\kappa_nt)\big)\frac{\mathrm{d} t}{t}.
\end{equation}
We will show that, asymptotically as $n \to \infty$, we may replace the probability in the integrand with the probability $\p(0<Z<\delta t^{1-1/\alpha})$ in terms of the limiting $\alpha$-stable random variable $Z$. Since $Z$ has a bounded density (see, e.g.~\cite[Ch.~4]{MR1745764}), the weak convergence $Q_t \cid Z$ as $t \da 0$ implies that the distribution functions converge in Kolmogorov distance by~\cite[1.8.31--32, p.~43]{MR1353441}. Thus, since $\kappa_n\to0$ as $n\to\infty$, there exists some $N_\delta\in\N$ such that
\[
\sup_{n\ge N_\delta}\sup_{t\in(0,\kappa_n]}\sup_{x\in\R} |\p(0<Q_t\le x)-\p(0<Z\le x)|<\delta,
\]
where $\delta\in(0,\rho/3)$ is as before, arbitrary but fixed.
In particular, the following inequality holds: $\sup_{n\ge N_\delta}\sup_{t\in(0,\kappa_n]} |\p(0<Q_t\le s_n G(t))-\p(0<Z\le s_nG(t))|<\delta$. For any $N\ge N_\delta$ the triangle inequality yields
\begin{align*}
B_{\delta,N} &\coloneqq\sup_{n\ge N}\sup_{t\in(0,1]} |\p(0<Z<\delta t^{1-1/\alpha}) - \p(0<Q_{t\kappa_n}\le s_n G(t\kappa_n))|\\ &\le \delta + \sup_{n\ge N}\sup_{t\in(0,1]} |\p(0<Z<\delta t^{1-1/\alpha}) - \p(0<Z\le s_n G(t\kappa_n))|\\ &\le \delta + \sup_{n\ge N}\sup_{t\in(0,1]} \p(m_{t,n}<Z<M_{t,n}),
\end{align*}
where $m_{t,n}\coloneqq\min\{s_n G(t\kappa_n),\delta t^{1-1/\alpha}\}$ and $M_{t,n}\coloneqq\max\{s_n G(t\kappa_n),\delta t^{1-1/\alpha}\}$. We aim to show that $B_{\delta,N_\delta'}<2\delta$ for some $N_\delta'\in\N$. By~\cite[Ch.~4]{MR1745764}, there exists $K>0$ such that the stable density of $Z$ is bounded by the function $x\mapsto Kx^{-\alpha-1}$ for all $x>0$. Thus, since $M_{t,n}-m_{t,n}=|\delta t^{1-1/\alpha} - s_nG(t\kappa_n)|$, we have
\begin{equation}
\label{eq:min_max_Z}
\begin{split}
\p(m_{t,n}<Z<M_{t,n}) &\le Km_{t,n}^{-\alpha-1} |\delta t^{1-1/\alpha} - s_nG(t\kappa_n)|\\ &\le K((\delta t^{1-1/\alpha})^{-\alpha-1} + (s_nG(t\kappa_n))^{-\alpha-1}) |\delta t^{1-1/\alpha} - s_nG(t\kappa_n)|.
\end{split}
\end{equation}
To show that this converges uniformly in $t\in(0,1]$, we consider both summands. First, we have
\[
(\delta t^{1-1/\alpha})^{-\alpha-1} |\delta t^{1-1/\alpha} - s_nG(t\kappa_n)|\\ =\delta^{-\alpha} \bigg| t^{1-\alpha} - \frac{(t\kappa_n)^{(1-\alpha^2)/\alpha}G(t\kappa_n)} {\kappa_n^{(1-\alpha^2)/\alpha}G(\kappa_n)}\bigg|,
\]
which tends to $0$ as $n\to\infty$ uniformly in $t\in(0,1]$ by~\cite[Thm~1.5.2]{MR1015093} since $t\mapsto t^{(1-\alpha^2)/\alpha}G(t)$ is regularly varying at $0$ with index $1-\alpha>0$ (recall that $g$ is regularly varying at $0$ with index $1/\alpha$ and $G(t)=t/g(t)$). Similarly, since $s_n=\delta /G(\kappa_n)$, we have
\[
(s_nG(t\kappa_n))^{-\alpha-1} |\delta t^{1-1/\alpha} - s_nG(t\kappa_n)|\\ =\delta^{-\alpha} \bigg| \frac{(t\kappa_n)^{1-1/\alpha} G(t\kappa_n)^{-\alpha-1}}{\kappa_n^{1-1/\alpha}G(\kappa_n)^{-\alpha-1}} - \frac{G(t\kappa_n)^{-\alpha}}{G(\kappa_n)^{-\alpha}}\bigg|.
\]
Since both terms in the last line converge to $t^{1-\alpha}$ as $n\to\infty$ uniformly in $t\in(0,1]$ by~\cite[Thm~1.5.2]{MR1015093}, the difference tends to $0$ uniformly too. Hence, the right side of~\eqref{eq:min_max_Z} converges to $0$ as $n\to\infty$ uniformly in $t\in(0,1]$. Thus, for a sufficiently large $N'_\delta$, we have
\begin{equation}
\label{eq:B_delta}
\sup_{n\ge N'_\delta}\sup_{t\in(0,1]} |\p(0<Z<\delta t^{1-1/\alpha}) - \p(0<Q_{t\kappa_n}\le s_n G(t\kappa_n))| =B_{\delta,N'_\delta}<2\delta.
\end{equation}
We now analyse a lower bound on the integral $J_n^0$ in~\eqref{eq:J_tn}. By~\eqref{eq:B_delta}, for all $n \ge N_\delta'$, we have
\[
J_n^0 \ge \int_0^{1} (1-e^{-u_n\kappa_nt})e^{-\lambda\kappa_nt} \big(\p\big(0<Z \le \delta t^{1-1/\alpha}\big)-2\delta\big)\frac{\mathrm{d} t}{t}.
\]
Recall that $\kappa_n=G^{-1}(\delta/s_n)$, define $\xi_n\coloneqq G^{-1}(1/s_n)$ and note from the regular variation of $G^{-1}$ that $\kappa_n/\xi_n\to \delta^{\alpha/(\alpha-1)}$ as $n\to\infty$, implying $\log(u_n\kappa_n)\sim\log(u_n\xi_n)$ as $n \to \infty$ since $u_n\xi_n\to\infty$.
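To spell out the last step: since $G$ is regularly varying at zero with index $(\alpha-1)/\alpha<0$, its inverse $G^{-1}$ is regularly varying at infinity with index $\alpha/(\alpha-1)$ (a standard consequence of the theory of regular variation); hence $\kappa_n/\xi_n=G^{-1}(\delta/s_n)/G^{-1}(1/s_n)\to\delta^{\alpha/(\alpha-1)}\in(0,\infty)$ and, since $u_n\xi_n\to\infty$, also $\log(u_n\kappa_n)=\log(u_n\xi_n)+\log(\kappa_n/\xi_n)\sim\log(u_n\xi_n)$ as $n\to\infty$.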
We split the integral from the display above at $\log(u_n\kappa_n)^{-1}$ and note that \begin{multline*} \int_{\log(u_n\kappa_n)^{-1}}^1 (1-e^{-u_n\kappa_nt})e^{-\lambda\kappa_nt} \big(\p\big(0<Z \le \delta t^{1-1/\alpha}\big)+2\delta\big)\frac{\mathrm{d} t}{t}\\ \le \big(1+2\delta\big)\int_{\log(u_n\kappa_n)^{-1}}^1 \frac{\mathrm{d} t}{t} =\big(1+2\delta\big)\log (\log(u_n\kappa_n)) \sim\big(1+2\delta\big)\log (\log(u_n\xi_n)), \quad\text{as }n\to\infty. \end{multline*} For the integral over $(0,\log(u_n\kappa_n)^{-1})$, first note that, for all sufficiently large $n\in\N$, we have \[ \p(0<Z \le \delta t^{1-1/\alpha}) \ge \p(0<Z \le \delta \log(u_n\kappa_n)^{1/\alpha-1}) >\rho-\delta, \qquad t\in(0,\log(u_n\kappa_n)^{-1}), \] since $u_n\kappa_n\to\infty$. Thus, we have \begin{multline*} \int_0^{\log(u_n\kappa_n)^{-1}} (1-e^{-u_n\kappa_nt})e^{-\lambda\kappa_nt} \big(\p\big(0<Z \le \delta t^{1-1/\alpha}\big)-2\delta\big)\frac{\mathrm{d} t}{t} \\ \ge \big(\rho-3\delta\big)e^{-\lambda\kappa_n/\log(u_n\kappa_n)}\int_0^{\log(u_n\kappa_n)^{-1}}(1-e^{-u_n\kappa_nt}) \frac{\mathrm{d} t}{t}\sim \big(\rho-3\delta\big)\log(u_n\xi_n), \quad \text{ as }n \to \infty, \end{multline*} where the asymptotic equivalence follows from the fact that $u_n\kappa_n/\log(u_n\kappa_n)\to\infty $ as $n\to\infty$ and $\int_0^1 (1-e^{-xt})t^{-1}\mathrm{d} t\sim \log x$ as $x\to\infty$. (In fact, we have $\int_0^1 (1-e^{-xt})t^{-1}\mathrm{d} t= \log x+\Gamma(0,x)+\gamma$ for $x>0$ where $\Gamma(0,x)=\int_x^\infty t^{-1}e^{-t}\mathrm{d} t$ is the upper incomplete gamma function and $\gamma$ is the Euler--Mascheroni constant.) This shows that $\liminf_{n\to\infty}J_n^0/\log(u_n\xi_n)\ge \rho-3\delta>0$ since $\delta\in(0,\rho/3)$. Similarly,~\eqref{eq:B_delta} implies that for all $n\ge N'_\delta$, we have \begin{align*} J_n^0 &\le \int_0^{1} (1-e^{-u_n\kappa_nt})e^{-\lambda\kappa_nt} \big(\p\big(0<Z \le \delta t^{1-1/\alpha}\big)+2\delta\big) \frac{\mathrm{d} t}{t}\\ &\le (\rho+2\delta)\int_0^{1} (1-e^{-u_n\kappa_nt}) \frac{\mathrm{d} t}{t} \sim (\rho+2\delta)\log(u_n\xi_n), \quad\text{as }n\to\infty, \end{align*} implying $\limsup_{n\to\infty}J_n^0/\log(u_n\xi_n)\le \rho+2\delta$. Altogether, we deduce that \[ \rho-3\delta \le \liminf_{n\to\infty}\Psi_{s_n}(u_n)/\log(u_n\xi_n) \le\limsup_{n\to\infty}\Psi_{s_n}(u_n)/\log(u_n\xi_n) \le \rho+2\delta. \] Since $\delta\in(0,\rho/3)$ is arbitrary and the sequence $\Psi_{s_n}(u_n)/\log(u_n\xi_n)$ does not depend on $\delta$, we may take $\delta\da 0$ to obtain Part (i). Part (ii). We will bound each of the terms in $\Psi_{s_n}(u_n)=J_n^1+J_n^2+J_n^3$, where $\xi_n=G^{-1}(1/s_n)$ and \begin{gather*} J_n^1\coloneqq \int_0^{\xi_n} (1-e^{-u_nt})e^{-\lambda t} \p(0<Q_t\leq s_n G(t))\frac{\mathrm{d} t}{t}, \quad J_n^2\coloneqq \int_{\xi_n}^{1} (1-e^{-u_nt})e^{-\lambda t} \p(0<Q_t\leq s_n G(t))\frac{\mathrm{d} t}{t},\\ \text{and}\qquad J_n^3\coloneqq \int_{1}^\infty (1-e^{-u_nt})e^{-\lambda t} \p(0<X_t-\gamma_0t\leq s_n t)\frac{\mathrm{d} t}{t}. \end{gather*} Recall that our assumption in part (ii) states that $u_n\xi_n\to 0$ as $n\to\infty$. Using the elementary inequality $1-e^{-x}\le x$ for $x \ge 0$, we obtain $J_n^1=\Oh(u_n\xi_n)$ as $n \to \infty$. Next we bound $J_n^3$. Lemma~\ref{lem:generalized_Picard}(b) shows the existence of a uniform upper bound $\wt C>0$ on the densities of $X_t/\sqrt{t}$. 
Thus it holds that $\p(0<X_t-\gamma_0t\le s_n t)=\p(\gamma_0\sqrt{t}<X_t/\sqrt{t}\le (\gamma_0+s_n) \sqrt{t}) \le \wt C s_n\sqrt{t}$ and hence
\begin{equation*}
J_n^3 \le\wt C s_n\int_1^\infty t^{-1/2}e^{-\lambda t}\mathrm{d} t =\Oh(s_n), \quad \text{ as }n \to \infty.
\end{equation*}
It remains to bound $J_n^2$. Let $q\in(0,1]$ with $q<1/\alpha-1$ and $C>0$ be a uniform bound on the densities of $Q_t$ (whose existence is guaranteed by Lemma~\ref{lem:generalized_Picard}(a)). The elementary bound $1-e^{-x}\le x^q$ for $x\ge 0$ and $q\in (0,1]$ and~\cite[Thm~1.5.11(i)]{MR1015093} yield
\[
J_n^2 \le C u_n^qs_n \int_{\xi_n}^1t^q G(t) \frac{\mathrm{d} t}{t} \sim \frac{C}{1/\alpha-q-1} u_n^q s_n G(\xi_n) \xi_n^q = \Oh (u_n^q\xi_n^q ), \quad \text{ as }n \to \infty.\qedhere
\]
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:upper_fun_C'_post_min}]
Throughout this proof we let $\phi(u)\coloneqq \gamma u^{-1} (\log\log u)^r$, for some $\gamma>0$, $r \in \R$. Part (i). Since $p$ is arbitrary on $(1/\rho,\infty)$ and $f(t)=1/G(t\log^p(1/t))$, it suffices to show that $\limsup_{t\da0}(\wh C'_{t+\wh\tau_s}-s)/f(t)=\limsup_{t \downarrow 0} L_t/f(t)<\infty$ a.s. (Recall that $L_t=\wh C'_{t+\wh\tau_s}-s$ and $\Psi_u(w)=-\log\E[e^{-wY_u}]$ for all $u,w\ge 0$.) By Theorem~\ref{thm:limsup_L}(a), it suffices to find a positive sequence $(\theta_n)_{n \in \N}$ with $\lim_{n\to \infty}\theta_n=\infty$ such that $\sum_{n=1}^\infty \exp(\theta_n t_n-\Psi_{f(t_n)}(\theta_n))<\infty$ and $\limsup_{n \to \infty} f(t_n)/f(t_{n+1})<\infty$ where $t_n\coloneqq \phi(\theta_n)$. Let $\theta_n\coloneqq e^n$ and $r=0$. Note that the regular variation of $f$ at $0$ yields $\limsup_{n \to \infty}f(t_n)/f(t_{n+1})=\lim_{n \to \infty}f(t_n)/f(t_{n+1})= e^{1/\alpha-1}$. Thus, it suffices to prove that the series above is finite. Since $t_n=\phi(\theta_n)$, it follows that $t_n\theta_n=\gamma$. Note from the definitions of $f$ and $\phi$ that
\begin{equation}
\label{eq:stable_f_asymp4_pm}
uG^{-1}(f(\phi(u))^{-1}) =u \phi(u)(\log (\phi(u)^{-1}))^{p} \sim\gamma (\log u)^{p}(\log\log u)^{r}\to\infty, \quad \text{ as }u \to \infty.
\end{equation}
By Lemma~\ref{lem:asymp_equiv_Psi_domstable_post_min}(i) we have $\Psi_{f(t_n)}(\theta_n)\sim \rho\log(\theta_n G^{-1}(f(t_n)^{-1}))$ as $n \to \infty$, since $\theta_n G^{-1}(f(t_n)^{-1}) \sim \gamma(\log\theta_n)^{p}\to \infty$ as $n \to \infty$ by~\eqref{eq:stable_f_asymp4_pm}. Fix some $\varepsilon>0$ with $(1-\varepsilon)\rho p>1$. Note that $\Psi_{f(t_n)}(\theta_n) \ge (1-\varepsilon)\rho p\log\log\theta_n$ for all sufficiently large $n$. It suffices to show that the following sum is finite:
\[
\sum_{n=1}^\infty \exp\big(\gamma -(1-\varepsilon)\rho p\log\log\theta_n\big).
\]
Since $(1-\varepsilon)\rho p>1$, the sum in the display above is bounded by a multiple of $\sum_{n=1}^\infty n^{-(1-\varepsilon)\rho p}<\infty$.

Part (ii). As before, since $p$ is arbitrary in $(0,1/\rho)$, it suffices to show that $\limsup_{t \downarrow 0} L_t/f(t)\ge 1$ a.s. By Theorem~\ref{thm:limsup_L}(b), it suffices to find a positive sequence $(\theta_n)_{n \in \N}$ satisfying $\lim_{n\to \infty}\theta_n=\infty$, such that $\sum_{n=1}^\infty (\exp(-\Psi_{f(t_n)}(\theta_n))-\exp(-\theta_n t_n))=\infty$ and $\sum_{n=1}^\infty \Psi_{f(t_{n+1})}( \theta_n)<\infty$. Let $r=\gamma=1$, choose $\sigma>1$ and $\varepsilon>0$ to satisfy $\sigma(1+\varepsilon)\rho p<1$ and set $\theta_n\coloneqq e^{n^\sigma}$ for $n \in \N$. (Note that $u\phi(u)=\log\log u\to\infty$ as $u\to\infty$, so the hypothesis $\lim_{u\to\infty}\phi(u)u=\infty$ of Theorem~\ref{thm:limsup_L}(b) holds.) We start by showing that the second sum in the paragraph above is finite.
Since $\sigma>1$,~\eqref{eq:stable_f_asymp4_pm} yields
\begin{equation}
\label{eq:stable_f_asymp3_pm}
\theta_nG^{-1}(f(t_{n+1})^{-1})\sim \frac{\theta_n}{\theta_{n+1}}(\log\theta_{n+1})^{p}\log\log\theta_{n+1}\da 0, \quad \text{ as }n \to \infty.
\end{equation}
Hence, Lemma~\ref{lem:asymp_equiv_Psi_domstable_post_min}(ii) with $q \in (0,1]$ and $q<1/\alpha-1$ and~\eqref{eq:stable_f_asymp3_pm} imply
\begin{equation*}
\Psi_{f(t_{n+1})}(\theta_n) =\Oh\big([\theta_nG^{-1}(f(t_{n+1})^{-1})]^{q}+f(t_{n+1})\big), \quad\text{as }n\to\infty.
\end{equation*}
By~\eqref{eq:stable_f_asymp3_pm}, it is enough to show that
\begin{align*}
\sum_{n=1}^\infty \bigg(\frac{\theta_n}{\theta_{n+1}}(\log\theta_{n+1})^{p}\log\log\theta_{n+1}\bigg)^{q}<\infty, \qquad\text{and}\qquad \sum_{n=1}^\infty f(t_{n+1})<\infty.
\end{align*}
Newton's generalised binomial theorem implies that $\theta_n/\theta_{n+1}=\exp(n^\sigma-(n+1)^\sigma)\le \exp(-\sigma n^{\sigma-1}/2)$ for all sufficiently large $n$. Since $\log\theta_{n+1}\sim n^\sigma$, we conclude that the first series in the previous display is indeed finite. The second series is also finite since $f\circ \phi$ is regularly varying at infinity with index $(\alpha-1)/\alpha<0$ (recall that $t_{n+1}=\phi(\theta_{n+1})$). Next we prove that $\sum_{n=1}^\infty (\exp(-\Psi_{f(t_n)}(\theta_n))-\exp(-\theta_n t_n))=\infty$. Note that we have $\exp(-\theta_nt_n)=\exp(-\log \log\theta_n)=n^{-\sigma}$, which is summable. Applying Lemma~\ref{lem:asymp_equiv_Psi_domstable_post_min}(i) and~\eqref{eq:stable_f_asymp4_pm}, we see that $\Psi_{f(t_n)}(\theta_n)\sim \rho\log(\theta_n G^{-1}(f(t_n)^{-1}))$ as $n \to \infty$. As in Part~(i), it is easy to see that for every $\varepsilon>0$, the inequality $\Psi_{f(t_n)}(\theta_n) \le (1+\varepsilon)\rho p\log\log\theta_n$ holds for all sufficiently large $n$. Thus $\exp(-\Psi_{f(t_{n})}(\theta_n)) \ge n^{-\sigma(1+\varepsilon)\rho p}$ and, since $\sigma(1+\varepsilon)\rho p<1$, we deduce that $\sum_{n=1}^\infty \exp(-\Psi_{f(t_n)}(\theta_n))=\infty$, completing the proof.
\end{proof}
\subsection{Upper and lower functions at time \texorpdfstring{$0$}{0} - proofs}
\label{subsec:proofs_c'_at_0}
Fix any $\lambda>0$. Let $Y_s \coloneqq \wh\tau_{-1/s}$ for $s\in(0,\infty)$ and note that the mean jump measure of $Y=(Y_s)_{s>0}$ is given by
\begin{equation*}
\Pi(\mathrm{d} s, \mathrm{d} t) \coloneqq t^{-1}e^{-\lambda t}\p(-t/X_t\in \mathrm{d} s)\mathrm{d} t,
\end{equation*}
implying $\Pi((0,s], \mathrm{d} t) =t^{-1}e^{-\lambda t}\p(X_t\le -t/s)\mathrm{d} t$. Since $\wh C'$ is the right-inverse of $\wh\tau$, we have the identity $\wh C'_t=-1/L_t$ where $L_t\coloneqq\inf\{s>0\,:\, Y_s>t\}$. Thus, $\limsup_{t\da0}|\wh C'_t| f(t)$ equals $0$ (resp. $\infty$) if and only if $\liminf_{t\da 0}L_t/f(t)$ equals $\infty$ (resp. $0$). Corollary~\ref{cor:L_liminf} and Proposition~\ref{prop:Y_limsup} above are the main ingredients in the proof of Theorem~\ref{thm:C'_limsup}.
\begin{proof}[Proof of Theorem~\ref{thm:C'_limsup}]
Since the conditions in Theorem~\ref{thm:Y_limsup} only involve integrating the mean measure $\Pi$ of $Y$ near the origin, we may ignore the factor $e^{-\lambda t}$ in the definition of the mean measure $\Pi$ above. After substituting $\Pi(\mathrm{d} u, \mathrm{d} t) = t^{-1}\p(-t/X_t\in \mathrm{d} u)\mathrm{d} t$ in conditions \eqref{eq:Pi_large} and \eqref{eq:Pi_var_inv}--\eqref{eq:Pi_mean_inv}, we obtain the conditions in~\eqref{eq:C'_large}--\eqref{eq:C'_mean}.
Thus, Corollary~\ref{cor:L_liminf} and the identity $\wh C'_t=-1/L_t$ yield the claims in Theorem~\ref{thm:C'_limsup}.
\end{proof}
The following technical lemma establishes the asymptotic behaviour of the Laplace exponent $\Phi$ defined in~\eqref{eq:cf_tau}. This result plays an important role in the proof of Theorem~\ref{thm:lower_fun_C'}. We will assume that $X \in\mathcal{Z}_{\alpha,\rho}$. For simplicity, by virtue of~\cite[Eq.~(1.5.1)~\&~Thm~1.5.4]{MR1015093}, we assume without loss of generality that: $g(t)=\sqrt{t}$ for $t\ge 1$, $g$ is continuous and increasing on $(0,1]$ with $g(1)=1$, and the function $G(t)=t/g(t)$ is continuous and increasing on $(0,\infty)$. Hence, the inverse $G^{-1}$ of $G$ is also continuous and increasing.
\begin{lemma}
\label{lem:asymp_equiv_Phi_domstable}
Let $X\in\mathcal{Z}_{\alpha,\rho}$ for some $\alpha\in(1,2]$ and $\rho\in(0,1)$ and assume $\E[X_1^2]<\infty$ and $\E[X_1]=0$. The following statements hold for any sequences $(u_n)_{n \in \N} \subset (0,\infty)$ and $(s_n)_{n \in \N}\subset \R_-$ such that $u_n\to\infty$ and $s_n\to-\infty$ as $n \to \infty$:
\begin{itemize}[leftmargin=2.5em, nosep]
\item[\nf(i)] if $u_nG^{-1}(|s_n|^{-1})\to \infty$, then $\Phi_{s_n}(u_n) \sim (1-\rho)\log(u_nG^{-1}(|s_n|^{-1}))$,
\item[\nf(ii)] if $u_nG^{-1}(|s_n|^{-1})\da 0$, then $\Phi_{s_n}(u_n)=\Oh( [u_nG^{-1}(|s_n|^{-1})]^{(\alpha-1)/2}+|s_n|^{-2})$.
\end{itemize}
\end{lemma}
\begin{proof}
Part (i). Denote $Q_t\coloneqq X_t/g(t)$ and note that, for all $n \in \N$,
\begin{align*}
\Phi_{s_n}(u_n) &=\int_0^\infty (1-e^{-tu_n})e^{-\lambda t} \p\big(Q_t \leq s_nG(t) \big)\frac{\mathrm{d} t}{t}.
\end{align*}
For every $\delta>0$ let $\kappa_n\coloneqq G^{-1}(\delta/|s_n|)$ and note that $\kappa_n\da 0$ as $n\to\infty$. The integral in the previous display is split at $\kappa_n$ and we control the two resulting integrals. We start with the integral on $[\kappa_n,\infty)$. For any $q\in(0,\alpha)$ we claim that $K\coloneqq\sup_{t> 0}\E[|Q_t|^q]<\infty$. Indeed, since $\E[X_t^2]<\infty$, $t^{-1/2}g(t)Q_t$ converges weakly to a normal random variable as $t\to\infty$. Applying~\cite[Lem.~3.1]{bang2021asymptotic} gives $\sup_{t\ge 1}\E[|Q_t|^q]t^{-q/2}g(t)^q<\infty$, and hence $\sup_{t\ge 1}\E[|Q_t|^q]<\infty$ since $t^{-1/2}g(t)=1$ for $t\ge 1$. Similarly,~\cite[Lem.~4.8--4.9]{MR4161123} imply that $\sup_{t\le 1}\E[|Q_t|^q]<\infty$, and thus $K<\infty$. Markov's inequality then yields
\begin{equation}
\label{eq:upper_bound_stable_prob}
K\ge \sup_{n\in\N}\sup_{t\ge\kappa_n} |s_n|^{q}G(t)^{q} \p(Q_t\le s_nG(t)).
\end{equation}
Let $q'\coloneqq q(1-1/\alpha)>0$ and note that $G(t)^{-q}$ is regularly varying at $0$ with index $-q'$. By~\eqref{eq:upper_bound_stable_prob} we have $\p(Q_t\le s_nG(t))\le K|s_n|^{-q}G(t)^{-q}$ for all $t\ge \kappa_n$ and $n\in\N$. Hence, Karamata's theorem~\cite[Thm~1.5.11]{MR1015093} gives
\begin{align*}
\int_{\kappa_n}^\infty (1-e^{-u_nt})e^{-\lambda t} \p\big(Q_t \le s_nG(t)\big)\frac{\mathrm{d} t}{t} &\le K\int_{\kappa_n}^\infty |s_n|^{-q}e^{-\lambda t}G(t)^{-q} \frac{\mathrm{d} t}{t}\\ &\sim \frac{K}{q'}|s_n|^{-q} G(\kappa_n)^{-q} =\frac{K}{q'\delta^q} <\infty, \quad\text{as }n\to\infty.
\end{align*}
Thus, the integral $\int_{\kappa_n}^\infty (1-e^{-u_nt})e^{-\lambda t} \p\big(Q_t \le s_nG(t)\big)t^{-1}\mathrm{d} t$ is bounded as $n\to\infty$. It remains to establish the asymptotic growth of the corresponding integral on $(0,\kappa_n)$.
Since the limiting $\alpha$-stable random variable $Z$ has a bounded density (see, e.g.~\cite[Ch.~4]{MR1745764}), the weak convergence $Q_t \cid Z$ as $t \da 0$ extends to convergence in Kolmogorov distance by~\cite[1.8.31--32, p.~43]{MR1353441}. Thus, there exists some $N_\delta\in\N$ such that
\[
\sup_{n\ge N_\delta}\sup_{t\in[0,\kappa_n]} |\p(Q_t\le s_n G(t))-\p(Z\le s_nG(t))|<\delta.
\]
Since $G(\kappa_n)=\delta/|s_n|$ and $\p(Z\le 0)=1-\rho$, the triangle inequality yields
\[
B_\delta \coloneqq\sup_{n\ge N_\delta}\sup_{t\in[0,\kappa_n]} |1-\rho - \p(Q_t\le s_n G(t))| \le |1-\rho - \p(Z\le-\delta)|+\delta,
\]
which tends to $0$ as $\delta\da 0$. Define $\xi_n\coloneqq G^{-1}(1/|s_n|)$ and note from the regular variation of $G^{-1}$ that $\kappa_n/\xi_n\to \delta^{\alpha/(\alpha-1)}$ as $n\to\infty$, implying $\log(u_n\kappa_n)\sim\log(u_n\xi_n)$ as $n \to \infty$ since $u_n\xi_n\to\infty$. As in the proof of Lemma~\ref{lem:asymp_equiv_Psi_domstable_post_min} above, we have $\int_0^1 (1-e^{-xt})t^{-1}\mathrm{d} t \sim \log x$ as $x\to\infty$. Since $u_n\xi_n\to\infty$ and $\xi_n\da 0$ as $n\to\infty$, we have
\begin{align*}
\int_0^{\kappa_n} (1-e^{-u_nt})e^{-\lambda t} \p\big(Q_t \le s_nG(t)\big)\frac{\mathrm{d} t}{t} &\le (1-\rho+B_\delta) \int_0^{\kappa_n} (1-e^{-u_nt})e^{-\lambda t} \frac{\mathrm{d} t}{t}\\ &\sim(1-\rho+B_\delta)\log(u_n\xi_n), \quad \text{as }n \to \infty.
\end{align*}
This implies that $\limsup_{n\to\infty}\Phi_{s_n}(u_n)/\log(u_n\xi_n)\le 1-\rho+B_\delta$. A similar argument can be used to obtain $\liminf_{n\to\infty}\Phi_{s_n}(u_n)/\log(u_n\xi_n)\ge 1-\rho-B_\delta$. Since $\delta>0$ is arbitrary and $B_\delta\da 0$ as $\delta\da 0$, we deduce that $\Phi_{s_n}(u_n)\sim(1-\rho)\log(u_n\xi_n)$ as $n\to\infty$.

Part (ii). We will bound each of the terms in $\Phi_{s_n}(u_n)=J_n^1+J_n^2+J_n^3$, where $\xi_n=G^{-1}(1/|s_n|)$ and
\begin{gather*}
J_n^1\coloneqq \int_0^{\xi_n} (1-e^{-u_nt})e^{-\lambda t} \p(X_t\leq s_n t)\frac{\mathrm{d} t}{t}, \qquad J_n^2\coloneqq \int_{1}^\infty (1-e^{-u_nt})e^{-\lambda t} \p(X_t\leq s_n t)\frac{\mathrm{d} t}{t},\\ \text{and}\qquad J_n^3\coloneqq \int_{\xi_n}^{1} (1-e^{-u_nt})e^{-\lambda t} \p(Q_t\leq s_n G(t))\frac{\mathrm{d} t}{t}.
\end{gather*}
The elementary inequality $1-e^{-x}\le x$ for $x\ge 0$ implies that the integrand of $J_n^1$ is bounded by $u_n$. Hence, we have $J_n^1=\Oh(u_n\xi_n)=\Oh((u_n\xi_n)^{(\alpha-1)/2})$ as $n\to\infty$. To bound $J_n^2$, we use Markov's inequality as follows: since $\E[X_t^2] = \E[X_1^2]t$ for all $t>0$, we have $\p(X_t\le s_n t) \le \E[X_1^2]t/(|s_n|^2 t^2) =\E[X_1^2]|s_n|^{-2} t^{-1}$, for all $n\in\N$, $t>0$. Thus, we get
\begin{align*}
J_n^2 \le \frac{\E[X_1^2]}{|s_n|^2}\int_{1}^\infty \frac{\mathrm{d} t}{t^2} =\frac{\E[X_1^2]}{|s_n|^2}=\Oh(|s_n|^{-2}), \quad \text{ as }n \to \infty.
\end{align*}
It remains to bound $J_n^3$. Let $r\coloneqq(\alpha-1)/2$, pick any $q\in(\alpha/2,\alpha)$ and recall from Part~(i) that $K=\sup_{t> 0}\E[|Q_t|^q]<\infty$. Note that $q'=q(1-1/\alpha)>r$, so Karamata's theorem~\cite[Thm~1.5.11]{MR1015093}, the inequality in~\eqref{eq:upper_bound_stable_prob} and the elementary bound $1-e^{-x}\le x^r$ for $x\ge 0$ yield
\begin{align*}
J_n^3 \le Ku_n^r\int_{\xi_n}^{1} t^r|s_n|^{-q} G(t)^{-q}\frac{\mathrm{d} t}{t} \sim \frac{Ku_n^r}{q'-r}\xi_n^{r}|s_n|^{-q} G(\xi_n)^{-q} = \frac{K}{q'-r}(u_n\xi_n)^r, \quad \text{ as }n \to \infty.
\end{align*}
We conclude that $J_n^3=\Oh((u_n\xi_n)^r)$ as $n\to\infty$, completing the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:lower_fun_C'}]
Throughout this proof we let $\phi(u)\coloneqq \gamma u^{-1} (\log\log u)^r$, for some $\gamma>0$ and $r \in \R$. By Remark~\ref{rem:modify_nu} we may and do assume without loss of generality that $(X_t)_{t \geq 0}$ has a finite second moment and zero mean. Part (i). Since $p$ is arbitrary on $(1/(1-\rho),\infty)$, it suffices to show that $\liminf_{t\da 0}|\wh C'_t|f(t)>0$ a.s. where $f(t)=G(t\log^p(1/t))$. Since $\wh C'_t=-1/L_t$, this is equivalent to $\limsup_{t \downarrow 0} L_t/f(t)<\infty$ a.s. Recall that $\Psi_u(w)=-\log\E[e^{-wY_u}]=-\log\E[e^{-w\wh\tau_{-1/u}}]=\Phi_{-1/u}(w)$ for all $u>0$ and $w\ge 0$. By virtue of Theorem~\ref{thm:limsup_L}(a), it suffices to show that $\sum_{n=1}^\infty \exp(\theta_n t_n-\Psi_{f(t_n)}(\theta_n))<\infty$ and $\limsup_{n \to \infty} f(t_n)/f(t_{n+1})<\infty$ for $t_n\coloneqq \phi(\theta_n)$ and a positive sequence $(\theta_n)_{n \in \N}$ with $\lim_{n\to \infty}\theta_n=\infty$. Let $\theta_n\coloneqq e^n$ and $r=0$. Note that the regular variation of $f$ at $0$ yields $\limsup_{n \to \infty}f(t_n)/f(t_{n+1})=\lim_{n \to \infty}f(t_n)/f(t_{n+1})= e^{1-1/\alpha}$. Thus, it suffices to prove that the series is finite. Since $t_n=\phi(\theta_n)$, it follows that $t_n\theta_n=\gamma$. Note from the definitions of $f$ and $\phi$ that
\begin{equation}
\label{eq:stable_f_asymp4}
uG^{-1}(f(\phi(u))) =u \phi(u)(\log (\phi(u)^{-1}))^{p} \sim\gamma (\log u)^{p}(\log\log u)^{r}\to\infty, \quad \text{ as }u \to \infty.
\end{equation}
By Lemma~\ref{lem:asymp_equiv_Phi_domstable}(i) we have $\Psi_{f(t_n)}(\theta_n)=\Phi_{-1/f(t_n)}(\theta_n)\sim (1-\rho)\log(\theta_n G^{-1}(f(t_n)))$ as $n \to \infty$, since $\theta_n G^{-1}(f(t_n)) \sim \gamma(\log\theta_n)^{p}\to \infty$ as $n \to \infty$ by~\eqref{eq:stable_f_asymp4}. Fix some $\varepsilon>0$ with $(1-\varepsilon)(1-\rho)p>1$. Note that we have $\Psi_{f(t_n)}(\theta_n) \ge (1-\varepsilon)(1-\rho)p\log\log\theta_n$ for all sufficiently large $n$. It is enough to show that the following sum is finite:
\[
\sum_{n=1}^\infty \exp\big(\gamma -(1-\varepsilon)(1-\rho)p\log\log\theta_n\big).
\]
Since $(1-\varepsilon)(1-\rho)p>1$, this sum is bounded by a multiple of $\sum_{n=1}^\infty n^{-(1-\varepsilon)(1-\rho)p}<\infty$.

Part (ii). As before, since $p$ is arbitrary in $(0,1/(1-\rho))$, it suffices to show $\liminf_{t\da 0}|\wh C'_t|f(t)<\infty$ a.s. By Theorem~\ref{thm:limsup_L}(b), it suffices to show that there exists some $r>0$ and a positive sequence $(\theta_n)_{n \in \N}$ satisfying $\lim_{n\to \infty}\theta_n=\infty$, such that $\sum_{n=1}^\infty (\exp(-\Psi_{f(t_n)}(\theta_n))-\exp(-\theta_n t_n))=\infty$ and $\sum_{n=1}^\infty \Psi_{f(t_{n+1})}( \theta_n)<\infty$. Let $\gamma=r=1$, choose $\sigma>1$ and $\varepsilon>0$ satisfying $\sigma(1+\varepsilon)p(1-\rho)<1$ (recall $p(1-\rho)<1$) and set $\theta_n\coloneqq e^{n^\sigma}$. (Note that $u\phi(u)=\log\log u\to\infty$ as $u\to\infty$, so the hypothesis $\lim_{u\to\infty}\phi(u)u=\infty$ of Theorem~\ref{thm:limsup_L}(b) holds.) We start by showing that the second sum is finite. Since $\sigma>1$,~\eqref{eq:stable_f_asymp4} yields
\begin{equation}
\label{eq:stable_f_asymp3}
\theta_nG^{-1}(f(t_{n+1}))\sim \frac{\theta_n}{\theta_{n+1}}(\log\theta_{n+1})^{p}\log\log\theta_{n+1}\da 0, \quad \text{ as }n \to \infty.
\end{equation}
Hence, the identity $\wh C'_t=-1/L_t$, Lemma~\ref{lem:asymp_equiv_Phi_domstable}(ii) and~\eqref{eq:stable_f_asymp3} imply
\begin{align*}
\Psi_{f(t_{n+1})}(\theta_n) =\Phi_{-1/f(t_{n+1})}(\theta_n) &=\Oh\big([\theta_nG^{-1}(f(t_{n+1}))]^{(\alpha-1)/2}+f(t_{n+1})^2\big), \quad\text{as }n\to\infty.
\end{align*} By~\eqref{eq:stable_f_asymp3}, it is enough to show that \begin{align*} \sum_{n=1}^\infty \bigg(\frac{\theta_n}{\theta_{n+1}}(\log\theta_{n+1})^{p}\log\log\theta_{n+1}\bigg)^{(\alpha-1)/2}<\infty, \qquad\text{and}\qquad \sum_{n=1}^\infty f(t_{n+1})^2<\infty. \end{align*} Newton's generalised binomial theorem implies that $\theta_n/\theta_{n+1}=\exp(n^\sigma-(n+1)^\sigma)\le \exp(-\sigma n^{\sigma-1}/2)$ for all sufficiently large $n$. Since $\log\theta_{n+1}\sim n^\sigma$, we conclude that the first series in the previous display is indeed finite. The second series is also finite since $f\circ\phi$ is regularly varying at infinity with index $-(\alpha-1)/\alpha$ (recall that $t_{n+1}=\phi(\theta_{n+1})$). Next we prove that $\sum_{n=1}^\infty (\exp(-\Psi_{f(t_n)}(\theta_n))-\exp(-\theta_n t_n))=\infty$. First observe that the terms $\exp(-\theta_nt_n)=\exp(-\log \log\theta_n)=n^{-\sigma}$ are summable. Applying Lemma~\ref{lem:asymp_equiv_Phi_domstable}(i) and~\eqref{eq:stable_f_asymp4}, we obtain $\Psi_{f(t_n)}(\theta_n)\sim (1-\rho)\log(\theta_n G^{-1}(f(t_n)))$ as $n \to \infty$. As in Part~(i), for all sufficiently large~$n$ we have $\Psi_{f(t_n)}(\theta_n) \le (1+\varepsilon)p(1-\rho) \log\log\theta_n$. Thus $\exp(-\Psi_{f(t_{n})}(\theta_n)) \ge n^{-\sigma(1+\varepsilon)p(1-\rho)}$ and, since $\sigma(1+\varepsilon)p(1-\rho)<1$, we deduce that $\sum_{n=1}^\infty \exp(-\Psi_{f(t_n)}(\theta_n))=\infty$, completing the proof. \end{proof} \subsection{Proofs of Subsection~\ref{subsec:applications}} In this subsection we prove the results stated in Subsection~\ref{subsec:applications}. \begin{proof}[Proofs of Lemmas~\ref{lem:upper_fun_Lev_path_post_slope} and~\ref{lem:upper_fun_Lev_path}] We first prove Lemma~\ref{lem:upper_fun_Lev_path_post_slope}.
Let $s\in\mathcal{L}^+(\mathcal{S})$, let the function $f:[0,\infty) \to [0,\infty)$ be continuous and increasing with $f(0)=0$, and define the function $\tilde f(t)\coloneqq\int_0^t f(u)\mathrm{d} u$, $t\ge 0$. Note that $m_s= X_{\tau_s}\wedge X_{\tau_s-}$ equals $C_{\tau_s}$ since $\tau_s$ is a contact point between $t\mapsto X_t\wedge X_{t-}$ and its convex minorant $C$. Part (i). By assumption, for any $M>0$ there exists $\delta>0$ such that $C'_{t+\tau_s}-s \ge Mf(t)$ for $t\in (0,\delta)$. Since $\int_0^t (C'_{u+\tau_s}-s)\mathrm{d} u=C_{t+\tau_s}-m_s-st$, it follows that $C_{t+\tau_s}-m_s-st\ge M \tilde f(t)$ for all $t \in [0,\delta)$. Note that the path of $X$ stays above its convex minorant, implying $C_{t+\tau_s}-m_s-st\le X_{t+\tau_s}-m_s-st$. Thus, $X_{t+\tau_s}-m_s-st\ge M \tilde f(t)$ for all $t \in [0,\delta)$, implying that $\liminf_{t \da 0}(X_{t+\tau_s}-m_s-st)/\tilde f(t)\ge M$. Part (ii). Assume that $\tilde f$ is convex on a neighborhood of $0$, and that $\limsup_{t\da 0} (C'_{t+\tau_s}-s)/f(t)=0$. Then, for all $M>0$ there exists some $\delta>0$ such that $C'_{t+\tau_s}-s\le M f(t)$ for all $t\in [0,\delta)$. Integrating this inequality gives $C_{t+\tau_s}-m_s-st\le M \tilde f(t)$ for all $t \in [0,\delta)$. Since $s\in\mathcal{L}^+(\mathcal{S})$, there exists a decreasing sequence of slopes $s_n\da s$ such that $t_n=\tau_{s_n}-\tau_s\da 0$ and $X_{t_n+\tau_s}\wedge X_{t_n+\tau_s-}=C_{t_n+\tau_s}$ for all $n\in\N$. Thus, either $X_{t_n+\tau_s}-m_s-st_n \le M\tilde f(t_n)$ i.o. or $X_{t_n+\tau_s-}-m_s-st_n \le M\tilde f(t_n)$ i.o. Since $\tilde f$ is continuous, we deduce that $\liminf_{t \da 0}(X_{t+\tau_s}-m_s-st)/\tilde f(t)\le M$. The proof of Lemma~\ref{lem:upper_fun_Lev_path} follows along similar lines with $\tilde f(t)=\int_0^t f(u)^{-1}\mathrm{d} u$, $t>0$, the slope $s=-\infty$ and $m_{-\infty}=X_0=0$. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:post-tau_s-Levy-path-attraction}] Part (i) follows from Theorem~\ref{thm:upper_fun_C'_post_min} and Lemma~\ref{lem:upper_fun_Lev_path_post_slope}(ii). Part (ii). Assume $\alpha\in(1/2,1)$. By Theorem~\ref{thm:post-min-lower} and Lemma~\ref{lem:upper_fun_Lev_path_post_slope}(i) it suffices to prove that \eqref{eq:post-min-Pi-large}--\eqref{eq:post-min-Pi-mean} hold for $c=1$. As described in Subsection~\ref{subsec:simp_suff_cond_tau_s}, condition~\eqref{eq:simple_suff_cond_post_min_density} implies~\eqref{eq:post-min-Pi-var}--\eqref{eq:post-min-Pi-mean}. By Lemma~\ref{lem:generalized_Picard}, the density of $(X_t-st)/g(t)$ is uniformly bounded in $t>0$. Hence, the following condition implies~\eqref{eq:simple_suff_cond_post_min_density}: \begin{equation} \label{eq:cor_RV_suff1} \int_0^1\int_{f(t/2)}^1 \frac{1}{f^{-1}(x)} \mathrm{d} x \frac{t}{g(t)}\mathrm{d} t<\infty. \end{equation} Similarly,~\eqref{eq:post-min-Pi-large} holds with $c=1$ if $\int_0^1 (f(t)/g(t))\mathrm{d} t<\infty$. Thus, it remains to show that~\eqref{eq:cor_RV_suff1} holds and $\int_0^1 (f(t)/g(t))\mathrm{d} t<\infty$. We first establish~\eqref{eq:cor_RV_suff1}. Let $a=\alpha/(1-\alpha)$ and note that $f(t)\coloneqq 1/G(t(\log t^{-1})^p)=t^{1/a}\wt\varpi(t)$ where the slowly varying function $\wt\varpi$ is given by $\wt\varpi(t)=\log^{p/a}(1/t)\varpi(t\log^{p}(1/t))$. Thus, by~\cite[Thm~1.5.12]{MR1015093}, the inverse $f^{-1}$ of $f$ admits the representation $f^{-1}(t)=t^a \wh\varpi(t)$ for some slowly varying function $\wh\varpi(t)$.
This slowly varying function satisfies \begin{equation} \label{eq:hat_varpi} t=f^{-1}(f(t)) =f(t)^a\wh\varpi(f(t)) \implies \wh\varpi(f(t))\sim t/f(t)^a\sim 1/\wt\varpi(t)^a, \qquad \text{ as } t \da 0. \end{equation} Since $a>1$, the function $1/f^{-1}$ is not integrable at $0$. Thus, by Karamata's theorem~\cite[Thm~1.5.11]{MR1015093} and~\eqref{eq:hat_varpi}, the inner integral in~\eqref{eq:cor_RV_suff1} satisfies \[ \int_{f(t/2)}^1\frac{1}{f^{-1}(x)}\mathrm{d} x \sim \frac{1}{a-1}f(t/2)^{1-a}\wh\varpi(f(t))^{-1} \sim \frac{2^{(a-1)/a}}{a-1}f(t)^{1-a}\wt\varpi(t)^a, \qquad\text{as }t\da 0. \] Since $t/g(t)=t^{-1/a}/\varpi(t)$ for $t>0$, condition~\eqref{eq:cor_RV_suff1} holds if and only if the following integral is finite \begin{equation*} \int_0^1 f(t)^{1-a} \frac{\wt\varpi(t)^a}{\varpi(t)} t^{-1/a}\mathrm{d} t =\int_0^1 \log^{p/a}(1/t)\frac{\varpi(t\log^{p}(1/t))}{\varpi(t)}\frac{\mathrm{d} t}{t}. \end{equation*} The integrand is asymptotically equivalent to $\log^{p/a}(1/t)$ since $\varpi(t\log^{p}(1/t))/\varpi(t) \to 1$ as $t \da 0$ uniformly on $[0,1]$ by~\cite[Thm~2.3.1]{MR1015093} and our assumption on $\varpi$. Thus, the condition $p<-a$ makes the integral in the last display finite, proving condition~\eqref{eq:cor_RV_suff1}. To prove that $\int_0^1 (f(t)/g(t))\mathrm{d} t<\infty$, take any $\delta>0$ with $p(1/a-\delta)<-1$ (recall $p/a<-1$ by assumption) and apply Potter's bound \cite[Thm~1.5.6(iii)]{MR1015093} with $\delta$ to obtain, for some constant $K>0$, \[ \int_0^1 \frac{f(t)}{g(t)}\mathrm{d} t=\int_0^1 \frac{g(t\log^p(1/t))}{g(t)\log^p(1/t)}\frac{\mathrm{d} t}{t} \le K\int_0^1 \log^{p(1/a-\delta)}(1/t)\frac{\mathrm{d} t}{t}<\infty. \] Part (iii). The result follows from Corollary~\ref{cor:power_func_liminf_post_min} and Lemma~\ref{lem:upper_fun_Lev_path_post_slope}(i). \end{proof} \section{Concluding remarks} \label{sec:concluding_rem} The points on the boundary of the convex hull of a L\'evy path where the slope increases continuously were characterised (in terms of the law of the process) in our recent paper~\cite{SmoothCM}. In this paper we address the question of the rate of increase for the derivative of the boundary at these points in terms of lower and upper functions, both when the tangent has finite slope and when it is vertical (i.e. of infinite slope). Our results cover a large class of L\'evy processes, presenting a comprehensive picture of this behaviour. Our aim was not to provide the best possible result in each case and indeed many extensions and refinements are possible. Below we list a few that arose while discussing our results in Section~\ref{sec:small-time-derivative} as well as other natural questions. \begin{itemize}[leftmargin=2em, nosep] \item Find an explicit description of the lower (resp. upper) fluctuations in the finite (resp. infinite) slope regime for L\'evy processes in the domain of attraction of an $\alpha$-stable process in terms of the normalising function (cf. Corollaries~\ref{cor:power_func_liminf_post_min} and~\ref{cor:stable_limsup}). In the finite slope regime, this appears to require a refinement of~\cite[Thm~4.3]{picard_1997} for processes in this class. \item In Theorems~\ref{thm:upper_fun_C'_post_min} and~\ref{thm:lower_fun_C'} we find the correct power of the logarithmic factor, in terms of the positivity parameter $\rho$, in the definition of the function $f$ for processes in the domain of attraction of an $\alpha$-stable process.
It is natural to ask which powers of the iterated logarithm arise and how the boundary value is linked to the characteristics of the L\'evy process. This question might be tractable for $\alpha$-stable processes since power series and other formulae exist for their transition densities~\cite[Sec.~4]{MR1745764}, allowing higher order control of the Laplace transform $\Phi$ in Lemmas~\ref{lem:asymp_equiv_Psi_domstable_post_min} and~\ref{lem:asymp_equiv_Phi_domstable}. \item Find the analogue of Theorems~\ref{thm:upper_fun_C'_post_min} and~\ref{thm:lower_fun_C'} for processes attracted to the Cauchy process (see Remarks~\ref{rem:exclusions-tau}(a) and~\ref{rem:exclusions-0}(b) for details). \item Find L\'evy processes for which there exists a deterministic function $f$ such that any of the following limits is positive and finite: $\limsup_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$, $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$, $\limsup_{t\da 0}|C'_{t}|f(t)$ or $\liminf_{t\da 0}|C'_{t}|f(t)$. By Corollaries~\ref{cor:power_func_liminf_post_min} and~\ref{cor:stable_limsup}, such a function does not exist for the limits $\liminf_{t\da 0}(C'_{t+\tau_s}-s)/f(t)$ or $\limsup_{t\da 0}|C'_{t}|f(t)$ within the class of regularly varying functions and $\alpha$-stable processes with jumps of both signs. \end{itemize} \printbibliography \section*{Acknowledgements} \thanks{ \noindent JGC and AM are supported by EPSRC grant EP/V009478/1 and The Alan Turing Institute under the EPSRC grant EP/N510129/1; AM was supported by the Turing Fellowship funded by the Programme on Data-Centric Engineering of Lloyd's Register Foundation; DB is funded by the CDT in Mathematics and Statistics at The University of Warwick. All three authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme for Fractional Differential Equations where work on this paper was undertaken. This work was supported by EPSRC grant no EP/R014604/1. }
\section{Introduction} In a high-dimensional regression setting, we are faced with many potential explanatory variables (features), often with most of these features having zero or little true effect on the response. Model selection methods can be applied to find a small submodel containing the most relevant features, for instance, via sparse model fitting methods such as the lasso \cite{tibshirani1996regression}, or in a setting where the sparsity respects a grouping of the features, the group lasso \cite{yuan2006model}. In practice, however, we may not be able to determine whether the selected set of features (or set of groups of features) contains many false positives. For the (non-grouped) sparse setting, the knockoff filter \cite{barber2015} creates ``knockoff copies'' of each variable to act as a control group, detecting whether the lasso (or another model selection method) is successfully controlling the false discovery rate (FDR), and tuning this method to find a model as large as possible while bounding FDR. In this work, we will extend the knockoff filter to the group sparse setting, and will find that by considering features, and constructing knockoff copies, at the group-wise level, we are able to improve the power of this method at detecting true signals. Our method can also extend to the multitask regression setting \cite{obozinski2006multi}, where multiple responses exhibit a shared sparsity pattern when regressed on a common set of features. As for the knockoff method, our work applies to the setting where $n\geq p$. \section{Background} We begin by giving background on several models and methods underlying our work. \subsection{Group sparse linear regression} We consider a linear regression model, $y=X\beta+z$, where $y\in \mathbb{R}^{n}$ is a vector of responses and $X\in\mathbb{R}^{n\times p}$ is a known design matrix. In a grouped setting, the $p$ features are partitioned into $m$ groups of variables, $G_1,\dots,G_m\subseteq \{1,\dots,p\}$, with group sizes $p_1,\cdots,p_m$. The noise distribution is assumed to be $z\sim\mathcal{N}(0,\sigma^2\mathbf{I}_n)$. We assume sparsity structure in that only a small portion of the $\beta_{G_i}$'s are nonzero, where $\beta_{G_i}\in\mathbb{R}^{p_i}$ is the subvector of $\beta$ corresponding to the $i$th group of features. When not taking groups into consideration, a commonly used method to find a sparse vector of coefficients $\beta$ is the lasso \cite{tibshirani1996regression}, an $\ell_1$-penalized linear regression, which minimizes the following objective function: \begin{equation}\label{eqn:lasso}\widehat{\beta} (\lambda) = \argmin_{\beta} \left\{\norm{y - X \beta}_2^2 +\lambda \norm{\beta}_1\right\}\;.\end{equation} To utilize the feature grouping, so that an entire group of features is selected simultaneously, \citet{yuan2006model} proposed the following group lasso penalty: \begin{equation}\label{eqn:glasso}\widehat{\beta} (\lambda) = \argmin_{\beta} \left\{\norm{y - X \beta}_2^2 +\lambda\grpnorm{\beta}\right\}\;,\end{equation} where $\grpnorm{\beta}=\sum_{i=1}^m \norm{\beta_{G_i}}_2$. This penalty promotes sparsity at the group level; for large $\lambda$, few groups will be selected (i.e.~$\beta_{G_i}$ will be zero for many groups), but within any selected group, the coefficients will be dense (all nonzero). The $\ell_2$ norm penalty on $\beta_{G_i}$ may sometimes be rescaled relative to the size of the group.
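To make the group penalty concrete, the following is a minimal proximal-gradient (ISTA-style) sketch of the group lasso~\eqnref{glasso} in Python/NumPy. It is an illustration only, not the solver used in our experiments (we use the \texttt{grpreg} package; see the experiments section), and the step size, iteration count, and group encoding are illustrative choices.
\begin{verbatim}
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """Minimal ISTA sketch of the group lasso; `groups` is a list of
    index arrays G_1, ..., G_m partitioning {0, ..., p-1}."""
    n, p = X.shape
    step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)    # 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        b = beta + step * 2 * X.T @ (y - X @ beta)  # gradient step on the loss
        for G in groups:  # block soft-thresholding: prox of the group penalty
            nrm = np.linalg.norm(b[G])
            b[G] *= max(0.0, 1.0 - step * lam / nrm) if nrm > 0 else 0.0
        beta = b
    return beta
\end{verbatim}
Each iteration takes a gradient step on the squared loss and then applies the proximal operator of the penalty, which shrinks each group's coefficient vector toward zero and sets entire groups to zero at once.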
\subsection{Multitask learning} In a multitask learning problem with a linear regression model, we consider the model \begin{equation}\label{eqn:multitask_model}Y = X B + E\end{equation} where the response $Y\in\mathbb{R}^{n\times r}$ contains $r$ many response variables measured for $n$ individuals, $X\in\mathbb{R}^{n\times p}$ is the design matrix, $B\in\mathbb{R}^{p\times r}$ is the coefficient matrix, and $E\in\mathbb{R}^{n\times r}$ is the error matrix, for which we assume a Gaussian model: its rows $e_i$, for $i=1,\dots,n$, are \iid draws from a zero-mean Gaussian, $e_i \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(0,\Sigma)$, with unknown covariance structure $\Sigma\in\mathbb{R}^{r\times r}$. If the number of features $p$ is large, we may believe that only a few of the features are relevant; in that case, most rows of $B$ will be zero---that is, $B$ is row-sparse. In a low-dimensional setting, we may consider the multivariate normal model, with likelihood determined by both the coefficient matrix $B$ and the covariance matrix $\Sigma$. In a high-dimensional setting, combining this likelihood with a sparsity-promoting penalty may be computationally challenging, and so a common approach is to ignore the covariance structure of the noise and to simply use a least-squares loss together with a penalty, \begin{equation}\label{eqn:multitask_lasso}\widehat{B} = \argmin_B \left\{\frac{1}{2}\fronorm{Y - X B}^2 +\lambda \norm{B}_{\ell_1/\ell_2}\right\}\;,\end{equation} where $\fronorm{\cdot}$ is the Frobenius norm and where the $\ell_1/\ell_2$ norm in the penalty is given by $\norm{B}_{\ell_1/\ell_2} = \sum_i \sqrt{\sum_j B_{ij}^2}$. This penalty promotes row-wise sparsity of $B$: for large $\lambda$, $\widehat{B}$ will have many zero rows; however, the nonzero rows will themselves be dense (no entry-wise sparsity). It is common to reformulate this $\ell_1/\ell_2$-penalized multitask linear regression as a group lasso problem. First, we reorganize the terms in our model. We form a vector response $y\in\mathbb{R}^{nr}$ by stacking the columns of $Y$: \[y=\textnormal{vec}(Y) = (Y_{11}, \dots, Y_{n1}, \dots, Y_{1r},\dots, Y_{nr})^\top\in\mathbb{R}^{nr},\] and a new larger design matrix by repeating $X$ in blocks: \[\mathbb{X} = \mathbf{I}_r\otimes X = \left(\begin{array}{cccc} X & 0 & \dots & 0 \\ 0 & X & \dots & 0 \\ && \dots & \\ 0 & 0 & \dots & X\end{array}\right) \in\mathbb{R}^{nr \times pr}.\] (Here $\otimes$ is the Kronecker product.) Define the coefficient vector $\beta = \textnormal{vec}(B)\in\mathbb{R}^{pr}$ and noise vector $\epsilon = \textnormal{vec}(E)\in\mathbb{R}^{nr}$. Then the multitask model~\eqnref{multitask_model} can be rewritten as \begin{equation}\label{eqn:multitask_vector_model}y = \mathbb{X}\beta + \epsilon,\end{equation} where $\epsilon$ follows a Gaussian model, $\epsilon\sim\mathcal{N}(0,\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma)$, for \[{\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma} = \Sigma\otimes\mathbf{I}_n = \left(\begin{array}{ccc}\Sigma_{11}\mathbf{I}_n & \dots & \Sigma_{1r}\mathbf{I}_n \\ \dots & \dots & \dots \\ \Sigma_{r1}\mathbf{I}_n & \dots & \Sigma_{rr}\mathbf{I}_n\end{array}\right).\] The group sparse structure of $\beta$ is determined by groups \[G_j = \{ j, j+p, \dots,j+ (r-1)p\}\] for $j=1,\dots,p$; this corresponds to the row sparsity of $B$ in the original formulation~\eqnref{multitask_model}.
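This reformulation is easy to verify numerically. The short check below (with arbitrary small dimensions, for illustration) confirms the identity $\textnormal{vec}(XB)=(\mathbf{I}_r\otimes X)\,\textnormal{vec}(B)$, noting that stacking columns corresponds to Fortran-order flattening in NumPy.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 6, 4, 3                       # illustrative sizes
X = rng.standard_normal((n, p))
B = rng.standard_normal((p, r))

vec = lambda M: M.flatten(order="F")    # stack the columns of M
XX = np.kron(np.eye(r), X)              # the repeated-block design matrix

assert np.allclose(XX @ vec(B), vec(X @ B))
\end{verbatim}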
Then, the multitask learning problem has been reformulated into a group-sparse regression problem---and so, the multitask lasso~\eqnref{multitask_lasso} can equivalently be solved by the group lasso optimization problem \begin{equation}\label{eqn:multitask_as_group_lasso}\widehat{\beta} = \argmin_{\beta}\left\{\frac{1}{2} \norm{ y - \mathbb{X}\beta}^2_2 + \lambda\grpnorm{\beta}\right\}\;.\end{equation} \subsection{The group false discovery rate} The original definition of the false discovery rate (FDR) is the expected proportion of incorrectly selected features among all selected features. When groups rather than individual features are of interest, we prefer to control the false discovery rate at the group level. Mathematically, we define the group false discovery rate ($\fdr_{\textnormal{group}}$) as \begin{equation} \fdr_{\textnormal{group}}=\EE{\frac{\#\{i:\beta_{G_i}=0,\,i\in \widehat{S}\}}{\#\{i:i\in \widehat{S}\} \vee 1}} \end{equation} the expected proportion of selected groups which are actually false discoveries. Here $\widehat{S}=\{i:\widehat{\beta}_{G_i}\neq 0\}$ is the set of all selected groups of features, while $a\vee b$ denotes $\max\{a,b\}$. \subsection{The knockoff filter for sparse linear regression} In the sparse (rather than group-sparse) setting, the lasso~\eqnref{lasso} provides an accurate estimate for the coefficients in a sparse linear model, but performing inference on the results, for testing the accuracy of these estimates or the set of features selected, remains a challenging problem. The knockoff filter \cite{barber2015} addresses this question, and provides a method controlling the false discovery rate (FDR) of the selected set at some desired level $q$ (e.g.~$q=0.2$). To run this method, there are two main steps: constructing knockoffs, and filtering the results. First, a set of $p$ knockoff features is constructed: each feature $X_j$, $j=1,\dots,p$, is given a knockoff copy $\widetilde{X}_j$, where the matrix of knockoffs $\widetilde{X} = [\widetilde{X}_1 \ \dots \ \widetilde{X}_p]$ satisfies, for some vector $s\geq 0$, \begin{equation}\label{eqn:knockoff_condition} \widetilde{X}^\top \widetilde{X} = X^\top X, \ \widetilde{X}^\top X = X^\top X - \diag\{s\}.\end{equation} Next, the lasso is run on an augmented data set with response $y$ and $2p$ many features $X_1,\dots,X_p,\widetilde{X}_1,\dots,\widetilde{X}_p$: \[\widehat{\beta}(\lambda) = \argmin_{b\in\mathbb{R}^{2p}} \left\{\norm{y - [ X \ \widetilde{X} ] b}_2^2 +\lambda \norm{b}_1\right\}.\] This is run over a range of $\lambda$ values decreasing from $+\infty$ (a fully sparse model) to $0$ (a fully dense model). If $X_j$ is a true signal---that is, it has a nonzero effect on the response $y$---then this should be evident in the lasso: $X_j$ should enter the model earlier (for larger $\lambda$) than its knockoff copy $\widetilde{X}_j$. However, if $X_j$ is null---that is, $\beta_j=0$ in the true model---then it is equally likely to enter before or after $\widetilde{X}_j$.
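Before turning to the filter step, we note that a matrix satisfying~\eqnref{knockoff_condition} can be built explicitly. The sketch below implements the equi-correlated choice $s_j = \min\{1,\,2\lambda_{\min}(X^\top X)\}$ of \citet{barber2015} for a column-normalized $X$ with $n\geq 2p$; it is an illustration rather than a reference implementation, and the small diagonal jitter is a numerical safeguard we add for the Cholesky factorization.
\begin{verbatim}
import numpy as np

def equicorrelated_knockoffs(X, jitter=1e-10):
    """Sketch: columns of X assumed normalized, and n >= 2p."""
    n, p = X.shape
    Sigma = X.T @ X
    S = min(1.0, 2.0 * np.linalg.eigvalsh(Sigma).min()) * np.eye(p)
    # U: orthonormal columns orthogonal to the column span of X
    Q, _ = np.linalg.qr(np.hstack([X, np.random.randn(n, p)]))
    U = Q[:, p:2 * p]
    A = 2.0 * S - S @ np.linalg.solve(Sigma, S)   # A = C^T C
    C = np.linalg.cholesky(A + jitter * np.eye(p)).T
    return X @ (np.eye(p) - np.linalg.solve(Sigma, S)) + U @ C
\end{verbatim}
One can check directly that the output satisfies $\widetilde{X}^\top\widetilde{X}=X^\top X$ and $\widetilde{X}^\top X=X^\top X-\diag\{s\}$ up to numerical error.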
Next, to filter the results, let $\lambda_j$ and $\widetilde{\lambda}_j$ be the times of entry into the lasso path for each feature and knockoff: \[\lambda_j = \sup\{\lambda : \widehat{\beta}(\lambda)_j\neq 0\}, \widetilde{\lambda}_j = \sup\{\lambda:\widehat{\beta}(\lambda)_{j+p}\neq 0\},\] and let $\widehat{S}(\lambda), \widetilde{S}(\lambda)\subseteq\{1,\dots,p\}$ be the sets of original features, and knockoff features, which have entered the lasso path before time $\lambda$, and before their counterparts: \[\widehat{S}(\lambda) = \{j:\lambda_j > \widetilde{\lambda}_j\vee\lambda\}\text{ and }\widetilde{S}(\lambda) =\{j:\widetilde{\lambda}_j > \lambda_j\vee\lambda\}.\] Estimate the proportion of false discoveries in $\widehat{S}(\lambda)$ as \begin{equation}\label{eqn:fdphat} \textnormal{FDP}(\lambda)\approx\widehat{\fdp}(\lambda) = \frac{|\widetilde{S}(\lambda)|}{|\widehat{S}(\lambda)|\vee 1}.\end{equation} To understand why, note that if $X_j$ is null (no real effect), then $X_j$ and $\widetilde{X}_j$ are equally likely to enter in either order, and so $j$ is equally likely to fall into either $\widehat{S}(\lambda)$ or $\widetilde{S}(\lambda)$. Therefore, the numerator $|\widetilde{S}(\lambda)|$ should be an (over)estimate of the number of nulls in $\widehat{S}(\lambda)$---thus, the ratio estimates the FDP. Alternatively, we can choose a more conservative definition \begin{equation}\label{eqn:fdphatplus} \textnormal{FDP}(\lambda)\approx\widehat{\fdp}_+(\lambda) = \frac{1+|\widetilde{S}(\lambda)|}{|\widehat{S}(\lambda)|\vee 1}.\end{equation} Finally, the knockoff filter selects $\widehat{\lambda} = \min\{\lambda:\widehat{\fdp}(\lambda)\leq q\}$, where $q$ is the desired bound on FDR level, and then outputs the set $\widehat{S}(\widehat{\lambda})$ as the set of ``discoveries''. The knockoff+ variant does the same with $\widehat{\fdp}_+(\lambda)$. Theorems 1 and 2 of \cite{barber2015} prove that the knockoff procedure bounds a modified form of the FDR, $\textnormal{mFDR} = \EE{\frac{(\text{\# of false discoveries})}{(\text{\# of discoveries}) + q^{-1}}}$, while the knockoff+ procedure bounds the FDR. \section{The knockoff filter for group sparsity}\label{sec:knockoff_background} In this section, we extend the knockoff method to the group sparse setting. This involves two key modifications: the construction of the knockoffs at a group-wise level rather than for individual features, and the ``filter'' step where the knockoffs are used to select a set of discoveries. Throughout the remainder of the paper, ``knockoff'' refers to the original knockoff method, while ``group knockoff'' (or, later on, ``multitask knockoff'') refers to our new method. \subsection{Group knockoff construction} The original knockoff construction requires that $\widetilde{X}^\top X = X^\top X - \diag\{s\}$, that is, all off-diagonal entries of $\widetilde{X}^\top X$ and $X^\top X$ are equal. When the features are highly correlated, this construction is only possible for vectors $s$ with extremely small entries; that is, $\widetilde{X}_j$ and $X_j$ are themselves highly correlated, and the knockoff filter then loses power as it is hard to distinguish between a real signal $X_j$ and its knockoff copy $\widetilde{X}_j$. In a group-sparse setting, we will see that we can relax this requirement on $\widetilde{X}^\top X$, thereby improving our power.
In particular, the best gain will be in situations where within-group correlations are high but between-group correlations are low; this may arise in many applications; for example, when genes related to the same biological pathways are grouped together, we expect to see the largest correlations occurring within groups rather than between genes in different groups. To construct the group knockoffs, we require the following condition on the matrix $\widetilde{X}\in\mathbb{R}^{n\times p}$: \begin{multline}\label{eqn:construct_group_knockoffs} \widetilde{X}^\top \widetilde{X} = \Sigma\coloneqq X^\top X ,\text{ and } \widetilde{X}^\top X = \Sigma - S,\\ \text{ where $S\succeq 0$ is group-block-diagonal,} \end{multline} meaning that $S_{G_i,G_j} = 0$ for any two distinct groups $i\neq j$. Abusing notation, write $S = \diag\{S_1,\dots,S_m\}$ where $S_i\succeq 0$ is the $p_i\times p_i$ matrix for the $i$th group block, meaning that $S_{G_i,G_i} = S_i$ for each $i$ while $S_{G_i,G_j}=0$ for each $i\neq j$. Extending the construction of \cite{barber2015},\footnote{This construction is for the setting $n\geq 2p$; see \cite{barber2015} for a simple trick to extend to $n\geq p$.} we construct these knockoffs by first selecting $S=\diag\{S_1,\dots,S_m\}$ that satisfies the condition $S\preceq 2\Sigma$, then setting \[\widetilde{X} = X(\mathbf{I}_p -\Sigma^{-1}S)+\widetilde{U}C\] where $\widetilde{U}$ is an $n\times p$ orthonormal matrix orthogonal to the span of $X$, while $C^\top C = 2S - S \Sigma^{-1}S$ is a Cholesky decomposition. Now, we still need to choose the matrix $S\succeq 0$, which has group-block-diagonal structure, so that the condition $S\preceq 2\Sigma$ is satisfied (this condition ensures the existence of the Cholesky decomposition defining $C$). To do this, we choose the following construction: we set $S = \diag\{S_1,\dots,S_m\}$ where we choose $S_i = \gamma \cdot \Sigma_{G_i,G_i}$; the scalar $\gamma\in[0,1]$ is chosen to be as large as possible so that $S\preceq 2\Sigma$ still holds, which amounts to choosing \[\gamma = \min\left\{1, 2\cdot \lambda_{\min}\left(D\Sigma D\right)\right\}\] where $D = \diag\{\Sigma_{G_1,G_1}^{-\nicefrac{1}{2}},\dots,\Sigma_{G_m,G_m}^{-\nicefrac{1}{2}}\}$. This construction can be viewed as an extension of the ``equivariant'' knockoff construction of~\citet{barber2015}; their SDP construction, which gains a slight power increase in the non-grouped setting, may also be extended to the grouped setting but we do not explore this here. Looking back at the group knockoff matrix condition~\eqnref{construct_group_knockoffs}, we see that any knockoff matrix $\widetilde{X}$ satisfying~\eqnref{knockoff_condition} would necessarily also satisfy this group-level condition. However, the group-level condition is weaker; it allows more flexibility in constructing $\widetilde{X}$, and therefore, will enable more separation between a feature $X_j$ and its knockoff $\widetilde{X}_j$, which in turn can increase power to detect the true signals. \subsection{Filter step} After constructing the group knockoff matrix, we then select a set of discoveries (at the group level) as follows.
First, we apply the group lasso~\eqnref{glasso} to the augmented data set, \[\widehat{\beta} = \argmin_{b\in\mathbb{R}^{2p}}\left\{\norm{y - [X \ \widetilde{X}]b}^2_2 +\lambda\grpnorm{b}\right\}.\] Here, with the augmented design matrix $[X \ \widetilde{X}]$, we now have $2m$ many groups: one group $G_i$ for each group in the original design matrix, and one group $\widetilde{G}_i = \{j+p : j\in G_i\}$ corresponding to the same group within the knockoff matrix; the penalty norm is then defined as $\grpnorm{b}=\sum_{i=1}^m \norm{b_{G_i}}_2 + \sum_{i=1}^m \norm{b_{\widetilde{G}_i}}_2$. The filter process then proceeds exactly as for the original knockoff method, with groups of features in place of individual features. First we record the time when each group or knockoff group enters the lasso path, \[\lambda_i = \sup\{\lambda : \widehat{\beta}(\lambda)_{G_i}\neq 0\}, \widetilde{\lambda}_i = \sup\{\lambda:\widehat{\beta}(\lambda)_{\widetilde{G}_i}\neq 0\},\] then define the selected groups and knockoff groups as \[\widehat{S}(\lambda) = \{i:\lambda_i > \widetilde{\lambda}_i\vee\lambda\}\text{ and }\widetilde{S}(\lambda) =\{i:\widetilde{\lambda}_i > \lambda_i\vee\lambda\}\] (note that these sets are subsets of $\{1,\dots,m\}$, the list of groups, rather than counting individual features). Finally, estimate the proportion of false discoveries in $\widehat{S}(\lambda)$ exactly as in~\eqnref{fdphat}, and define $\widehat{\lambda} = \min\{\lambda:\widehat{\fdp}(\lambda)\leq q\}$ as before; the final set of discovered groups is given by $\widehat{S}(\widehat{\lambda})$. (For group knockoff+, we use the more conservative estimate~\eqnref{fdphatplus} of the group FDP, as for knockoff+.) \subsection{Theoretical Results} Here we turn to a more general framework for the group knockoff, working with the setup introduced in~\citet{barber2015}. Let $W\in\mathbb{R}^m$ be a vector of statistics, one for each group, with large positive values for $W_i$ indicating strong evidence that group $i$ may have a nonzero effect (i.e.~$\beta_{G_i}\neq 0$). $W$ is defined as a function of the augmented design matrix $[X \ \widetilde{X}]$ and the response $y$, which we write as $W = w([X \ \widetilde{X}], y)$. In the group lasso setting described above, the statistic is given by \[W_i = (\lambda_i \vee \widetilde{\lambda}_i ) \cdot \sign(\lambda_i - \widetilde{\lambda}_i).\] In general, we require two properties for this statistic: sufficiency and group-antisymmetry. The first is exactly as for (non-group) knockoffs; the second is a modification moving to the group sparse setting. \begin{definition}\label{def:sufficiency} The statistic $W$ is said to obey the sufficiency property if it only depends on the Gram matrix and feature-response inner products, that is, for any $X,\widetilde{X},y$, \begin{equation} w([X \ \widetilde{X}],y)=f([X,\widetilde{X}]^\top [X,\widetilde{X}], [X,\widetilde{X}]^\top y) \end{equation} for some function $f$. \end{definition} Before defining the group-antisymmetry property, we introduce some notation.
For any group $i=1,\dots,m$, let $[ X \ \widetilde{X}]_{\textnormal{swap}(i)}$ be the matrix with \[\left([ X \ \widetilde{X}]_{\textnormal{swap}(i)}\right)_j = \begin{cases} X_j,&\text{ if $1\leq j\leq p$ and $j\not\in G_i$,}\\ \widetilde{X}_j,&\text{ if $1\leq j\leq p$ and $j\in G_i$,}\end{cases}\] and \[\left([ X \ \widetilde{X}]_{\textnormal{swap}(i)}\right)_{j+p} = \begin{cases} \widetilde{X}_j,&\text{ if $1\leq j\leq p$ and $j\not\in G_i$,}\\ X_j,&\text{ if $1\leq j\leq p$ and $j\in G_i$,}\end{cases}\] for each $j=1,\dots,p$. In other words, the columns corresponding to $G_i$ in the original component $X$ are swapped with the same columns of $\widetilde{X}$. \begin{definition}\label{def:group_antisymmetry} The statistic $W$ is said to obey the group-antisymmetry property if swapping the groups of columns $X_{G_i}$ and $\widetilde{X}_{G_i}$ has the effect of switching the sign of $W_i$ with no other change to $W$, that is, \[w([ X \ \widetilde{X}]_{\textnormal{swap}(i)},y) = \mathbf{I}^{\pm}_i \cdot w([X \ \widetilde{X}],y),\] where $\mathbf{I}^{\pm}_i$ is the diagonal matrix with a $-1$ in entry $(i,i)$ and $+1$ in all other diagonal entries. \end{definition} Next, to run the group knockoff or group knockoff+ method, we proceed exactly as in~\cite{barber2015}; we change notation here for better agreement with the group lasso setting. Define \[\widehat{S}(t) = \{i: W_i \geq t\}\text{ and }\widetilde{S}(t) =\{i: W_i\leq -t\}.\] Then estimate the FDP as in~\eqnref{fdphat} for the knockoff method, or as in~\eqnref{fdphatplus} for knockoff+ (with parameter $t$ in place of the lasso penalty path parameter $\lambda$); then find $\widehat{t}$, the minimum $t\geq 0$ with $\widehat{\fdp}(t)$ (or $\widehat{\fdp}_+(t)$) no larger than $q$, and output the set $\widehat{S} = \widehat{S}(\widehat{t})$ of discovered groups. This procedure offers the following theoretical guarantee: \begin{theorem}\label{thm:main_group} If the vector of statistics $W$ satisfies the sufficiency and group-antisymmetry assumption, then the group knockoff procedure controls a modified group FDR, \[\mfdr_{\textnormal{group}} = \EE{\frac{\#\{i:\beta_{G_i}=0,\,i\in \widehat{S}\}}{\#\{i:i\in \widehat{S}\} +q^{-1}}}\leq q,\] while the group knockoff+ procedure controls the group FDR, $\fdr_{\textnormal{group}}\leq q$. \end{theorem} The proof of this result follows the original knockoff proof of~\citet{barber2015}, and we do not reproduce it here; the result is an immediate consequence of their main lemma, moved into the grouped setting: \begin{lemma}\label{lem:signs} Let $\epsilon\in \{\pm 1\}^{m}$ be a sign sequence independent of $W$, with $\epsilon_i=1$ for all non-null groups $i$ and $\epsilon_i\sim \{\pm 1\}$ independently with equal probability for all null groups $i$. Then we have \begin{equation} (W_1,\cdots,W_m)=_{d} (W_1\epsilon_1,\cdots, W_m\epsilon_m), \end{equation} where $=_{d}$ denotes equality in distribution. \end{lemma} This lemma can be proved via the sufficiency and group-antisymmetry properties, exactly as for the individual-feature-level result of~\citet{barber2015}. \section{Knockoffs for multitask learning}\label{sec:knockoff_multitask} For the multitask learning problem, the reformulation as a group lasso problem~\eqnref{multitask_as_group_lasso} suggests that we can apply the group-wise knockoffs to this problem as well.
However, there is one immediate difficulty: the model for the noise $\epsilon$ in~\eqnref{multitask_vector_model} has changed---the entries of $\epsilon$ are not independent, but instead follow a multivariate Gaussian model with covariance $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma$. In fact, we will see shortly that we can work even in this more general setting. Reshaping the data to form a group lasso problem as in~\eqnref{multitask_vector_model}, we will work with the vectorized response $y\in\mathbb{R}^{nr}$ and the repeated-block design matrix $\mathbb{X}\in\mathbb{R}^{nr\times pr}$. We will also construct a repeated-block knockoff matrix, \[\widetilde{\bbX} = \mathbf{I}_r\otimes \widetilde{X}=\left(\begin{array}{cccc} \widetilde{X} & 0 & \dots & 0 \\ 0 & \widetilde{X} & \dots & 0 \\ && \dots & \\ 0 & 0 & \dots & \widetilde{X}\end{array}\right),\] where $\widetilde{X}\in\mathbb{R}^{n\times p}$ is any matrix satisfying the original knockoff construction conditions~\eqnref{knockoff_condition} with respect to the original design matrix $X$. Applying the group knockoff methodology with this data $(\mathbb{X},y)$ and knockoff matrix $\widetilde{\bbX}$, we obtain the following result: \begin{theorem} For the multitask learning setting with an arbitrary covariance structure $\Sigma\in\mathbb{R}^{r\times r}$, the knockoff or knockoff+ methods control the modified group FDR or the group FDR, respectively, at the level $q$. \end{theorem} \begin{proof} In order to apply the result for the group-sparse setting to this multitask scenario, we need to address two questions: first, whether $\widetilde{\bbX}$ satisfies the group knockoff matrix conditions~\eqnref{construct_group_knockoffs}, and second, how to handle the issue of the non-\iid structure of the noise $\epsilon$. We first check the conditions~\eqnref{construct_group_knockoffs} for $\widetilde{\bbX}$. Let $\widetilde{X}\in\mathbb{R}^{n\times p}$ be a knockoff matrix for $X$, satisfying~\eqnref{knockoff_condition}, and let $\Sigma = X^\top X$. Then we see that \begin{multline*}\widetilde{\bbX}^\top\widetilde{\bbX} = \mathbf{I}_r\otimes (\widetilde{X}^\top \widetilde{X}) = \mathbf{I}_r \otimes \Sigma = \mathbb{X}^\top \mathbb{X},\text{ and }\\ \widetilde{\bbX}^\top\mathbb{X} = \mathbf{I}_r\otimes (\widetilde{X}^\top X) = \mathbf{I}_r \otimes(\Sigma - \diag\{s\})\\ = \mathbb{X}^\top \mathbb{X} - \mathbf{I}_r\otimes \diag\{s\}\end{multline*} where $s$ is defined as in~\eqnref{knockoff_condition}. Since the difference $\mathbf{I}_r\otimes\diag\{s\}$ is a diagonal matrix, we see that $\widetilde{\bbX}$ satisfies the group knockoff condition~\eqnref{construct_group_knockoffs}; in fact, it satisfies the stronger (ungrouped) knockoff condition~\eqnref{knockoff_condition}. \begin{figure*}[t]\centering \includegraphics[width=\textwidth]{group_plot6.pdf} \caption{Results for the group-sparse regression simulation, comparing group knockoff and knockoff+ against the original knockoff and knockoff+ methods.} \label{fig:groupsparse_sim} \end{figure*} Next we turn to the issue of the non-identity covariance structure $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma$ for the noise term $\epsilon\in\mathbb{R}^{nr}$. First, write \[\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}} = \Sigma^{-\nicefrac{1}{2}}\otimes \mathbf{I}_n\] to denote an inverse square root for $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma$.
Note also that \begin{multline}\label{eqn:commutes}\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}\cdot \mathbb{X} = (\Sigma^{-\nicefrac{1}{2}}\otimes\mathbf{I}_n)\cdot (\mathbf{I}_r\otimes X) = \Sigma^{-\nicefrac{1}{2}}\otimes X\\ =(\mathbf{I}_r\otimes X)\cdot (\Sigma^{-\nicefrac{1}{2}}\otimes\mathbf{I}_p) = \mathbb{X} \cdot \Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}_*,\end{multline} for $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}_* = \Sigma^{-\nicefrac{1}{2}}\otimes\mathbf{I}_p$. Taking our vectorized multitask regression model~\eqnref{multitask_vector_model}, multiplying both sides by $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}$ on the left, and applying~\eqnref{commutes}, we obtain a ``whitened'' reformulation of our model, \begin{equation}\label{eqn:multitask_whitened} y^{\text{wh}} = \mathbb{X} \cdot (\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}_* \beta) + \epsilon^{\text{wh}}\text{ for }\begin{cases}y^{\text{wh}} = \Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}} y,\\ \epsilon^{\text{wh}} = \Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}} \epsilon,\end{cases}\end{equation} where $\epsilon^{\text{wh}}\sim \mathcal{N}(0,\mathbf{I}_{nr})$ is the ``whitened'' noise. Now we are back in a standard linear regression setting, and can apply the knockoff method---note that we are working with a new setup: while the design matrix $\mathbb{X}$ is the same as in~\eqnref{multitask_vector_model}, we now work with response vector $y^{\text{wh}}$ and coefficient vector $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}_* \beta$. The group sparsity of the coefficient vector has not changed, due to the block structure of $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}$; we have \[(\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{-\nicefrac{1}{2}}_* \beta)_{G_j} = \Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma_{G_j,G_j}^{-\nicefrac{1}{2}}\beta_{G_j}\] for each $j=1,\dots,p$, and so the ``null groups'' for the original coefficient vector $\beta$ (i.e.~groups $j$ with $\beta_{G_j}=0$) are preserved in this reformulated model. We need to check only that the group lasso output, namely $\widehat{\beta}$, depends on the data only through the sufficient statistics $\mathbb{X}^\top\mathbb{X}$ and $\mathbb{X}^\top y^{\text{wh}}$; here we use the ``whitened'' response $y^{\text{wh}}$ rather than the original response vector $y$ since the knockoff theory applies to linear regression with \iid Gaussian noise, as in the model~\eqnref{multitask_whitened} for $y^{\text{wh}}$. When we apply the group lasso, as in the optimization problem~\eqnref{multitask_as_group_lasso}, it is clear that the minimizer $\widehat{\beta}$ depends on the data $\mathbb{X},y$ only through $\mathbb{X}^\top\mathbb{X}$ and $\mathbb{X}^\top y$. Furthermore, we can write \[\mathbb{X}^\top y = \mathbb{X}^\top \Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{\nicefrac{1}{2}} y^{\text{wh}} = \Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{\nicefrac{1}{2}}_*\cdot (\mathbb{X}^\top y^{\text{wh}}),\] where we can show $\Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{\nicefrac{1}{2}}\cdot \mathbb{X} = \mathbb{X}\cdot \Sigma\hspace{-3.5pt}{\color{white}1}\hspace{-7pt}\Sigma^{\nicefrac{1}{2}}_*$ exactly as in~\eqnref{commutes} before.
Therefore, $\widehat{\beta}$ depends on the data only through the sufficient statistics $\mathbb{X}^\top\mathbb{X}$ and $\mathbb{X}^\top y^{\text{wh}}$, as desired. Our statistics for the knockoff filter will therefore satisfy the sufficiency property. The group-antisymmetry property is obvious from the definition of the method. Thus, applying our main result \thmref{main_group} for the group-sparse setting to the whitened model~\eqnref{multitask_whitened}, we see that the (modified or unmodified) group FDR control result holds for this setting. \end{proof} \section{Simulated data experiments} We test our methods in the group sparse and multitask settings. All experiments were carried out in Matlab \cite{MATLAB} and R \cite{R}, using the \texttt{grpreg} package in R \cite{breheny2015group}. \subsection{Group sparse setting} To evaluate the performance of our method in the group sparse setting, we compare it empirically with the (non-group) knockoff using simulated data from a group sparse linear regression, and examine the effects of sparsity level and feature correlations within and between groups. \subsubsection{Data} To generate the simulation data, we use the sample size $n=3000$ with number of features $p=1000$. In our basic setting, the number of groups is $m=200$ with corresponding number of features per group set as $p_i=5$ for each group $i$. To generate features, as a default we use an uncorrelated setting, drawing the entries of $X$ as \iid~standard normals, then normalize the columns of $X$. Our default sparsity level is $k=20$ (that is, $k$ groups with nonzero signal); $\beta_j$, for each $j$ inside a signal group, is chosen randomly from $\{\pm 3.5\}$. To study the effects of sparsity level and feature correlation, we then vary these default settings as follows (in each experiment, one setting is varied while the others remain at their default level): \begin{itemize} \item Sparsity level: we vary the number of groups with nonzero effects, $k\in\{10,12,14,\dots,50\}$. \item Between-group correlation: we fix within-group correlation $\rho = 0.5$, and set the between-group correlation to be $\gamma\rho$, with $\gamma\in\{0,0.1,0.2,\dots,0.9\}$. We then draw the rows of $X\in\mathbb{R}^{n\times p}$ independently from a multivariate normal distribution with mean $0$ and covariance matrix $\Sigma$, with diagonal entries $\Sigma_{jj}=1$, within-group correlations $\Sigma_{jk}=\rho$ for $j\neq k$ in the same group, and between-group correlations $\Sigma_{jk}=\gamma\rho$ for $j,k$ in different groups. Afterwards, we normalize the columns of $X$. \item Within-group correlation: as above, but we fix $\gamma=0$ (so that between-group correlation is always zero) and vary within-group correlation, with $\rho\in\{0,0.1,\dots,0.9\}$. \end{itemize} For each setting, we use target FDR level $q=0.2$ and repeat each experiment $100$ times. \begin{figure*}[t]\centering \includegraphics[width=0.24\textwidth]{plot_vary_k.pdf} \includegraphics[width=0.24\textwidth]{plot_vary_m.pdf} \includegraphics[width=0.24\textwidth]{plot_vary_rhox.pdf} \includegraphics[width=0.24\textwidth]{plot_vary_rhoy.pdf} \caption{Results for the multitask regression simulation, comparing multitask knockoff with the pooled and parallel knockoff methods.} \label{fig:multitask_sim} \end{figure*} \subsubsection{Results} Our results are displayed in \figref{groupsparse_sim}, which displays power (the proportion of true signals which were discovered) and FDR at the group level, averaged over all trials.
We see that all four methods successfully control FDR at the desired level. Across all settings, the group knockoff is more powerful than the knockoff, showing the benefit of leveraging the group structure. The group knockoff+ and knockoff+ are each slightly more conservative than their respective methods without the ``+'' correction. From the experiments with zero between-group correlation and increasing within-group correlation $\rho$, we see that knockoff has rapidly decreasing power as $\rho$ increases, while group knockoff does not show much power loss. This highlights the benefit of the group-wise construction of the knockoff matrix; for the original knockoff, high within-group correlation forces the knockoff features $\widetilde{X}_j$ to be nearly equal to the $X_j$'s, but this is not the case for the group knockoff construction and the greater separation allows high power to be maintained. \subsection{Multitask regression setting} To evaluate the performance of our method in the multitask regression setting, we next perform a simulation to compare the multitask knockoff with the knockoff. (For clarity in the figures, we do not present results for the knockoff+ versions of these methods; the outcome is predictable, with knockoff+ giving slightly better FDR control but lower power.) For the multitask knockoff, we implement the method exactly as described in \secref{knockoff_multitask}. The $j$th feature is considered a discovery if the corresponding group is selected. For the knockoff, we use the group lasso formulation of the multitask model, given in~\eqnref{multitask_vector_model}, and apply the knockoff method to the reshaped data set $(\mathbb{X},y)$; we call this the ``pooled'' knockoff. We also run the knockoff separately on each of the $r$ responses (that is, we run the knockoff with data $(X,Y_j)$ where $Y_j$ is the $j$th column of $Y$, separately for $j=1,\dots,r$). We then combine the results: the $j$th feature is considered a discovery if it is selected in any of the $r$ individual regressions; this version is the ``parallel'' knockoff. \subsubsection{Data} To generate the data, our default settings for the multitask model given in~\eqnref{multitask_model} are as follows: we set the sample size $n=150$, the number of features $p=50$, with $r=5$ responses. The true matrix of coefficients $B$ has $k=10$ nonzero rows, which are chosen as $2\sqrt{r}$ times a random unit vector. The design matrix $X$ is generated by drawing \iid~standard normal entries and then normalizing the columns, and the entries of the error matrix $E$ are also \iid standard normal. We set the target FDR level at $q=0.2$ and repeat all experiments $100$ times. These default settings will then be varied in our experiments to examine the roles of the various parameters (only one parameter is varied at a time, with all other settings at their defaults): \begin{itemize} \item Sparsity level: the number of nonzero rows of $B$ is varied, with $k\in\{2,4,6,\dots,20\}$. \item Number of responses: the number of responses $r$ is varied, with $r\in\{1,2,3,4,5\}$. \item Feature correlation: the rows of $X$ are \iid~draws from a $N(0,\Sigma_X)$ distribution, with a tapered covariance matrix which has entries $(\Sigma_X)_{jk} = (\rho_X)^{|j-k|}$, with $\rho_X\in\{0,0.1,0.2,\dots,0.9\}$. (The columns of $X$ are then normalized.)
\item Response correlation: the rows of the noise $E$ are \iid~draws from a $N(0,\Sigma_Y)$ distribution, with an equicorrelated structure which has entries $(\Sigma_Y)_{jj}=1$ for all $j$, and $(\Sigma_Y)_{jk}=\rho_Y$ for all $j\neq k$, with $\rho_Y\in\{0,0.1,0.2,\dots,0.9\}$. \end{itemize} \subsubsection{Results} Our results are displayed in \figref{multitask_sim}. For each method, we display the resulting FDR and power for selecting features with true effects in the model. The parallel knockoff is not able to control the FDR. This may be due to the fact that this method combines discoveries across multiple responses; if the true positives selected for each response tend to overlap, while the false positives tend to be different (as they are more random), then the false discovery proportion in the combined results may be high even though it should be low for each individual response's selections. Therefore, while it is more powerful than the other methods, it does not lead to reliable FDR control. Turning to the other methods, both multitask knockoff and pooled knockoff generally control FDR at or near $q=0.2$ except in the most challenging (lowest power) settings, where, as expected from the theory, the FDR exceeds its target level. Across all settings, multitask knockoff is more powerful than pooled knockoff, and the same holds for the two knockoff+ variants. Overall we see the advantage in the multitask formulation, with which we are able to identify a larger number of discoveries while maintaining FDR control. \section{Real data experiment} We next apply the knockoff for multitask regression to a real data problem. We study a data set that seeks to identify drug-resistance mutations in HIV-1 \cite{rhee2006genotypic}. This data set was analyzed by \cite{barber2015} using the knockoff method. Each observation, sampled from a single individual, identifies mutations along various positions in the protease or reverse transcriptase (two key proteins) of the virus, and measures resistance against a range of different drugs from three classes: protease inhibitors (PIs), nucleoside reverse transcriptase inhibitors (NRTIs), and nonnucleoside reverse transcriptase inhibitors (NNRTIs). In \cite{barber2015} the data for each drug was analyzed separately; the response $y$ was the resistance level to the drug while the features $X_j$ were markers for the presence or absence of the $j$th mutation. Here, we apply the multitask knockoff to this problem: for each class of drugs, since the drugs within the class have related biological mechanisms, we expect the sparsity pattern (i.e.~which mutations confer resistance to that drug) to be similar across the drugs in each class. We therefore have a matrix of responses, $Y\in\mathbb{R}^{n\times r}$, where $n$ is the number of individuals and $r$ is the number of drugs for that class. We compare our results to those obtained with the knockoff method where drugs are analyzed one at a time (the ``parallel'' knockoff from the multitask simulation). \subsection{Data} Data is analyzed separately for each of the three drug types. To combine the data across different drugs, we first remove any drug with a high proportion of missing drug resistance measurements; this results in two PI drugs and one NRTI drug being removed (each with over 35\% missing data). The remaining drugs all have $<10\%$ missing data; many drugs have only $1-2\%$ missing data. Next we remove data from any individual that is missing drug resistance information from any of the (remaining) drugs.
Finally, we keep only those mutations which appear $\geq 3$ times in the sample. The resulting data set sizes are: \begin{table}[ht]\small \centering \begin{tabular}{cccc} \hline Class &\# drugs ($r$) &\# observations ($n$)& \# mutations ($p$) \\ \hline PI &5 &701 &198\\ NRTI &5 &614 &283\\ NNRTI &3 &721 &308\\ \hline \end{tabular} \end{table} \subsection{Methods} For each of the three drug types, we form the $n\times r$ response matrix $Y$ by taking the log-transformed drug resistance measurement for the $n$ individuals and the $r$ drugs, and the $n\times p$ feature matrix $X$ recording which of the $p$ mutations are present in each of the $n$ individuals. We then apply the multitask knockoff as described in \secref{knockoff_multitask}, with target FDR level $q = 0.2$. For comparison, we also apply the knockoff to the same data (analyzing each drug separately), again with $q=0.2$. We use the equivariant construction for the knockoff matrix for both methods. \begin{figure}[t]\centering \includegraphics[width=\textwidth]{results_plot.pdf}\vspace{-.2in} \caption{Results on the HIV-1 drug resistance data set. For each drug class, we plot the number of protease positions (for PI) or reverse transcriptase (RT) positions (for NRTI or NNRTI) which were selected by the multitask knockoff or knockoff method. The color indicates whether or not the selected position appears in the treatment-selected mutation (TSM) panel, and the horizontal line shows the total number of positions on the TSM panel.} \label{fig:hiv_results} \end{figure} \subsection{Results} We report our results by comparing the discovered mutations, within each drug class, against the treatment-selected mutation (TSM) panel \cite{rhee2005hiv}, which gives mutations associated with treatment by a drug from that class. As in \cite{barber2015} we report the counts by position rather than by mutation, i.e.~combining all mutations discovered at a single position, since multiple mutations at the same position are likely to have related effects. To compare with the knockoff method, for each drug class we consider mutation $j$ to be a discovery for that drug class if it was selected for any of the drugs in that class. The results are displayed in \figref{hiv_results}. In this experiment, we see that the multitask knockoff has somewhat fewer discoveries than the knockoff, but seems to show better agreement with the TSM panel. As in the multitask simulation, this may be due to the fact that the knockoff combines discoveries across several drugs; a low false discovery proportion for each drug individually can still lead to a high false discovery proportion once the results are combined. \section{Discussion} We have presented a knockoff filter for the group sparse regression and multitask regression problems, where sharing information within each group or across the set of response variables allows for a more powerful feature selection method. Extending the knockoff framework to other structured estimation problems, such as non-linear regression or low-dimensional latent structure other than sparsity, would be interesting directions for future work. \bibliographystyle{imsart-nameyear}
\section*{Introduction} \vskip-\baselineskip Spatially indirect excitons in a semiconductor system are a highly sought alternative for achieving quantum condensation and superfluidity in solid state devices at experimentally accessible temperatures. Excitons are electrons and holes that bind into pairs through their long-range Coulombic attraction. Spatially indirect excitons have the electrons and holes confined in two separated but closely adjacent quantum wells or quasi-two-dimensional (2D) layers\cite{Lozovik1975,Lozovik1976}. In the spatially indirect configuration, the electron-hole attraction can be very strong, while at the same time the electrons and holes are prevented from mutually annihilating through recombination. From an application perspective, a supercurrent in the electron-hole superfluid could carry an electric current if the electron and hole layers are independently contacted in a counterflow configuration, directly leading to applications in dissipationless solid state electronics\cite{Su2008,Nandi2012}. Furthermore, the superfluid can be continuously tuned from the strongly coupled BEC bosonic regime to the BCS-BEC crossover regime of less strongly coupled fermionic pairs, simply by varying the carrier density using metal gates. In addition, when the electron and hole masses in a semiconductor are different, there are predictions of exotic superfluid phases\cite{Pieri2007}, including the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase\cite{Wang2017a} and the Sarma phase with two Fermi surfaces\cite{Forbes2005}. These exotic phases are predicted to occur at much higher temperatures than in mass-imbalanced ultracold atomic gas Fermi mixtures\cite{Kinnunen2018, Frank2018}. Two-dimensional van der Waals systems show particular promise because they offer the possibility of ultra-thin insulating barriers and conducting layers, with very strong electron-hole pairing interactions as a result. There have been predictions of quantum condensation of spatially indirect excitons in double bilayer graphene\cite{Perali2013} and double Transition Metal Dichalcogenide (TMD) monolayers\cite{Fogler2014}, and recent experiments have provided strong evidence for quantum condensation in these systems\cite{Burg2018,Wang2019}. One aspect of graphene bilayers and TMD monolayers is that the electron and hole effective masses are nearly equal, making these systems unsuitable for generating exotic superfluid phases. However, the most pressing limitation of the 2D van der Waals systems lies in the rudimentary methods employed for device fabrication. These entail a layer-by-layer assembly using pick-and-transfer techniques which are prone to layer wrinkling, contamination and misorientation\cite{Frisenda2018}, leading to very poor device yield with limited prospects for scalability. A more scalable approach is based on spatially indirect excitons in conventional semiconductor heterostructures, such as electrons and holes in GaAs double quantum wells (DQWs)\cite{Croxall2008,Seamons2009}. However, despite indications of possible quantum condensation at very low temperatures (below $1$ K), concrete evidence for equilibrium BEC or superfluidity has remained elusive in this rather mature material system. It has been shown that the intrinsic properties of the GaAs/AlGaAs band structure constitute the main limitations for excitonic condensation\cite{SaberiPouya2020}, and also pose severe challenges in device fabrication and operation\cite{DasGupta2011a}.
First, the type I GaAs/AlGaAs band alignment makes it difficult to develop independent and selective contacts to the electron and hole layers. Second, in GaAs electron-hole DQWs, the energy separation between electron and hole states is $\approx 1.5$ eV, requiring rather wide AlGaAs barriers to avoid interlayer leakage. As a consequence, the electron-hole mutual Coulomb attraction is relatively weak and exciton formation is greatly suppressed. Third, GaAs heterostructures are grown by molecular beam epitaxy. This growth technique is not compatible with conventional complementary metal-oxide semiconductor technology, so that prospects for advanced manufacturing and large-scale device integration are severely limited. For these reasons, investigation of other solid-state systems that may overcome some limitations of double layers in GaAs, graphene and TMDs is of great interest. In this letter we propose, as a candidate for electron-hole superfluidity and BEC, an alternative mass-imbalanced solid-state system: a lattice-matched strained Si/Ge bilayer embedded into a germanium-rich SiGe crystal. Holes are confined in a compressively strained Ge quantum well and electrons in a tensile strained Si quantum well, with no barrier in between. This is possible since the Si/Ge interface offers a type II band alignment\cite{schaffler_high-mobility_1997,lee_strained_2004,virgilio_type-i_2006}, and thus electrons and holes can be kept separate but very close together. This enhances the strength of the electron-hole attraction while preventing unwanted recombination. This alternative route is promising since Si and Ge heterostructures have reached maturity in the past decade. Si and Ge heterostructures have very low disorder, with carrier mobilities exceeding one million cm$^2$\,V$^{-1}$\,s$^{-1}$ in both constituents of the bilayer: the 2D electron gas in Si/SiGe\cite{lu_observation_2009} and the 2D hole gas in Ge/SiGe\cite{dobbie_ultra-high_2012}. Furthermore, the carrier density may be tuned over orders of magnitude by leveraging industrial gate-stack technology\cite{wuetz_multiplexed_2020,lodari_low_2020}. This material system also integrates with advanced quantum technologies\cite{Vandersypen2017InterfacingCoherent,scappucci_germanium_2020}, including long-lived electron spin qubits in Si\cite{Yoneda2018A99.9,watson_programmable_2018}, hole spin qubit arrays in Ge\cite{hendrickx_four-qubit_2020,hendrickx_fast_2020}, and superconducting contacts to holes\cite{hendrickx_gate-controlled_2018,hendrickx_ballistic_2019,vigneau_germanium_2019}. A major advantage over the GaAs and TMD material systems is that the band alignments of the proposed Si/Ge bilayer should allow electrons in Si and holes in Ge to be contacted independently and selectively using a one-step fabrication process. Due to strain, there is a very large mass imbalance between holes in Ge ($0.05 m_e$)\cite{lodari_light_2019} and electrons in Si ($0.2 m_e$)\cite{schaffler_high-mobility_1997,Zwanenburg2013}, opening the door to exploration of exotic superfluid phases. Finally, these Si/Ge heterostructures may be grown on $300$ mm Si wafers using mainstream chemical vapor deposition\cite{Pillarisetty2011Academic}, and may profit from advanced semiconductor manufacturing for high device yield and integration. Our calculations show that the envisaged Si/Ge bilayer supports a superfluid condensate.
Tuning the carrier density continuously sweeps the superfluid across the BEC and BCS-BEC crossover regimes, with an accompanying variation in the magnitude of the superfluid gap and the transition temperature. \section*{Results} \vskip-\baselineskip \subsection*{Material stack and device architecture} \vskip-\baselineskip Figure \ref{Fig:Materials}a illustrates the concept of a lattice-matched Si/Ge bilayer. A cubic (strain-relaxed) Ge-rich Si$_{1-x}$Ge$_x$ substrate, with a Ge concentration $x=0.8$, sets the overall in-plane lattice parameter of the stack. For layer thicknesses below the critical thickness for onset of plastic relaxation, the Si/Ge epilayers will grow with the same in-plane lattice constants as the underlying Si$_{0.2}$Ge$_{0.8}$\cite{matthews_defects_1976,Paul2010}. Therefore, the Ge layer will be compressively strained in the in-plane direction\cite{sammak_shallow_2019} and, conversely, the Si layer will be under tensile strain\cite{schaffler_high-mobility_1997}. A thickness of $3$ nm for each Ge and Si layer allows for such strain engineering\cite{Paul2010}, and is feasible experimentally given the recent advances in low-temperature chemical vapor deposition of SiGe heterostructures comprising Si and Ge quantum wells\cite{dyck_accurate_2017,sammak_shallow_2019}. Figure \ref{Fig:Materials}b shows the band structure of the proposed lattice-matched Si/Ge bilayer, where the Ge and Si layers are strained in opposite directions. The band structure was calculated in an effective mass approach using deformation potential theory to take into account the impact of the strain field on the relevant band edges. We highlight three highly attractive features. First, there is effectively only one quantum well for electrons, spatially separated from the single quantum well for holes. There does exist a quantum well for electrons in Ge\cite{virgilio_type-i_2006}, but it is located so high in energy that it would remain inactive for the present phenomenon. This is quite different from GaAs and TMDs, where each of the two layers has both electron and hole wells. The valence band profile shows that the wave-function of the fundamental heavy hole (HH) state is confined in the Ge quantum well. The large energy splittings between the fundamental HH state and the fundamental light hole (LH) state are the result of the compressive strain and the larger confinement mass of the HH with respect to the LH. The conduction band profile shows that the wave-function of the fundamental $\Delta_2$ state is confined in the Si quantum well, with other states being higher up in energy. There is no hole quantum well in the Si layer. The second feature is that the energy difference between the bottom of the conduction band and the top of the valence band is $\approx0.18$ eV. This is $\sim 8$ times smaller than in GaAs quantum wells, meaning a small interlayer bias should be sufficient for tuning of the electron and hole wave-function shape and position in the quantum well. Finally, based on previous theoretical predictions and experiments, such strained Ge and Si quantum wells will result in a large imbalance of the masses, with a very light in-plane effective mass for holes ($\approx0.05m_e$) and a much heavier in-plane effective mass for electrons ($\approx0.19m_e$). This has important implications for the superfluid, as detailed in the subsection {\it Screening polarizabilities in the superfluid state}.
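As a rough numerical illustration of this strain engineering, the sketch below (Python) estimates the in-plane misfit strain of each epilayer from the textbook bulk lattice constants and a linear Vegard interpolation for the alloy; the small bowing correction to the SiGe lattice parameter is neglected, so the numbers are indicative only.
\begin{verbatim}
A_SI, A_GE = 5.431, 5.658   # bulk lattice constants in angstrom

def misfit_strain(a_layer, x):
    """In-plane strain of an epilayer grown on a relaxed Si(1-x)Ge(x)
    buffer, with the alloy lattice constant from a linear Vegard
    interpolation (bowing correction neglected)."""
    a_buffer = A_SI + x * (A_GE - A_SI)
    return (a_buffer - a_layer) / a_layer

x = 0.8                                            # Si_0.2Ge_0.8 buffer
print(f"Ge layer: {misfit_strain(A_GE, x):+.2%}")  # ~ -0.8% (compressive)
print(f"Si layer: {misfit_strain(A_SI, x):+.2%}")  # ~ +3.3% (tensile)
\end{verbatim}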
\begin{figure*} \begin{center} \includegraphics[angle=0,width=0.92\textwidth] {Fig1.pdf} \end{center} \caption{ \textbf{Lattice-matched Si/Ge bilayer, band structure, device architecture.} \textbf{a} Below the critical thickness for plastic relaxation, epitaxial Ge and Si layers are lattice matched to the underlying SiGe, with compressive and tensile strain, respectively. \textbf{b} Band structure of low-lying bands resulting in a double quantum well structure. Due to the type II band alignment at the strained-Si/strained-Ge heterojunction, electrons ($\Delta_2$ states) are confined in tensile strained Si and holes (HH states) in compressively strained Ge. \textbf{c} Material stack with embedded Si/Ge bilayer and an envisaged device architecture, characterized by independent n$^{++}$ and p$^{++}$ contacts and gate electrodes to tune independently the electron and hole densities in the bilayer. } \label{Fig:Materials} \end{figure*} Figure \ref{Fig:Materials}c illustrates the entire material stack with the embedded Si/Ge bilayer and an envisaged device architecture. Starting from a Si(001) wafer and a thick strain-relaxed Ge layer deposited on top, a high-quality Si$_{0.2}$Ge$_{0.8}$ strain-relaxed buffer is obtained by reverse grading the Ge content in the alloy\cite{shah_reverse_2008,sammak_shallow_2019}. Importantly, the strain-relaxed Si$_{0.2}$Ge$_{0.8}$ buffer can be heavily $p$-type doped to serve as an epitaxial bottom gate. This bottom gate can be biased negatively to populate only the undoped Ge quantum well with holes. Following the deposition of the Si/Ge bilayer, an additional Si$_{0.2}$Ge$_{0.8}$ barrier separates the Si quantum well from a dielectric layer and top gate. The top gate may be biased positively to populate only the undoped Si quantum well with electrons. Separate and independent Ohmic contacts to the capacitively induced 2D electron gas in Si and hole gas in Ge are achieved by standard n$^{++}$ and p$^{++}$ ion implantation, respectively\cite{lee_strained_2004,Pillarisetty2011Academic}. Alternatively, the p$^{++}$ contact may be substituted with an aluminium layer or an in-diffused metallic germano-silicide, as routinely done in Ge/SiGe heterostructure field effect transistors\cite{sammak_shallow_2019}. By carefully designing the thickness of the Si$_{0.2}$Ge$_{0.8}$ barriers above and below the Si/Ge bilayer, we envisage that the carrier density may be tuned independently in the electron and hole layer in the low density regime, $n< 10^{11}$ cm$^{-2}$. Independent electrical contact to the two layers is easily achieved thanks to the remarkable, unique band structure of the material stack, with only one quantum well for electrons in the strained Si and only one accessible quantum well for holes in the strained Ge. Crucially, the prospect of making independent contacts to the electron-hole bilayer with a simple and robust process, in contrast to the challenging processing required for selective contacts in III-V and TMD materials, bodes well for superfluidity measurements in counterflow configurations\cite{su_how_2008,DasGupta2011a}. \subsection*{Superfluid properties} \vskip-\baselineskip \begin{figure}[h] \begin{center} \includegraphics[width=0.56\textwidth] {Fig2.pdf} \end{center} \caption{ \textbf{Properties of the zero-temperature superfluid gap.} \textbf{a} Maximum of the gap $\Delta_{\mathrm{max}}$ as a function of the equal carrier densities $n$. The blue and yellow areas represent the BEC and BCS-BEC crossover regimes.
The BCS regime is suppressed by screening. \textbf{b} Momentum dependent gap $\Delta_{\mathbf{k}}$ at the three densities $n$ indicated by the colour-coded dots on the curve in \textbf{a}. \textbf{c} $\Delta_{\mathrm{max}}$ scaled by the electron and hole Fermi energies $E_{F,e}$ and $E_{F,h}$. } \label{Fig:SFGap} \end{figure} In Fig.\ \ref{Fig:SFGap} we report superfluid properties of the system, calculated including self-consistent screening of the electron-hole pairing interaction. Self-consistent treatment of the long-ranged Coulombic pairing interaction\cite{Lozovik2012} is a key element for determining electron-hole superfluid properties, in contrast to other superconductors and superfluids with short-range pairing interactions. Figure \ref{Fig:SFGap}a shows the maximum of the zero-temperature superfluid gap $\Delta_{\mathrm{max}}$ as a function of the electron and hole densities, assumed equal. As the density $n$ increases, $\Delta_{\mathrm{max}}$ first increases, passes through a maximum and then decreases. Above an onset density, $n_0\sim 5.9 \times 10^{10}$ cm$^{-2}$, $\Delta_{\mathrm{max}}$ very rapidly drops to negligible values, $\lesssim 1$ $\mu$eV. The behaviour of $\Delta_{\mathrm{max}}$ is a consequence of the self-consistency included in the screening\cite{Perali2013}. This has radically different effects in the different regimes of the superfluidity\cite{Neilson2014,LopezRios2018}. (i) At low carrier densities, the binding energy of the electron-hole pairs is large relative to the Fermi energy, and the excitons are strongly bound and compact. They resemble weakly-interacting, approximately neutral, bosons and screening is negligible. (ii) With increasing density, the number of pairs increases and the gap gets stronger. However, relative to the Fermi energy the gap gets weaker, and the superfluid moves from the BEC regime of bosons to the less strongly coupled BCS-BEC crossover regime of coupled fermionic pairs. (iii) When the density is further increased above the onset density, $n>n_0$, strong screening overcomes the weak electron-hole pair coupling in what would be the BCS regime, and the superfluidity is suppressed. The density ranges for the BEC and BCS-BEC crossover regimes are indicated in Fig.\ \ref{Fig:SFGap}a by the blue and yellow shaded areas. We characterize the regimes by the condensate fraction parameter $c$, the fraction of carriers bound in pairs\cite{Salasnich2005,Guidini2014}. The boundary between the BEC and BCS-BEC crossover regimes is determined by $c = 0.8$. The BCS-BEC crossover to BCS boundary would be located at $c=0.2$, but at the onset density $n_0$ the condensate fraction has only dropped to $c \sim 0.5$, implying the absence of a BCS regime. Figure \ref{Fig:SFGap}b shows the momentum dependence of the superfluid gap $\Delta_{\mathbf{k}}$ at three densities $n$. Near the onset density $n_0=5.9 \times 10^{10}$ cm$^{-2}$, which is in the BCS-BEC crossover regime, $\Delta_{\mathbf{k}}$ has a broad peak centred close to $k=k_F$, indicating proximity of the BCS regime. At density $n=3.0 \times 10^{10}$ cm$^{-2}$ corresponding to the maximum of $\Delta_{\mathrm{max}}$, which is near the boundary separating the BCS-BEC crossover and BEC regimes, the peak in $\Delta_{\mathbf{k}}$ has moved back to ${\mathbf{k}}=0$. At $n=0.5 \times 10^{10}$ cm$^{-2}$, which is in the deep BEC regime, $\Delta_{\mathbf{k}}$ extends out to large $k/k_F$. Figure \ref{Fig:SFGap}c shows $\Delta_{\mathrm{max}}$ scaled to the electron and hole Fermi energies. 
At the onset density, while $\Delta_{\mathrm{max}}<E_{F,h}$, it is surprising that $\Delta_{\mathrm{max}} \gg E_{F,e}$. This result significantly differs from the equal mass case, for which the onset density occurs around $\Delta_{\mathrm{max}}/E_{F,h}=\Delta_{\mathrm{max}}/E_{F,e}\sim 1$. We recall the physical argument that a sufficiently strong energy gap $\Delta_{\mathrm{max}}$ relative to the Fermi energy will exclude from the screening low-lying excited states that otherwise would very significantly weaken the electron-hole attraction\cite{Neilson2014}. But it is puzzling why $\Delta_{\mathrm{max}}$ must become so much larger than $E_{F,e}$ before the screening is suppressed by the superfluidity. To explain this, we must look at the self-consistent screening polarizabilities in the superfluid state when the electron and hole masses are unequal. \subsection*{Screening polarizabilities in the superfluid state} \vskip-\baselineskip \begin{figure}[!h] \begin{center} \includegraphics[width=0.52\textwidth] {Fig3.pdf} \end{center} \caption{ \textbf{Polarizabilities.} The green (blue) solid lines are the normal electron (hole) polarizabilities in the presence of the superfluid $\Pi_e(\mathbf{q})$ ($\Pi_h(\mathbf{q})$). The red solid lines are the anomalous polarizabilities $\Pi_a(\mathbf{q})$ for the superfluid electron-hole pairs. Dashed green and blue lines are the corresponding effective polarizabilities $\Pi^{\rm{eff}}_{e,h}(\mathbf{q}) \equiv \Pi_{e,h}(\mathbf{q}) +\Pi_a(\mathbf{q})$. {\bf a}\ \ For density $n\simeq n_0=5.9 \times 10^{10}$ cm$^{-2}$, in the BCS-BEC crossover regime. {\bf b}\ \ For density $n=0.5 \times 10^{10}$ cm$^{-2}$, in the BEC regime. } \label{Fig:Pol} \end{figure} We now examine the effect of the different masses on the polarizabilities that control the self-consistent screening. Figure \ref{Fig:Pol} shows the normal polarizabilities for electrons and holes in the presence of the superfluid, $\Pi_e(\mathbf{q})$ and $\Pi_h(\mathbf{q})$ (Eqs.\ (\ref{Eq:Pi_e}) and (\ref{Eq:Pi_h})), and the anomalous polarizability for the superfluid electron-hole pairs, $\Pi_a(\mathbf{q})$ (Eq.\ (\ref{Eq:Pi_a})). The panels are for two densities, the first close to the onset density in the BCS-BEC crossover regime and the second in the BEC regime. If the effect of the superfluidity on the screening is not taken into account, the polarizabilities would be much larger than $\Pi_e(\mathbf{q})$ and $\Pi_h(\mathbf{q})$. In 2D they would be given, in the momentum transfer range relevant for screening ($\mathbf{q}/k_F\leq 2$), by their respective densities of states, \begin{align} \Pi_e^{(N)}(\mathbf{q}) &=\frac{m^\star_e}{\pi\hbar^2} \simeq 8.0\times 10^{10}\ \text{cm}^{-2}\,\text{meV}^{-1} \nonumber \\ \Pi_h^{(N)}(\mathbf{q}) &=\frac{m^\star_h}{\pi\hbar^2} \simeq 2.1\times 10^{10}\ \text{cm}^{-2}\,\text{meV}^{-1} \ . \label{Eq:Pi^(N)} \end{align} Such large polarizabilities would lead to very strong screening of the electron-hole interaction, resulting in extremely weakly coupled superfluidity and very low transition temperatures in the mK range\cite{Neilson2014}. The suppression of the normal polarizabilities in the superfluid state arises from the blocking by the superfluid gap of low-lying states in the energy spectrum (Eqs.\ (\ref{Eq:Pi_e}) and (\ref{Eq:Pi_h})). There is even more suppression of screening that comes from cancellation with the anomalous polarizability in the screened interaction (Eq.\ (\ref{Eq:VeffSF})).
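Before turning to those cancellations, we note that the normal-state values quoted in Eq.\ (\ref{Eq:Pi^(N)}) follow from the 2D density of states $m^\star/(\pi\hbar^2)$ alone; a short numerical check (Python, SI constants converted to the units used above) is:
\begin{verbatim}
import scipy.constants as sc

def dos_2d(m_over_me):
    """2D density of states m*/(pi hbar^2), converted to cm^-2 meV^-1."""
    dos_si = m_over_me * sc.m_e / (sc.pi * sc.hbar**2)  # states J^-1 m^-2
    return dos_si * (sc.e * 1e-3) * 1e-4                # -> meV^-1 cm^-2

print(f"electrons (0.19 m_e): {dos_2d(0.19):.1e}")  # ~ 8.0e10 cm^-2 meV^-1
print(f"holes     (0.05 m_e): {dos_2d(0.05):.1e}")  # ~ 2.1e10 cm^-2 meV^-1
\end{verbatim}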
To highlight these cancellations, in Fig.\ \ref{Fig:Pol} we introduce effective polarizabilities, $\Pi^{\rm{eff}}_e(\mathbf{q}) = \Pi_e(\mathbf{q}) +\Pi_a(\mathbf{q})$ and $\Pi^{\rm{eff}}_h(\mathbf{q}) = \Pi_h(\mathbf{q})+\Pi_a(\mathbf{q})$. At $\mathbf{q}\!=\!0$ and for all densities, there is the property $\Pi_e(\!\mathbf{q}\!=\!0\!)=\Pi_h(\!\mathbf{q}\!=\!0\!)=-\Pi_a(\!\mathbf{q}\!=\!0\!)$, so that $\Pi^{\rm{eff}}_e(\!\mathbf{q}\!=\!0\!) = \Pi^{\rm{eff}}_h(\!\mathbf{q}\!=\!0\!)=\!0$, reflecting the absence of long distance screening for any non-zero superfluid gap. For non-zero $\mathbf{q}$, however, the cancellation of polarizabilities and the suppression of screening are very sensitive to the regime in which the superfluid lies. $\Delta_{\mathbf{k}}$ becomes narrower as we move from the BEC regime, across the BCS-BEC crossover regime, and towards the BCS regime (Fig.\ \ref{Fig:SFGap}b), narrowing both the momentum range for blocked excitations and the momentum range for which the cancellations are significant. Figure \ref{Fig:Pol}a is in the BCS-BEC crossover regime, where we see that the behaviour of $\Pi_e(\mathbf{q})$ for non-zero $\mathbf{q}$ is strikingly different from $\Pi_h(\mathbf{q})$. We discuss details of their differences in functional behaviour in the Supplementary Material. We also note the approximate cancellation of $\Pi_a(\mathbf{q})$ with $\Pi_h(\mathbf{q})$ but not with $\Pi_e(\mathbf{q})$. This is because $\Pi_a(\mathbf{q})$ depends on the strength of the pairing, and so scales with the reduced mass $m^\star_r$, which for this system is approximately equal to $m^\star_h$. In contrast $m^\star_r \ll m^\star_e$, so $\Pi_e(\mathbf{q})$ is larger than $\Pi_a(\mathbf{q})$ and does not cancel with it. We conclude that, for intermediate values of $k/k_F$, $\Delta_{\mathbf{k}}$ strongly suppresses the effective polarization for holes, $\Pi^{\rm{eff}}_h(\mathbf{q})$, but does not suppress the effective polarization for electrons, $\Pi^{\rm{eff}}_e(\mathbf{q})$. This explains the puzzling result we noted in Fig. \ref{Fig:SFGap}c, that the screening is only suppressed when the gap reaches very large values of $\Delta_{\mathrm{max}}/E_{F,e} \gg 1$. In contrast, Fig.\ \ref{Fig:Pol}b shows that in the deep BEC regime, for $\mathbf{q}/k_F\lesssim 2$, $\Pi_e(\mathbf{q})$ and $\Pi_h(\mathbf{q})$ are very similar and now both scale with the reduced mass $m^\star_r$, like $\Pi_a(\mathbf{q})$. This is because in this regime the electrons and holes are in strongly bound pairs that have lost their single-particle character. As a result, $\Pi^{\rm{eff}}_e(\mathbf{q})$ and $\Pi^{\rm{eff}}_h(\mathbf{q})$ are very small over the momentum transfer range important for screening, reflecting near complete cancellation. Physically, the electron-hole pairs in the deep BEC are compact compared with the inter-particle spacing and approximately neutral, and this makes screening unimportant. \subsection*{Superfluid phase diagram} \vskip-\baselineskip The superfluid transition for a 2D system is a Berezinskii-Kosterlitz-Thouless (BKT) transition\cite{Kosterlitz1973}. For parabolic bands, the transition temperature $T_c^{BKT}$ is linearly proportional to the carrier density $n$ (Eq.\ (\ref{T_KT_n})). The highest transition temperature occurs at the onset density. The complete phase diagram is shown in Fig.\ \ref{fig:PhaseDia}. Despite the large dielectric constant and the small hole mass, we see that the transition temperatures are readily experimentally accessible, up to $T_c^{BKT}= 2.5$ K.
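This peak value follows directly from the linear relation of Eq.\ (\ref{T_KT_n}) derived in the Methods; a minimal numerical check (Python, using the quoted effective masses and the onset density) is:
\begin{verbatim}
import scipy.constants as sc

m_e_eff = 0.19 * sc.m_e                        # electrons in strained Si
m_h_eff = 0.05 * sc.m_e                        # holes in strained Ge
m_r = m_e_eff * m_h_eff / (m_e_eff + m_h_eff)  # reduced mass, ~0.04 m_e

n0 = 5.9e10 * 1e4                              # onset density in m^-2
T_c = sc.pi * sc.hbar**2 * n0 / (16.0 * m_r * sc.k)
print(f"T_c^BKT(n0) ~ {T_c:.1f} K")            # ~ 2.6 K, cf. 2.5 K quoted
\end{verbatim}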
At $T_c^{BKT}$, there is a thermally driven transition to a degenerate exciton gas, in which the system has lost its macroscopic coherence, but local pockets of superfluidity remain. These pockets persist up to a characteristic degeneracy temperature $T_d$\cite{Butov2004}. At $T_d$, the excitons lose degeneracy and the system becomes a classical exciton gas. When the density is increased above the onset density $n_0=5.9\times 10^{10}$ cm$^{-2}$, the superfluid gap drops nearly discontinuously to exponentially small values (Fig.\ \ref{Fig:SFGap}a), and $T_c^{BKT}$ drops to the sub-mK range. The drop in the gap is similar to a first order transition, and is caused by the sudden collapse of three solutions to the gap equation (Eq.\ (\ref{Eq:Delta})) into a single very low-energy solution\cite{Lozovik2012}. This collapse results from strong screening of the electron-hole pairing attraction. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{Fig4.pdf} \caption{\textbf{Phase diagram.} $T_c^{BKT}$ is the superfluid transition temperature. $n$ is the density of the electrons and holes. $T_d$ is the degeneracy temperature for the exciton gas. The superfluid BEC and BCS-BEC crossover regimes are indicated. Above $n\sim 6\times 10^{10}$ cm$^{-2}$, the superfluid transition temperature is in the mK range.} \label{fig:PhaseDia} \end{center} \end{figure} \section*{Discussion} \vskip-\baselineskip In summary, we predict superfluidity in experimentally accessible samples, densities and temperatures, with a superfluid gap of up to $3$~meV and a transition temperature up to $2.5$~K. At carrier densities higher than $6\times 10^{10}$ cm$^{-2}$, the relatively weak superfluidity is unable to suppress the increasingly strong screening, with the result that screening quenches the superfluidity. The nature of this self-consistent screening process is sensitive to the system property that the electron and hole masses are markedly different. Since the existence of exotic superfluid phases depends on unequal masses, and the effect of unequal masses does not manifest itself in the BEC regime ($n< 2\times 10^{10}$ cm$^{-2}$), the experimental search for these exotic phases should focus on intermediate densities in the BCS-BEC crossover regime ($2\times 10^{10} < n < 6\times 10^{10}$ cm$^{-2}$). In a realistic implementation of the proposed Si/Ge bilayer material stack, the Si/Ge interface will have a finite width due to segregation, diffusion, and intermixing associated with the chemical vapor deposition. However, by depositing the Si/Ge layers at sufficiently low temperatures ($\leq 500$~$^\circ$C as in Refs.~\onlinecite{sammak_shallow_2019,dyck_accurate_2017}), we foresee a transition distance between the strained Ge and Si layers of much less than $1$~nm. The details of such a Si/Ge transition region do not influence the main findings of the band structure calculations nor the superfluid properties, since the principal parameter affecting these is the distance separating the centres of the two wells. The superfluid gap shown in Fig.\ \ref{Fig:SFGap}a reaches $35$ K ($3$ meV), and the transition temperature could be increased up to a significant fraction of this value by implementing variations of the proposed Si/Ge material stack, for example by including a finite stack of bilayers that eliminates the limitations of a 2D system\cite{VanderDonck2020}.
Such Si/Ge material stacks with up to five bilayers are within reach, considering the recent experimental progress in the deposition of multi-quantum wells on Ge-rich SiGe virtual substrates\cite{grange_atomic-scale_2020}. In this way, the increased transition temperature would allow for the design and development of an entirely new class of dissipationless logic devices and electronics\cite{reddy_bilayer_2009} for CMOS-based cryogenic control of silicon quantum circuits\cite{xue_cmos-based_2020}. \section*{Methods} \vskip-\baselineskip For calculations of the superfluid properties, we take a structure with hole effective mass $m^\star_h=0.05m_e$ in the compressively strained Ge layer, and electron effective mass $m^\star_e=0.19m_e$ in the tensile strained Si layer. We use a uniform dielectric constant determined for Si and Ge quantum wells of equal width in contact, $\epsilon = 2\left(1/\epsilon_{Ge}+1/\epsilon_{Si}\right)^{-1}= 13.7$, with $\epsilon_{Ge}=16.2$ and $\epsilon_{Si}=11.9$. Lengths are expressed in units of the effective Bohr radius, $a^\star_{B} = \hbar^2 4\pi \epsilon_0\epsilon /({m^\star_r}e^2)=18.3$ nm, and energies in effective Rydbergs, $Ry^{\star}=e^2/(8\pi\epsilon_0\epsilon\, a_{B}^{\star})=33$ K. $m^\star_r$ is the reduced effective mass. \subsection*{Mean field equations} \vskip-\baselineskip Because of the different masses, there are distinct electron and hole normal Matsubara Green functions, \begin{align} &G_{e}(\mathbf{k},\tau)=-\left\langle T_\tau\, c(\mathbf{k},\tau) c^\dagger(\mathbf{k},0)\right\rangle\, , \nonumber \\ &G_{h}(\mathbf{k},\tau)=-\left\langle T_\tau\, d(\mathbf{k},\tau) d^\dagger(\mathbf{k},0)\right\rangle\, . \label{Eq:normalGreenFunctions} \end{align} $T_\tau$ is the ordering operator in imaginary time $\tau$. $c^{\dagger}$ and $c$ ($d^{\dagger}$ and $d$) are the creation and destruction operators for electrons (holes) in their respective quantum wells. The corresponding anomalous Green function is, \begin{equation} F(\mathbf{k},\tau)=-\left\langle T_\tau\, c(\mathbf{k},\tau) d(\mathbf{k},0)\right\rangle\, . \label{Eq:anomalousGreenFunctions} \end{equation} In the weak-coupling BCS limit, Eqs.\ (\ref{Eq:normalGreenFunctions}) -- (\ref{Eq:anomalousGreenFunctions}) reduce to, \begin{align} G_{e}(i\omega_n, \mathbf{k})&=\frac{u_\mathbf{k}^2}{(i\omega_n-E^-_\mathbf{k})}+\frac{v_\mathbf{k}^2}{(i\omega_n+E^+_\mathbf{k})}\, ,\nonumber\\ G_{h}(i\omega_n, \mathbf{k})&=\frac{v_\mathbf{k}^2}{(i\omega_n+E^-_\mathbf{k})}+\frac{u_\mathbf{k}^2}{(i\omega_n-E^+_\mathbf{k})}\, ,\nonumber\\ F(i\omega_n, \mathbf{k})&=\frac{u_\mathbf{k}v_\mathbf{k}}{(i\omega_n-E^-_\mathbf{k})}-\frac{u_\mathbf{k} v_\mathbf{k}}{(i\omega_n+E^+_\mathbf{k})}\, , \label{Eq:GreenFunctions} \end{align} where $\omega_n\ (n=1,2,3\dots)$ are the fermionic Matsubara frequencies and \begin{equation} E^{\pm}_{\mathbf{k}} = E_{\mathbf{k}} \pm \delta \xi_{\mathbf{k}}\, ,\qquad E_{\mathbf{k}} = \sqrt{\xi_{\mathbf{k}}^2 + \Delta_{\mathbf{k}}^{2}}\, ,\qquad \delta \xi_{\mathbf{k}} =\frac{1}{2}\left(\xi^h_{\mathbf{k}} - \xi^e_{\mathbf{k}}\right)\, ,\qquad \xi_{\mathbf{k}}=\frac{1}{2}\left(\xi^e_{\mathbf{k}} + \xi^h_{\mathbf{k}}\right). \label{Eq:EnergyTerms} \end{equation} $\xi^e_{\mathbf{k}}= \frac{\hbar^2{k}^{2}}{2 m_{e}^\star}- \mu$ ($\xi^h_{\mathbf{k}}= \frac{\hbar^2{k}^{2}}{2 m_{h}^\star}- \mu$) is the electron (hole) single-particle energy band dispersion in the normal state, with $\mu$ the chemical potential. $\Delta_{\mathbf{k}}$ is the superfluid energy gap.
The Bogoliubov amplitudes are, $u_{\mathbf{k}}^2 = \frac{1}{2} \left(1+\frac{\xi_{\mathbf{k}}}{E_{\mathbf{k}}}\right)$ and $v_{\mathbf{k}}^2 = \frac{1}{2} \left(1-\frac{\xi_{\mathbf{k}}}{E_{\mathbf{k}}}\right)$. We consider only equal electron and hole densities $n$. At zero temperature, the superfluid energy gap can be determined from the usual mean-field equation of BCS theory, even in the strongly interacting BCS-BEC crossover and BEC regimes: \begin{equation} \Delta_{\mathbf{k}}=\frac{1}{L^2}\!\sum_{\mathbf{k}',\omega_n}\!\!V^{sc}_{\mathbf{k}-\mathbf{k}'} F(i\omega_n, k')\!= \!-\frac{1}{L^2}\!\sum_{\mathbf{k}'}\!V^{sc}_{\mathbf{k}-\mathbf{k}'} \frac{\Delta_{\mathbf{k}'} }{2 E_{\mathbf{k}'}}\ , \label{Eq:Delta} \end{equation} where $V^{sc}_{\mathbf{k}-\mathbf{k}'}=V^{sc}_{\mathbf{q}}$ is the attractive screened electron-hole interaction. As expected, the only mass parameter entering in Eq.\ (\ref{Eq:Delta}) is the reduced mass $m^\star_r$. Equation (\ref{Eq:Delta}) is self-consistently solved coupled to the density equation, \begin{equation} n= \frac{2}{L^2} \sum_{\mathbf{k},\omega_n} G_\ell(i\omega_n, k)= \frac{2}{L^2} \sum_{\mathbf{k}} v_{\mathbf{k}}^2 \qquad \ell=e,h\ . \label{Eq:density} \end{equation} For given density $n$, Eq.\ (\ref{Eq:density}) determines the chemical potential $\mu$. \subsection*{Self-consistent screening} \vskip-\baselineskip Because the electron-hole interaction is Coulombic and long-ranged, it is essential to include screening in $V^{sc}_{\mathbf{q}}$. To determine the screening in the presence of a superfluid, we evaluate the density response functions within the Random Phase Approximation (RPA) for the double quantum well system in which the electrons and holes have different masses\cite{SaberiPouya2020}. For the polarization loops, we use the normal and anomalous Green’s functions, Eqs.\ (\ref{Eq:GreenFunctions}). Then the normal polarizabilities in the presence of the superfluid are, \begin{align} \Pi_e(\mathbf{q})&= \frac{2}{L^2} \sum_{\mathbf{k}}\sum_{\omega_n} G_e(i\omega_n, \mathbf{k})G_e(i\omega_n, \mathbf{k}-\mathbf{q})\nonumber\\ &=-\frac{2}{L^2} \sum_{\mathbf{k}} \left[\frac{u^2_{\mathbf{k}}v^2_{\mathbf{k}-\mathbf{q}}}{E^{+}_{\mathbf{k}-\mathbf{q}}+E^-_{\mathbf{k}}}+ \frac{v^2_{\mathbf{k}}u^2_{\mathbf{k}-\mathbf{q}}}{E^{-}_{\mathbf{k}-\mathbf{q}}+E^{+}_{\mathbf{k}}}\right]\, , \label{Eq:Pi_e} \end{align} \begin{align} \Pi_h(\mathbf{q})&= \frac{2}{L^2} \sum_{\mathbf{k}}\sum_{\omega_n} G_h(i\omega_n, \mathbf{k})G_h(i\omega_n, \mathbf{k}-\mathbf{q})\nonumber\\ &=-\frac{2}{L^2} \sum_{\mathbf{k}} \left[\frac{u^2_{\mathbf{k}}v^2_{\mathbf{k}-\mathbf{q}}}{E^{-}_{\mathbf{k}-\mathbf{q}} +E^+_{\mathbf{k}}}+\frac{v^2_{\mathbf{k}}u^2_{\mathbf{k}-\mathbf{q}}}{E^{+}_{\mathbf{k}-\mathbf{q}}+E^{-}_{\mathbf{k}}} \right]\ . \label{Eq:Pi_h} \end{align} The anomalous polarizability for the density response of the superfluid electron-hole pairs is, \begin{align} \Pi_{a}(\mathbf{q})&=\frac{2}{L^2} \sum_{\mathbf{k}}\sum_{\omega_n} F(i\omega_n, \mathbf{k})F(i\omega_n, \mathbf{k}-\mathbf{q})\nonumber\\ &=\frac{2}{L^2}\!\sum_{\mathbf{k}} \frac{\Delta_{\mathbf{k}}}{2E_{\mathbf{k}}}\frac{\Delta_{\mathbf{k}-\mathbf{q}}}{2E_{\mathbf{k}-\mathbf{q}}} \! \left[\!\frac{1}{E^{-}_{\mathbf{k}-\mathbf{q}}\!+\!E^+_{\mathbf{k}}} \!+\!\frac{1}{E^{+}_{\mathbf{k}-\mathbf{q}}\!+\!E^{-}_{\mathbf{k}}}\! \right]. 
\label{Eq:Pi_a} \end{align} For $\Delta_{\mathbf{k}}\equiv 0$, $\Pi_e(\mathbf{q})$ and $\Pi_h(\mathbf{q})$ reduce to the usual Lindhard functions of the normal state, and $\Pi_a(\mathbf{q})$ vanishes. The expression for the static screened electron-hole interaction for unequal masses is, \begin{equation} V^{sc}_{\mathbf{q}}= \frac{V^{eh}_\mathbf{q}} {1-[\Pi_e(\mathbf{q})V^{ee}_\mathbf{q}+ \Pi_h(\mathbf{q})V^{hh}_\mathbf{q}] + 2V^{eh}_\mathbf{q}\Pi_a(\mathbf{q}) +[V^{ee}_\mathbf{q} V^{hh}_\mathbf{q}-(V^{eh}_\mathbf{q})^2] [\Pi_e(\mathbf{q})\Pi_h(\mathbf{q})-\Pi_a^2(\mathbf{q})]} \ . \label{Eq:VeffSF} \end{equation} $V^{ee}_{\mathbf{q}}$ ($V^{hh}_{\mathbf{q}}$) is the bare electron (hole) Coulomb repulsion within one quantum well, and $V^{eh}_{\mathbf{q}}$ is the bare attraction between the electrons and holes in opposite quantum wells: \begin{align} V^{ee}_{\mathbf{q}} =\frac{2\pi e^2}{\epsilon}\frac{1}{|\mathbf{q}|} \mathcal{F}^{ee}_\mathbf{q} \ ; \qquad V^{hh}_{\mathbf{q}} =\frac{2\pi e^2}{\epsilon}\frac{1}{|\mathbf{q}|} \mathcal{F}^{hh}_\mathbf{q} \ ; \qquad V^{eh}_{\mathbf{q}} =-\frac{2\pi e^2}{\epsilon}\frac{e^{-d_c |\mathbf{q}|}}{|\mathbf{q}|} \mathcal{F}^{eh}_\mathbf{q} \ . \label{Eq:bare_interactions} \end{align} We take for the separation parameter $d_c$ the distance between the centres of the two wells. The form-factors $\mathcal{F}_\mathbf{q}$ account for the density distribution of the electrons and holes within their respective finite-width wells\cite{Jauho1993}. We self-consistently solve the superfluid gap equation Eq.\ (\ref{Eq:Delta}), the density equation Eq.\ (\ref{Eq:density}), and the screened interaction in the presence of the superfluid Eq.\ (\ref{Eq:VeffSF}) iteratively, calculating the polarizabilities (Eqs.\ (\ref{Eq:Pi_e}) -- (\ref{Eq:Pi_a})) using the superfluid gaps determined in the preceding iteration. \subsection*{Transition temperature} \vskip-\baselineskip The superfluid transition temperature in this quasi-2D system is determined as a Berezinskii-Kosterlitz-Thouless (BKT) transition\cite{Kosterlitz1973}. For parabolic bands the transition temperature $T_c^{BKT}$ is well approximated by\cite{Botelho2006}, \begin{equation} T_c^{BKT} = \frac{\pi}{2}\rho_s(T_c^{BKT}) \ . \label{T_KT} \end{equation} Within mean-field theory the superfluid stiffness at zero temperature $\rho_s(0)=\hbar^2 n/(8 m^\star_r)$ depends only on the carrier density $n$, independent of the pair coupling strength. We are able to neglect the temperature dependence of $\rho_s(T)$ in Eq.\ (\ref{T_KT}) since $\rho_s(T)$ is approximately constant for temperatures $T\ll \Delta_{\mathrm{max}}$. Thus the transition temperature is linearly proportional to the carrier density, \begin{equation} T_c^{BKT} = \frac{\hbar^2}{16m^\star_r}\pi n \ . \label{T_KT_n} \end{equation} \section*{Data Availability} \vskip-\baselineskip All data generated or analysed during this study are included in this published article and its supplementary information files. \section*{Acknowledgements} \vskip-\baselineskip S.C. acknowledges support of a postdoctoral fellowship from the Flemish Science Foundation (FWO-Vl). G.S. acknowledges support from a projectruimte grant associated with the Netherlands Organization of Scientific Research (NWO). The work was partially supported by the Australian Government through the Australian Research Council Centre of Excellence in Future Low-Energy Electronics (Project No. CE170100039). \section*{Author Contributions} \vskip-\baselineskip S.C., A.R.H., D.N. and G.S.
conceived the idea; G.S. designed the Si/Ge material stack and device architecture; D.N. and F.M.P. supervised the project; S.C., A.P. and S.S.P. developed the theoretical framework, and S.C., S.S.P. and M.V. carried out the calculations. All authors contributed to the analysis and interpretation of the results, and S.C., A.R.H, D.N. and G.S. wrote the paper with input from all authors. \section*{Competing Interests} \vskip-\baselineskip Competing financial interests: The authors declare no competing financial interests. \section*{Supplementary discussion} \vskip-\baselineskip \begin{figure*}[h] \begin{center} \includegraphics[angle=0,width=0.8\textwidth] {Fig5.pdf} \end{center} \caption{\textbf{Excitation energies for the normal and superfluid states for two fixed densities.} The normal state electron (hole) single-particle excitations $\xi^{e}_k$ ($\xi^{h}_k$), and the corresponding modified superfluid state excitations $E^-_k$ ($E^+_k$), scaled to the average Fermi energy $E_F$ (defined with the reduced mass). $\xi_k$ and $E_k$ are the corresponding averages. Densities: \textbf{a}\ \ $n\simeq n_0=5.9 \times 10^{10}$ cm$^{-2}$; \textbf{b}\ \ $n=0.5 \times 10^{10}$ cm$^{-2}$. } \label{Fig:energies} \end{figure*} We discuss here the origin of the differences in the functional behaviour of the electron and the hole polarizabilities in the superfluid state when the electron and hole masses are different. The behaviour of $\Pi_e(\mathbf{q})$ and $\Pi_h(\mathbf{q})$ is driven by the changes that the superfluid gap imposes on the excitation spectrum in going from the normal to the superfluid state. Supplementary Figure\ \ref{Fig:energies} compares the normal state spectrum $\xi^{e}_k$ ($\xi^{h}_k$) for the electron (hole) single-particle excitations with the corresponding superfluid state excitation spectrum $E^-_k$ ($E^+_k$) (Eq.\ (\ref{Eq:EnergyTerms})). The colour-coded shaded areas show the low-energy states in $\xi^{e,h}_k$ that are excluded by the gap from $E^\pm_k$, and thus cannot contribute to the polarizability in the superfluid state. It is this exclusion that weakens the screening. Figures \ref{Fig:energies}a and \ref{Fig:energies}b are for the densities corresponding to Figs.\ \ref{Fig:Pol}a and \ref{Fig:Pol}b in the main text. We recall that in Fig.\ \ref{Fig:Pol}a of the main text, near the onset density, $\Pi_e(\mathbf{q})$ initially grows with increasing $\mathbf{q}$, passes through a maximum, and then goes slowly to zero, while $\Pi_h(\mathbf{q})$ decreases monotonically to zero. We can deduce the cause of this strikingly different behaviour from Fig.\ \ref{Fig:energies}a. The right panel for the pairs shows the familiar behaviour of $E_k= \sqrt{\xi_k^2 + \Delta_k^{2}}$, where $\xi_{\mathbf{k}}=\frac{1}{2}\left(\xi^e_{\mathbf{k}} + \xi^h_{\mathbf{k}}\right)$, which goes through a minimum at $k=k_F$ at an energy equal to $\Delta_k$. In our system, $E_k$ is modified by the positive $\delta \xi_k=\frac{1}{2}\left(\xi^h_{\mathbf{k}} - \xi^e_{\mathbf{k}}\right)$, which takes into account the unequal masses. The middle panel for electrons shows $E^-_k= E_k - \delta \xi_k$ passing through a minimum that is lower than for $E_k$, because of the presence of $\delta \xi_k$. The pronounced minimum in $E^-_k$ leads to the maximum seen in $\Pi_e(\mathbf{q})$ (Fig.\ 3a of the main text). The left panel for holes shows, by contrast, that due to the addition of $\delta \xi_k$, $E^+_k$ has no minimum at all, leading to a $\Pi_h(\mathbf{q})$ that depends monotonically on $\mathbf{q}$.
The net result is that $E^+_k$ is shifted up relative to the normal state $\xi^{h}_k$, while $E^-_k$ closely tracks the normal state $\xi^e_k$. This means that the superfluid gap $\Delta_{\mathbf{k}}$ is markedly less effective at blocking the excitation states for electrons than for holes in the range important for screening, $k<2k_F$. Figure \ref{Fig:Pol}b of the main text shows that the behaviour of $\Pi_e(\mathbf{q})$ markedly changes when the density is decreased. This is because by the time one arrives in the BEC regime, the $\Pi_e(\mathbf{q})$ for the superfluid state has become very similar in behaviour to $\Pi_h(\mathbf{q})$ out to large $\mathbf{q} \lesssim 4k_F$. Figure\ \ref{Fig:energies}b shows in the BEC regime that the very large $\Delta_k$, on the scale of $E_F$, excludes a huge number of low-lying excited states from participating in the screening, with $E^\pm_k$ shifted up in energy relative to $\xi^{e,h}_k$ by more than $30E_F$. Since here $\Delta_k\gg \delta \xi_k$, it follows that $E_k\simeq E^-_k\simeq E^+_k$, so the unequal masses no longer differentiate the polarizability properties. \bigskip \input{MainFile.bbl} \end{document}
\section{Introduction} ``Supernova impostors" are a heterogeneous class of transient exhibiting luminosities between those of classical novae and supernovae (SNe), and a wide variety of light curves and spectral features (Smith et al. 2011; Van Dyk \& Matheson 2012). Some have been broadly characterised as extragalactic analogs to the historic super-Eddington eruptions of the Galactic luminous blue variable (LBV) stars $\eta$\,Carinae and P\,Cygni (Humphreys \& Davidson 1994; Van Dyk 2000; Smith et al. 2011; Smith et al. 2016a; Humphreys et al. 2016). However, the physical mechanisms involved remain unclear. Indeed, the variety of transients classifiable as SN impostors suggests that there are multiple evolutionary channels. Current possibilities include instabilities associated with late-stage nuclear burning (Shiode \& Quataert 2014; Smith \& Arnett 2014), violent binary encounters (Soker 2004; Kashi \& Soker 2010; Smith \& Frew 2011), and stellar mergers involving massive binary star systems (Smith et al. 2016b; Kochanek et al. 2014; Soker \& Kashi 2013). Recent studies of the fading optical--infrared (IR) remnants of luminous transients have shown that objects previously classified as SN\,impostors might actually be terminal explosions after all, in which a stellar core collapses, but with an incomplete or failed expulsion of the stellar mantle. Indeed, the fate of the prototype impostor SN 1997bs (Van Dyk et al. 2000) has recently come under question, based on the unexpectedly low luminosity for the optical-IR remnant relative to the directly identified stellar precursor (Adams \& Kochanek 2015). The fate of other historic transients for which high-quality precursor data were not available have also been the source of ongoing debate (e.g., SN\,1961V; Smith et al. 2011; Kochanek et al. 2011; Van Dyk \& Matheson 2012b), in part because the cooling outflows from nonterminal eruptions can form dust that obscures the star, and also because late-time line emission can result from persistent interaction between the outflow and an extended distribution of slower pre-existing circumstellar material (CSM). These issues underscore the importance of obtaining late-time multiwavelength monitoring observations of SN impostors, in order to track their post-outburst evolution and determine their ultimate fate. \begin{figure} \includegraphics[width=3.3in]{f1.pdf} \caption{{\it HST}/WFPC2 and ACS images (log stretch) of the cool hypergiant precursor of SN\,Hunt\,248 (reproduced from data presented by Mauerhan et al. 2015), from 3374 days before the onset of the 2014 eruption (broad-band filter images; the F658N image is from 3715 days before eruption). North is up and east is toward the left. } \label{fig:precursor} \end{figure} \begin{figure} \includegraphics[width=3.3in]{f2.pdf} \caption{{\it HST}/WFC3 images (log stretch) of the remnant of SN\,Hunt\,248, $\sim1$\,yr after the peak of the 2014 eruption. 
North is up and east is toward the left.} \label{fig:remnant} \end{figure} \begin{center} \begin{table} \caption{{\it HST} photometry of the remnant of SN\,Hunt\,248.} \renewcommand\tabcolsep{3.5pt} \scriptsize \begin{tabular}[b]{@{}lcccc} \hline \hline Instrument/Band & Magnitude & Flux ($\mu$Jy) & MJD & Epoch (days) \\ \hline \hline WFC3/F225W &$25.16\pm0.09$ & $0.068\pm 0.006 $ & 57204.05 & 374 \\ WFC3/F438W & $25.84\pm0.05$ & $0.193\pm 0.010 $ &57204.00 & 374 \\ WFC3/F555W & $25.46\pm0.03$ & $0.243\pm 0.007 $ &57199.87 & 370 \\ ACS/F814W & $24.51\pm0.04$ & $0.386\pm 0.015 $ &57200.38 & 371 \\ \hline \end{tabular}\label{tab:p48} \begin{flushleft} \scriptsize$^\textrm{a}$Uncertainties are statistical. Epochs are given as days from $V$-band peak (MJD\,56830.3; Mauerhan et al. 2015). \\ \end{flushleft} \end{table} \end{center} SN\,Hunt\,248 was a luminous transient in NGC\,5806 classified as a SN impostor. The light curve exhibited a main peak equivalent in luminosity to the peak of $\eta$\,Car's historic outburst in the 1840s, and another subsequent peak of longer duration that was likely the result of interaction between the erupted material and slower pre-existing CSM expelled prior to the outburst (Mauerhan et al. 2015; Kankare et al. 2015). A particularly interesting aspect of SN\,Hunt\,248 is the detection of the luminous precursor star in archival data, shown in Figure\,\ref{fig:precursor} (images reproduced from Mauerhan et al. 2015). Multicolour photometry from the {\it Hubble Space Telescope (HST)} showed that the stellar precursor's position on the Hertzsprung-Russell (HR) diagram was consistent with that of a cool hypergiant star. The subsequent giant eruption from the star in 2014 provided observational support for a hypothesis that cool hypergiants might actually be relatively hot LBV stars enshrouded in an opaque wind that creates an extended pseudophotosphere (Smith \& Vink 2004). Detailed study of the aftermath of the eruption thus provides an interesting opportunity to probe the post-outburst state and recovery of the stellar remnant. Here we present ultraviolet (UV) through IR observations of SN\,Hunt\,248 with the \textit{HST} and the \textit{Spitzer Space Telescope} about 1\,yr after the giant outburst. In \S3 we model the mid-IR data as a source of thermal dust emission. In \S4 we discuss the effects of circumstellar extinction and implications for the nature of the remnant star. The times of all observation epochs are presented as days past $V$-band peak on 2014 June 21 (MJD 56830.3; UT dates are used throughout this paper). A foreground interstellar extinction value of $A_V=0.14$\,mag has been adopted (Mauerhan et al. 2015). \section{Observations} \subsection{{\it HST} Imaging} High-resolution imaging observations of SN\,Hunt\,248 were performed with the {\it HST} Wide-field Camera 3 ({\it HST}/WFC3) on 2015\,June\,26 and 30 (369 and 374 days after the peak of the 2014 eruption) under {\it HST} programmes GO-13684 and GO-13822 (PIs S. Van Dyk and G. Folatelli, respectively). Exposures were obtained in the F225W (NUV), F438W ($B$), F555W ($V$), and F814W ($I$) filters. A point source at the position of SN\,Hunt\,248 is securely detected in all bands, as shown in Figure\,\ref{fig:remnant}. Photometry of the source was extracted from the images using {\sc{dolphot}} (Dolphin 2000).
We tried two different approaches to estimate the background, including the use of an annulus region to measure the sky (FitSky=1) and, alternatively, measuring the sky within the point-spread function (PSF) aperture (FitSky=3, best to use when the field is very crowded). Our annulus-based background subtraction produced the most consistent results for all bands, although the results from each setting are within the respective uncertainties. The photometry is listed in Table\,1. \begin{figure*} \includegraphics[width=7in]{f3.pdf} \caption{Colour composite of the 3.6\,$\mu$m (green) and 4.5\,$\mu$m (red) \textit{Spitzer}/IRAC template images of NGC\,5806 and a Palomar Transient Factory $R$-band image (blue), and the template-subtracted images of the region around SN\,Hunt\,248 (tiled frames).} \label{fig:spitzer} \end{figure*} \begin{center} \begin{table} \caption{\textit{Spitzer}/IRAC photometry of SN\,Hunt\,248.} \renewcommand\tabcolsep{4.pt} \scriptsize \begin{tabular}[b]{@{}lrrrl} \hline \hline MJD & Epoch (days) & 3.6\,$\mu$m & 4.5\,$\mu$m & Programme ID (PI) \\ \hline \hline 55066.9 & $-1763$ & $<9.49$ & $<5.99$ & 61063 (Sheth)\\ 56800.7 & $-30$ & $19.35\pm6.11$ & $11.81\pm5.09$& 10152 (Kasliwal) \\ 56934.6 & 104 & $110.26\pm6.38$ & $96.41\pm5.86$ & 10152 (Kasliwal) \\ 56963.4 & 133 & $65.56\pm6.14$ & $59.70\pm5.91$ & 10139 (Fox) \\ 57155.5 & 325 & $32.72\pm7.82$ &$29.26\pm4.76$ & 11053 (Fox)\\ 57158.2 & 328 & $34.30\pm5.44$ &$29.57\pm5.61$ & 11053 (Fox) \\ \hline \end{tabular}\label{tab:spitzer} \begin{flushleft} \scriptsize$^\textrm{a}$Fluxes are in units of $\mu$Jy. Uncertainties are statistical. Epochs are given as days from $V$-band peak (MJD\,56830.3; Mauerhan et al. 2015). \\ \end{flushleft} \end{table} \end{center} \subsection{\textit{Spitzer} Imaging} SN\,Hunt\,248 was observed on five epochs during the \textit{Spitzer Space Telescope} Warm Mission utilising channels 1 (3.6\,$\mu$m) and 2 (4.5\,$\mu$m) of the Infrared Array Camera (IRAC; Fazio et al. 2004). We acquired fully coadded and calibrated data from the {\it Spitzer} Heritage Archive\footnote{http://sha.ipac.caltech.edu/applications/Spitzer/SHA/} from programme IDs 61063 (PI K. Sheth), 10152 (PI M. Kasliwal), and 10139 and 11053 (PI O. Fox). The images for all epochs were registered with an earlier pre-outburst image of the host galaxy, which was used as a subtraction template. The template-subtracted images are shown in Figure\,\ref{fig:spitzer}. We performed aperture photometry on the template-subtracted (PBCD / Level 2) images using a 6-pixel aperture radius and aperture corrections listed in Table\,4.7 of the \textit{Spitzer} IRAC Instrument Handbook\footnote{http://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/}. The infrared photometry is listed in Table~2. \begin{figure*} \includegraphics[width=5.7in]{f4.pdf} \caption{Long-term light curve of SN\,Hunt\,248 (coloured filled circles), including the late-time mid-IR \textit{Spitzer} (black open and filled circles) and UV--optical {\it HST} data (coloured 5-pointed stars) presented here. Optical and near-IR photometric data of the precursor and main outburst are from Mauerhan et al. (2015) and Kankare et al. (2015). The optical photometry of NGC\,4490-OT is also shown as coloured triangles (magenta is ground-based clear-filter photometry; red, green, and blue are (respectively) late-time F814W, F555W, and F438W filter photometry from {\it HST}; see Smith et al. 2016b); mid-IR \textit{Spitzer} data on NGC\,4490-OT are shown as grey solid and dotted curves.
The optical light curve of SN\,1997bs is also displayed for comparison (dashed triple-dotted grey curve; Van Dyk et al. 2000). The green horizontal dashed line represents the faintest pre-outburst $V$-band absolute magnitude of the precursor star (see Mauerhan et al. 2015). The $V$-band light curve of the purported stellar merger V838\,Mon is also shown (green solid curve; Bond et al. 2003).} \label{fig:lc} \end{figure*} \begin{figure} \includegraphics[width=3.2in]{f5.pdf} \caption{$B-V$ colour evolution of SN\,Hunt\,248. The horizontal lines represent the value (thick line) and uncertainty envelope (thinner lines) of the stellar precursor detected with {\it HST} at $-$3391 days (see Mauerhan et al. 2015). Filled dots are data from Kankare et al. (2015). The filled 5-pointed star symbol represents our most recent measurement from Table\,1, which exhibits the same value as the stellar precursor.} \label{fig:color} \end{figure} \begin{figure} \includegraphics[width=2.9in]{f6.pdf} \caption{UV--IR SED of SN\,Hunt\,248, including the 2014 outburst (black squares), the precursor (green triangles), and the remnant (black circles). The expected UV--optical SED of an echo of the 2014 eruption (see text) is shown as grey circles. The SED of massive stellar merger candidate NGC\,4490-OT is also shown for comparison (orange filled circles; data from Smith et al. 2016b, and reddened by adopting their extinction estimate with the extinction relation of Cardelli, Clayton, \& Mathis (1989)). } \label{fig:sed} \end{figure} \section{Results \& Analysis} The absolute-magnitude light curve of SN\,Hunt\,248 is shown in Figure\,\ref{fig:lc}, including data from Mauerhan et al. (2015) and Kankare et al. (2015). At $\sim1$\,yr after the peak of the 2014 outburst, the source has dropped to a brightness of $V=25.46\pm0.03$\,mag, which is a factor of $\sim10$ fainter in the optical than the faintest pre-outburst state ever measured for the stellar precursor in the year 2005 ($V=22.91\pm0.01$\,mag; see Mauerhan et al. 2015). Yet, as illustrated in Figure\,\ref{fig:color}, the $B-V$ colour of $0.38\pm0.06$\,mag is consistent with no change from the precursor value of $0.39\pm0.02$\,mag, while the $V-I$ colour of $0.95\pm0.05$\,mag has become only slightly redder than the precursor value of $0.81\pm0.01$\,mag. The latest epoch of near-IR $H$ and $K$ photometry from Kankare et al. (2015) nearly coincides with our {\it HST} UV--optical data from days 369--374 and \textit{Spitzer} mid-IR photometry from day 325. We thus combined these data to construct a spectral energy distribution (SED) for the source, shown in Figure\,\ref{fig:sed}; the strong IR component of the SED is clearly seen.
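The factor of $\sim10$ optical fading quoted above follows directly from the magnitude difference between the remnant and the faintest precursor state; as a one-line check (Python):
\begin{verbatim}
# Magnitude difference dm corresponds to a flux ratio of 10**(0.4*dm).
V_remnant, V_precursor = 25.46, 22.91
ratio = 10.0 ** (0.4 * (V_remnant - V_precursor))
print(f"optical fading: factor {ratio:.1f}")   # ~ 10.5
\end{verbatim}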
\begin{table*} \caption{Dust-model parameters for SN\,Hunt\,248 at epochs 104--328 days post-peak$^a$.} \tabletypesize{\footnotesize} \begin{tabular}[b]{@{}lccccccc} & \multicolumn{3}{c}{Graphite} & & \multicolumn{3}{c}{Silicates} \\ & \multicolumn{3}{c}{------------------------------------------} & & \multicolumn{3}{c}{------------------------------------------} \\ {$a$ ($\mu$m)} & {$T_{\rm d}$ (K)} & {$M_{\rm d}$ (M$_{\odot}$)} & {$L_{\rm d}$ (L$_{\odot}$)} & &{$T_{\rm d}$ (K)} & {$M_{\rm d}$ (M$_{\odot}$)} & {$L_{\rm d}$ (L$_{\odot}$)} \\ \hline \multicolumn{8}{c}{104 days}\\ \hline 0.10 & 868 & 2.7e-5 & 3.0e+6 & & 1247 & 4.1e-5 & 5.5e+6 \\ 0.30 & 865 & 9.4e-6 & 2.6e+6 & & 1140 & 4.9e-5 & 5.5e+6 \\ 0.50 & 1086 & 3.6e-6 & 2.6e+6 & & 1038 & 5.6e-5 & 5.2e+6 \\ 0.75 & 1636 & 1.6e-6 & 4.7e+6 & & 940 & 6.4e-5 & 4.5e+6 \\ 1.00 & 1670 & 5.9e-7 & 5.2e+6 & & 856 & 7.3e-5 & 3.7e+6 \\ \hline \multicolumn{8}{c}{133 days}\\ \hline 0.10 & 830 & 2.0e-5 & 1.7e+6 & & 1071 & 3.1e-5 & 3.2e+6 \\ 0.30 & 827 & 7.0e-6 & 1.5e+6 & & 981 & 3.7e-5 & 3.2e+6 \\ 0.50 & 1024 & 2.7e-6 & 1.5e+6 & & 894 & 4.2e-5 & 3.0e+6 \\ 0.75 & 1487 & 1.2e-6 & 2.4e+6 & & 819 & 4.8e-5 & 2.7e+6 \\ 1.00 & 1514 & 1.6e-6 & 2.6e+6 & & 870 & 5.4e-5 & 2.3e+6 \\ \hline \multicolumn{8}{c}{325 days}\\ \hline 0.10 & 846 & 9.0e-6 & 8.8e+5 & &1199 & 1.4e-5 & 1.6e+6 \\ 0.30 & 843 & 3.2e-6 & 7.5e+5 & &1101 & 1.7e-5 & 1.6e+6 \\ 0.50 & 1050 & 1.2e-6 & 7.6e+5 & & 1006 & 1.9e-5 & 1.5e+6 \\ 0.75 & 1549 & 5.4e-7 & 1.3e+6 & & 914 & 2.2e-5 & 1.3e+6 \\ 1.00 & 1579 & 7.2e-7 & 1.4e+6 & & 835 & 2.5e-5 & 1.1e+6 \\ \hline \multicolumn{8}{c}{328 days}\\ \hline 0.10 & 882 & 7.7e-6 & 9.7e+5 & &1279 & 1.2e-5 & 1.7e+6 \\ 0.30 & 879 & 2.7e-6 & 8.2e+5 & &1166 & 1.4e-5 & 1.7e+6 \\ 0.50 & 1109 & 1.0e-6 & 8.3e+5 & & 1059 & 1.6e-5 & 1.6e+6 \\ 0.75 & 1697 & 4.4e-7 & 1.6e+6 & & 957 & 1.9e-5 & 1.4e+6 \\ 1.00 & 1733 & 5.9e-7 & 1.7e+6 & & 870 & 2.1e-5 & 1.2e+6 \\ \hline \end{tabular} \begin{flushleft} \scriptsize$^\textrm{a}${Only upper limits on dust parameters were obtainable for our earliest epoch at $-30$ days post-peak (not listed; see text). Average uncertainties for fit parameters $M_{\rm d}$, $T_{\rm d}$, and $L_{\rm d}$ are estimated at 30\%, 25\%, and 30\%, respectively (see text).} \end{flushleft} \end{table*} \begin{figure*} \includegraphics[width=3.3in]{f7a.pdf} \includegraphics[width=3.3in]{f7b.pdf} \caption{Infrared photometry of SN\,Hunt248 on day 328 post-peak compared with the SEDs of graphite (left panel) and silicate (right panel) model dust sources. $H$ and $K$ points are day-332 measurements from Kankare et al. (2015). } \label{fig:modfit} \end{figure*} \subsection{Dust modeling} The \textit{Spitzer} data were analysed under the assumption that the source of the mid-IR emission is hot dust. We fit the SED using simple models for graphite and silicate composition (Fox et al. 2010, 2011), with dust mass ($M_{\rm d}$) and temperature ($T_{\rm d}$) as free parameters. The flux is given by \begin{equation} \label{eqn:flux2} F_\nu = \frac{M_{\rm d}\,B_\nu(T_{\rm d})\,\kappa_\nu (a)} {d^2}, \end{equation} \noindent where $\kappa_\nu (a)$ is the dust absorption coefficient as a function of grain radius, and $d$ is the distance of the dust from the observer (Hildebrand 1983). We performed our calculations for grain sizes in the range 0.1--1.0\,$\mu$m, looking up their associated $\kappa_\nu (a)$ values from the Mie scattering derivations discussed by Fox et al. (2010, see their Figure\,4). 
For simplicity, we assume optically thin dust of a constant grain radius and emitting at a single equilibrium temperature (e.g., Hildebrand 1983). The data were fit using the IDL routine {\tt MPFIT}. Table~3 lists the best-fitting parameters for $T_{\rm d}$, $M_{\rm d}$, and the dust luminosity $L_{\rm d}$ for graphite and silicates over a range of grain radii, for epochs 104 days through 328 days. The average statistical uncertainties for $M_{\rm d}$, $T_{\rm d}$, and $L_{\rm d}$ are estimated at $\sim30$\%, $\sim25$\%, and $\sim30$\%, respectively. This estimate was obtained by performing several fits on the \textit{Spitzer} data after offsetting the photometry by the photometric errors. For our earliest epoch just before the onset of the main eruption, 30 days before peak, satisfactory model fits for $M_{\rm d}$ and $T_{\rm d}$ were not obtainable, limiting the luminosity to $L<2\times10^6\,{\rm L}_{\odot}$. For the successfully modeled epochs thereafter, we measure no significant change with time for the dust parameters of a given model, within our quoted uncertainty ranges. For the models of graphite dust grains with $a=0.1$ and 0.3\,$\mu$m, the temperature remains 800--900~K, and inferred dust masses range between $\sim3\times10^{-6}\,{\rm M}_{\odot}$ and $\sim3\times10^{-5}\,{\rm M}_{\odot}$, with luminosities on the order of a few $\times10^6\,{\rm L}_{\odot}$. For larger grain sizes of $a=0.5$\,$\mu$m up to 1\,$\mu$m, the range of potential temperatures is hotter (1024--1733\,K). The masses of these larger grain models are systematically lower by a factor of a few, while the luminosities are comparable to those of the smaller grain models. For silicate grains, the model masses, temperatures, and luminosities are all slightly higher than for graphite---most notably for dust mass. However, the temperatures of the larger grain silicates are comparable to those of the smaller grain graphite models. We note, however, that $M_{\rm d}$ should probably be regarded as a lower limit, since there might also be a cooler component of dust to which our \textit{Spitzer} observations at 3.6\,{$\mu$m} and 4.5\,{$\mu$m} are not sensitive. Although our model parameters were fit using only the \textit{Spitzer} photometry at 3.6 and 4.5~$\mu$m, the epoch on day 328 post-peak was only 5 days before a ground-based near-IR $H$ and $K$ measurement from Kankare et al. (2015), so we used those data to further discriminate between the various SED models. This last epoch is also particularly important in that it is close in time to our UV--optical {\it HST} photometry of the remnant, and so can be used to estimate the expected UV--optical extinction from the dust parameters we derived (see \S4.1.2). As shown in Figure\,\ref{fig:modfit}, the results suggest that the average grain size for both the silicate and graphite models is likely to be substantially larger than 0.1\,$\mu$m. Indeed, simple blackbody distributions of any temperature are too broad to fit the SED of SN\,Hunt\,248, and dust models for small grain sizes are also too broad and significantly overestimate the flux in the near-IR; the source is clearly a greybody. For graphite, the SED appears most consistent with 0.3\,$\mu$m grains, while for silicate dust, even larger grain sizes in the range 0.75--1.0\,${\mu}$m appear to provide the best match to the day 328 data.
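To make the fitting procedure concrete, the sketch below (Python/SciPy) implements Eq.\ (\ref{eqn:flux2}) and fits the day-328 \textit{Spitzer} photometry. The $\kappa_\nu$ values are illustrative placeholders of roughly the right magnitude only, not the Mie-theory values of Fox et al. (2010) used for Table~3, so the recovered parameters should agree with the table only at the order-of-magnitude level; the snippet also evaluates the equivalent blackbody radius used in the next step.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

D_CM = 26.4e6 * 3.0857e18     # distance to NGC 5806: 26.4 Mpc in cm
L_SUN = 3.846e33              # erg s^-1
H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants
SIGMA_SB = 5.670e-5           # erg cm^-2 s^-1 K^-4

def planck_nu(nu, T):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

# IRAC frequencies; kappa_nu values are assumed placeholders (cm^2 g^-1).
NU = C / np.array([3.6e-4, 4.5e-4])       # 3.6 and 4.5 micron, in Hz
KAPPA = np.array([1.5e3, 1.0e3])

def dust_flux_uJy(nu, M_d_msun, T_d):
    """Optically thin dust emission, returned in microjanskys."""
    f_cgs = M_d_msun * 1.989e33 * planck_nu(nu, T_d) * KAPPA / D_CM**2
    return f_cgs * 1e29       # erg s^-1 cm^-2 Hz^-1  ->  microJy

# Day-328 fluxes from Table 2: 34.30+/-5.44 and 29.57+/-5.61 microJy.
(M_d, T_d), _ = curve_fit(dust_flux_uJy, NU, [34.30, 29.57],
                          sigma=[5.44, 5.61], p0=[1e-5, 900.0])
print(f"M_d ~ {M_d:.1e} Msun, T_d ~ {T_d:.0f} K")  # ~1e-5 Msun, ~900 K

# Equivalent blackbody radius for the day-328 graphite fit of Table 3
# (a = 0.3 micron: L_d = 8.2e5 L_sun, T_d = 879 K).
L_d, T_gr = 8.2e5 * L_SUN, 879.0
r_bb = (L_d / (4.0 * np.pi * SIGMA_SB * T_gr**4)) ** 0.5
print(f"r_bb ~ {r_bb:.1e} cm")                     # ~ 2.7e15 cm
\end{verbatim}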
The size of the emitting region can be estimated by considering the radius of an equivalent blackbody having the luminosity and temperature indicated by the model fits, \begin{equation} r_{\rm bb} = \bigg(\frac{L_{\rm d}}{4 \pi \sigma T_{\rm d}^4}\bigg)^{1/2}. \end{equation} \noindent Focusing on the last epoch at 328 days, the best-matching graphite ($a=0.3\,\mu$m) and silicate ($a=0.75$--1.0\,$\mu$m) dust models indicate respective radii of $2.7\times10^{15}$\,cm and (3.1--3.3) $\times10^{15}$\,cm. We therefore assume an approximate value of $3\times10^{15}$\,cm for the following analysis and interpretation. \section{Discussion} \subsection{The nature of the remnant} \subsubsection{The origin of the dust} The SED of the UV--IR remnant of SN\,Hunt\,248, shown in Figure\,\ref{fig:sed}, appears very similar to that of NGC\,4490-OT (Smith et al. 2016b). In both cases, the dust is hot and emits like a greybody, and the UV--optical counterpart has noteworthy UV flux, especially NGC\,4490-OT. As shown in Figures~\ref{fig:lc} and \ref{fig:sed}, the mid-IR brightness evolution of both objects exhibits very similar plateaus, and they both have IR luminosities that are comparable to the optical luminosities of their directly identified stellar precursors; taken at face value, this appears consistent with heating of the dust by a luminous surviving star. Interestingly, the estimated radius of the dust ($3\times10^{15}$\,cm) matches the expected expansion radius of the ejecta after 328 days, considering the measured outflow speed of $v\approx1200$\,km\,s$^{-1}$ (Mauerhan et al. 2015). Thus, the measurements seem consistent with dust condensation in the ejecta from the 2014 event. \begin{figure} \includegraphics[width=3.5in]{f8.pdf} \caption{Optical luminosity of the stellar precursor of SN\,Hunt\,248 (filled triangles; Mauerhan et al. 2015) and its mid-IR limits (black filled triangles with downward-facing arrows). The photometry was corrected only for interstellar extinction ($A_V=0.14$\,mag, $R_V=3.1$). The solid black curve is the SED of a star with $T_{\rm eff}=7000$\,K (Castelli \& Kurucz 2003), scaled to $\log(L/{\rm L}_{\odot})=6.07$, and reddened by an additional component of grey circumstellar extinction ($A_V=0.86$\,mag, $R_V=5.4$). The SED equivalent to the previously estimated stellar parameters from Mauerhan et al. (2015), which did not account for circumstellar extinction, is represented by the black dashed curve. For comparison purposes, we also show the SEDs for the Galactic cool-warm hypergiants VY\,CMa, $\rho$\,Cas, and IRC$+$10420 (blue dashed triple-dotted, green dotted, and orange dashed curves, respectively; Shenoy et al. 2016), HR5171a (red solid curve; Humphreys et al. 1971), and IRAS\,17163$-$3907 (cyan dashed-dotted curve; Lagadec et al. 2011). The following distances were used to calculate the luminosity: SN\,Hunt\,248 (26.4\,Mpc; Mauerhan et al. 2015), $\rho$\,Cas (2.5\,kpc; Humphreys 1978), VY\,CMa (1.2\,kpc; Shenoy et al. 2016), IRC$+$10420 (5\,kpc; Shenoy et al. 2016), HR5171 (3.6\,kpc; Chesneau et al. 2014a), and IRAS\,17163$-$3907 (4.2\,kpc, average of range estimate from Lagadec et al. 2011). The IRAS\,17163$-$3907 data were corrected for interstellar extinction in this work, adopting $A_V=2.1$\,mag (Lagadec et al. 2011) and the extinction relation of Cardelli, Clayton, \& Mathis (1989).
The other SEDs from the literature account only for interstellar extinction.} \label{fig:pre_sed} \end{figure} Alternatively, pre-existing dust may have been swept to large radius by the ejecta. After all, the spectra near peak brightness did exhibit the signatures of CSM interaction (see Mauerhan et al. 2015). Assuming a gas-to-dust ratio of 100, the dust masses inferred by our model ($\sim10^{-6}$ to $10^{-5}\,{\rm M}_{\odot}$) imply a total CSM mass of $\sim10^{-4}$ to $10^{-3}\,{\rm M}_{\odot}$. Therefore, if the 2014 eruption ejected only 0.1\,M$_{\odot}$ (which would be modest compared to the $>10\,{\rm M}_{\odot}$ ejected by $\eta$\,Car's historic event), the pre-existing CSM would not be massive enough to effectively decelerate the ejecta. Dust in the circumstellar environment could therefore have been swept to the expansion radius, if the grains survived the UV radiation and shock of the event. We speculate that this might explain the relatively large sizes of dust grains inferred by our models for the IR emission---i.e., the smallest circumstellar grains could have been destroyed by the 2014 outburst, leaving a distribution skewed toward larger sizes. Unfortunately, our \textit{Spitzer} mid-IR upper limits on the stellar precursor are not sufficiently deep to provide a meaningful constraint on the pre-existing dust mass, so we cannot tell if the dust mass was lower before the eruption than in the aftermath. For example, Figure\,\ref{fig:pre_sed} shows the SED of the precursor to SN\,Hunt\,248 along with those of several Galactic cool hypergiants that have measured circumstellar dust masses in the literature. Our limits are consistent both with a system like $\rho$\,Cas, which has a rather low estimated dust mass of $\sim3\times10^{-8}$\,M$_{\odot}$ (Jura \& Kleinmann 1990), and with more extreme dusty systems like IRC$+$10420 (Shenoy et al. 2016) and IRAS\,17163$-$3907 (Lagadec et al. 2011), the latter of which has a much larger dust mass of $\sim0.04$\,M$_{\odot}$. The comparison in Figure\,\ref{fig:pre_sed} does, however, suggest that the IR excess from a system such as VY\,CMa, with a total dust mass of $\sim0.02$\,M$_{\odot}$ (Harwit et al. 2001; Muller et al. 2007), would have been detectable at 4.5\,${\mu}$m. It is therefore plausible that the $\sim10^{-5}$\,M$_{\odot}$ dust mass we inferred \textit{post}-eruption could have been pre-existing, yet not detectable by our \textit{Spitzer} observations. Finally, we should address the possibility that the IR (and perhaps optical) emission of the remnant is the result of a light echo of the 2014 outburst off of outer dusty CSM. In such a scenario there is both delayed scattering of UV--optical light and thermal IR reprocessing of the fraction of light that gets absorbed by the dust. However, assuming that such an echo is dominated by light from the peak of the outburst and obeys a $\propto\lambda^{-1.5}$ wavelength dependence (e.g., see Fox et al. 2015), while suffering the same extinction as the precursor, the expected UV--optical SED is totally inconsistent with the observed SED at $+$370 days (see Figure\,\ref{fig:sed}, grey curve). The thermal-IR remnant also appears to be inconsistent with thermal reprocessing of an echo, as the dust temperature requires a luminosity that is far above the peak of the 2014 event. This was determined using the same line of reasoning invoked for the analysis of the remnant of NGC\,4490-OT (see Smith et al. 2016b, their \S3.2.3).
Assuming the ratio of the efficiencies of UV absorption to IR emission is $Q_{\textrm{UV}}/Q_{\textrm{IR}}=0.3$ (Smith et al. 2016b), the luminosity required to heat dust at a distance $r$ to a temperature $T$ can be expressed as $L/{\rm L}_{\odot}\approx5.7\times10^{12}\,(T_{\rm d}/1000\,\textrm{K})^4\,(r/\textrm{pc})^2$. At 328 days the minimum distance of the echo-heated dust is $r\approx0.3$\,pc. Thus, the range of possible dust temperatures inferred from our model fits and their uncertainties (650--1450\,K) requires a peak outburst luminosity of (1--25) $\times10^{11}$\,L$_{\odot}$, which is three orders of magnitude higher than the observed peak of the 2014 outburst. Furthermore, the temperature evolution of a thermally reprocessed echo is expected to evolve with time as $T\propto t^{-0.5}$ (Fox et al. 2011, 2015), and so we would have expected the temperature to have dropped from $\sim830$\,K at 133 days to $\sim530$\,K at 325 days; instead, the temperature evolution is consistent with no change between these epochs. We therefore conclude that a light echo is inconsistent with the available data, and therefore is not the origin of the late-time UV--optical source and its thermal counterpart. The hypotheses of dust synthesis in the ejecta and swept-up CSM dust are far more consistent with the data. \subsubsection{Circumstellar extinction and intrinsic stellar parameters} If the UV--optical component of the SED is from a surviving star and the thermal emission is from circumstellar dust that absorbs stellar radiation, then we should consider the potential effect of dust absorption on the optical properties of the remnant. Under the assumption of a spherically symmetric shell geometry of thickness $\Delta r$, the optical depth of the dust at a given wavelength can be expressed by \begin{equation} \tau_\lambda = \kappa_\lambda(a)\,\rho \, \Delta r = \kappa_\lambda(a) \frac{M_{\rm d}}{4 \pi r_{\rm bb}^2}, \end{equation} \begin{figure} \includegraphics[width=3.4in]{f9.pdf} \caption{SED of the SN\,Hunt248 optical (infrared) remnant at 370 (325) days (black filled circles; corrected only for interstellar extinction $A_V=0.14$\,mag and $R_V=3.1$). The magenta curves are the SED of a model $T_{\rm eff}=15,000$\,K star (spectral type B4--B5; Castelli \& Kurucz 2003), reddened by $E(B-V)=0.48$\,mag (to illustrate the effect of circumstellar extinction) for two different extinction laws. The solid (dashed) curves represent extinction laws having $R_V=5.4$ (3.1). The reddened models have been vertically scaled to match the $B$ and $V$ photometry. The blue curves are for a model SED of a $T_{\rm eff}=8500$\,K star (spectral type A3--A4) reddened by $E(B-V)=0.38$\,mag, shown to demonstrate that cooler models greatly underestimate the UV photometry. For reference, the dotted magenta curve near 0.2\,${\mu}$m wavelength shows the effect that a Galactic interstellar UV opacity bump would have on the B4--B5, $E(B-V)=0.48$\,mag, $R_V=3.1$ model. Our silicate IR emission model for $a=1.0$\,$\mu$m is also shown (dashed-dotted grey curve).} \label{fig:red_sed} \end{figure} \noindent where $\rho$ is the density of the dust shell and $\kappa_\lambda(a)$ is the absorption coefficient for the dust of a particular grain radius and at a particular wavelength.
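The numerical estimate that follows can be sketched as below; this is our illustration using the day-328 graphite values from Table~3 and a $\kappa_V$ read off Fox et al. (2010), not code from the original analysis.
\begin{verbatim}
# Minimal numerical sketch of the blackbody radius and V-band optical depth.
import numpy as np

sigma_sb, Lsun, Msun = 5.670e-5, 3.846e33, 1.989e33   # cgs
# Blackbody radius from the day-328 graphite (a = 0.3 micron) fit:
L_d, T_d = 8.2e5 * Lsun, 879.0
r_bb = np.sqrt(L_d / (4 * np.pi * sigma_sb * T_d**4))  # ~2.7e15 cm

# V-band optical depth and extinction, ignoring scattering:
kappa_V = 14700.0                                      # cm^2 g^-1 (Fox et al. 2010)
M_d = 2.7e-6 * Msun                                    # g
tau_V = kappa_V * M_d / (4 * np.pi * r_bb**2)          # ~0.9
A_V = 1.086 * tau_V                                    # ~1 mag before albedo term
\end{verbatim}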
The $V$-band ($\lambda=0.555\,\mu$m) absorption coefficients for our best-matching graphite ($a=0.3\,\mu$m) and silicate ($a=0.75$--1.0\,$\mu$m) models are $\sim14700$\,cm$^{2}$\,g$^{-1}$ and $\sim2000$--2600\,cm$^{2}$\,g$^{-1}$, respectively (see Fox et al. 2010, their Figure\,4). Using the model dust masses in Table\,3 and the radius of $3\times10^{15}$\,cm derived in \S3.1, we estimate $V$-band optical depths of $\tau_V \approx 0.9$ for graphite and $\sim0.7$--0.9 for silicates. If we ignore the effect of grain albedo and optical scattering for the moment, then the extinction can be approximated by 1.086\,$\tau$, in which case we obtain $A_V\approx0.8$--1\,mag. The total extinction from the ISM and hot-dust component would therefore be approximately the same for both the graphite and silicate models, with $A_V\approx1.0$\,mag. This implies $M_V\approx-7.6$\,mag for the remnant and, thus, supergiant luminosity class. However, a more realistic treatment of the effective extinction accounts for the scattering albedo, $\omega$, of the grains: $A=1.086\,(1-\omega)^{1/2}\,\tau_V$. For a standard ISM-like distribution of graphitic (silicate) grains, the scattering albedo is 0.5\,(0.9) and would thus reduce the required extinction to 0.8\,(0.4) mag (Kochanek et al. 2012a). However, for large grains with $a=0.3$--1.0\,$\mu$m, the effective albedo could be considerably lower ($\omega\approx0.1$; Mulders et al. 2013) and thus scattering might have only a small impact on the effective extinction. Without reliable information on the albedo of the grains, all we can say is that the \textit{expected} extinction from the same hot-dust component that is responsible for the IR emission is $A_V<1$\,mag. \begin{figure} \includegraphics[width=3.3in]{f10.pdf} \caption{Modification of the HR diagram for LBVs and their kin from Mauerhan et al. (2015, see their Figure~8). The magenta coloured square is the previous estimate for the precursor star from Mauerhan et al. (2015), uncorrected for possible circumstellar extinction. The red square represents the revised best-matching model of the precursor SED, corrected for grey circumstellar extinction (see text). The blue square indicates the aftermath of the eruption, a hot B4--B5 supergiant, also corrected for grey circumstellar extinction. The luminosities of the remnant and the revised precursor were calculated by integrating the UV--IR SEDs of the best-matching models shown in Figures~\ref{fig:pre_sed} and \ref{fig:red_sed}. } \label{fig:HR} \end{figure} The effect of the estimated extinction on the colours of the star depends on the assumed value of total-to-selective extinction $R_V$, defined as $A_V/E(B-V)$, which is sensitive to the dust chemistry and grain-size distribution. If we were to hypothetically assume $A_V=1$\,mag and an ISM-like value of $R_V=3.1$ for SN\,Hunt248, then the associated $E(B-V)\approx0.3$\,mag would imply an intrinsic colour of $(B-V)_0 \approx 0.1$\,mag, corresponding to a spectral type in the range A3--A4 (Fitzgerald 1970). However, such a spectral type provides a poor match to the UV--optical SED, as illustrated by Figure\,\ref{fig:red_sed}. A3--A4 stars exhibit a relative UV luminosity that is an order of magnitude lower than that of the optical bands. By contrast, the strong UV flux of the data indicates that the star is significantly hotter, with a substantial Balmer continuum flux.
Specifically, after matching stellar model SEDs having a wide range of temperatures (from Castelli \& Kurucz 2003) and over a wide range of $A_V$ and $R_V$, we found that the best match to the four bands of our measured UV--optical SED is provided by a star with $T_{\rm eff}=15,000$\,K (appropriate for a B4--B5 star of supergiant luminosity class; Zorec et al. 2009) with extinction parameters $A_V=2.6$\,mag and $R_V=5.4$ (with no ISM-like UV ``bump'' in the extinction law). Cooler models cannot supply enough Balmer continuum, while hotter stellar SEDs with $T_{\rm eff} >15,000$\,K produce too much UV flux, and cannot provide a good match for any of the wide range of $A_V$ and $R_V$ values we attempted. We conservatively estimate a temperature uncertainty of $\delta T=1000$\,K for the remnant star. Clearly, the extinction value of $A_V=2.6$\,mag implied by our best-matching stellar SED is significantly higher than our estimates of the expected absorption from the hot-dust component, which suggested $A_V<1$\,mag. However, a higher value of extinction would not be surprising, given that the hot dust responsible for the IR emission probably comprises only a fraction of the total dust mass; indeed, it is plausible that there is cooler dust in the system that does not emit strongly at 3--5\,$\mu$m. Furthermore, the value of $A_V=2.6$\,mag implied by the SED would also explain the factor of $\sim10$ drop in apparent brightness of the remnant star relative to the precursor. Meanwhile, the high $R_V$ we inferred from the SED might actually be appropriate for circumstellar dust having a grain distribution skewed toward large sizes. For example, $\sim40$\% of the extinction in the interacting SN\,2010jl has been attributed to large graphitic dust grains with maximum sizes above $a=0.5\,\mu$m and possibly as large as $a>1.3\,\mu$m, which result in an estimated $R_V=6.4$ (Gall et al. 2014). In another example, observations of the red hypergiant VY\,CMa necessitate a circumstellar total-to-selective extinction value of $R_V=4.2$ (Massey et al. 2005), also potentially the result of a grain distribution skewed toward larger sizes. In addition, large grains in $\eta$\,Car's Homunculus nebula have been invoked to explain the apparently grey extinction of the central source (Andriesse, Donn, \& Viotti 1978; Robinson et al. 1987; Davidson et al. 1999; Smith \& Ferland 2007; Kashi \& Soker 2008). The integrated extinction-corrected luminosity of the best-matching B4--B5 SED is $L\approx1.2\times10^6\,{\rm L}_{\odot}$. We note that this is approximately twice the luminosity of our previous estimate for the cool hypergiant precursor (Mauerhan et al. 2015). However, that earlier work focused on matching stellar models to the $B$ and $V$ photometry alone (ignoring the poor fit to the $I$-band photometry) and assumed no circumstellar extinction. We thus revisited the precursor photometry in this work using stellar SED models from Castelli \& Kurucz (2003), reddened by an additional component of circumstellar extinction. We find that the best-matching stellar model is one in which there is substantial \textit{grey} circumstellar extinction, similar to our conclusion for the hotter B4--B5 remnant. As shown in Figure\,\ref{fig:pre_sed}, we obtain a reasonable match to the $B$, $V$, and $I$ precursor photometry using a stellar model SED with $T_{\rm eff}=7000$\,K (Castelli \& Kurucz 2003), reddened by $E(B-V)=0.16$\,mag and $R_V=5.4$ ($A_V=0.86$\,mag).
This temperature is more or less equivalent to our previous estimate in Mauerhan et al. (2015), and thus remains consistent with the yellow (F-type) hypergiant classification. We note that a standard ISM-like $R_V=3.1$ yields poor matches for a wide range of stellar models and extinction values. Moreover, stellar models with effective temperatures below 7000\,K exhibit $B$-band fluxes well below the photometry. Based on our attempted matches to models with a variety of temperatures, we conservatively estimate a temperature uncertainty of $\delta T=1000$\,K for the precursor. The integrated unreddened luminosity of our best-matching stellar model (shown in Figure\,\ref{fig:pre_sed}) is also $L\approx1.2\times10^6\,{\rm L_{\odot}}$, equivalent to that of the best-matching B4--B5 remnant model shown in Figure\,\ref{fig:red_sed}. Taken at face value, this is consistent with a temperature change of $\delta T\approx8000$\,K at constant luminosity of $\sim1.2\times10^6\,{\rm L}_{\odot}$. The revised luminosity estimate of the precursor warrants an examination of the star's associated transition in the HR diagram, which we show in Figure\,\ref{fig:HR}. After correcting the precursor photometry for the circumstellar extinction discussed above ($A_V=0.86$\,mag and $R_V=5.4$), the precursor star would occupy a region more luminous than the cool hypergiants, yet still within the observed temperature range exhibited by stars of this class (note, however, that circumstellar extinction may not be adequately addressed in other objects classified as cool hypergiants). After the eruption, the hotter remnant has migrated blueward, and lies in between the S\,Dor and red instability strips. Future observations will determine whether the remnant continues to migrate in the HR diagram toward the hotter S\,Dor instability strip occupied by quiescent LBVs, or if increasing extinction from ongoing dust condensation in the ejecta pushes it redward again. The revised parameters of the stellar precursor warrant reanalysis of the star's initial mass as well, which was previously estimated at $\sim30\,{\rm M}_{\odot}$ (Mauerhan et al. 2015). Figure\,\ref{fig:pre_iso} shows the data, after correcting for the purported grey circumstellar extinction parameters discussed above (for the remnant, $E(B-V)=0.48$\,mag and $R_V=5.4$; for the precursor, $E(B-V)=0.16$\,mag and $R_V=5.4$), along with evolutionary tracks from the Geneva rotating stellar models for 50\,M$_{\odot}$ and 60\,M$_{\odot}$ (Ekstr{\"o}m et al. 2012); the data appear to most closely match (but are slightly below) the 60\,M$_{\odot}$ model, which at the locations of both the remnant and precursor is undergoing core-He burning (Ekstr{\"o}m et al. 2012). This revised initial mass is approximately twice as high as the circumstellar extinction-free estimate by Mauerhan et al. (2015). Interestingly, the photometry of the remnant and precursor, which was corrected for different values of circumstellar extinction and based on the best SED matches, is consistent with no significant change in stellar luminosity before and after the 2014 eruption. \begin{figure} \includegraphics[width=3.3in]{f11.pdf} \caption{The precursor (open black square) and remnant (filled black square) photometry of SN Hunt 248 on the HR diagram, after correcting for our estimated circumstellar extinction parameters (see text \S4.1.2). The evolutionary tracks are from Geneva rotating stellar models (Ekstr{\"o}m et al.
2012) at solar metallicity for initial masses of 50\,M$_{\odot}$ (green curve) and 60\,M$_{\odot}$ (magenta curve). } \label{fig:pre_iso} \end{figure} \subsection{The 2014 outburst, revisited} \subsubsection{Massive binary merger-burst?} The energy source of the 2014 eruption is uncertain. The structure of the outburst light curve, the outflow velocity, and the large-amplitude pre-outburst variability detected over prior decades might provide clues. We can speculate that the cool-hypergiant precursor was a massive interacting binary, perhaps similar to HR\,5171 (Chesneau et al. 2014a), and that its large pseudophotosphere was the signature of a common envelope. If this is the case, it is plausible that SN\,Hunt\,248's 2014 eruption was driven by a violent binary encounter, common-envelope ejection, or a merger-burst marking the coalescence of two massive stars (see Paczy{\'n}ski 1971; Vanbeveren et al. 1998, 2013; Podsiadlowski et al. 2010; Langer 2012; Justham et al. 2014; Portegies Zwart \& van den Heuvel 2016). Indeed, such events might be more common than previously thought (Kochanek 2014), and their transients might actually explain a substantial fraction of SN impostors and LBV eruptions, including the historic outburst of $\eta$\,Car (Portegies Zwart \& van den Heuvel 2016). \begin{figure} \includegraphics[width=3.45in]{f12.pdf} \caption{Peak outburst luminosity versus outflow velocity for SN\,Hunt\,248 and other stellar merger candidates, including NGC\,4490-OT (Smith et al. 2015) and the sample presented by Pejcha et al. (2016a, their Figure~21); $\eta$\,Car is also included. Uncertainties in expansion velocity are shown where available.} \label{fig:merger} \end{figure} Figure\,\ref{fig:merger} shows the peak outburst luminosity versus outflow velocity\footnote{We note that the outflow velocities of SN\,Hunt\,248 and NGC\,4490-OT were measured from their P\,Cygni absorption minima (Mauerhan et al. 2015; Smith et al. 2016b), whereas the outflow velocities of the sample in Pejcha et al. (2016a) were measured mostly by H$\alpha$ line widths, and $\eta$\,Car's velocity measurement is from detailed spectroscopic analysis of the Homunculus nebula (Smith et al. 2003).} for SN\,Hunt\,248 (Mauerhan et al. 2015); NGC\,4490-OT (Smith et al. 2016b); the sample of merger candidates presented by Pejcha et al. (2016a, their Figure~21); M101-OT, also considered a merger candidate or binary common-envelope ejection event (Blagorodnova et al. 2017); and $\eta$\,Car (Smith et al. 2003; Smith \& Frew 2011). Interestingly, SN\,Hunt\,248 is consistent with the apparent trend exhibited by this sample of merger candidates. Smith et al. (2016b) interpreted NGC\,4490-OT as a stellar merger involving a star of similarly high mass to SN\,Hunt248 ($\sim30$\,M$_{\odot}$), and reiterated the suggestion that $\eta$\,Car's historic eruption was the result of a massive merger. Indeed, $\eta$\,Car's position in Figure\,\ref{fig:merger} also fits in with the apparent trend exhibited by other merger candidates. V1309\,Sco was almost certainly a true merger, based on the exquisite light curve that showed the rapidly decreasing orbital period of an inspiraling binary (Tylenda et al. 2011). V838 Mon was thought to be a similar merger involving a B-type star (Tylenda et al. 2005; Munari et al. 2007), perhaps in a triple system with another tertiary B-type star (Chesneau et al. 2014b).
Both V1309\,Sco and V838\,Mon exhibited double-peaked light curves of shorter duration and lower peak luminosity than those of SN\,Hunt\,248 and NGC\,4490-OT, but similar in multipeaked morphology. If they are mergers, the relatively long durations of SN\,Hunt248 and NGC\,4490-OT, compared with V1309\,Sco and V838\,Mon, are to be expected from their relatively high progenitor masses. However, it is important to note that simulations of common-envelope outflows that are shock-energised by the binary's orbital energy input (e.g., Pejcha et al. 2016a; MacLeod et al. 2017) have not yet reproduced the high outflow velocities we have measured for SN\,Hunt248 ($\sim1200$\,km\,s$^{-1}$; Mauerhan et al. 2015), so the apparent trend in Figure\,\ref{fig:merger} has yet to be theoretically established at the high-mass end. More explosive forms of energy input that might result in fast $\sim1000$\,km\,s$^{-1}$ outflow velocities have been proposed to occur during the common-envelope evolution of massive stars (see Podsiadlowski et al. 2010; Soker \& Kashi 2013; Tsebrenko \& Soker 2013), but it is not clear if such effects would result in a continuation of, or deviation from, the apparent trend in Figure\,\ref{fig:merger}. A multipeaked light curve might be a natural consequence of a stellar merger or common-envelope ejection. A close binary of evolved massive stars that are headed for a merger will experience mass transfer, and this can occur even if the primary does not fully fill its Roche lobe, but instead fills it with material from a slow wind (e.g., wind Roche-lobe overflow, WRLOF; Abate et al. 2013). RLOF may be nonconservative and WRLOF is nonconservative by nature (i.e., some mass is lost rather than exchanged). The process leads to the buildup of CSM with enhanced density in the equatorial plane of the binary, forming a spiral pattern that tightens with increasing radius and forms a dense torus-like structure surrounding the binary (Pejcha et al. 2016a, 2016b; Ohlmann et al. 2016). The subsequent explosive outflow from a merger-burst will encounter this toroidal CSM distribution and generate radiation from the resulting interaction (producing multiple peaks in the light curve). Any interaction-induced dust formation will mirror the geometry of the pre-existing CSM. Relatedly, the circumstellar environment of the purported post-merger system V838\,Mon exhibits an equatorial overdensity of dust several hundred AU in extent (Chesneau et al. 2014b). Hydrodynamic simulations have also shown that equatorially enhanced dust formation should be expected in the aftermath of mergers (Pejcha et al. 2016a). In comparing SN\,Hunt248 to stellar mergers, we should note that the B4--B5 spectral type we have estimated for the remnant would be much hotter than that of the immediate aftermath of the purported \textit{complete} merger V838 Mon, which became the coolest supergiant ever observed with $T\approx2000$\,K (L3 spectral type; Loebman et al. 2014). The cool source is presumably the inflated merger product, apparently contracting on a thermal timescale (Chesneau et al. 2014b). The relatively hot spectral type of the remnant of SN\,Hunt248 suggests that, if the eruption did indeed stem from a merging binary, then the individual stars might have avoided a complete merger while ejecting their common envelope. We speculate that the purported pseudophotosphere of the cool hypergiant was destroyed with the ejection of the inflated common envelope, revealing the stellar photosphere(s) of the hotter B4--B5 star(s) inside.
With regard to a binary origin, it is possible that the light of the remnant is dominated by a companion to the eruptive source, or perhaps even a tertiary companion to a progenitor binary system that merged. The latter idea, although very speculative at this point, is motivated by the discovery of a tertiary B-type companion in the V838 Mon system, which eventually became heavily reddened by expanding ejecta dust $\sim5$\,yr after the event (Wisniewski et al. 2008; Tylenda et al. 2011). This comparison warrants continued monitoring of SN\,Hunt248 and NGC\,4490-OT. Tertiary stars of triple systems could play an important role in the merger of the tighter pair, as has been suggested for $\eta$\,Car (Portegies Zwart \& van den Heuvel 2016). \subsubsection{Peculiar core-collapse supernova?} Finally, we discuss the possibility that the 2014 eruption of SN\,Hunt248 was a terminal explosion. This speculation is warranted, given renewed deliberation on the fates of transients previously classified as nonterminal SN impostors, including the prototype SN impostor SN\,1997bs, and SN\,2008S (Adams \& Kochanek 2015; Adams et al. 2016). Like SN\,Hunt248, SN\,1997bs exhibited relatively narrow spectral lines (no obvious sign of high-velocity ejecta), peaked at a luminosity below that of typical core-collapse SNe, and had a much shorter duration than common SNe~II-P. Interestingly, at $\sim1$\,yr post-eruption, SN\,1997bs exhibited an optical remnant very similar in brightness to that of SN\,Hunt248 (see Figure\,\ref{fig:lc}), but which continued to fade during subsequent coverage (Kochanek et al. 2012b). Remarkably, the most recent optical-IR data on SN\,1997bs appear to be consistent with a terminal explosion, as few plausible combinations of obscuring dust and surviving stellar luminosity can explain the late-time data. Could SN\,Hunt\,248 have been a terminal event, similar to what has been suggested for SN\,1997bs? If so, then the UV--optical remnant could either be a companion star, or it could be residual SN emission that coincidentally appears similar to the attenuated B4--B5 supergiant SED we constructed. In the latter case, we might expect the light curve of the optical remnant to continue evolving similarly to SN\,1997bs (see Figure\,\ref{fig:lc}), underscoring the need for continued UV--IR observations. \section{Summary and concluding remarks} We have presented space-based observations of the aftermath of SN\,Hunt248 with \textit{HST} and \textit{Spitzer}. The UV--optical SED is consistent with a B4--B5 supergiant attenuated by grey circumstellar extinction. Our modeling of the \textit{Spitzer} data suggests that the dust responsible for the IR emission is composed of relatively large grains ($a \gtrsim 0.3\,\mu$m), has a mass of $\sim10^{-6}$--$10^{-5}\,{\rm M}_{\odot}$ (depending on whether it is graphitic or silicate), and a temperature of $T_d\approx900$\,K. The large grain size indicated by our modeling results is consistent with the grey extinction we infer for the UV--optical remnant. However, the extinction expected from the hot-dust component alone is significantly below the amount suggested by the best-matching UV--optical SED, prompting us to speculate on the presence of cooler dust not detected by our 3.6 and 4.5\,$\mu$m photometry. Future mid-IR observations with the \textit{James Webb Space Telescope}~(\textit{JWST}) could reveal such cooler dust.
We revised our analysis of the precursor-star photometry, and showed that the SED is well matched by an F-type supergiant that also suffers grey circumstellar extinction but of lesser magnitude than the remnant. Comparison of the extinction-corrected photometry to rotating stellar models indicates that the initial mass of the star could be $\sim60\,{\rm M}_{\odot}$, approximately twice the value estimated by Mauerhan et al. (2015). We interpreted the 2014 outburst of SN\,Hunt248 in the context of binary mergers, as in the very similar case of NGC\,4490-OT (Smith et al. 2016b). If such an interpretation is correct, then the hot B4--B5 spectral type of the byproduct might suggest that the binary avoided a complete merger during the ejection of the common envelope. In this interpretation, it could be that the ejection of the common envelope resulted in the destruction of the cool hypergiant pseudophotosphere suggested by Mauerhan et al. (2015), and prompted the star's transition to B4--B5 spectral type. This hypothesis, of course, requires that the remnant light, particularly the UV flux, is dominated by the eruptive star, and not a binary companion or unrelated neighboring source. The nature of the stellar aftermath and the 2014 eruption will be elucidated further with future UV through IR monitoring of the source using {\it HST} and \textit{JWST}. Specifically, additional observations to track the evolution of the SED will allow for the construction of more complex models involving a surviving central source (or sources) attenuated by an evolving dust component. If dust has continued to condense in the ejecta since the last observations, then we expect that the UV--optical extinction will increase, regardless of whether the central light is from the eruptive source or a binary companion that has also been engulfed by the ejecta. If dust formation has ceased, however, then we might observe a future restrengthening of the optical flux, as the optical depth of an expanding dusty ejecta should decrease with geometric expansion over time as $\tau \propto t^{-2}$ (Kochanek et al. 2012b). In both cases, if dust is in a continually expanding unbound outflow, then it will also have cooled, and the 1--5\,$\mu$m IR excess will fade as the flux shifts to longer wavelengths. On the other hand, if future observations reveal that the dust has remained hot and emitting at near-IR wavelengths, it would indicate that there could be an additional circumstellar dust component close to the stellar source, perhaps similar to the case of $\eta$\,Car (e.g., Smith et al. 2010) or dusty Wolf-Rayet binaries (e.g., Williams et al. 2012). \section*{Acknowledgements} \scriptsize This work is based in part on observations made with the NASA/ESA {\it Hubble Space Telescope}, obtained from the Data Archive at the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS5-26555. This work is also based in part on observations and archival data obtained with the {\it Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA; support was provided by NASA through an award issued by JPL/Caltech. A.V.F.'s supernova group is also supported by Gary \& Cynthia Bengier, the Richard \& Rhoda Goldman Fund, the Christopher R. Redlich Fund, the TABASGO Foundation, and the Miller Institute for Basic Research in Science (U.C. Berkeley).
\section*{Acknowledgements} The authors gratefully acknowledge that the proposed research is a result of the research project ``IIP-Ecosphere'', granted by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) via funding code 01MK20006A. \section*{Author contributions statement} All authors provided critical feedback and helped shape the research. Y.W. and J.R. contributed to the development and evaluation of the proposed methodology. Y.W. and S.L. contributed towards the development of the evaluation metrics. J.U. provided invaluable domain knowledge and helped formulate the research problem. J.R., S.W., and G.P. equally conceived the original idea and experimental design. S.W. and G.P. coordinated and supervised the project. All authors reviewed the manuscript. \section*{Introduction} \label{sec:intro} Time-series forecasting is increasingly being used for predicting future events within business and industry to enable informed decision making~\cite{CHAN16,ZHAO14}. In this paper we evaluate its potential to revolutionize automated vehicle manufacturing, where real-time information is collected by a \emph{production data acquisition} (PDA) system. Errors in interlinked manufacturing systems have dire consequences, such as production delays and even production system failure. In industrial manufacturing, downtimes are associated with high costs. To counteract downtimes, research and development has so far focused on predictive maintenance of equipment~\cite{arena2022predictive} and the use of alternative manufacturing routes through the production process~\cite{denkena2021scalable}. However, these approaches do not explicitly focus on delays (micro-disturbances) in individual process steps, which are propagated throughout the process chain and amplified along the way. The optimal utilisation of a fully automated car body production line depends on the individual station-based work steps completing within their scheduled cycle times. However, statistically significant disturbances are frequently detected. In particular, \emph{source errors} (typically logged by the PDA system, e.g., \say{No components available.}) may not only impact the current station, but also have a detrimental effect on the downstream workstations (hereinafter referred to as \emph{stations}), resulting in \emph{knock-on errors\xspace} and delays. Even minimal delays that are barely noticeable by humans can result in high additional costs. While time-series forecasting for car-body production is challenging (due to discontinuities, spikes and segments~\cite{FRAU15}), deviations in the manufacturing process can be identified through comprehensive production data acquisition and the structured evaluation of these data. However, process delays and anomalies are currently identified through rule-based classifiers that are manually programmed and maintained using extensive domain knowledge. In addition, further efforts are incurred in the interpretation of the processed data. This prevents production staff from rapidly deploying targeted countermeasures. To the best of our knowledge no approach currently exists that automatically: i.)~learns to classify both source and knock-on errors\xspace; ii.)~establishes a link between errors; and iii.)~measures the knock-on effect of source errors. In this work we take steps towards solving these challenges using machine learning (ML).
Our contributions can be summarized as follows: \noindent{\textbf{i.)}} We introduce an ML-based \emph{vehicle manufacturing analysis system} (VMAS) for process monitoring and cycle time optimization. The system is designed to detect delays and malfunctions in the production process early and automatically, without manual effort. Furthermore, it identifies cause-effect relationships and predicts critical errors using sequence-to-sequence (seq2seq) models. \noindent{\textbf{ii.)}} To enable a fair comparison between different seq2seq architectures for predicting errors in this context, we introduce a novel \emph{Composite Time-weighted Action} (CTA) metric. Our metric allows stakeholders to weight the sequences of predictions output by our model, and choose to what extent immediate action duration predictions are prioritized over distant ones. \noindent{\textbf{iii.)}} Our VMAS is evaluated on PDA system data from the car body production of Volkswagen Commercial Vehicles. This includes the benchmarking of a number of popular seq2seq models for learning cause-effect relationships, including LSTM, GRU and Transformer. Surprisingly, our evaluation shows the prevalence of source and knock-on errors\xspace, which occur in 71.68\% of action sequences. The evaluation of the prediction component, meanwhile, shows that the Transformer outperforms LSTM and GRU models, and is capable of accurately predicting the durations of up to seven actions into the future. \section*{Background} \subsection*{Interlinked manufacturing system} \label{sec:interlinked_manufactring system} In 1994, Klages described a form of interlinked manufacturing system~\cite{KLAG94}. In interlinked manufacturing systems, especially transfer lines for the large-scale production of, for example, automotive parts, the individual machining units are often connected via a rigid transport system. The transfer lines are generally used to manufacture exactly one workpiece in large quantities. The transport system ensures the simultaneous transport of the workpieces from machining unit to machining unit of the transfer line. However, it also couples the processing units in such a way that all processing units must function together in order to ensure the function of the overall transfer line system~\cite{KLAG94}. Nowadays, more flexible manufacturing systems exist. In [GRUN04] a definition of a flexible manufacturing system (FMS) is presented: ``A flexibly interlinked multi-machine system in which several complementary or substituting CNC machines, usually machining centres, measuring machines and others, are linked via a computer-aided workpiece transport system. The automatically operated workpiece storage systems for raw and finished parts are also partly linked to the transport system.'' This description also applies to the interlinked manufacturing systems considered in this work. \section*{Vehicle Manufacturing Analysis System} In this section we introduce our vehicle manufacturing analysis system (VMAS), which we developed according to the cross-industry standard process for data mining (CRISP-DM)~\cite{chapman1999crisp}.
Our use-case has two separate databases that store \emph{cycle times} and \emph{error reports} data, respectively. The PDA system in our use-case registers and stores action duration tuples~$u$ in the cycle times database. The data are processed by our VMAS, which consists of two main components: 1.) an error classification module for identifying source and knock-on errors\xspace within our dataset; and 2.) a duration prediction module, trained to predict the time required for $n$ future actions. We describe each component in detail below, and a flowchart can be found in Figure~\ref{Flowchart}. \subsection*{Module 1: Error Classification} We begin with an actions dataset $\mathcal{D}_a$ and an error reports database that stores timestamped error logs as well as the duration of the logged errors. Each sample $x \in \mathcal{D}_a$ is a sequence of action duration tuples $x = (u_0, u_{1}, u_{2}, \ldots, u_n)$, where $n$ is the number of actions executed during a \emph{complete sequence}. The error classification module of our workflow allows us to identify the most significant errors within our dataset, and distinguishes source from knock-on errors\xspace. More specifically, this module allows us to split samples from our dataset into four subsets: {normal} $\mathcal{D}_{n}$, {source errors} $\mathcal{D}_{s}$, {knock-on errors\xspace} $\mathcal{D}_{k}$ and {misc} $\mathcal{D}_{m}$. This splitting of the dataset into sub-sets serves two purposes: i.)~The classification into $\mathcal{D}_{s}$ and $\mathcal{D}_{k}$ helps the stakeholder to conduct an automated analysis of all actions, and it eliminates the need for manual and often time-consuming inspection of actions; ii.)~During preliminary trials we found that samples from $\mathcal{D}_{m}$ are exceedingly rare and disturb the training of the seq2seq models. Therefore, the error classification module also provides a valuable preprocessing step prior to training our seq2seq models to predict future delays. Below we first discuss our approach for labelling our samples, and then formally define the conditions for a sequence $x$ to belong to one of the four subsets. We note that for our VMAS there is an assumption that \emph{all} source errors are \emph{logged errors}. \textbf{Labelling:} We use the maximum likelihood estimation (MLE) method for the labelling of anomalous behavior. For each action $a$, a normal (Gaussian) distribution is sought that fits the existing data distribution with respect to the frequency of each duration (for an example see Figure~\ref{fig:peak_detection}). The density function of the normal distribution contains two parameters: the expected value $\mu$ and standard deviation $\sigma$, which determine the shape of the density function and the probability corresponding to a point in the distribution. The MLE method is a parametric estimation procedure that finds the $\mu$ and $\sigma$ that seem most plausible for the distribution of the observation $z$: \begin{equation} \label{eq:4.1_from_original} f(z \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(z-\mu)^2}{2\sigma^2}\right). \end{equation} The density function describes the magnitude of the probability of $z$ coming from a distribution with $\mu$ and $\sigma$. The joint density function can be factorised as follows: \begin{equation} \label{eq:4.2_from_original} f(z_1, z_2, ..., z_n \mid \vartheta) = \Pi^n_{i=1}f(z_i \mid \vartheta). \end{equation} For fixed observations $z_1, z_2, ..., z_n$, the joint density function can be interpreted as a function of the parameter $\vartheta$ alone.
This leads to the likelihood function: \begin{equation} \label{eq:4.3_from_original} L(\vartheta) = \Pi^n_{i=1}f_\vartheta(z_i). \end{equation} The value of $\vartheta$ is sought for which the sample values $z_1, z_2, ..., z_n$ have the largest density function. Therefore, the higher the likelihood, the more plausible a parameter value $\vartheta$ is. As long as the likelihood function is differentiable, the maximum of the function can be determined. Thus, the parameters $\mu$ and $\sigma$ can be obtained. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{img/histogram_MLE.pdf} \caption{Peak detection in a histogram of durations for a specific action utilizing MLE as a threshold; peaks above the MLE threshold (red) are considered significant errors. } \label{fig:peak_detection} \end{figure} Next, we seek to identify \emph{high-frequency peaks} with respect to the durations $d^a$ for an action $a$ that exceed the nominal duration $d^a_{norm}$. We are interested in significant errors, where we use the MLE threshold to determine if an error is significant or not. We denote significant errors as $d^a_{sig}$. These abnormal and distinct durations indicate a recurring behaviour. We formally define the criteria for each sub-set below: \begin{itemize} \item \textbf{Source errors} are samples where for each complete sequence $x$, we have at least one action duration that is considered critical, of statistical significance, and is accompanied by an error message. More formally: a complete action sequence $x$ is considered a \emph{source error sequence} $x \in \mathcal{D}_{s}$ iff there exists an action duration tuple $u \in x$, where the duration is $d^a_{sig}$ and there is a corresponding error message in the error reports database. \item \textbf{Knock-on errors} meet the same criteria as source errors, but lack an accompanying error message for $d^a_{sig}$. Therefore, a complete action sequence $x$ is considered a \emph{knock-on error\xspace sequence} $x \in \mathcal{D}_{k}$ iff there exists an action duration tuple $u \in x$, where the duration is $d^a_{sig}$ and there is not a corresponding error message in the error reports database. \item \textbf{Normal} samples do not include $d^a_{sig}$. Therefore, a complete sequence $x$ is considered a \emph{normal sequence} $x \in \mathcal{D}_{n}$ iff for all $u \in x$ there does not exist a duration $d^a_{sig}$. \item \textbf{Misc.} contains two types of complete action sequences: i.) where for an action $u$ there is a duration $d^a_{sig}$ that is above a defined global threshold $d^a_{globalmax}$, meaning the duration is either intended (e.\,g.,\ the production line is paused), or staff are handling them; and ii.) where $x$ consists only of durations $d$ that exceed the nominal duration, but each of low significance, i.\,e.,\ not exceeding the corresponding MLE threshold. \end{itemize} It is worth noting that $\mathcal{D}_{n} \cup \mathcal{D}_{s} \cup \mathcal{D}_{k}$ may contain individual $d^a$ above the nominal duration, but below the threshold determined by the MLE, and therefore are errors of low significance. There can also exist an intersection between source and knock-on errors\xspace. Furthermore, the labelling of knock-on errors\xspace is deliberately modular, as different methods can be applied here based on the stakeholder's requirements; a minimal sketch of the labelling criterion is given below.
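The following sketch illustrates the criteria above in Python. It is our illustration rather than the production implementation: the helper names (e.g., \texttt{has\_error\_log}) are placeholders, and the second misc condition (sequences consisting only of low-significance delays) is omitted for brevity.
\begin{verbatim}
# Minimal sketch of the labelling criterion (our illustration).
import numpy as np

def significant_durations(durations, d_norm):
    """Durations whose empirical frequency exceeds the fitted Gaussian density."""
    values, counts = np.unique(durations, return_counts=True)
    rel_freq = counts / counts.sum()
    mu = values[np.argmax(counts)]                   # mode as expected value
    sigma = np.sqrt(np.mean((durations - mu) ** 2))  # MLE of sigma with mu fixed
    pdf = np.exp(-(values - mu) ** 2 / (2 * sigma**2)) \
          / np.sqrt(2 * np.pi * sigma**2)
    return set(values[(rel_freq > pdf) & (values > d_norm)])

def classify_sequence(x, sig_lookup, has_error_log, d_globalmax):
    """x: list of (action, duration) tuples for one complete sequence."""
    if any(d > d_globalmax for _, d in x):
        return "misc"            # intended stoppage or manually handled delay
    sig = [(a, d) for a, d in x if d in sig_lookup[a]]
    if not sig:
        return "normal"          # may still contain low-significance delays
    if any(has_error_log(a, d) for a, d in sig):
        return "source"          # significant delay with a logged error
    return "knock-on"            # significant delay without a logged error
\end{verbatim}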
Naturally, the choice of labelling method will impact the subsequent training of our seq2seq models, and therefore their predictions. \subsection*{Module 2: Action Duration Prediction} \label{subsec:seq2seq} While our error classification module assigns labels to past errors, our second module focuses on the prediction of future errors. Upon removing misc samples, we utilize our dataset to train seq2seq models to predict knock-on errors\xspace. Given a sequence of action duration tuples, our objective is to predict the time required by each of the next $n$ steps. We therefore convert the data received from the error classification module into a dataset containing pairs $(x, y) \in \mathcal{D}$, where each $x$ is a sequence of action duration tuples $x = (u_{t-n}, u_{t-n+1}, u_{t-n+2}, \ldots, u_t)$, and $y$ contains the durations of the $n$ actions that follow, $y = (d^a_{t+1}, d^a_{t+2}, \ldots, d^a_{t+n})$. Using these data, we train and evaluate popular seq2seq models, including LSTM~\cite{hochreiter1997long}, GRU~\cite{chung2014empirical} and the Transformer~\cite{VASW17}. The latter is of particular interest, as it represents the current state-of-the-art for a number of seq2seq tasks. Vaswani et al.~\cite{VASW17} presented the Transformer architecture for Natural Language Processing (NLP) and other transduction tasks. Previous RNN/CNN architectures pose a natural obstacle to the parallelization of sequence processing. The Transformer replaces the recurrent architecture with an attention mechanism and encodes the position of each symbol in the sequence. Attention relates distant positions of the input and output sequences directly, and these computations can take place in parallel, significantly shortening training time. At the same time, sequential computation is reduced: the number of operations required to relate two symbols is constant ($O(1)$), regardless of their distance from each other in the sequence~\cite{VASW17}. Next we consider a novel metric for fairly evaluating models of different architectures -- in particular regarding the number of steps $n$ -- using a single scalar. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{img/flowchart.pdf} \caption{Flowchart of our vehicle manufacturing analysis system (VMAS). First the PDA system data is processed by our Error Classification Module, resulting in four sub-sets: source errors, knock-on errors\xspace, normal and misc. The resulting source and knock-on error\xspace sets can then be used by our stakeholders for obtaining valuable insights w.\,r.\,t.\ causes of delays. Next, upon excluding misc samples, we use our data for training sequence-to-sequence models for predicting future delays. } \label{Flowchart} \end{figure} \section*{Data preparation} \subsection{Error labelling} First, a file of ``Time\_difference'' values for an action (e.g., R04 process) is imported. It contains only two columns, ``Sequence'' and ``Time\_difference'' (see Figure 4.10). ``Sequence'' denotes the sequence number of the event in the associated action database, and ``Time\_difference'' denotes the duration of the action. A Time\_difference value appears at every second row, because each action consists of a start event and an end event; the value of Time\_difference (i.e., the duration of the action) is documented at the end event. To simplify counting, all duration values are converted to integers; a minimal sketch of this import step is given below.
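As an illustration, the import and counting step might look as follows in Python with pandas; the column names follow the description above, but the file name is hypothetical.
\begin{verbatim}
import pandas as pd

# Hypothetical file name; columns as described: Sequence, Time_difference.
df = pd.read_csv("Time_difference_R04_Prozess.csv")
durations = df["Time_difference"].dropna().astype(int)  # value at end events only
counts = durations.value_counts()                       # occurrences per duration
\end{verbatim}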
By counting the number of occurrences of the different durations, the duration with the most occurrences is obtained, i.e., the main peak or mode (see Figure 4.11). Based on their sequence numbers, all data points from the main peak are found in the corresponding action database and marked as normal data. If only one duration is found, all data from that action are considered normal. The next step is to find the correct second peak. To do this, a theoretical normal (Gaussian) distribution is fitted to the existing data distribution using the MLE method introduced above (Equations~(\ref{eq:4.1_from_original})--(\ref{eq:4.3_from_original})). Here, all duration values of an action are understood as the observations, and the mode, i.e., the duration with the most occurrences, is defined as the expected value $\mu$. This leaves only one unknown parameter, the standard deviation $\sigma$. The implementation of the determination of $\sigma$ in Python is shown in the appendix (see Figure 8.35). After this, the parameters of the normal distribution for the corresponding data and the total probability density function are determined. If the proportion of the number of occurrences of a duration relative to the total data exceeds the value of the corresponding probability density function, the data with this duration are selected as data from the second peak (as well as the third peak, etc.), i.e., as potential original errors (source errors). Next, the selected data are compared with the data in the error message database, e.g., whether the start and end time of the error lie within the period of the action and whether information about the error, such as Area and Station, matches the information of the action. All corresponding data are marked as original errors and the rest are marked as drag errors (knock-on errors).
In addition, the error count is also documented (see the two rightmost columns in Figure 4.12). In the context of this work, we focus only on the actions in station 7240 with vehicle number 0021. Finally, all action data are marked as normal, original error, drag error or undefined according to the above steps, and each action forms a new labelled file. \textbf{Integrate Training Data} After error labelling, all labelled files are integrated to obtain a new labelled raw production database. Thus, a training database is generated with action number, error-type label and error count (Figure 4.13). \subsection{Modelling} \textbf{Pre-processing of the training data} Initially, only the rows with an AC number in the training database are selected. Such rows contain the timestamp and duration of the associated action, which define the time period of the action; the rows with event numbers are therefore no longer required. Columns such as Station and Car Code are then deleted, since only the data from station 7240 with Car Code 0021 are considered in this work. In the future, data from other stations with different car types will be added. In addition, the rows with AC 000, 006 and 013 are excluded. These actions stand for ``Takt'' (cycle), ``Rob 05 Prozess'' and ``Rob 04 Prozess'', respectively (see Figure 4.8). The action Takt describes a complete action sequence of a vehicle in station 7240, and the actions Rob 05 Prozess and Rob 04 Prozess describe all production actions of Robot 05 and Robot 04, respectively. Because the training database already contains the information of every individual action, keeping these aggregate actions would duplicate information and negatively affect the training of the model. AC 000 (cycle), however, is temporarily retained, as it still plays a role in the next steps. Next, all data are converted to integers or left unchanged as needed, and finally normalised. The timestamp is encoded as the number of seconds elapsed since 01.01.2020. The event is converted to an integer from 0 to the number of actions. The error types normal, original error, drag error and undefined are encoded as the integers 1 to 4, respectively. The values of Time\_diff(s) and Error Count remain unchanged. The processed database is shown in Figure 4.14, and the distribution of error types in the training data is shown in Figure 4.15. After pre-processing, a total of 162,339 rows of data are obtained, including 89,465 normal actions, 20,817 drag errors, 664 original errors and 51,393 undefined data points. \textbf{Filtering outliers or outlier sequences} In practice, there are oversized outliers among the durations. Although they appear to be rare, such data can have a negative impact on the training of the model. Probable causes are downtime during holidays or random failures. To minimise this impact, all data defined as outliers are excluded. Since there is no canonical maximum duration (Max. Duration) of an action yet, different values of Max. Duration (e.g., 600, 400, 200 seconds) are tested. Deleting each of these data points individually is disadvantageous, because it changes the original sequence of actions. An alternative is to exclude all production sequences that contain an outlier. The cycle action, i.e., AC 000, is understood as the beginning of a production sequence; a production sequence is therefore delimited by two consecutive AC 000 entries. An example of a production sequence with an outlier is shown in Figure 4.16, and a minimal sketch of this sequence-level filtering is given below.
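The sequence-level filtering (APS) can be sketched as follows; this is our illustration, assuming a DataFrame whose \texttt{action} column contains the AC numbers.
\begin{verbatim}
# Minimal sketch of the sequence-level outlier filtering (APS).
import pandas as pd

def filter_sequences(df, max_duration=600):
    # a new production sequence starts at each AC 000 row
    seq_id = (df["action"] == 0).cumsum()
    # keep only sequences in which no action exceeds the maximum duration
    ok = df.groupby(seq_id)["duration"].transform("max") <= max_duration
    return df[ok]
\end{verbatim}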
When only the individual outliers are eliminated (AA), numerous rows of data are deleted: with a maximum duration of 600 seconds, 1,167 rows are filtered. When the complete production sequences containing outliers are eliminated (APS), a total of 14,835 rows are filtered. This is acceptable given the large amount of data (162,339 rows in total). Chapter 5 compares the performance of the models under AA with the models under APS, and also discusses the effect of different maximum allowed durations. An overview of the filtered data under different maximum durations is presented in Table 4.1. \begin{table}[] \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|} \hline \textbf{Processing} & \textbf{Number of filtered rows} & \textbf{Proportion of filtered rows (total 162,339)} & \textbf{Max. Duration (seconds)} \\ \hline \hline AA & 1,167 & 0.72\% & 600 \\ \hline APS & 14,835 & 9.14\% & 600 \\ \hline AA & 1,688 & 1.04\% & 400 \\ \hline APS & 21,529 & 13.26\% & 400 \\ \hline AA & 3,745 & 2.31\% & 200 \\ \hline APS & 49,035 & 30.21\% & 200 \\ \hline \end{tabular}} \caption{Number of filtered rows under different maximum durations.} \label{tab:The_number_of_filtered_data_under_different_durations} \end{table} \textbf{Normalisation and listing according to the number of look-back steps} In the next step, all data are normalised to the range $[-1, 1]$ to match the value range of the LSTM units. Then a pattern for the prediction of the LSTM model is designed: the model is to look 3, 5 or more steps into the past and then predict the next two actions and their durations. Therefore, a listing of the processed database must be performed to achieve the desired shape (e.g. 3 look-back steps and 2 steps into the future). The numbers in this listing represent only the order of the actions, not the actual action numbers, and each Act\_x is to be understood as a row of the processed training database. This listing is implemented in Python (see Figure 8.36) and the result is shown in Figure 4.17. For better readability, the data in Figure 4.17 are shown before normalisation. The variables 1 to 5 are, respectively, timestamp, action number, duration, error type and error count. Only the next 2 action numbers and their associated durations are predicted (var2 and var3). This is the final processed data. The first 3 steps, i.e. the 15 values from t-3 to t-1, form the input of the LSTM model, and the last 2 steps, i.e. the 4 values from t to t+1, form the target of the prediction. In case of discrepancies between the output of the LSTM model and the target, the network weights are adjusted to obtain a higher accuracy. Finally, the data are divided into training and test data: the first 80\% are used as training data and the last 20\% as test data. The amount of training and test data varies depending on the previous processing (e.g. AA or APS, number of look-back steps). For example, when the data are processed with APS, the listing yields 110,085 rows of training data and 27,520 rows of test data (see Table 4.2).
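The following minimal sketch illustrates the listing and the chronological split; the array layout and names are illustrative assumptions, not the exact implementation of Figure 8.36.
\begin{verbatim}
import numpy as np

def make_windows(data, n_in=3, n_out=2, target_cols=(1, 2)):
    """Sketch of the look-back listing: 'data' is the normalised array with
    one row per action and the five variables (timestamp, action number,
    duration, error type, error count) as columns.  Each sample consists
    of n_in past steps (all variables) and n_out future steps (action
    number and duration only, i.e. var2 and var3)."""
    X, y = [], []
    for t in range(n_in, len(data) - n_out + 1):
        X.append(data[t - n_in:t])                         # t-n_in .. t-1
        y.append(data[t:t + n_out][:, list(target_cols)])  # t .. t+n_out-1
    return np.array(X), np.array(y)

# chronological 80/20 split into training and test data:
# X, y = make_windows(processed)
# split = int(0.8 * len(X))
# X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]
\end{verbatim}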
\begin{table}[] \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|} \hline \textbf{Processing} & \textbf{Look-back steps} & \textbf{Training data} & \textbf{Test data} \\ \hline \hline AA & 3 & 121,351 & 30,336 \\ \hline APS & 3 & 110,087 & 27,520 \\ \hline AA & 5 & 121,349 & 30,336 \\ \hline APS & 5 & 110,085 & 27,520 \\ \hline AA & 7 & 121,348 & 30,335 \\ \hline APS & 7 & 110,084 & 27,519 \\ \hline \end{tabular}} \caption{Amount of training data and test data.} \label{tab:Amount_of_training_data_and_test_data} \end{table} \textbf{Separation of the new vehicle} As described in Section 4.1, the vehicles to be produced enter the station one after another until they are produced. It is therefore possible to perform a separation between different vehicles. If different types of vehicles appear in a future database, such a separation between vehicles will be necessary. The implementation is described below. Assume that a production sequence contains Act\_1 to Act\_9. When Act\_1 appears, a new vehicle has arrived at the station for production and the previous vehicle has finished production. The separation is done as follows: \begin{table}[] \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l||l|l|} \hline Act\_5 & Act\_6 & Act\_7 & Act\_8 & Act\_9 \\ \hline Act\_6 & Act\_7 & Act\_8 & Act\_9 & empty \\ \hline Act\_7 & Act\_8 & Act\_9 & empty & empty \\ \hline Act\_1 & Act\_2 & Act\_3 & Act\_4 & Act\_5 \\ \hline … & … & … & … & … \\ \hline \end{tabular}} \caption{Example of the separation.} \label{tab:separation_example} \end{table} In Section 4.3.1 it was mentioned that AC 000 is temporarily retained. AC 000 is understood here as the first action, i.e. Act\_1, which makes the rows with AC 000 the breakpoint rows. After the separation, the rows with AC 000 are also excluded. It will be shown in the following that such separation has a positive effect on the quality of the prediction; therefore, the model without separation and the model with separation are compared in the next chapter. The set-up is shown in Figure 4.18. The amount of training data and test data changes after separation, as can be observed in Table 4.3. \begin{table}[] \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l||l|l|} \hline Processing & Separation & Look-back steps & Training data & Test data\\ \hline AA & Yes & 3 & 106,177 & 26,543 \\ \hline APS & Yes & 3 & 95,747 & 23,935 \\ \hline AA & Yes & 5 & 91,007 & 22,750 \\ \hline APS & Yes & 5 & 81,410 & 20,351 \\ \hline AA & Yes & 7 & 75,843 & 18,959 \\ \hline APS & Yes & 7 & 67,080 & 16,768 \\ \hline \end{tabular}} \caption{Amount of training data and test data after separation.} \label{tab:Amount_of_training_data_and_test_data_after_separation} \end{table} \subsection{Creating the LSTM model} Keras is an open-source deep learning library written in Python with intuitive code and clear feedback, which makes it quick and easy to create an LSTM model. It is important to set appropriate hyperparameters for the model, for example the number of hidden layers and nodes. Suitable parameters have a positive effect on various tasks; at the same time, the choice of parameters is a difficult problem in building machine learning models, and it is currently not easy to determine optimal parameters for a given task.
Therefore, a preliminary experiment is conducted in this work to determine the initial parameters, which are subsequently adjusted in the experiment. In the literature, a model with 2 LSTM layers and 80 nodes has been built to detect anomalies of a robotic system \cite{DING20}. In the preliminary experiment of this work, models with 2 to 4 LSTM layers and 50 to 100 nodes were built. Based on the similar work and these preliminary experiments, 4 LSTM layers and 50 nodes are selected. The activation function of the output layer is tanh (see Figure 4.19), which has an amplitude from -1 to 1; this is why all data were normalised to $[-1, 1]$ earlier. As loss function, the MAE (mean absolute error) is selected, and the optimiser is Adam (adaptive moment estimation). The closer the predicted result is to the actual value, the better the predictive ability of the model; the MAE loss measures the absolute deviation between predicted and actual values, and Adam, a commonly used deep learning optimisation algorithm, is used during training to minimise it. To keep the training time of the model within an acceptable range, the model is trained for 50 epochs with a batch size of 128. The batch size is the number of training samples in each iteration; the number of iterations is the total number of training samples divided by the batch size, and one epoch is completed when all iterations have been run. Theoretically, there is an optimal batch size for a fixed number of epochs: if the batch size is too small, the loss function may no longer converge, whereas if it is too large, more epochs and hence more time are needed. After preliminary tests, a batch size of 128 turned out to be a relatively good choice for 50 epochs, giving a training time of 5 to 6 minutes. The visualisation of the model, the result of the training and the Python code are shown in Figures 4.20, 4.21 and 8.37; a minimal sketch of the model definition is also given after Table 4.4. When training the model, it turned out that increasing the number of nodes has a positive effect on the results. Therefore, an experiment was set up in which the models were grouped according to the factors mentioned in the previous section (such as Steps, Separation, Max. Duration, AA and APS) and their performances were compared (see Table 4.4). \begin{table}[] \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline No. & Name & Separation & Look-back steps (Steps) & Nodes & AA & APS & Max. Duration \\ \hline 1 & OT\_Model\_1 & No & 3 & 50 & No & No & \\ \hline 2 & OT\_Model\_2 & No & 3 & 50 & Yes & No & 600 \\ \hline 3 & OT\_Model\_3 & No & 3 & 50 & No & Yes & 600 \\ \hline 4 & T\_Model\_1 & Yes & 3 & 50 & No & No & \\ \hline 5 & T\_Model\_2 & Yes & 3 & 50 & Yes & No & 600 \\ \hline 6 & T\_Model\_3 & Yes & 3 & 50 & No & Yes & 600 \\ \hline 7 & T\_Model\_1\_5s & Yes & 5 & 75 & No & No & \\ \hline 8 & T\_Model\_2\_5s & Yes & 5 & 75 & Yes & No & 600 \\ \hline 9 & T\_Model\_3\_5s & Yes & 5 & 75 & No & Yes & 600 \\ \hline 10 & T\_Model\_3\_5s\_1 & Yes & 5 & 100 & No & Yes & 600 \\ \hline 11 & T\_Model\_3\_5s\_2 & Yes & 5 & 150 & No & Yes & 600 \\ \hline 12 & T\_Model\_3\_5s\_3 & Yes & 5 & 200 & No & Yes & 600 \\ \hline 13 & T\_Model\_3\_5s\_4 & Yes & 5 & 100 & No & Yes & 400 \\ \hline 14 & T\_Model\_3\_5s\_5 & Yes & 5 & 100 & No & Yes & 200 \\ \hline 15 & T\_Model\_1\_7s & Yes & 7 & 100 & No & No & \\ \hline 16 & T\_Model\_2\_7s & Yes & 7 & 100 & Yes & No & 600 \\ \hline 17 & T\_Model\_3\_7s & Yes & 7 & 100 & No & Yes & 600 \\ \hline \end{tabular}} \caption{Numbers and names of the models for the experiment.} \label{tab:numbers_and_names_of_the_models_for_experiment} \end{table}
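As an illustration, a minimal sketch of such a model definition with the Keras Sequential API is given below. It follows the hyperparameters stated above (4 LSTM layers, 50 nodes, tanh output, MAE loss, Adam, 50 epochs, batch size 128), but the function name and argument layout are assumptions, not the exact code of Figure 8.37.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm(n_in=3, n_features=5, n_out=2, n_targets=2, nodes=50):
    """Sketch of the model described above: 4 LSTM layers with 50 nodes,
    a tanh output layer, MAE loss and the Adam optimiser."""
    model = keras.Sequential([
        layers.LSTM(nodes, return_sequences=True,
                    input_shape=(n_in, n_features)),
        layers.LSTM(nodes, return_sequences=True),
        layers.LSTM(nodes, return_sequences=True),
        layers.LSTM(nodes),
        # one tanh output per predicted value, matching data in [-1, 1]
        layers.Dense(n_out * n_targets, activation="tanh"),
    ])
    model.compile(loss="mae", optimizer="adam")
    return model

# model = build_lstm()
# model.fit(X_train, y_train.reshape(len(y_train), -1),
#           epochs=50, batch_size=128)
\end{verbatim}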
\section*{Empirical Evaluation} \label{sec:empirical_eval} \subsection*{Experiment Setup} For the empirical evaluation we first discuss the result of applying our error classification module to the dataset provided by Volkswagen Commercial Vehicles. This dataset contains hierarchical actions; since superordinate actions merely document the total times of their subactions, we remove the hierarchy of actions in the last data preprocessing step to lessen the noise in the data and enhance our sequence-to-sequence model training. We focus on a single station to test the hypothesis that recurring source and knock-on errors can be learnt from the completion times of actions within an action sequence. We consider an exemplary station that has 22 actions. This workstation is of particular interest for Volkswagen Commercial Vehicles, as delays are frequently observed. For our error classification module we set the global threshold to ten times the expected maximum duration, $d^a_{globalmax} = 10 \times d^a_{max}$. For the scalar used to obtain $d^a_{globalmax}$ we ran preliminary trials with 3, 5 and 10, but found that the former two removed a large proportion of data points, impacting the accuracy of the predictions of the seq2seq models. We therefore chose a scalar of 10, allowing us to retain 94.8\% of the data points. The parameters chosen for our seq2seq models can be found in Table~\ref{tab:hyperparams}. Four different seq2seq architectures $n$-$m$ are compared with respect to the length of the input sequence $n$ and the number of outputs $m$: 5--2, 5--5, 7--5, 7--7. We conducted 10 training runs per model architecture, and the results in Table~\ref{tab:Model_results} are the averages from applying the models to our test data, using an 80\% training, 20\% test split. For the evaluation we set the F1 threshold $b = 10\%$. Our preliminary tests were also conducted with a $5\%$ and a $20\%$ threshold. Evaluating different models with a $20\%$ threshold is challenging, as it leaves too much uncertainty in separating them.
A threshold of $5\%$ becomes problematic for actions that last only a comparatively short time, since the noise in the time measurement is larger than the targeted prediction quality. Considering only actions below $d^a_{max}$ and calculating the RMSE over all of them yields $k = 5.14$. \begin{table} \begin{center} \begin{tabular}{| c | c|} \hline \multicolumn{2}{|c|}{\textbf{LSTM or GRU Model configurations}} \\ \hline Nodes per layer & 100 \\ \hline Layers & 4 \\ \hline Dropout & 0.2 \\ \hline \hline \multicolumn{2}{|c|}{\textbf{Transformer Model Parameters}} \\ \hline Number of heads & 2 \\ \hline Head Size & 256 \\ \hline Feed Forward Dimension & 1024 \\ \hline Number of Transformer Blocks & 4 \\ \hline MLP Units & 1024 \\ \hline Dropout & 0.1 \\ \hline \hline \multicolumn{2}{|c|}{\textbf{General}} \\ \hline Epochs & 50 \\ \hline Batch Size & 128 \\ \hline Optimizer & Adam \\ \hline Learning rate & 0.001 \\ \hline \end{tabular} \caption{Hyperparameters} \label{tab:hyperparams} \end{center} \end{table} \subsection*{Error Classification Results} Upon applying the error classification module to our dataset we first remove 2106 out of 40536 sequences that contain outliers~(5.2\%). Next, we apply our MLE based approach, finding that 3.94\% of the sample sequences contain at least one source error (without knock-on errors), 61.20\% contain knock-on errors and 6.54\% contain both. With respect to normal and misc samples, 0.068\% contain only normal actions, 0.0902\% only misc, and 18.62\% \emph{only} misc and normal. An analysis of the dataset following preprocessing reveals that 71.68\% of sequences contain at least one error. Surprisingly, therefore, the majority of the sequences contain either a source or a knock-on error. As mentioned, during preliminary trials we also found that the small percentage of misc sequences can negatively impact the performance of the seq2seq models; we discuss this in more detail in the evaluation of our seq2seq model results below. \subsection*{Sequence-to-Sequence Model Results} In this section we first compare the results for the four different seq2seq architecture types based on the length of the input sequences and predictions. Then we take a closer look at the impact of the choice of the TARMSE weighting factor $\tau$ for evaluating our models. An overview of the results obtained for each setting is provided in Table~\ref{tab:Model_results}, where the balance between TARMSE and F1 is $\tau = 0.5$. Finally, we conduct an ablation study, showing the extent to which including misc samples impacts the performance of our seq2seq models. \textbf{Setup 5-2:} We first consider the results for training a seq2seq model to predict two future action durations based on five historic actions (setup 5-2). The TARMSE of the GRU and LSTM models is $0.20 \pm 0.05$ and $0.22 \pm 0.08$, while the Transformer performs best with $0.41 \pm 0.01$. Yet the summarized F1 score of the Transformer is lower at $0.80 \pm 0.01$, while the GRU and LSTM do better with $0.94 \pm 0.01$ and $0.95 \pm 0.01$, respectively. Combining both, the CTA shows that the GRU at $59.89 \pm 3.02$ and the LSTM at $58.49 \pm 4.08$ are marginally worse with respect to the mean than the Transformer at $60.55 \pm 0.76$.
However, the standard deviation shows that the Transformer is more consistent. \textbf{Setup 5-5:} In the next setup, 5-5, we see a similar behavior to the 5-2 setup. The TARMSE for the GRU and LSTM is $0.24 \pm 0.02$ and $0.23 \pm 0.01$, respectively, and for the Transformer it is $0.44 \pm 0.00$. The F1 is $0.94 \pm 0.02$ for the GRU, $0.93 \pm 0.01$ for the LSTM and $0.80 \pm 0.02$ for the Transformer. The CTA shows that the Transformer is better with $61.69 \pm 0.85$ than the GRU's $58.67 \pm 1.33$ and the LSTM's $58.11 \pm 1.53$. \textbf{Setup 7-5:} Next we keep the number of future predictions the same but consider a history of seven actions. The TARMSE for the GRU is $0.22 \pm 0.03$ and for the LSTM $0.20 \pm 0.04$, while the Transformer's increases slightly compared to the previous 5-5 setup, to $0.49 \pm 0.01$. The F1 scores decrease slightly to $0.89 \pm 0.02$ for the GRU, $0.91 \pm 0.03$ for the LSTM and $0.81 \pm 0.01$ for the Transformer. We notice a slight improvement in the CTA for the Transformer at $64.75 \pm 0.59$, while the GRU at $55.83 \pm 2.12$ and the LSTM at $55.80 \pm 2.19$ decrease; notably, their standard deviations are significantly higher now compared to the 5-5 setup. \textbf{Setup 7-7:} Lastly, we consider seven previous actions in a sequence and let the models predict seven actions into the future. The TARMSE of the GRU and LSTM are both $0.21 \pm 0.05$, and for the Transformer it is $0.48 \pm 0.00$; it should be noted that the standard deviation for the Transformer is so low that it rounds to zero here. The F1 is $0.88 \pm 0.02$ for the GRU, $0.92 \pm 0.01$ for the LSTM and, similar to before, $0.80 \pm 0.01$ for the Transformer. For the GRU and LSTM the CTA values are $54.24 \pm 3.58$ and $56.22 \pm 2.55$, while the Transformer is at $63.88 \pm 0.70$. Across all setups we observe that the Transformer shows better performance when predicting future actions in terms of the TARMSE, with an improving trend the more input actions are considered and the further the prediction range is increased. The F1 score, however, is higher for the GRU and LSTM models.
\begin{center} \begin{table*} \resizebox{\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \textbf{Metric} & \textbf{GRU 5-2} & \textbf{LSTM 5-2} & \textbf{TF 5-2} & \textbf{GRU 5-5} & \textbf{LSTM 5-5} & \textbf{TF 5-5} & \textbf{GRU 7-5} & \textbf{LSTM 7-5} & \textbf{TF 7-5} & \textbf{GRU 7-7} & \textbf{LSTM 7-7} & \textbf{TF 7-7} \\ \hline $\mathbf{RMSE_1}$ & $4.14 \pm 0.24$ & $4.01 \pm 0.39$ & $2.95 \pm 0.02$ & $4.07 \pm 0.18$ & $4.03 \pm 0.21$ & $3.04 \pm 0.02$ & $3.99 \pm 0.22$ & $4.08 \pm 0.32$ & $2.68 \pm 0.04$ & $4.10 \pm 0.32$ & $4.05 \pm 0.40$ & $2.72 \pm 0.03$ \\ \hline $\mathbf{RMSE_2}$ & $4.12 \pm 0.63$ & $4.06 \pm 0.49$ & $3.29 \pm 0.05$ &$3.72 \pm 0.29$ & $3.88 \pm 0.21$ & $2.70 \pm 0.02$ & $3.97 \pm 0.19$ & $4.15 \pm 0.35$ & $2.55 \pm 0.02$ & $4.00 \pm 0.29$ & $4.07 \pm 0.26$ & $2.56 \pm 0.02$ \\ \hline $\mathbf{RMSE_3}$ & NA & NA & NA & $3.50 \pm 0.18$ & $3.84 \pm 0.30$ & $2.43 \pm 0.02$ & $4.09 \pm 0.41$ & $4.21 \pm 0.24$ & $2.59 \pm 0.02$ & $4.24 \pm 0.41$ & $4.30 \pm 0.16$ & $2.57 \pm 0.03$ \\ \hline $\mathbf{RMSE_4}$ & NA & NA & NA & $3.51 \pm 0.23$ & $3.68 \pm 0.34$ & $2.35 \pm 0.02$ & $3.77 \pm 0.35$ & $3.92 \pm 0.34$ & $2.57 \pm 0.04$ & $4.06 \pm 0.28$ & $3.82 \pm 0.32$ & $2.57 \pm 0.02$ \\ \hline $\mathbf{RMSE_5}$ & NA & NA & NA & $4.29 \pm 0.41$ & $4.09 \pm 0.53$ & $2.77 \pm 0.06$ & $4.23 \pm 0.52$ & $4.29 \pm 0.84$ & $2.78 \pm 0.10$ & $4.15 \pm 0.27$ & $4.15 \pm 0.29$ & $2.66 \pm 0.05$ \\ \hline $\mathbf{RMSE_6}$ & NA & NA & NA & NA & NA & NA & NA & NA & NA & $3.49 \pm 0.31$ & $3.51 \pm 0.98$ & $2.86 \pm 0.06$ \\ \hline $\mathbf{RMSE_7}$ & NA & NA & NA & NA & NA & NA & NA & NA & NA & $3.28 \pm 0.24$ & $3.68 \pm 0.56$ & $2.70 \pm 0.07$ \\ \hline $\mathbf{F1_1}$ & $0.94 \pm 0.02$ & $0.95 \pm 0.01$ & $0.80 \pm 0.02$ & $0.94 \pm 0.02$ & $0.94 \pm 0.02$ & $0.79 \pm 0.03$ & $0.89 \pm 0.03$ & $0.91 \pm 0.04$ & $0.81 \pm 0.01$ & $0.87 \pm 0.04$ & $0.92 \pm 0.02$ & $0.80 \pm 0.01$ \\ \hline $\mathbf{F1_2}$ & $0.95 \pm 0.01$ & $0.95 \pm 0.01$ & $0.82 \pm 0.03$ & $0.95 \pm 0.02$ & $0.94 \pm 0.01$ & $0.81 \pm 0.02$ & $0.89 \pm 0.02$ & $0.90 \pm 0.02$ & $0.81 \pm 0.02$ & $0.89 \pm 0.03$ & $0.90 \pm 0.01$ & $0.80 \pm 0.02$ \\ \hline $\mathbf{F1_3}$ & NA & NA & NA & $0.89 \pm 0.03$ & $0.89 \pm 0.02$ & $0.79 \pm 0.03$ & $0.94 \pm 0.01$ & $0.94 \pm 0.01$ & $0.81 \pm 0.01$ & $0.91 \pm 0.02$ & $0.93 \pm 0.01$ & $0.79 \pm 0.01$ \\ \hline $\mathbf{F1_4}$ & NA & NA & NA & $0.89 \pm 0.01$ & $0.90 \pm 0.01$ & $0.80 \pm 0.01$ & $0.94 \pm 0.01$ & $0.94 \pm 0.01$ & $0.81 \pm 0.01$ & $0.93 \pm 0.02$ & $0.94 \pm 0.01$ & $0.79 \pm 0.01$ \\ \hline $\mathbf{F1_5}$ & NA & NA & NA & $0.92 \pm 0.01$ & $0.93 \pm 0.01$ & $0.78 \pm 0.02$ & $0.92 \pm 0.01$ & $0.92 \pm 0.03$ & $0.75 \pm 0.02$ & $0.85 \pm 0.04$ & $0.89 \pm 0.02$ & $0.75 \pm 0.02$ \\ \hline $\mathbf{F1_6}$ & NA & NA & NA & NA & NA & NA & NA & NA & NA & $0.85 \pm 0.03$ & $0.88 \pm 0.04$ & $0.72 \pm 0.02$ \\ \hline $\mathbf{F1_7}$ & NA & NA & NA & NA & NA & NA & NA & NA & NA & $0.85 \pm 0.04$ & $0.88 \pm 0.03$ & $0.76 \pm 0.03$ \\ \hline $\mathbf{TARMSE}$ & $0.20 \pm 0.05$ & $0.22 \pm 0.08$ & $0.41 \pm 0.01$ & $0.24 \pm 0.02$ & $0.23 \pm 0.03$ & $0.44 \pm 0.00$ & $0.22 \pm 0.03$ & $0.20 \pm 0.04$ & $0.49 \pm 0.01$ & $0.21 \pm 0.05$ & $0.21 \pm 0.05$ & $0.48 \pm 0.00$ \\ \hline $\mathbf{F1}$ & $0.94 \pm 0.01$ & $0.95 \pm 0.01$ & $0.80 \pm 0.01$ & $0.94 \pm 0.02$ & $0.93 \pm 0.01$ & $0.80 \pm 0.02$ & $0.89 \pm 0.02$ & $0.91 \pm 0.03$ & $0.81 \pm 0.01$ & $0.88 \pm 0.02$ & $0.92 \pm 0.01$ & $0.80 \pm 0.01$ \\ \hline 
$\mathbf{CTA}$ \textbf{(\%)} & $59.89 \pm 3.02$ & $58.49 \pm 4.08$ & $60.55 \pm 0.76$ & $58.67 \pm 1.33$ & $58.11 \pm 1.53$ & $61.69 \pm 0.85$ & $55.83 \pm 2.12$ & $55.80 \pm 2.19$ & $64.75 \pm 0.59$ & $54.24 \pm 3.58$ & $56.22 \pm 2.55$ & $63.88 \pm 0.70$ \\ \hline \end{tabular}} \caption{Results table comparing each of our LSTM, GRU and Transformer (TF) architectures $n$-$m$, where $n$ represents the number of inputs and $m$ the number of model outputs. The table provides RMSE and F1 scores for each output, as well as TARMSE, average F1 and composite time-weighted actions (CTA) scores (using $\tau = 0.5$ for the latter).} \label{tab:Model_results} \end{table*} \end{center} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{img/GRU-LSTM-TF_weightingComparison.png} \caption{Impact of the weighting parameter $\tau$ for the composite time-weighted actions metric. Depicted are the results for GRU, LSTM and Transformer (TF) considering seven actions in the past and predicting seven future actions.} \label{CTA_weighting} \end{figure} \textbf{CTA Weighting Factor:} We note that the weighting factor $\tau$ influences our final result for the CTA. In Figure \ref{CTA_weighting} we show the effect of the weighting between TARMSE and F1 for the chosen models in our setup with seven past actions considered and seven actions to be predicted. Due to their higher F1 scores, the GRU and LSTM initially start higher than the Transformer model. With increasing $\tau$ the Transformer model surpasses the GRU model ($\tau = 0.229$) and the LSTM model ($\tau = 0.308$) because of its better TARMSE. \textbf{Ablation Study:} As mentioned above, during preliminary trials we found that the inclusion of misc samples during training reduced the performance of the seq2seq models. We illustrate this in Figure~\ref{fig:ablation_study}, where we observe the RMSE of four model groups. Within each group the model is the same, but the training and test sets differ. In each group, the first experiment (1, 4, 7, 10) includes the misc samples; the second (2, 5, 8, 11) has the extreme element in each sequence removed, effectively always skipping one process step; the third (3, 6, 9, 12) has the misc samples removed entirely. The exclusion of the extreme element in the misc samples improves the model performance by a factor of three to four. Since the removal of an extreme element in the misc samples does not mirror the real-world application, we opted to remove the entire sequence, achieving on average an additional $18\%$ performance increase. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{img/rmse_plot_v3.pdf} \caption{RMSE effect on model performance including misc samples (1, 4, 7, 10), samples where the extreme outlier element was removed (2, 5, 8, 11) and misc samples completely removed (3, 6, 9, 12).} \label{fig:ablation_study} \end{figure} \section*{Conclusion} In car body production, the car body is processed according to the order requirements at interlinked production stations. Frequently, faults are detected at stations, where the resulting disturbances not only affect the station itself, but also have a negative impact on the downstream stations. To address this problem we introduce a novel vehicle manufacturing analysis system that can identify fault cause-effect relationships and predict future delays.
The evaluation of our framework on data from the car body production of Volkswagen Commercial Vehicles shows that source and knock-on errors are surprisingly prevalent, occurring in 71.68\% of action sequences. Furthermore, we show that the prediction component of our model does well at predicting the durations of up to seven actions into the future, using state-of-the-art sequence-to-sequence models, including the Transformer. The deployable framework can therefore be used to efficiently process data for identifying source and knock-on errors, as well as for predicting future delays that can benefit from an early intervention. \section*{Related Work} \label{sec:relevant_work} Within the context of intelligent industrial production, a significant amount of research has been dedicated to forecasting, failure prediction and anomaly detection using time series data~\cite{DING20,JIAN20}. The literature in this area provides an overview of the suitability of approaches designed to solve these problems when applied to various production contexts, often featuring a comparison between traditional machine learning approaches and advanced deep neural networks. Failure prediction, for instance, has often been limited to standard key performance indicators. Moura et al.~\cite{Mour11} evaluate the effectiveness of support vector machines in forecasting time-to-failure and reliability of engineered components based on time series data. Yadav et al.~\cite{Yada12} present a procedure to forecast time-between-failures of software during its testing phase by employing a fuzzy time series approach. Others use artificial neural networks or statistical approaches to model machine tool failure durations continuously and cause-specifically \cite{Denk20, Olad06}. \emph{Recurrent neural network} (RNN) models, meanwhile, are capable of identifying long-term dependencies from time-series data directly~\cite{chung2014empirical}. Successes here include multi-step time-series forecasting of future system load with the goal of performing anomaly detection and system resource management, enabling automated scaling in anticipation of changes to the load~\cite{Jero18}, and using stacked LSTM networks to detect deviations from normal behaviour without any pre-specified context window or pre-processing~\cite{Malh15}. However, the performance of encoder-decoder architectures relying on memory cells alone typically suffers, as the encoding step must learn a representation for a (potentially lengthy) input sequence. Here, \emph{attention based encoder-decoder architectures} provide a solution, where the hidden states from all encoder nodes are made available at every time step. In fact, pioneering work by \cite{VASW17} demonstrated that one can dispense with recurrent units and rely solely on attention, introducing the Transformer. Further improvements can be obtained via Transformers implemented with GRUs~\cite{parisotto2020stabilizing}. Not surprisingly, attention based approaches are increasingly being applied to industry problems~\cite{9536749}. Li et al.~\cite{LI21} present a novel approach to extracting dynamic time-delays to reconstruct multivariate data for an improved attention-based LSTM prediction model, and apply it in the context of industrial distillation and methanol production processes.
However, they do not explicitly consider failure propagation in concatenated manufacturing systems to evaluate failure criticality and to generate a reliable failure impact prediction. Attention-based models have also been applied to failure prediction and rated as favorable. Li et al.~\cite{LI22} propose an attention-based deep survival model to convert a sequence of signals into a sequence of survival probabilities in the context of real-time monitoring, while Jiang et al.~\cite{JIAN20} use a multi-channel time-series convolutional neural network integrated with an attention-based LSTM network for remaining-useful-life prediction of bearings. Near real-time disturbance detection becomes possible with the attention-based LSTM encoder--decoder network by Yuan et al.~\cite{YUAN20}, which aligns an input time series with the output time series and dynamically chooses the most relevant contextual information while forecasting. In contrast to previous work, we propose a workflow and evaluate seq2seq approaches for failure impact prediction in concatenated manufacturing systems. \section*{Problem Definition} \label{sec:problem_definition} In car manufacturing, the vehicle body is processed by visiting a sequence of fully-automated stations. Each station comprises an ensemble of manufacturing robots (see Figure~\ref{AssemblyLine}). The stations are clocked to measure the timeliness of the vehicles to be assembled until they exit the production line. At each station actions are performed, which we define as a triple $a = (s, v, i)$, consisting of a station $s$, vehicle code $v$, and action~ID~$i$. The production of vehicles also includes variants (left or right-hand drive vehicles, for example), and therefore the nominal action depends on the vehicle variant, information that is included in the vehicle code. Each action describes a specific, completed production step (for example a transportation or manufacturing step). We are interested in the duration $d$ required to complete each executed action $a$, which can be viewed as an action duration tuple $u=(s, v, i, d)$. For notational convenience we shall refer to $d^a$ as the duration taken by an action $a$. In a clocked vehicle production system, for each action $a$ there exists an expected maximum allowed duration $d^a_{max}$. The duration of an action $a$ must therefore be less than or equal to this expected allowed maximum time: $d^a \leq d^a_{max}$. In this work, we focus on sequences of actions and their durations, i.e., chains of action duration tuples, defined as $x = (u_1, u_2, ... , u_{n})$. It is worth noting, however, that actions can overlap, e.g., be executed in parallel; it is therefore not the case that one particular action has to have completed before another action can start. The sequence of actions is also dependent on the vehicle variant. Malfunctions are a recurring problem in production. In the rare instance that a malfunction causes a long period of downtime, a situation analysis is usually conducted and a possible fix is performed by staff engineers in the factory. However, our focus is on the small, seemingly insignificant and common delays that not only have an effect on a station itself, but whose perturbations propagate to downstream stations, causing further delays. Here we consider executed actions with two types of errors resulting from delays, where the duration $d^a > d^a_{max}$: i.)
\emph{source errors} $u_s$, where an abnormal action duration is accompanied by an error message; ii.) \emph{knock-on errors}, where an action $u_k$ with an abnormally long duration is not accompanied by an error message. In this work we are interested in knock-on errors that occur after a source error (i.\,e.,\ a logged error) within the sequence of actions: $(\text{..., } u_{s} \text{, ..., } u_{k} \text{, ...})$. An individual source error may appear inconspicuous, since source errors do not have to deviate significantly from the normal time. However, the knock-on errors, which also do not have to deviate much individually, can result in a significant accumulated time-delay. From the PDA system it is not possible to understand the scope of downstream actions and the knock-on effects of a source error. It is only possible to assert that downstream actions can accumulate time-delays without reported fault messages. Consequently, this leads to a significant loss of effective production time overall. The analysis of the relationship between source and knock-on errors is challenging due to the latent entanglement of the individual processes of actions. An argument can be made that a rule-based model could determine the relationship between source and knock-on errors. However, this approach requires extensive domain knowledge and the resulting model would not be transferable across stations. We hypothesize that deep learning-based seq2seq models are able to learn the nominal sequence of actions and, more importantly for the producer, the recurring source and knock-on errors in them as well. If the errors can be predicted with a satisfying accuracy, then inherent cause-effect rules have been learned from the abundance of data. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{img/assemblyLine.pdf} \caption{Production line for assembling vehicle variants, illustrating normal behaviour and error behaviour in a Gantt chart, with respect to durations $d$. Several actions can happen in parallel. In the error behaviour scenario a source error for an action duration tuple $u_s$ (marked red) can lead to multiple knock-on errors $u_{k1}$ and $u_{k2}$ (marked orange).} \label{AssemblyLine} \end{figure} \section*{Future Work} \label{sec:manager_insights} The results show that our VMAS can deliver interesting insights on real-world data obtained from a PDA system for car manufacturing. However, in order to measure the added value for stakeholders, the approach must be evaluated using key performance indicators. Only in this way is it possible to derive optimization processes from the results in a targeted manner. In practice, due to the extensive training time required for seq2seq models, it makes sense to use components of our VMAS in a two-stage approach.
\noindent{\textbf{Stage 1:}} In a first integration stage, the results of the automatic peak detection and source error identification are used for the automatic identification of work steps that are particularly critical based on the frequency with which faults occur. Here, however, only a superficial analysis based on the proportions of errors is possible. The deep dive into the cause-effect relationships of the errors, and thus the identification of particularly critical faults, must still be done manually. \noindent{\textbf{Stage 2:}} Use a trained seq2seq model to automatically identify cause-effect relationships, and investigate which source faults actually result in the most disruption time and should therefore be eliminated first. Here, a measure such as the sum over all disturbance times would be required against which each source error can be measured, to determine how critical it is. This would allow us to create a ranking, replacing the manual analysis from Stage 1, after complete integration and successful training of the ML model. \section*{Composite Time-weighted Actions Metric} A sequence of actions can either consist of nominal behaviour or of error behaviour, i.e. contain at least one source or knock-on error. To predict a distinct behaviour we pass a partial sequence of actions to a seq2seq model to predict $n$ actions into the future. However, in production there are a number of scenarios (including our current one) where a greater weighting needs to be placed on the performance of the classifier with respect to short-term predictions in order to enable a quick intervention. Therefore, to evaluate our model in this setting a metric is required that: i.) assigns a higher importance to the immediate predictions versus later predictions in the sequence of actions; ii.) allows an assessment of prediction quality invariant to the number of predicted future steps $n$, in order to cross-compare various setups; iii.) has high precision when predicting the duration of an action. For the evaluation of any seq2seq model we introduce the \emph{Composite Time-weighted Action} (CTA) metric. The CTA is a convex combination of a Time-weighted Action RMSE (which we introduce below) and an F1 score that uses a threshold $b$: \begin{equation} \text{CTA} = \tau (\text{TARMSE})+ (1-\tau)(\text{F1}). \end{equation} Stakeholders can use the weighting $\tau$ in the above equation to emphasize either the TARMSE or precision when evaluating and comparing models. In the following we discuss the two components. \textbf{Time-weighted Action RMSE (TARMSE):}\label{TWA-RMSE} To measure the performance of a model globally, we introduce a \emph{Time-weighted RMSE} that returns a single scalar metric for the $n$ model outputs. The model performance should not diminish if the starting point of the predictions varies within the sequence of actions. For our current problem setting, immediate predictions should also have a higher importance than later ones. In order to compensate for the increase of uncertainty we introduce a weighting factor $\beta_{i} = e^{-i}$, with $i$ being the action index.
The following formula considers only predictions that are below the expected allowed maximum time $d^a_{globalmax}$: \begin{equation}\label{eq:TARMSE} \text{TARMSE}(n,k) = \frac{1}{k\times S(n)} \sum_{i=1}^{n} \beta_{i} (k-R_{i}) \end{equation} with \begin{equation} S(n) = \frac{e^{-n}-1}{1-e} = \sum_{i=1}^{n} \beta_{i} \end{equation} and \begin{equation} \beta_{i} = e^{-i}. \end{equation} In Equation~\ref{eq:TARMSE}, $R_i$ is the RMSE for action $i$, and the value $k$ is based on the mean standard deviation of all action times in this station within the maximum tolerance. The standard deviation reflects the spread of a fitted Gaussian distribution and can therefore be regarded as the amount of error that naturally occurs in estimates of the target variable. \textbf{F1 Score:}\label{F1-score} By introducing a threshold value $b$, it is possible to gauge how many of the action predictions are considered correct and thereby obtain an evaluation of the binary classifier. The threshold $b$ is selected using domain knowledge. Knowing where the expected value for either the nominal or the error behaviour lies, we can compare our predictions with the ground truth. Our reason for including the F1 score in our composite metric is that it will be used to evaluate models within a real-world production environment. Within our target domain, a low false-positive warning rate is required, as otherwise workers will consider warnings unreliable and untrustworthy. Given that alerts require investigation, false positives result in a superfluous waste of time.
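For concreteness, a minimal sketch of the metric computation is given below; it is our illustration of the formulas above, with the per-step RMSE values $R_i$ and the summarized F1 score supplied externally and $k = 5.14$ as stated in the experiment setup.
\begin{verbatim}
import numpy as np

def tarmse(rmse_per_step, k=5.14):
    """Time-weighted Action RMSE: weights beta_i = exp(-i) emphasize
    the immediate predictions over later ones."""
    r = np.asarray(rmse_per_step)
    i = np.arange(1, len(r) + 1)
    beta = np.exp(-i)
    s = beta.sum()                 # S(n) = (e^{-n} - 1) / (1 - e)
    return float(np.sum(beta * (k - r)) / (k * s))

def cta(rmse_per_step, f1, k=5.14, tau=0.5):
    """Composite Time-weighted Actions metric: convex combination of
    TARMSE and the summarized F1 score."""
    return tau * tarmse(rmse_per_step, k) + (1 - tau) * f1

# Illustration with the Transformer 7-7 RMSEs from the results table:
# cta([2.72, 2.56, 2.57, 2.57, 2.66, 2.86, 2.70], f1=0.80)
# gives roughly 0.64, consistent with the reported CTA of 63.88%.
\end{verbatim}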
\section{Introduction} Three decades ago it was proposed that systems composed of an unconfined Fermi liquid of up, down, and strange quarks could be absolutely stable \cite{Wit, Bodmer, Chin, Terazawa}. In its simplest Fermi liquid picture, stability depends on whether or not it is possible to lower the energy of a system composed of $u$ and $d$ quarks by converting (through weak interactions) approximately one third of its components into the much more massive strange quark, owing to the introduction of a third Fermi sea. Within the well-known MIT Bag Model \cite{MIT}, it has been shown that this stability may be realized for a wide range of parameters of strange quark matter (SQM) in bulk \cite{Farhi84}. Other calculations also indicate that SQM can be absolutely stable within different frameworks, e.g. the shell model \cite{Jaf}. The attractive force between quarks that are antisymmetric in color tends to make quarks near the Fermi surface pair at high densities, breaking the color gauge symmetry and causing the phenomenon of color superconductivity. Recently, studies have indicated that the color-flavor locked (CFL) state, in which all the quark modes are gapped, seems to be even more favorable energetically, widening the stability window \cite{CFL1, CFL2, CFL3, Madsen01, Lugones}. If strange quark matter or CFL matter is indeed the ground state of cold and dense baryonic matter, there would be some important astrophysical implications. For instance, all neutron stars would actually have their interiors composed only of exotic matter \cite{SS1, SS2, SS3, Mac, SS5, SS6}; see also \cite{Madsen04, Weber05, Xu07} for recent reviews. The existence of strange stars would likely imply the presence of strangelets among cosmic ray primaries. A few injection scenarios have been considered as likely sites: the merging of compact stars (though not addressed in full detail yet) \cite{Mergers, Rosinska}, strange matter formation in type II supernovae \cite{Mac}, and acceleration from strange pulsars \cite{ChengUsov}. Several cosmic ray events have been tentatively identified with primary strangelets (mainly the Centauro and Price events, and more recently data from the HECRO-81, ET event, and AMS01 experiments \cite{ET, Price, Data1, Data2, Data3, Boiko}), since the data obtained indicate deep penetration of the particle into the atmosphere, a low charge-to-mass ratio, and exotic secondaries. New experiments are being designed that could identify these exotic primaries, with the purpose of definitively testing the validity of the Bodmer-Witten-Terazawa conjecture \cite{AMS, AMS2, Sandweiss}. For the description of these \textit{finite}-size lumps of strange matter (termed \textit{strangelets}), a few terms have to be added to the bulk contribution in the free energy (see \cite{Madsen98} and \cite{Mads} for details). Large lumps will have essentially the same structure as bulk matter, with a small depletion of the massive strange quarks near the surface resulting in a net positive charge, a feature also expected for smaller chunks \cite{Jaf},\cite{Mads}, which thus resemble heavy nuclei. Strangelets without pairing at finite temperature were first analyzed by Madsen \cite{Madsen98} in the $m_s=0$ approximation. A more complete description was given by He et al. \cite{Chineses}, in which the energy, radius, electric charge (unscreened), strangeness fraction and minimum baryon number were presented. CFL strangelets at $T=0$ were discussed in Refs. \cite{Madsen01, Peng}.
More recently, a finite-temperature analysis using perturbative QCD appeared \cite{Schmitt}. We address in this paper the issues of surface and curvature energies at $T > 0$, which are potentially important, among other things, for the fragmentation of CFL SQM in astrophysical environments. This paper is structured as follows: in Section II we describe the theoretical approach used to determine the parameters characterizing CFL SQM at finite temperature for the construction of the windows of stability in bulk and for strangelets; in Section III we present the numerical results for CFL SQM and compare them with unpaired SQM; in Section IV we present our final discussion and conclusions. \section{Windows of stability} \subsection{Bulk matter} Unpaired SQM in bulk contains $u$, $d$, $s$ quarks and also electrons to maintain charge neutrality. The chemical balance is maintained by weak interactions, with neutrinos assumed to escape from the system. If SQM is in a CFL state, in which quarks of all flavors and colors near the Fermi surface form pairs, an equal number of flavors is enforced by symmetry and the mixture is automatically neutral \cite{Rajagopal2001}. In this case, the condition is that the \textit{Fermi momenta} of the three quarks are equal, so that $3\mu=\mu_u+\mu_d+\mu_s$ and the common Fermi momentum is $\nu=2\mu-(\mu^2+m_s^2/3)^{1/2}$. For bulk CFL SQM, the thermodynamical potential of the system to order $\Delta^2$ is \cite{Rajagopal2001, AlfordReddy} \begin{equation} \Omega_{CFL}=\sum_i\Omega_i-\frac{3}{\pi^2}\Delta^2\mu^2+B \end{equation} \\ where $\Delta$ is the pairing energy gap and the term associated with this parameter is the binding energy of the diquark condensate. The term $\Omega_{free}=\sum_i\Omega_i$ mimics an unpaired state in which all quarks have a common Fermi momentum, and $i$ stands for the quarks $u,d,s$ and gluons (there are no electrons in the CFL state). On the basis of the MIT bag model with $\alpha_c=0$ \footnote{When considering a finite strong coupling constant $\alpha_c$, it has been shown that it can in fact correspond to an effective reduction of the value of the MIT bag constant $B$ \cite{Farhi84}.}, the thermodynamic bulk potentials for each component of the unpaired ``toy model'' system are given by \begin{equation}\label{Omega} \Omega_i=\mp T \int^{\infty}_{0}dk\, g_i \frac{k^2}{2\pi^2}\ln\Big[1\pm\exp\Big(-\frac{\epsilon_i(k)-\mu_i}{T}\Big)\Big] \end{equation} \\ where the upper sign corresponds to fermions and the lower to bosons, $\mu$ and $T$ are the chemical potential and temperature, $k$ and $\epsilon_i$ are the momentum and energy of the particle, respectively, and the factor $g_i$ is the statistical weight (6 for quarks and antiquarks and 16 for gluons). The limit of expression (\ref{Omega}) for $T\rightarrow 0$ is the one given in \cite{Farhi84}, with the integral running from zero to the Fermi momentum, since at $T=0$ the Fermi-Dirac distribution of the unpaired state presents a sharp cutoff at the Fermi momentum \footnote{The CFL state does not actually present a sharp Fermi surface. See \cite{Alford} for more details.}, making the integration to $k\rightarrow \infty$ unnecessary. At finite temperature, however, the Fermi-Dirac distribution broadens, and hence the integration has to be extended accordingly.
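As an orientation (a standard closed form, not written out in the original references), for a massless quark flavour the integral in Eq.~(\ref{Omega}), including the antiquark contribution, and the corresponding gluon term evaluate to \begin{equation} \Omega_q = -\left(\frac{\mu^4}{4\pi^2}+\frac{\mu^2T^2}{2}+\frac{7\pi^2T^4}{60}\right), \qquad \Omega_g = -\frac{8\pi^2}{45}\,T^4, \end{equation} which provides a useful check of the numerics in the limits $m_s\to 0$ and $\Delta\to 0$.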
With these quantities we obtain the particle density, given by $n_i=-\partial \Omega_i /\partial \mu_i$ (which accounts for the influence of the pairing condensate binding energy), and the total energy density, $E=\sum_{i}(\Omega_i+n_i\mu_i)-3\Delta^2\mu^2/\pi^2+B+TS$, where $S=-(\partial\Omega/\partial T)_{V,\mu}$ is the entropy. Although most of the analysis is performed using a constant value $\Delta = \Delta_{0}$, the pairing gap actually depends on the temperature of the system. Following studies of superconductivity in quark matter \cite{Schmitt, CFL3}, we use for this dependence \begin{eqnarray} \Delta(T)=2^{-1/3}\Delta_0\sqrt{1-\Big(\frac{T}{T_c}\Big)^2}\\ T_c=0.57\Delta(T=0)\times 2^{1/3}\equiv 2^{1/3}\times 0.57\Delta_0 \end{eqnarray} \\ where $T_c$ is the critical temperature of the superconducting system, above which the system can no longer support pairing between quarks (for example, $\Delta_0=100$ MeV gives $T_c\approx 71.8$ MeV). \subsection{Strangelets} As stated above, for the description of strangelets it is necessary to add surface and curvature contributions to the thermodynamical potentials of bulk matter \begin{equation} \Omega_i=\mp T \int^{\infty}_{0}dk \frac{dN_i}{dk} \ln\Big[1\pm\exp\Big(-\frac{\epsilon_i(k)-\mu_i}{T}\Big)\Big] \end{equation} In the multiple reflection expansion \cite{Balian2, Hansson} the density of states is given by \begin{equation} \frac{dN_i}{dk}=g_i\Big\{\frac{1}{2\pi^2}k^2\mathscr{V}+f^{(i)}_S\Big(\frac{m_i}{k}\Big)k\mathscr{S}+f^{(i)}_C\Big(\frac{m_i}{k}\Big)\mathscr{C}\Big\} \end{equation} \\ where $\mathscr{V}$, $\mathscr{S}$ and $\mathscr{C}$ stand for the volume, surface area and curvature of the strangelet, respectively. The surface term for quarks is given by \cite{Berger} \begin{equation} f^{(q)}_S\Big(\frac{m_q}{k}\Big)=-\frac{1}{8\pi}\Big[1-\frac{2}{\pi}\arctan\Big(\frac{k}{m_q}\Big)\Big] \end{equation} For the curvature contribution, the following \textit{ansatz} \cite{Mad94} for massive quarks is adopted \begin{equation} f^{(q)}_C\Big(\frac{m_q}{k}\Big)=\frac{1}{12\pi^2}\Big\{1-\frac{3k}{2 m_q}\Big[\frac{\pi}{2}-\arctan\Big(\frac{k}{m_q}\Big)\Big]\Big\} \end{equation} \\ while for gluons \cite{Balian}, $f^{(g)}_C=-1/(6\pi^2)$. The energy is obtained as $E=\sum_i(\Omega_i+N_i\mu_i)-(3\Delta^2\mu^2/\pi^2)\,V+BV+TS$, and the mechanical equilibrium condition for a strangelet with vacuum outside is given by $B=-\sum_i\partial\Omega_i / \partial V$. The relation obtained for strangelets without pairing \cite{Chineses}, $\mu_u=\mu_d=\mu_s$, found by minimizing the free energy with respect to the net number of quarks of each species subject to the constraint $A=\frac{1}{3}\sum_i N_i$, is here replaced by $\mu_u=\mu_d$ and $\mu_s=\sqrt{\mu_u^2+m_s^2}$, which is in fact a second constraint imposed for pairing to hold. The value of the common chemical potential is then obtained numerically by imposing the mechanical equilibrium condition for a given set of $B$, $\Delta$ and $m_s$. The issue of Debye screening of the electric charge for strangelets without pairing is of major importance in determining the total charge of these particles \cite{Heiselberg}. In the expression for the energy density, there is a term proportional to $A_0^2/\lambda_D^2$, where $A_0$ is the gauge field of the massless gauge boson and $\lambda_D$ is the Debye screening length.
In this way, the general expression for the Debye screening length may be written as \begin{equation}\label{screen} \lambda_D^{-2}\propto \frac{\partial^2 (\text{energy density})}{\partial\mu_e^2} \end{equation} \\ where $\mu_e$ is the chemical potential for electric charge. This means that $\lambda_D$ is related to the response of a medium to a change in $\mu_e$. In CFL matter the massless gauge boson is the rotated or $\tilde{Q}$ photon, and the $\tilde{Q}$-charge of all the Cooper pairs forming the condensate is zero. Since all quasiparticles are gapped, owing to the unbroken $\tilde{U}(1)_{em}$ gauge symmetry in the ground state, the CFL phase is not an electromagnetic superconductor but a $\tilde{Q}$-insulator \cite{CFL1}. In CFL matter the relevant electric chemical potential is $\mu_{\tilde{Q}}$, and the right-hand side of (\ref{screen}) is zero \cite{Alfordpers}; therefore the electric field is not screened and the charge of a strangelet in the CFL state is defined by finite size effects only. \section{Numerical results} The so-called ``windows of stability'' (regions in the $m_s-B$ plane) for CFL matter in the framework of the MIT Bag Model are shown in Fig. \ref{janela}. The minimum value for $B^{1/4}$ is $145$ MeV, because a lower $B$ would imply the spontaneous decay of ordinary matter into non-strange ($u$ and $d$) quark matter. As expected, the matter becomes less bound at finite temperature, as can be seen both in the constant pairing gap approximation and when this parameter is temperature dependent. Considering the more realistic case of $\Delta$ depending on the temperature, $\Delta=\Delta(T)$, leads to a destabilization of the system due to an effective reduction of the gap parameter. This conclusion holds even if the system is quite close to the critical temperature for pairing between quarks. The system then tends to approach the curves for $\Delta=0$, but it does not exactly match them because of the extra term in the entropy for the temperature-dependent scenario ($\partial [-(3\mu^2/\pi^2)\Delta(T)^2]/\partial T \simeq (3\mu^2/\pi^2)\,T_c/0.41$ when $T=T_c$). \begin{widetext} \begin{center} \begin{figure} \includegraphics[width=0.45\textwidth]{janelaT0.eps} \includegraphics[width=0.45\textwidth]{janelaT10.eps} \includegraphics[width=0.45\textwidth]{Delta100.eps} \includegraphics[width=0.45\textwidth]{Delta0100.eps} \caption{Stability windows, i.e. regions bounded by the vertical line at $B^{1/4}=145$ MeV and the curves of $E/A=939$ MeV (shown for different temperatures and $\Delta$), for CFL SQM. Top left panel: $T=0$ (following reference \cite{Lugones}); top right panel: CFL SQM at $T=10$ MeV and values of $\Delta$ as indicated. The full lines represent calculations considering $\Delta$ constant with temperature and the dashed lines $\Delta=\Delta(T)$ (see text for details). Bottom panels: stability windows for CFL SQM at finite temperature and $\Delta=100$ MeV; the solid line is for zero temperature, the dashed line for $T=10$ MeV, and the dotted line for $T=30$ MeV. On the left, the curves were obtained considering a fixed $\Delta$, and on the right $\Delta=\Delta(T)$. All curves are calculated for fixed $E/A=939$ MeV and labelled with the corresponding value of $\Delta$ where necessary. The vertical line is the minimum $B$ value for stability.}\label{janela} \end{figure} \end{center} \end{widetext} We have also calculated, numerically, the structure of spherical strangelets, with the results shown in Figs.
\ref{energia} and \ref{energiaT} for the total energy of these particles as a function of the different parameters characterizing them. Just as in the case of bulk matter, when considering $\Delta=\Delta(T)$ there is a competition between the lowering of the effective pairing parameter, the raising of the chemical potentials in the CFL quark matter, and the extra term in the volumetric entropy, when compared to the case of a constant pairing gap parameter. For finite-size drops of SQM, the additional surface and curvature terms, which contribute to the thermodynamic potential with a sign opposite to the volumetric term, are affected only by the changes in $\mu$ (higher in the $\Delta(T)$ case) and in the strangelet radius (lower, but not significantly affected, the difference being less than 1\%). The overall result is that the stability for a given set of $m_s$, $B$, $\Delta_0$, $A$, and temperature is disfavored in the temperature-dependent $\Delta$ scenario, as it is in the bulk case. \begin{widetext} \begin{center} \begin{figure} \includegraphics[width=0.45\textwidth]{ECFLT.eps} \includegraphics[width=0.45\textwidth]{ECFLB.eps} \includegraphics[width=0.45\textwidth]{ECFLm.eps} \includegraphics[width=0.45\textwidth]{ECFLD.eps} \includegraphics[width=0.45\textwidth]{ECFLA.eps} \caption{Energy per baryon number as a function of the baryon number, bag constant, strange quark mass, pairing energy gap, and temperature, from left to right, top to bottom, respectively. The values of the fixed constants are indicated in each plot. The first four plots are for $T=0$ (full curves), $T=15$ MeV (dashed curves) and $T=30$ MeV (dotted curves). The last plot is for $A=100$ (full curve) and $A=1000$ (dashed curve).}\label{energia} \end{figure} \end{center} \begin{center} \begin{figure} \includegraphics[width=0.45\textwidth]{sigmaCFLm.eps} \includegraphics[width=0.45\textwidth]{CCFLm.eps} \caption{Surface and curvature energies of CFL strangelets, from left to right respectively, as a function of $m_s$. The fixed constants are $T=0$ (full curve), $T=15$ MeV (dashed curve), and $T=30$ MeV (dotted curve), with $A=100$, $B^{1/4}=145$ MeV, and $\Delta=100$ MeV.}\label{massa} \end{figure} \end{center} \begin{center} \begin{figure} \includegraphics[width=0.45\textwidth]{sigmaCFLD.eps} \includegraphics[width=0.45\textwidth]{CCFLD.eps} \caption{Surface and curvature energies of CFL strangelets, from left to right respectively, as a function of $\Delta$. The fixed constants are $T=0$ (full curve), $T=15$ MeV (dashed curve), and $T=30$ MeV (dotted curve), with $A=100$, $m_s=150$ MeV, and $B^{1/4}=145$ MeV. The critical temperature for $\Delta\lesssim 20$ MeV is below 15 MeV, and for $\Delta\lesssim 40$ MeV it is below 30 MeV, which is why the finite-temperature curves start at different values of the pairing gap energy.}\label{Delta} \end{figure} \end{center} \end{widetext} \begin{center} \begin{figure} \includegraphics[width=0.45\textwidth]{ECFLTDT.eps} \caption{Energy per baryon number of CFL strangelets as a function of $A$, calculated for $\Delta=\Delta(T)$. The fixed constants are $T=0$ (full curve), $T=15$ MeV (dashed curve), and $T=30$ MeV (dotted curve), with $m_s=150$ MeV, $B^{1/4}=145$ MeV, and $\Delta_0=100$ MeV.}\label{energiaT} \end{figure} \end{center} The total energy per baryon number decreases with increasing pairing gap and baryon number, and increases with increasing $B$, $m_s$ and $T$.
These trends can be understood by comparison with the behavior of the stability windows of SQM shown in Fig. \ref{janela}. The calculations show that the coefficient $R_0$, defining the strangelet radius through $R=R_0A^{1/3}$, decreases with increasing $A$ and $B$ but increases with $m_s$ and the $\Delta$ parameter (holding the other parameters fixed in each comparison). It is also higher at higher temperatures, since the thermal energy of quarks and gluons increases. These behaviors are easy to understand: as the strangelet's baryon content increases, its parameters approach the bulk values, resulting in a decrease in $R/A^{1/3}$. Also, a larger bag constant means a higher vacuum pressure on the strangelet's contents, explaining the dependence of the radius on this parameter. With increasing strange quark mass, the strange quark content decreases and so, at fixed $A$, the radius increases to maintain the constraint $A=\sum_iN_i$; the same reasoning applies to an increase in the pairing gap. Figures \ref{massa} and \ref{Delta} also show the dependence on these parameters of the surface and curvature energies, defined respectively as the coefficients multiplying $A^{2/3}$ and $A^{1/3}$ in the expression for the total energy. The surface and curvature contributions decrease at higher temperatures, a feature also seen in ordinary nuclear matter for the surface energy (see, for example, \cite{Ravenhall, Bondorf} and references therein). The surface energy as a function of the strange quark mass vanishes for massless quarks, reaches a maximum at $m_s\approx 150$ MeV, and decreases again for high values of $m_s$ due to the depletion of this very massive component. Simple numerical fits for the surface and curvature energy of strangelets at finite temperature were also obtained (to second order in the temperature $T$) for $\Delta=\Delta_0=100$ MeV (that is, for a gap parameter independent of temperature), $B^{1/4}=145$ MeV, and $m_s=150$ MeV, and are presented here \begin{widetext} \begin{equation} \sigma_{CFL}(T,A)=(81.09+0.013\,T-0.026\,T^2)\times(0.96+0.17\,e^{-\frac{A}{22.5}}+0.053\,e^{-\frac{A}{384.2}}) \end{equation} \begin{equation} C_{CFL}(T,A)=(163.85+0.003\,T-0.093\,T^2)\times(0.98+0.082\,e^{-\frac{A}{23.2}}+0.026\,e^{-\frac{A}{393.9}}) \end{equation} \end{widetext} As expected, the behavior of the parameters characterizing strangelets for CFL matter is qualitatively the same when compared to SQM without pairing, i.e., the system at finite temperature is less stable than at absolute zero, but the surface and curvature contributions decrease, a feature well known for nucleon systems. One interesting point is that for $\Delta=100$ MeV the chemical potential for the $s$ quark is very close to the common chemical potential for stable strangelets without pairing at the same temperature and with the same values of $B$ and $m_s$, but the chemical potential for the light quarks is much lower. As a consequence, the surface energy (determined only by the massive strange quark) is almost equal for the two scenarios, but the curvature energy is much lower for CFL strangelets. This means that for values of the pairing gap lower than 100 MeV the surface energy is higher in the CFL state than without pairing, while the curvature energy is always lower for CFL strangelets, regardless of the value of $\Delta$.
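The fits presented above are straightforward to evaluate. As an illustration, the short Python sketch below (our own, not part of the original numerical code) implements the two expressions and confirms that both coefficients decrease with temperature:
\begin{verbatim}
import math

def sigma_cfl(T, A):
    # Fitted surface energy coefficient (MeV) for Delta_0 = 100 MeV,
    # B^(1/4) = 145 MeV and m_s = 150 MeV; T in MeV.
    thermal = 81.09 + 0.013 * T - 0.026 * T**2
    size = 0.96 + 0.17 * math.exp(-A / 22.5) \
                + 0.053 * math.exp(-A / 384.2)
    return thermal * size

def c_cfl(T, A):
    # Fitted curvature energy coefficient (MeV), same parameter set.
    thermal = 163.85 + 0.003 * T - 0.093 * T**2
    size = 0.98 + 0.082 * math.exp(-A / 23.2) \
                + 0.026 * math.exp(-A / 393.9)
    return thermal * size

for T in (0, 15, 30):
    print(T, round(sigma_cfl(T, 100), 2), round(c_cfl(T, 100), 2))
\end{verbatim}
For $A=100$ the surface coefficient drops from about 81 MeV at $T=0$ to about 58 MeV at $T=30$ MeV, and the curvature coefficient from about 164 MeV to about 80 MeV, in line with the discussion above.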
The behavior of the electric charge of a strangelet with fixed $T$, $B$, $m_s$, and $\Delta=\Delta_0$ as a function of the baryon number is shown in Fig. \ref{charge}. The electric charge grows with temperature for large baryon number strangelets, driven by the temperature dependence of the quark number densities. For the massless quarks, a non-zero temperature slightly favors an increase in their number through the term $\mu T^2 V$. For the massive $s$ quark, however, the effect is the opposite, since $N_s=\int_0^{\infty} \frac{dN_s}{dk}\left[1+\exp\!\left(\frac{\sqrt{k^2+m_s^2}-\mu_s}{T}\right)\right]^{-1} dk$. Thus $Z/A^{2/3}$ deviates from the nearly constant behavior expected from the suppression of massive strange quarks near the surface at $T=0$ \cite{Madsen01}, owing to the greater importance, at higher temperatures, of the volumetric imbalance in the number densities of the different quark flavors. \begin{center} \begin{figure} \includegraphics[width=0.45\textwidth]{ZCFLT.eps} \caption{Electric charge of strangelets as a function of the baryon number for $T=0$ (full curve), $T=15$ MeV (dashed curve), and $T=30$ MeV (dotted curve), $B^{1/4}=145$ MeV, $m_s=150$ MeV, and $\Delta=100$ MeV.}\label{charge} \end{figure} \end{center} \section{Conclusions} CFL strange quark matter at $T=0$ is more stable than SQM without pairing \cite{Madsen01,Lugones} when the strange quark mass, strong coupling constant, and bag constant are held fixed. This result has been extended and quantified for $T > 0$ in the present work. Even when the temperature is close to the critical temperature for pairing, there is still room for (meta-)stability, depending on the choice of parameters. This suggests that the transition from a neutron star to a strange star could proceed right after its formation, and the system might even skip the neutron star stage, if conditions for conversion of ordinary nuclear matter to the CFL state are met in the interior of these compact objects. As a general result, a finite temperature always destabilizes the system; taking into account the dependence of $\Delta$ on $T$ makes SQM slightly more disfavored than in the ``constant $\Delta$'' version. We also notice a very distinctive feature between strangelets with and without pairing concerning the existence of the critical baryon number, $A_{crit}$. This quantity represents the minimum baryon number for which strangelets are stable against neutron decay. The effects of surface and curvature tend to destabilize strange matter at low baryon number. As a result, the energy per baryon of small lumps of SQM increases as the baryon number decreases, until it rises above the neutron decay threshold, i.e., above $\sim 930$ MeV. As shown in \cite{Chineses} and \cite{Madsen98}, the critical baryon number exists even at zero temperature. It is known (see \cite{Farhi84} and \cite{Lugones}) that the lower the value of the bag constant (of course, respecting the limit $B^{1/4}\geq 145$ MeV) and the higher the value of the pairing gap, the more stable strange quark matter in bulk is. In the case of CFL strangelets with high values of $\Delta$ and relatively low values of the bag constant, performing the analysis within the MIT bag model, the existence of $A_{crit}$ is not clear. It must be noted, however, that the liquid drop model does not provide a good description at low baryon number, being less reliable than shell models that fill the quark states explicitly \cite{Jaf}.
Another important point is that, although high baryon number CFL strangelets appear absolutely stable even at temperatures of order 30 MeV when $\Delta=100$ MeV, values of the pairing gap much above a few hundred MeV are not expected to describe these systems. Consequently, the critical temperatures for quark pairing inside strangelets are not expected to exceed 70--100 MeV. For temperatures above this value, the quarks inside the strangelet would no longer be paired and the gain in energy of this state compared to non-superconducting strangelets would vanish. Since strangelets without pairing are not stable at temperatures as high as these maximum expected critical temperatures, the stability of strange quark matter would vanish above $T_{crit}$, however high the pairing gap. We have also found that strangelets are more favorable in the CFL state, as expected. In particular, the curvature energy of CFL strangelets is lower than for ``normal'' strange quark matter, which may influence the fragmentation process of bulk CFL SQM. This is an important issue when considering the possible presence of these particles among cosmic rays, and also for strangelet production in heavy ion collisions, although the very high temperatures there disfavor the production of stable strangelets \cite{Madsen01}. \begin{acknowledgements} We acknowledge the very important advice of M. Alford on the issue of screening for CFL strangelets. This work is supported by Funda\c c\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo. JEH acknowledges the partial financial support of the CNPq Agency (Brazil). \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Decentralized Finance (DeFi) refers to public blockchain-based financial infrastructure that uses smart contracts to replicate traditional financial services in a more open, interoperable, and transparent way \cite{Schaer.2021}. These smart contract-based services are usually referred to as protocols. They provide basic building blocks such as the opportunity to swap assets or allocate liquidity efficiently and can be reused and combined in any way. While decentralized exchanges and lending markets are arguably among the most prominent protocols and get a lot of attention, there are other crucial building blocks that are required for a well-functioning financial infrastructure. One of these building blocks is the ability to transfer risks. Consider the following general example: An economic agent has an investment opportunity that may result in a small loss or a large gain. Further assume that both outcomes have the same probability. The expected return would be positive and a risk-neutral (or risk-seeking) agent would be willing to engage. However, if the same opportunity is instead presented to a risk-averse agent, they may decline and forego a positive expected return due to the cost of uncertainty. If a financial market allows risk to be transferred, there is a simple solution. The risk-averse person can approach an entity with a higher risk tolerance and offer them a premium in return for their willingness to bear the risk. They essentially share the positive expected return and the risk would be borne by the entity with the higher risk tolerance. Similarly, a blockchain-based financial infrastructure becomes more efficient if smart contract risks are transferable. Risk-averse investors could share some of their expected return as a compensation for an insurance policy that covers the smart contract risks of the respective liquidity pool. DeFi users who are willing to bear additional risk could generate a higher yield. The existence of a market for risk transfer would be beneficial for everyone, as it allows all DeFi users to structure their portfolio in accordance with their individual risk preferences. There already exists a relatively large number of smart contract-based insurance protocols, including but not limited to Nexus Mutual \cite{Karp.2017}, Nsure \cite{Nsure.2020}, cozy.finance \cite{CozyFinance.2020}, Unslashed Finance \cite{Unslashed.2021} and Risk Harbor \cite{riskHarbor2}. While some of these protocols offer innovative solutions and have provided valuable contributions to the DeFi protocol space, they are arguably not fully decentralized and face various challenges. \textit{First}, insurance requires that the insurer can credibly demonstrate its ability to cover potential losses at all times. Centralized insurance is based on a combination of reputation and regulation. Moreover, centralized insurance companies rely on active asset and risk management to strike a balance between liquidity and capital efficiency. DeFi, on the other hand, is built on a pseudonymous system with little to no legal recourse. It relies on transparency and (over-)collateralization. Consequently, many implementations face trade-offs between capital efficiency, security and special privileges that allow for manual interventions. \textit{Second}, DeFi insurance protocols usually struggle with claim assessment. Generally speaking there are two options. 
(a) The insurance policy is parametric and relies on oracles, or (b) the outcome is decided through a vote by so-called claim assessors. Both approaches are quite subjective and can easily lead to false outcomes. The former introduces dependencies on external data providers and does not reflect true damages due to its parametric nature. The latter relies on a voting process among pseudonymous actors that can assume various roles within (and outside) the system. Moreover, truly decentralized voting will be subject either to sybil attacks \cite{douceur2002sybil} or to whale dominance with potentially problematic incentives. There are good arguments why neither the oracle-based nor the claim assessor-based approach should be considered fully decentralized. \textit{Third}, most protocols cannot prevent over-insurance. DeFi users can buy cover for protocols to which they have no exposure. This can create problematic incentives and -- depending on the jurisdiction -- result in conflict with the law. In this paper, we propose a novel DeFi insurance protocol that solves these issues. To the best of our knowledge, it is the first proposal for a fully decentralized insurance protocol with no external dependencies. As part of this research project, we have also built a basic reference implementation of the protocol. The implementation can be found in the appendix. After this short introduction, we discuss related works from the DeFi, insurance and finance literature. In Section \ref{sec:protocol} we turn to the technical part, describe the protocol and perform a gas efficiency analysis. In Section \ref{sec:divergence} we study external incentives for liquidity providers and derive the implicit cost of liquidity provision for various pools involving our protocol's tranche tokens. In Section \ref{sec:discussion} we discuss our results, potential extensions and limitations. Finally, we conclude in Section \ref{sec:conclusion}. \section{Related work} \label{sec:literature} The motivation for a DeFi insurance protocol is closely linked to discussions on smart contract and DeFi risks, protocol failures and shock propagation. These issues have received an increasing amount of research attention and are an important part of the academic discourse on DeFi \cite{Atzei.2017,Gudgeon.2020,Macrinici.2018,Zheng.2020,Zhou.2022}. Our protocol can mitigate some of the consequences by allocating risk in a more efficient way. Moreover, market prices for risk premiums can serve as an indication of the perceived risk, similar to prediction markets. With regard to yield-generating lending protocols, different authors discuss the risks of illiquidity, dependencies and misaligned incentives \cite{Gudgeon.2020, Bartoletti.2021, Lehar.2022, Qin.2021}. Moreover, there are various papers discussing oracle reliability and potential manipulation \cite{Angeris.2020, Liu.2021}. Our proposal does not have any dependencies, allows the insurant to hedge against oracle exposure, and even works in situations where the insured protocols become illiquid. Existing DeFi insurance protocols are mostly based on principles of mutual insurance, where users participate in the commercial success of the protocol. In theory, mutuals can have certain advantages for large risk pools \cite{Albrecht.2017}, in the presence of transactional costs and governance issues \cite{Laux.2010}, and in addressing problems of adverse selection \cite{Ligon.2005}.
However, due to centralized economic value capture in most mutuals, problems potentially remain with respect to default risks \cite{Tapiero.1986}. In a DeFi context, mutual-based insurance protocols usually rely on centralized or vote-based claim assessment and may depend on \emph{know your customer} (KYC) principles or introduce other forms of dependencies. Our protocol is fundamentally different from mutual insurance. There is no centralized economic value capture and the protocol does not accumulate reserves. The general concept of our protocol is inspired by peer-to-peer (P2P) insurance and financial instruments with tranches, such as collateralized debt obligations (CDO). In a P2P insurance model, individuals pool their insurance premiums and use these funds to cover individual damages. P2P risk transfer is still at a very early stage of research, with seminal works including \cite{Denuit.2022,Denuit.2020b,Feng.2022b,Feng.2022,Denuit.2019, Denuit.2020}. Several authors have started to formally explore the organizational structure, optimality and pitfalls of P2P insurance \cite{Charpentier.2021,Clemente.2020,Levantesi.2022}. Our protocol is based on similar principles. In particular, we make use of different risk preferences and levels that allow individuals to pool their risks without the explicit need for an intermediary. However, there is an important difference between P2P insurance and our approach: P2P insurance usually covers individual risks. As such, P2P insurance is built on the general assumption that damages within the collective are uncorrelated and that premiums of the unaffected insurants can be used to compensate the ones that have suffered losses. Our protocol insures large-scale risks that will affect all insurance holders. Consequently, we need explicit roles in accordance with the individuals' risk preferences. This is achieved by creating tranches with different seniorities and security guarantees. As such, our protocol incorporates some aspects of CDOs. CDOs have been discussed extensively in the subject-related literature \cite{duffie2001risk, armstrong2005understanding, lucas2006collateralized, bluhm2011valuation}. They split cash flows among tranches with different seniority. The most senior tranches are honored first and the most junior tranches bear the losses. In addition to traditional use cases, such as CDOs for bank refinancing, insurance risk also appears to be a suitable use-case for CDOs \cite{forrester2008insurance}. Likewise, CDOs are used widely in various applications, also outside traditional financial markets. For example, CDOs have already been discussed in connection with the support of microcredits \cite{bystroem2008microfinance}. This combination of P2P insurance, seniority-based promises and DeFi specifics builds the foundation of our protocol and allows us to propose a fully decentralized DeFi insurance. \section{Protocol} \label{sec:protocol} In this section, we present a decentralized risk hedging protocol, based on tranched insurance. First, we provide a quick overview and describe the core functionality of the protocol. Second, we take a more technical perspective and describe individual function calls and state transitions. Third, we discuss potential technical extensions and trade-offs. Fourth, we provide a short efficiency analysis and discussion of the protocol's computational costs (gas fees).
\subsection{Protocol Overview} \label{sub:overview} The general idea of our insurance protocol is to pool assets from two third-party protocols, and allow users to split the pool redemption rights into two tranches: $A$ and $B$. If any of the third-party protocols suffer losses during the insurance period, those losses will be primarily borne by the $B$-tranche holders. $A$-tranche holders will only be negatively affected if 50\% or more of the pooled funds are irrecoverable, or if both protocols become temporarily illiquid and face (partial) losses. We effectively split the redemption rights into a riskier and less risky version and allow the market for $A$- and $B$-tranches to determine the fair risk premium in line with the users' expectations. The protocol consists of three main phases: \textit{risk splitting}, \textit{investing/divesting} and \textit{redemption}. In the \textbf{risk splitting} phase, anyone may allocate their preferred number of $C$-tokens to the insurance protocol. These $C$-tokens represent the underlying asset, e.g., a stablecoin or Ether. In exchange, the users receive equal denominations in $A$- and $B$-tranches, thereby ensuring that an equal number of both tranches will be created. $A$ and $B$ are ERC-20 compliant tokens and can be transferred separately. This allows the users to swap the tokens on decentralized exchanges to obtain a relative allocation of $A$- and $B$-tranches that reflects their risk preferences. At the beginning of the \textbf{invest/divest} phase, the insurance protocol allocates the accumulated collateral of $C$ equally into two protocols. In return, it receives interest-bearing tokens (wrapped liquidity shares) from each protocol. We denote these shares as $C_x$ and $C_y$. To make things less abstract, consider the following example: A stablecoin ($C$) gets allocated to two distinct yield-generating lending protocols. In return, the insurance protocol receives the respective interest-bearing tokens ($C_x$ and $C_y$). They are locked in the insurance contract, where they will accumulate interest over time. At the end of the invest/divest phase, the insurance protocol tries to liquidate the wrapped shares. This is a necessary step in preparation for the redemption of the $A$- and $B$-tranches. In a third step, the protocol enters the \textbf{redemption} phase. The goal of this phase is to compute potential losses and allow the $A$- and $B$-tranche holders to claim their respective share of the underlying. It is important to understand that the redemption phase can be executed in one of two distinct modes. Mode selection depends on the success of the liquidation at the end of the invest/divest phase. If the liquidation of $C_x$ and $C_y$ works as expected and the insurance protocol receives the collateral tokens $C$, then redemption can be conducted in \textit{liquid mode}. In this mode, it is straightforward to distribute the interest equally among all $A$- and $B$-tranche holders. Similarly, potential losses can be computed and primarily allocated to $B$-tranche holders. If the liquidation of $C_x$ or $C_y$ fails, the protocol enters \textit{fallback mode}. This can happen if a third-party protocol suffers from a liquidity crunch or if an external contract changes the expected behavior. In fallback mode, users redeem their tranche tokens directly for their preferred mix of $C_x$ and $C_y$ tokens. The higher tranche seniority of $A$-tranches is ensured through a timelock-based redemption sequence. 
In a first step, $A$-tranche holders get to choose if they want to claim their share in $C_x$, $C_y$ or a mix of the two. After the timelock is over, $B$-tranche holders can claim what is left. \subsection{Technical Implementation} \label{sub:technical} \begin{figure*} \begin{tikzpicture}[align=center, node distance=5cm, scale=0.6, every node/.style={scale=0.6}] \tikzstyle{arrow} = [thick, ->, >=stealth] \tikzstyle{box} = [rectangle, thick, rounded corners, draw = black, minimum height = 1.5cm, minimum width = 3cm] \coordinate (0) at (0, 0); \draw[fill=black] (0) circle (5pt); \node [box] (1) [right of=0, xshift = -1cm] {\textbf{ReadyToAccept} \\ $\bullet$ \texttt{splitRisk()}}; \node [box] (2) [right of=1, xshift = 1cm] {\textbf{ReadyToInvest} \\ $\bullet$ {\texttt{invest()}}}; \node [box] (3) [right of=2, xshift = 1cm] {\textbf{MainCoverActive}}; \node [box] (4) [right of=3, xshift = 1cm] {\textbf{ReadyToDivest} \\ $\bullet$ {\texttt{divest()}}}; \node [box] (6) [right of=4, xshift = 1cm] {\textbf{FallbackOnlyA}\\ $\bullet$ \texttt{claimA()}}; \node [box] (5) [above of=6, yshift=-2.5cm] {\textbf{Liquid} \\ $\bullet$ \texttt{claimAll()} \\ $\bullet$ \texttt{claim()}}; \node [box] (7) [below of=6, yshift = 2.5cm] {\textbf{FallbackAll}\\ $\bullet$ \texttt{claimA()} \\ $\bullet$ \texttt{claimB()}}; \coordinate (X) at (10, 2.8); \coordinate (Y) at (26.5, 2.2); \coordinate (Z) at (26.5, 2.8); \draw [arrow] (0) -- node [above] {Deployment} (1); \draw [arrow] (1) -- node [above, align=center] {\small{\texttt{block.timestamp}}\\ $=S$} (2); \draw [arrow] (2) -- node [above, align=center] {\small{\texttt{invest()}}\\successful} (3); \draw [arrow] (3) -- node [above, align=center] {\small{\texttt{block.timestamp}}\\ $=T_1$} (4); \draw [arrow] (22,0.75) -- (22,2.2) -- node [below, align=left, yshift =0cm] {\small{\texttt{divest()}}\\ successful} (Y); \draw [arrow] (6) -- node [left, align=center] {\small{\texttt{block.timestamp}}\\ $=T_3$} (7); \draw [arrow] (4) -- node [above, align=center, yshift=0cm] {\small{\texttt{block.timestamp}}\\ $=T_2$} (6); \draw[thick, arrow] (2) -- node [right, yshift=0.4cm] {\small{\texttt{block.timestamp}}\\ $=T_1$} (X) -- (Z); \end{tikzpicture} \caption{State Transition Diagram: Represents state transitions and their respective function sets.} \label{fig:states} \end{figure*} A reference implementation of the insurance contract is available in the appendix and demonstrates how our protocol can be used to provide insurance for two yield-generating protocols that wrap the Maker DAO stablecoin Dai \cite{Maker.2017}, denoted as $C$. The two yield-generating protocols are Aave version 2 \cite{Aave.2020} with aDai and Compound Finance \cite{Compound.2019} with cDai, denoted as $C_x$ and $C_y$ respectively. The reference implementation includes the full Solidity code for the Ethereum Virtual Machine-based (EVM) contract and can be used as a starting point for developers who want to create their own insurance contracts using a similar approach. In this subsection we provide an overview of the reference implementation's technical specifications, including the functions, variables and states. We present this information in a chronological order, following the timeline presented in Figure \ref{fig:sequential}. The states are referred to as: \textit{ReadyToAccept, ReadyToInvest, MainCoverActive, ReadyToDivest, Liquid, FallbackOnlyA} and \textit{FallbackAll}. 
Note that strictly speaking a smart contract cannot automatically transition from one state to another based on the passage of time; this is a fundamental limitation of smart contract technology. Any state change on the contract has to be initiated by a function call. Our implementation works around this by defining states as a set of successfully callable functions and reverting function calls if they are outside the allowed time windows. Hence, the set may change based on time conditions. Before the first state, the initial parameters must be defined and the contract deployed. The parameters include the addresses of the tokens involved in the contract, as well as the absolute values for the timestamps when state transitions occur. These forced state transitions are represented in Figures \ref{fig:states} and \ref{fig:sequential} as $S$, $T_1$, $T_2$ and $T_3$, where $S < T_1 < T_2 < T_3$. Furthermore, the constructor deploys two ERC-20 token contracts for $A$- and $B$-tranches, with the insurance contract as the sole, immutable owner. This means that only the insurance contract can mint and burn the tranche tokens. After deployment, the contract is in the \texttt{ReadyToAccept} state and the public function \texttt{splitRisk()} is available for anyone to call. The input parameter for the function is an amount of $C$ tokens. The \texttt{splitRisk()} function then transfers this amount of $C$ tokens from the caller to the insurance contract and issues a number of $A$- and $B$-tranche tokens equal to half that amount to the caller. For example, if the input is 100, the function will transfer 100 $C$ tokens from the caller to the insurance contract and issue 50 tranche $A$ tokens and 50 tranche $B$ tokens to the caller. It is important to note that the act of calling the \texttt{splitRisk()} function does not provide the user with any form of insurance cover. In order to obtain insurance cover -- or to assume more risk -- the user must sell or trade a portion of their tranche $A$ or tranche $B$ tokens. When time $S$ is reached, the contract transitions to the \texttt{ReadyToInvest} state and users can no longer mint new tranche tokens. The \texttt{invest()} function is available during this state and it is tailored to the specific needs of the protocols that are part of the insurance contract, with the goal of splitting the deposited $C$ tokens equally among the protocols. In the reference implementation, the function will send half of the available $C$ to Aave and the other half to Compound in exchange for their respective yield-bearing tokens, $C_x$ and $C_y$. After a successful \texttt{invest()} call, the insurance contract holds $C_x$ and $C_y$ of equal value and no longer holds $C$. Calling the \texttt{invest()} function incurs a transaction fee, paid by the caller, while the benefits of the call are shared among all participants. To avoid the problem of a first mover disadvantage, to ensure that the call is executed in a timely fashion and to split the costs equally among all participants, the \texttt{invest()} function should compensate the caller for executing the transaction.\footnote{We did not include a compensation mechanism in the reference implementation. When implemented, it should cover at least the base fee of the transaction plus a fixed amount for the tip.} The unlikely case in which no successful \texttt{invest()} call is made before the forced state transition at $T_1$ will be covered later in this subsection.
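To make the time-gated state logic and the minting mechanics concrete, the following Python sketch is a simplified, hypothetical translation of the behavior described above (the actual reference implementation is in Solidity; token transfers, events and the compensation mechanism are omitted):
\begin{verbatim}
import time

class InsuranceSketch:
    # Illustrative only: on-chain, time.time() corresponds to
    # block.timestamp and a failed check reverts the transaction.
    def __init__(self, S, T1, T2, T3):
        assert S < T1 < T2 < T3
        self.S, self.T1, self.T2, self.T3 = S, T1, T2, T3
        self.is_invested = False
        self.balance_c = 0   # pooled C tokens
        self.tranche_a = {}  # holder -> A-tranche balance
        self.tranche_b = {}  # holder -> B-tranche balance

    def split_risk(self, caller, amount):
        # ReadyToAccept is simply "before S": outside this window
        # the call fails, which is how states are enforced.
        if time.time() >= self.S:
            raise RuntimeError("revert: minting window closed")
        self.balance_c += amount
        # Equal denominations of A and B are issued to the caller.
        self.tranche_a[caller] = self.tranche_a.get(caller, 0) + amount // 2
        self.tranche_b[caller] = self.tranche_b.get(caller, 0) + amount // 2
\end{verbatim}
The same pattern, a time-window check followed by the state-specific action, applies to \texttt{invest()}, \texttt{divest()} and the claim functions.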
When a successful \texttt{invest()} call is made, the contract transitions to the \texttt{MainCoverActive} state and sets the variable \texttt{isInvested = true}. The contract is now exposed to the risks of the third-party protocols and the main period of insurance cover for the $A$-tranches begins. In this state, no functions can be called on the contract. However, the $A$- and $B$-tranches remain transferable. At time $T_1$, the contract will transition from the \texttt{MainCoverActive} state to the \texttt{ReadyToDivest} state, where the \texttt{divest()} function can be invoked. It has a similar structure to the \texttt{invest()} function, but instead of depositing the underlying assets into the third-party protocols, \texttt{divest()} tries to withdraw the underlying assets including any accumulated yield from the protocols. A \texttt{divest()} call is considered successful if no errors occur while withdrawing the assets and if both $C_x$ and $C_y$ have been fully converted back to $C$. \begin{figure*}[h] \centering \begin{tikzpicture}[thick \draw [->,thick] (0,0) -- (14,0) node (time) [right] {$t$}; \draw (1,-0.1) node [below, align=center] {$D$} node [above=0.2cm] {\small{\texttt{deployment}}}-- (1,0.1) ; \draw (3,-0.1) node [below] {$S$} node [above=0.2cm] {\small{\texttt{invest()}}}-- (3,0.1) ; \draw (9,-0.1) node [below] {$T_{1}$} node [above=0.2cm] {\small{\texttt{divest()}}}-- (9,0.1) ; \draw (9,0.8) -- (9,1.5) node [left, midway] {\small{if \texttt{inLiquidMode = TRUE}}}; \draw[<->] (9.1,1.1) -- (14,1.1)node [midway, above] {\small{\texttt{claimAll()}}} ; \draw (10,-0.8) -- (10,-1.5) node [left, midway] {\small{if \texttt{inLiquidMode = FALSE}}}; \draw (10,-0.1) node [below] {$T_{2}$} -- (10,0.1) ; \draw (11,-0.1) node [below] {$T_{3}$} -- (11,0.1) ; \draw[<->] (10.1,-1) -- (14,-1)node [midway, above] {\small{\texttt{claimA()}}} ; \draw[<->] (11,-1.5) -- (14,-1.5)node [midway, above] {\small{\texttt{claimB()}}} ; \end{tikzpicture} \caption{Sequential actions in liquid mode (top, \texttt{divest()} successful) and fallback mode (bottom, \texttt{divest()} unsuccessful).} \label{fig:sequential} \end{figure*} \begin{table*}[b!] \center \caption{The three potential outcomes for liquid mode} \label{tbl:liquid} \setlength{\extrarowheight}{5pt} \begin{tabular}{c c c p{8cm}} \hline \hline \textbf{Case} & \textbf{Payoff $\mathbf{A}$} & \textbf{Payoff $\mathbf{B}$} & \textbf{Description} \\ \hline $C_{T_1} \geq C_S$ & $ \frac{C_{T_1}}{2}$ & $ \frac{C_{T_1}}{2}$ & Proceeds are split equally among all tranche token holders. Both tranches are treated equally.\\\hline $ C_S > C_{T_1} > \frac{C_S}{2}$ & $ \frac{C_S}{2} + i$ & $C_{T_{1}} - \left( \frac{C_{S}}{2} + i \right) $ & $A$-tranche holders get fully compensated and receive yield payment. $B$-tranche holders receive a proportion of their initial stake.\\ \hline $C_{T_1} \leq \frac{C_S}{2}$ & $C_{T_1}$ & $0$ & Proceeds are used to partially compensate $A$-tranche holders. This can only occur if both yield-generating protocols suffer losses. \\ \hline \hline \end{tabular} \end{table*} A successful \texttt{divest()} call immediately transitions the protocol to the \texttt{Liquid} state by setting \texttt{inLiquidMode = true}. In this state, the allocation of the redeemed assets to the $A$- and $B$-tranches is deterministic and can be calculated as part of the \texttt{divest()} call. Let us define $C_S$ as the total initially invested amount, $C_{T_1}$ as the total redeemed amount and $i$ as the interest. 
We can then differentiate between three cases and determine the payouts for each case, as shown in Table \ref{tbl:liquid}. The payout per $A$- and $B$-tranche token is stored on the contract and can be accessed using the variables \texttt{cPayoutA} and \texttt{cPayoutB}, respectively. During the liquid state, users can call the \texttt{claim()} function, which accepts an amount for $A$- and $B$-tranches as input. If the caller is in control of at least the specified amount of tranches, the contract will burn these tranches and transfer the payout to the caller. For convenience, a \texttt{claimAll()} function is available and will internally call the \texttt{claim()} function with the caller's current balance of tranches. If no successful \texttt{divest()} call is made during the \texttt{ReadyToDivest} state, a forced transition occurs at $T_2$ and the protocol enters fallback mode, which starts in state \texttt{FallbackOnlyA}. In fallback mode, the protocol has no knowledge about the value of its interest-bearing tokens relative to the initial investment. Therefore, instead of assigning a payout to the tranches, the tranche holders can choose which of the two interest-bearing tokens they would like to redeem. Based on the total amount of tranche tokens and the remaining interest-bearing tokens, the contract determines a fixed redeem-ratio for each of the two interest-bearing tokens. These ratios are stored on the contract as \texttt{cxPayout} and \texttt{cyPayout} and are defined as the total amount of the respective asset, divided by half of the total amount of tranches. For example, assume $50$ $A$- and $50$ $B$-tokens have been minted and the contract holds $20$ $C_x$ and $1500$ $C_y$. A tranche can now be redeemed for $0.4$ $C_x$ or $30$ $C_y$. Once all tranches are redeemed there are no interest-bearing tokens left on the contract. $A$- and $B$-tranches can be redeemed for the same amount. However, during the \texttt{FallbackOnlyA} state, as the name suggests, only $A$-tranches can be redeemed for interest-bearing tokens with the function \texttt{claimA()}. As an input for this function, the caller specifies how many of their $A$-tranches they want to redeem for $C_x$ and how many for $C_y$. The contract then burns the tranches and transfers the assets according to the redeem-ratios. At time $T_3$, if the contract is in fallback mode, the final transition happens to the \texttt{FallbackAll} state. This state is identical to \texttt{FallbackOnlyA}, with the only difference being that $B$-tranches can now also be redeemed via the \texttt{claimB()} function. Finally, to ensure we never end up in a state where the assets cannot be recovered, we need to define a state transition from \texttt{ReadyToInvest} to \texttt{Liquid} if the \texttt{invest()} function was not successfully called. This transition happens after $T_1$ if \texttt{isInvested == false} and allows the users to reclaim their initially invested funds. \subsection{Extensions and Trade-Offs} \label{sub:extensions} To obtain insurance cover, a protocol user must sell their $B$-tranches. A possible extension to the insurance contract would be to use intra-transaction composability and connect it to a decentralized exchange. This would allow users to sell their $B$-tranches in the same transaction as the \texttt{splitRisk()} function. However, note that any additions to the insurance contract will introduce additional risk. Keeping the contract as simple as possible and reducing dependencies to a minimum will help to manage this risk.
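The settlement logic itself illustrates how small this core can remain. The following Python sketch (again illustrative, not the Solidity reference code) reproduces the liquid-mode payouts of Table \ref{tbl:liquid} and the fallback redeem ratios, including the numerical example given above:
\begin{verbatim}
def liquid_payouts(c_s, c_t1, i):
    # Total payouts per tranche class, following Table 1.
    # c_s: total C invested, c_t1: total C redeemed,
    # i: interest credited to the A-tranche side.
    if c_t1 >= c_s:          # no losses: equal treatment
        return c_t1 / 2, c_t1 / 2
    if c_t1 > c_s / 2:       # partial loss: A made whole plus yield
        a = c_s / 2 + i
        return a, c_t1 - a   # B absorbs the loss
    return c_t1, 0.0         # severe loss: everything goes to A

def fallback_ratios(total_tranches, held_cx, held_cy):
    # Redeem ratios per tranche token in fallback mode: the total
    # amount of each asset divided by half the tranche supply.
    half = total_tranches / 2
    return held_cx / half, held_cy / half

print(fallback_ratios(100, 20, 1500))  # (0.4, 30.0)
\end{verbatim}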
We argue that most extensions which introduce new dependencies should be implemented at the user interface level in a separate contract. Consider the following example: Let us assume that we want to create a function to insure an amount of $C$ tokens. We create a new contract with a function that uses a flash loan \cite{Qin.2021} for twice the amount and calls \texttt{splitRisk()}. In the same function, the $B$-tranches are sold to a decentralized exchange and the $A$-tranches transferred to the caller. Finally, the flash loan is repaid, using the proceeds from the sale and the funds from the initial caller. The additional contract can be developed and deployed independently of the insurance contract. This separation offers more flexibility and introduces no additional risks for other users. The trade-off here is that the transaction fees might be slightly higher, as external calls are more costly than internal ones. \subsection{Transaction Costs} Depositing funds into a protocol incurs a transaction fee, which is imposed by the blockchain network and expressed in units of computation -- commonly called gas. This transaction fee can vary slightly based on circumstantial parameters, but it largely depends on the computational complexity of the transaction. Depositing funds into our reference implementation via the \texttt{splitRisk()} function costs around 83,000 gas. Depositing to Aave or Compound directly incurs a fee of 249,000 or 156,000 gas, respectively. While calling the \texttt{invest()} function is expensive (488,000 gas), this cost can be split among all users in the insurance contract. With ten participants, for example, the amortized cost is roughly $83{,}000 + 488{,}000/10 \approx 132{,}000$ gas per user, already below the cost of a direct Compound deposit. Similar to yield aggregation protocols \cite{Cousaert.2022}, the insurance contract becomes more gas efficient the more users participate, and even for just a few users we expect the minting of insured tokens to be cheaper than minting uninsured tokens. \section{LP-Incentives and Divergence Loss} \label{sec:efficiency} \label{sec:divergence} Recall that users must mint $A$- and $B$-tranches in equal proportions. Consequently, they will only be able to reach token allocations in line with their risk preferences if there is a liquid market. Insufficient liquidity would lead to large price spreads (or slippage). Hence, there is a need for market makers, or more generally liquidity providers. In what follows, we analyze the incentives for liquidity provision of $A$- and $B$-tranches on constant product market makers (CPMM), a special form of automated market makers (AMM) \cite{Mohan.2022}. Note that CPMMs are only one of many possibilities; tranche token markets could emerge on any trading infrastructure. However, there are a few reasons why CPMMs are of particular importance. \textit{First}, they usually handle a large part of the on-chain trading volume. \textit{Second}, CPMMs allow for composable calls and will always be able to quote a price for any (input) amount. \textit{Third}, CPMMs can be set up in a completely decentralized way and are therefore in line with the strict decentralization requirement of our insurance protocol. In a CPMM setup, profitability for liquidity providers is determined by two opposing effects. On the one hand, the pool accumulates protocol fees. The gains are assigned proportionally to all liquidity provision shares. The rate of return depends on the pool's trading volume relative to the pool's liquidity. On the other hand, liquidity providers are subject to divergence loss (also known as impermanent loss).
Divergence loss refers to the problem that liquidity providers lose value if the liquidity redemption price ratio differs from the liquidity provision price ratio. Intuitively, this effect can be thought of as negative arbitrage. Divergence loss is zero if the two pool tokens maintain their initial price ratio and increases when the relative price shifts in one direction. To assess the incentives for $A$- and $B$-tranche liquidity providers we have to understand divergence loss in the context of our tranche tokens. Let us assume a standard $a\cdot b=k$ setup, where $a$ and $b$ represent the initial amount of $A$ and $B$ tokens in the pool and $k$ is a constant product that determines all feasible combinations of $a$ and $b$. Let us rearrange the equation and take the partial derivative w.r.t. $a$. The absolute value of the resulting slope can be reinterpreted as the relative price. \begin{equation} p_{_{AB}}= \frac{k}{{a}^2} \label{eq:priceratio} \end{equation} Trading activity may shift the token allocation to $a^*$ and $b^*$, with $a^* \cdot b^* = k$. Using \eqref{eq:priceratio} we obtain the new price ratio $p^*_{_{AB}}$, which allows us to express the post-trade quantities as a function of $p^*_{_{AB}}$. \begin{equation} a^* = \sqrt{\frac{k}{p^*_{_{AB}}}}, \qquad b^* = \sqrt{k \cdot p^*_{_{AB}}} \label{eq:quantities} \end{equation} We can now compare the portfolio value $V_P$ of a simple buy-and-hold strategy \eqref{eq:bhvalue} with the outcome of liquidity provision \eqref{eq:lpvalue}. \begin{align} V_P(a, b) &= p^*_{_{AB}} \cdot a + b \label{eq:bhvalue} \\ V_P(a^*, b^*) &= p^*_{_{AB}} \cdot a^* + b^* \label{eq:lpvalue} \end{align} Using \eqref{eq:quantities} to substitute quantities in \eqref{eq:bhvalue} and \eqref{eq:lpvalue} we get \begin{align} V_P(a, b) &= p^*_{_{AB}} \cdot \sqrt{\frac{k}{p_{_{AB}}}} + \sqrt{k \cdot p_{_{AB}}}, \label{eq:bhvalueplugged}\\ V_P(a^*, b^*) &= 2 \cdot \sqrt{k \cdot p^*_{_{AB}}}. \label{eq:lpvalueplugged} \end{align} Divergence loss can be expressed as follows: \begin{align} D &:= \left| \frac{V_P(a^*, b^*) - V_P(a, b)}{V_P(a, b)} \right | \label{eq:dl} \end{align} Plugging \eqref{eq:bhvalueplugged} and \eqref{eq:lpvalueplugged} into \eqref{eq:dl} and rearranging, we get \begin{equation} D = \left| \frac{2 \cdot \sqrt{\frac{p^*_{_{AB}}}{p_{_{AB}}}} - \frac{p^*_{_{AB}}}{p_{_{AB}}} - 1}{\frac{p^*_{_{AB}}}{p_{_{AB}}} + 1} \right|. \label{eq:dlPlugged} \end{equation} We can now use this equation to analyze two distinct outcomes and observe the effects on the pool and the liquidity providers. \textit{First}, assume the cover is not needed. The contract enters the \texttt{Liquid} state, and $A$- and $B$-tranches can be redeemed for equal amounts of $C$. We refer to this case as the \textit{standard case}. \textit{Second}, assume one of the underlying yield-generating protocols suffers losses. These losses will be reflected in the price of tranche $B$ and therefore have an effect on the liquidity pools that contain $B$. We refer to this as the \textit{benefit case}. \begin{figure}[h!]
\centering \begin{tikzpicture} \draw [->,thick] (0.5,0.8) -- (8,0.8) node (time) [right] {$t$}; \draw (1,0.4) node [below, align=center] {$S$ \\ \texttt{invest()}} -- (1,3.3) ; \draw (7,0.4) node [below, align=center] {$T_{1}$ \\ \texttt{divest()}} -- (7,3.3); \draw [](1,2) node[left] {\small{$p_{_C}$}} -- (7,2); \draw[blue] (1,2.5) node[left,blue] {\small{$p_{_A}$}} -- (7,2.8); \draw[red] (1,1.5) node[left,red] {\small{$p_{_B}$}} -- (7,2.8); \draw [<->,thick] (7.2, 2) -- node[right, align=left] {\small{Interest}} (7.2,2.8) ; \end{tikzpicture} \caption{Relative price development of $A$ and $B$ shares between $S$ and $T_1$, compared to the price of the underlying redeemable asset $C$.} \label{fig:lpprice} \end{figure} \subsubsection{Standard Case} In the standard case $A$-tranches lose their cover value over time. Conversely, $B$-tranches become less risky and will eventually be redeemable for the same amount of $C$ as $A$-tranches. Hence, we know that $p^*_{_{AB}}=1$. Substituting this into \eqref{eq:dlPlugged}, the expected divergence loss can be expressed as a function of the initial price ratio $p_{_{AB}}$. The greater the initial risk premium, the higher the divergence loss for liquidity provision in $A/B$-pools. Alternatively, a liquidity provider could decide to contribute to an $A/C$- or $B/C$-pool. At $T_1$, we know that $p_A = p_B = p_C \cdot (1+i)$, where $i$ is the accumulated interest. Hence, we know that $p^*_{_{AC}}= p^*_{_{BC}}=1+i$. If we plug this value into \eqref{eq:dlPlugged}, the expected divergence loss, for any expected interest rate, can be expressed as a function of the initial price ratios $p_{_{AC}}$ and $p_{_{BC}}$. Figure \ref{fig:lpprice} shows the price relations of the three tokens. For $A/B$-pool liquidity provision considerations, interest rates can be neglected. However, for $A/C$- and $B/C$-pools, interest plays an important role. Note that $B$-tranche prices already have a positive time trend. As such, interest will further increase the price spread to $C$. Conversely, $A$-tranche prices have a negative time trend and interest will therefore decrease the spread. Consequently, any (positive) interest will create a situation where the divergence loss of $B/C$-pools is greater than the divergence loss of $A/C$-pools. This is shown in Figure \ref{fig:abcPoolCurve}. \begin{figure}[h!] \center \includegraphics[width=8.5cm]{assets/DivergenceLoss.pdf} \caption{Divergence loss (in line with equation \eqref{eq:dlPlugged}) for $A/C$- and $B/C$-pools with an expected interest of 5\%. The two points marked in the graph represent an example of an initial price spread between $A$ and $B$: each $A$ token starts at a valuation of 1.02 $C$, and each $B$ token at 0.98 $C$.} \label{fig:abcPoolCurve} \end{figure} While the extent of the divergence loss depends on various factors, it is important to understand that the effect is relatively small. Moreover, there are ways to mitigate a trend-based divergence loss. Alternative pool models, such as the \textit{constant power sum invariant} \cite{niemerg2020yieldspace}, can be used to design decentralized exchanges that are better suited for tokens with an inherent price trend. \subsubsection{Benefit Case} If any of the yield-generating protocols suffers a loss, $A$-tranche holders will be compensated at the expense of $B$-tranche holders. In extreme scenarios, where one of the yield-generating protocols loses its entire collateral, $B$-tranches become worthless.
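Equation \eqref{eq:dlPlugged} is easy to probe numerically. The following Python sketch (illustrative only) evaluates the divergence loss for the standard-case numbers of Figure \ref{fig:abcPoolCurve} and for a benefit case in which the price ratio diverges:
\begin{verbatim}
import math

def divergence_loss(r):
    # Divergence loss D as derived in the text, with r the ratio
    # of the new to the initial price ratio, e.g. r = p*_AB / p_AB.
    return abs((2 * math.sqrt(r) - r - 1) / (r + 1))

# Standard case: A starts at 1.02 C, B at 0.98 C, and both end
# at 1.05 C after 5% interest.
print(divergence_loss(1.05 / 1.02))  # A/C pool, approx. 1.1e-4
print(divergence_loss(1.05 / 0.98))  # B/C pool, approx. 5.9e-4

# Benefit case: B crashes and the A/B ratio diverges.
for r in (10, 100, 10_000):
    print(r, round(divergence_loss(r), 3))  # 0.425, 0.802, 0.980
\end{verbatim}
The small standard-case values confirm that the trend-induced loss is modest, while the benefit case approaches a total loss, consistent with the limit discussed next.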
From \eqref{eq:dlPlugged} we know that $\lim_{p^*_{_{AB}} \to \infty } D = 1$, i.e., the entire position can be lost. Hence, $A/B$- and $B/C$-pool liquidity providers are at risk of losing their entire stake. While this constitutes an additional risk for providers of $B$-tranche liquidity, who also expose the pool counterpart to this risk and thus effectively stake twice the amount, they receive trading fees in return. As such, the incentives depend on the specifics and the risks of the insured protocols as well as the relative trading volume. In extreme cases, where $A/B$ and $B/C$ liquidity provision would be prohibitively risky, liquidity providers could instead contribute to $A/C$-pools. Liquid $A/C$-pools would be sufficient, in the sense that anyone who is interested in coverage could obtain it directly from the pool. This scenario will be further discussed in Section \ref{sec:discussion}. \section{Discussion} \label{sec:discussion} In the introduction we argued that current smart contract-based insurance protocols face various challenges and limitations. We will start our discussion by revisiting these points and explain how our model addresses them. First, the vast majority of existing insurance protocols allows for over-insurance, where users can buy cover that exceeds their exposure. This can create problematic incentives and -- depending on the jurisdiction -- result in conflict with the law. Our model does not allow for over-insurance. The risk and capital are linked through our tranches and cannot be separated without the use of another protocol. Second, there are various challenges relating to claim assessment. All of the existing insurance protocols we have examined have some form of dependency on external factors during the claim assessment process. These dependencies can be introduced through parametric triggers, oracles, community voting or decisions by a predetermined expert council. All of these approaches can lead to undesirable outcomes. The incentives may not be aligned, creating situations that can result in deviations from the true outcome. In our model, we do not rely on claim assessors, voting in a decentralized autonomous organization (DAO), expert councils, oracles or any trigger events. Instead, we use a deterministic distribution schedule of a common underlying (Liquid Mode) and a sequential choice model in accordance with the seniority of the tranches (Fallback Mode). Consequently, payouts are not conditional on any subjective decisions by an involved party or a third party. Third, we argued that many DeFi insurance protocols suffer from capital inefficiencies and there certainly is a trade-off between capital efficiency, security and special privileges. We found that most existing protocols tend to be conservative or cautious in their approach. The collateral is usually held in low-risk, non-interest-bearing assets. As a result, these protocols have at most 50\% capital efficiency before leverage. Some protocols are capable of increasing their efficiency by covering multiple -- ideally uncorrelated -- risks with the same collateral; however, they still require the collateral to be in a low-risk, non-interest-bearing asset. In our model it is possible to hold the collateral, i.e., the $B$-tranche, in an interest-bearing asset without any significant drawbacks on the security side, if the risks of the insured protocols are indeed uncorrelated.
Moreover, our approach is quite flexible in the sense that further leverage, based on a larger number of underlying protocols, is feasible and could be implemented as an extension. In addition to these three initial points, there is another advantage related to the risk premium that we came across in the course of our research. As shown in Section \ref{sec:divergence}, both our cover and collateral ($A$- and $B$-tranches) are freely tradable. The risk premium is simply determined by the relative price between the two tranches. This allows us to create a market-based price-finding mechanism for a fair risk premium. The price can emerge naturally and does not depend on preset parameters or statically implemented risk spreads that may paralyze risk transfer activity. In Section \ref{sec:divergence} we show that there are greater incentives to provide liquidity for the $A$-tranches than for the $B$-tranches. Even in an extreme case, where $B$ liquidity is very low or non-existent, one could still obtain $B$-tranches. To do so, one calls the \texttt{splitRisk()} function to mint $A$- and $B$-tranches in equal amounts and then sells the $A$-tranches, for which the market can be assumed to be sufficiently liquid. Anyone interested in the insurance cover could simply buy $A$-tranches on the open market and would not have to interact with the protocol. Assuming a constant supply, greater demand for $A$-tranches would increase the risk spread and therefore incentivize the creation of additional $A$- (and $B$-) tranches. There are many benefits to our proposal and we believe that this paper significantly contributes to the DeFi protocol stack. However, every proposal also has its limitations and drawbacks. In the remainder of this section, we discuss some of these limitations and propose potential extensions and new research avenues to mitigate these issues. First, our model requires a common underlying among all involved protocols. The reason for this is to eliminate any reliance on external price sources, i.e., oracles. In liquid mode, we redeem everything to denominations of a unified underlying at the end of a predefined time period. While it is theoretically possible to wrap tokens to give them an arbitrary underlying, this will have one of two consequences: either a dependency on external price sources has to be introduced, or the fallback case in our model turns into an insurance against relative price movements between the assets and the underlying. The latter may be desirable in some cases, but it is not the default behaviour we want to achieve. Second, our protocol has a fixed time span. Consequently, insuring assets over a longer period of time requires regular actions from all involved parties. A new contract has to be deployed for each period and the assets need to be moved over. This problem is exacerbated by shorter insurance periods. Longer insurance periods, on the other hand, increase the time that claimants have to wait for their compensation in case of an incident and also increase the risk of both protocols failing during the same period. We believe this limitation could be mitigated with an extension to the protocol that uses short insurance periods and rolls over any non-redeemed tranches to a new insurance period. However, an extension of this nature could significantly increase the complexity of the protocol and would require further research to determine the practicality and potential consequences.
Third, in our model we specify minting and redeeming time windows for the tranches. Consequently, the total supply of $A$- and $B$-tranches cannot change during the main insurance period. This can be an issue, especially if there is insufficient liquidity for the $B$-tranches, as discussed in Section \ref{sec:divergence}, or if the demand for cover changes significantly. Further research into this topic is necessary, but we believe that under certain circumstances, the minting window could be extended to allow the creation of new shares during the active insurance phase. One requirement for this would be a way to track the accrued interest on the insurance protocol and to increase the costs of the newly created tranches accordingly. Similar considerations can be made for the redeeming window. Early redemption of equal parts of $A$- and $B$-tranches should be possible without large changes to the model. Even early redemption of just $A$-tranches is theoretically possible. Finally, our model and the reference implementation use two protocols. This is not a strict limitation. In fact, it can be shown that the model works as described as long as the number of tranches is equal to the number of insured protocols. For example, an extension to three protocols is possible with the introduction of a third tranche, without any fundamental changes to the protocol. A more challenging extension is the addition of further protocols without any changes to the number of tranches. This extension would severely increase the complexity of fallback mode. Recall that $A$-tranche holders get to choose which of the remaining interest-bearing tokens they want to redeem. In a world where the number of tranches is equal to the number of protocols, this is unproblematic, since there will always be sufficient collateral of any type for $A$-tranche holders to choose from. In a model where the number of protocols is greater than the number of tranches, $A$-tranche holders might compete with each other and race to redeem the more valuable collateral. As such, models where the number of protocols is greater than the number of tranches can create a first mover advantage, where $A$-tranche holders are treated inconsistently. A potential solution to this issue is a two-step approach that lets tranche holders choose and commit their redemption preferences before the final redemption ratios are calculated. \section{Conclusion} \label{sec:conclusion} In this paper, we propose a fully decentralized DeFi insurance model that does not rely on any external information sources, such as price feeds (oracles) or claim assessors. The general idea of our insurance protocol is to pool assets from two third-party protocols, and allow users to split the pool redemption rights into two freely tradable tranche tokens: $A$ and $B$. Any losses are first absorbed by the $B$-tranche holders. $A$-tranche holders will only be negatively affected if 50\% or more of the pooled funds are irrecoverable, or if both protocols become temporarily illiquid and face (partial) losses. The market for $A$- and $B$-tranches determines the fair risk premium for the insurance. Our approach has several advantages over other DeFi insurance solutions. In addition to being fully decentralized and trustless, it also prevents over-insurance, does not rely on any parametric triggers, and is highly capital-efficient. We provide a complete reference implementation of the insurance protocol in Solidity, with coverage for two popular lending market protocols.
We believe that fully decentralized and trustless infrastructure is crucial and may create more transparent, open and resilient financial markets. Our contribution should be seen as a composable building block and a foundation for further research and development efforts. \section*{Acknowledgment} The authors would like to thank Tobias Bitterli, Mitchell Goldberg, Emma Littlejohn, Katrin Schuler and Dario Thürkauf.
\section{Appendix} \input{tables/architecture} \input{tables/hyperparameters} \section{Background} \label{sec:background} In this section, we review the background notation, the basic variational autoencoder (VAE) framework, and the concept of \emph{disentanglement} in the context of this framework. \subsection{Variational Autoencoders} Let $X = \{x_i\}_{i=1}^N$ be a dataset consisting of $N$ i.i.d. samples $x_i \in \mathbb{R}^n$ of a random variable $x$. An autoencoder framework consists of two mappings: the encoder $\text{Enc}_\phi:\mathbb{R}^n \to Z$, parameterized by $\phi$, and the decoder $\text{Dec}_\theta :Z \to \mathbb{R}^n$, parameterized by $\theta$. $Z$ is typically termed the \emph{latent space}. In the \emph{variational} autoencoder (VAE) framework, both mappings are taken to be probabilistic and a fixed prior distribution $p(z)$ over $Z$ is assumed. The training objective is the marginalized log-likelihood: \begin{equation} \sum_{i=1}^N \log p(x_i) \label{eq:loglikelihood} \end{equation} In practice, the parameters of the model, $\phi$ and $\theta$, are jointly trained via gradient descent to maximize a more tractable surrogate: the Evidence Lower Bound (ELBO) \begin{equation} \mathbb{E}_{z \sim q(z|x_i)}\log p(x_i | z) - D_{\text{KL}}(q(z | x_i) || p(z)) \label{eq:elbo} \end{equation} where the first term corresponds to the reconstruction loss and the second corresponds to the KL divergence between the latent representation $q(z | x_i)$ and the prior distribution $p(z)$, typically chosen to be the standard normal $\mathcal{N}(0,I)$. A significant extension, $\beta$-VAE, proposed by \citet{higgins17}, introduces a weight parameter $\beta$ on the KL term: \begin{equation} \mathbb{E}_{z \sim q(z|x_i)}\log p(x_i | z) - \beta D_{\text{KL}}(q(z | x_i) || p(z)). \label{eq:betaelbo} \end{equation} The value of $\beta$ is usually chosen to induce certain desirable qualities in the latent representation\textemdash e.g. interpretability or disentanglement~\cite{chen2018, ridgeway2018}. Recent work has also proposed methods for selecting $\beta$ adaptively or according to pre-defined schedules during training~\cite{bowman2016, fu2019}. \subsection{Supervised Methods for Evaluating Disentanglement} \begin{figure}[t] \centering \iffalse \begin{subfigure}[b]{0.4\linewidth} \includegraphics[angle=90,origin=c,width=\linewidth]{figures/te2.png} \subcaption{} \label{fig:entangledsprites} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[angle=90,origin=c,width=\linewidth]{figures/td2.png} \subcaption{} \label{fig:disentangledsprites} \end{subfigure} \fi \begin{subfigure}[]{0.121\linewidth} \includegraphics[trim={15cm 6cm 0 0},clip,width=\linewidth]{figures/te2.png} \subcaption{$\beta = 0.1$} \label{fig:entangledsprites} \end{subfigure} \begin{subfigure}[]{0.124\linewidth} \includegraphics[width=\linewidth]{figures/vaesamp4.pdf} \subcaption{$\beta = 1.0$} \label{fig:vsamp4} \end{subfigure} \begin{subfigure}[]{0.128\linewidth} \includegraphics[width=\linewidth]{figures/vaesamp8.pdf} \subcaption{$\beta = 10$} \label{fig:vsamp8} \end{subfigure} \begin{subfigure}[]{0.12\linewidth} \includegraphics[width=\linewidth]{figures/vaesamp16.pdf} \subcaption{$\beta = 100$} \label{fig:vsamp16} \end{subfigure} \caption{(\textbf{\subref{fig:entangledsprites}\textemdash\subref{fig:vsamp16}.}) dSprite samples from \emph{entangled} and \emph{disentangled} latent spaces. Note the occurrence of noisy, missing, and unrealistic samples when $\beta$ is set inappropriately ($\beta=0.1, 10, 100$).
We propose an unsupervised algorithm to find the appropriate choice of parameters such as $\beta$ to encourage disentanglement.} \label{fig:dspritesamples} \end{figure} There have been recent efforts by the deep learning community towards learning intrinsic generative factors from data, commonly referred to as learning a disentangled representation. While there are few formalizations of disentanglement, an informal description is provided by \citet{bengio09}: \begin{quotation}\textit{a representation where a change in one dimension corresponds to a change in one factor of variation, while being relatively invariant to changes in other factors.} \end{quotation} Recent work has shown that learning disentangled representations facilitates robustness~\cite{Yang_Guo_Wang_Xu_2021}, interpretability~\cite{zhu2021}, and other desirable characteristics~\cite{locatello2019b}. A simple example of the difference between the quality of samples drawn from entangled and disentangled models is provided in Fig.~\ref{fig:dspritesamples}. As a result, evaluating models for their ability to learn disentangled latent spaces has received a large amount of attention in recent years~\cite{locatello2019a}. \textbf{$\beta$-VAE \& FactorVAE.} $\beta$-VAE~\cite{higgins17} and FactorVAE~\cite{kim2018} are popular methods for evaluating disentanglement and encouraging learning disentangled representations. As previously mentioned, $\beta$-VAE uses a modified version of the VAE objective with a larger weight ($\beta > 1$) on the KL divergence between the variational posterior and the prior, and has proven to be effective for encouraging disentangled representations. In addition to introducing a modification of the ELBO loss, \citet{higgins17} proposed a supervised metric that attempts to quantify disentanglement when the ground truth factors of a data set are given. The metric is the error rate of a linear classifier computed as follows: \begin{enumerate} \item Choose a factor and sample data $x$ with the factor fixed \item Obtain their representations (mean of $q(z|x)$) \item Take the absolute value of the pairwise differences of these representations \item The means of these statistics across the pairs are the predictor variables, and the index of the fixed factor is the corresponding response \end{enumerate} Intuitively, if the learned representations were perfectly disentangled, the dimension of the encoding corresponding to the fixed generative factor would be exactly zero, and the linear classifier would map the index of the zero to the index of the factor. However, this metric has several weaknesses: \begin{enumerate} \item The classifier requires labeled generative factors \item The metric is sensitive to the classifier's parameters \item The coefficients of the classifier may not be sparse \item The classifier may give $100\%$ accuracy even when only $k - 1$ factors out of $k$ have been disentangled \end{enumerate} In an attempt to resolve the issues resulting from the application of a parametric model, \citet{kim2018} proposed to replace the linear predictor with a nonparametric majority-vote classifier applied to the empirical variances of the latent embeddings. In other words, the classifier predicts the generative factor $k$ corresponding to the latent dimension with the smallest variance. However, although the drawbacks of linear classification are addressed, certain new limitations are introduced: (1) an assumed independence between generative factors and (2) the necessity of factor labels.
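To make the $\beta$-VAE metric above concrete, the following is a minimal Python sketch of the four-step procedure. The helpers \texttt{sample\_fixed\_factor(k, batch)} (returning a batch of sample pairs sharing the value of factor $k$) and \texttt{encode\_mu(x)} (returning the mean of $q(z|x)$) are hypothetical placeholders, not part of any existing library.
\begin{verbatim}
# Minimal sketch of the beta-VAE metric; sample_fixed_factor and
# encode_mu are hypothetical helpers described in the text above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def beta_vae_metric(sample_fixed_factor, encode_mu, n_factors,
                    n_votes=800, batch=64, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_votes):
        k = rng.integers(n_factors)              # step 1: fix a factor
        x1, x2 = sample_fixed_factor(k, batch)   # pairs sharing factor k
        z1, z2 = encode_mu(x1), encode_mu(x2)    # step 2: latent means
        diff = np.abs(z1 - z2)                   # step 3: |pairwise diffs|
        X.append(diff.mean(axis=0))              # step 4: mean over pairs
        y.append(k)                              # response: fixed factor
    X, y = np.array(X), np.array(y)
    n_train = len(X) // 2                        # held-out error rate of
    clf = LogisticRegression(max_iter=1000)      # a linear classifier
    clf.fit(X[:n_train], y[:n_train])
    return 1.0 - clf.score(X[n_train:], y[n_train:])
\end{verbatim}
A low held-out error rate indicates that the fixed factor is identifiable from the latent representation, i.e. a more disentangled model.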
\textbf{Mutual Information Gap. } The Mutual Information Gap (MIG)~\cite{chen2018} metric involves estimating the mutual information between each generative factor and each latent dimension. For each factor, \citet{chen2018} consider the pair of latent dimensions with the highest MI scores. It is assumed that in a disentangled representation the difference between these two scores would be large. The MIG score is the average normalized difference between pairs of MI scores. \citet{chen2018} claim that the MIG score is more general compared to the $\beta$-VAE and FactorVAE metrics. However, as with $\beta$-VAE and FactorVAE, the labels of the underlying generative factors are required. \subsection{Intrinsic Indicators of Disentanglement} \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{figures/workflow.pdf} \caption{Framework. (\textbf{A}) Step 1: Model training: $n$ networks are trained from different initializations for each model specification. (\textbf{B}) Step 2: Networks are jointly embedded according to training dynamics. (\textbf{C}) Step 3: Pairwise MMD scores are computed from the joint embeddings calculated in Step 2. (\textbf{D}) Step 4: The network specifications are sorted using the mean MMD score.} \label{fig:workflow} \end{figure} In this section, we review recent work on identifying fundamental indicators of disentangled models. Recent work \cite{Duan2020Unsupervised,zhou2021evaluating, rotman2022unsupervised, bounliphone2015, khrulkov18a} has explored various unsupervised scoring functions to evaluate and compare generative models for similarity, completeness, and disentanglement from the perspective of the latent space. In particular, \cite{Duan2020Unsupervised} and \cite{zhou2021evaluating} propose unsupervised methods for measuring disentanglement based on computing notions of similarity between generative models. \citet{zhou2021evaluating} utilize persistent homology to motivate a topological dissimilarity between latent embedding spaces, while \citet{Duan2020Unsupervised} propose a statistical test between sampling distributions of latent activations. As far as we are aware, we are the first to exploit the dynamics of activations observed during training. \citet{he2018lagging} investigate disentanglement by studying the dynamics of the deep VAE ELBO loss observed during gradient descent. Their conclusions suggest that artifacts of poor hyperparameter selection or architecture design, e.g., posterior collapse, are a direct product of a ``mismatch'' between the variational distribution and the true posterior. \citet{chechik05, kunin19} showed that suitable regularization allows linear autoencoders to recover principal components up to rotation. \citet{lucas19} explicitly show that linear VAEs with a diagonal covariance structure recover the principal components exactly. Significantly, \citet{rolinek19} observed that the diagonal covariance used in the variational distribution of VAEs encourages orthogonal representations. They utilize linearizations of deep networks to rigorously motivate these observations, along with an assumption to handle the presence of posterior collapse. Following up on this work, \citet{kumar20} empirically demonstrate a more general relationship between the variational distribution covariance and the Jacobian of the decoder. In particular, \citet{kumar20} show that a block diagonal covariance structure implies a block structure of the Jacobian of the decoder. The proposed method is motivated by these two central results:
\begin{enumerate} \item Local minima are global \item The local linearization of the Jacobian,\\$J_i = \frac{\partial \text{Dec}_\theta (\mu_\phi (x_i))}{\partial \mu_\phi (x_i)}$, is orthogonal \end{enumerate} In summary, we design a method to quantify disentanglement according to a novel notion of disagreement between decoder dynamics for multiple instantiations of VAE specifications during learning. \section{Conclusion and Future Work} We have introduced a method for unsupervised model selection for variational disentangled representation learning. We demonstrated that our metric is reliably correlated with three baseline supervised disentanglement metrics and with performance on two downstream tasks. Crucially, our method does not rely on supervision of the ground-truth generative factors and is therefore robust to nonexistent or noisily labeled generative factors. Future work includes exploring more challenging datasets, addressing scalability, integrating labels, and adapting our framework to other contexts by exploring which qualities of neural networks correlate well with the stability of training dynamics. \section{Experiments} \label{sec:experiments} We evaluate the proposed method on the dSprites dataset~\cite{dsprites17}. This dataset consists of binary images of individual shapes. Each image in the dataset can be fully described by four generative factors: shape ($3$ values), $x$ and $y$ position ($32$ values each), size ($6$ values), and rotation ($40$ values). The generative process for this dataset is fully deterministic, resulting in $737,280$ images. We adopt the same convolutional encoder-decoder architecture presented in~\citet{higgins17}. Network instances vary with respect to the dimension of the code and the $\beta$-factor used during training, with $\beta$ chosen from the set $\{1,2,4,8,16\}$ and the latent space dimension from $\{2, 4, 8, 16, 32\}$. \begin{figure}[t] \begin{subfigure}[b]{0.24\linewidth} \includegraphics[width=\linewidth]{figures/mmd_scores.png} \subcaption{} \label{fig:mmd_heatmap} \end{subfigure} \begin{subfigure}[b]{0.245\linewidth} \includegraphics[width=\linewidth]{figures/mphate1_5_ctype.png} \subcaption{} \label{fig:mphate1ct} \end{subfigure} \begin{subfigure}[b]{0.245\linewidth} \includegraphics[width=\linewidth]{figures/mphate4_5_ctype.png} \subcaption{} \label{fig:mphate2ct} \end{subfigure} \begin{subfigure}[b]{0.245\linewidth} \includegraphics[width=\linewidth]{figures/mphate16_5_ctype.png} \subcaption{} \label{fig:mphate3ct} \end{subfigure} \caption{(\textbf{\subref{fig:mmd_heatmap}.}) Average MMD values for different choices of the dimension of $z$ (dimension of the latent space). A lower value denotes more stable learning dynamics. Note that $4$ is the ground truth number of generative factors. (\textbf{\subref{fig:mphate1ct}\textemdash\subref{fig:mphate3ct}.}) Joint $2$-d embeddings of network dynamics as the number of latent dimensions ranges over $4,8,16$. Colors denote weight initializations. } \label{fig:score_dimension} \vspace{-0.5cm} \end{figure} \input{tables/results} \textbf{Correlation with supervised disentanglement metrics. } We first demonstrate that our method produces rankings that are correlated with those produced using \emph{supervised} baseline methods\textemdash methods that exploit supervision of latent factors. Although the proposed method does not require supervision, it does necessitate training multiple realizations of each model specification. To make a fair comparison with existing methods, we compute the mean of the supervised disentanglement scores over each set of networks.\looseness=-1 In Table~\ref{tab:rankcorr}, we show that rankings produced using our score correlate positively with rankings produced using the supervised methods. Furthermore, we observe that as the number of networks used to compute the score increases, the correlation and standard deviation improve. \input{tables/noiseregimes} \textbf{A failure mode of supervised methods. } In many real-world datasets, the generative factors are unknown or are unreliably labeled. We demonstrate that methods which rely on supervision of the latent factors are brittle to label noise. We propose a new instance of dSprites: \emph{noisy-dSprites}. We compare the robustness of various metrics on the dSprites dataset with labels diluted with different amounts of noise. More concretely, when selecting samples with fixed generative factor $k$ (i.e. step 1 of the $\beta$-VAE metric), we perturb factor $k$, either uniformly if the factor is discrete (e.g. shape) or according to Gaussian noise with mean $0$ (size, orientation, position). We introduce three \emph{noise regimes} in Table~\ref{tab:noiseregimes}. In Table~\ref{tab:noisemodel}, for various disentanglement metrics, we show that as the amount of noise increases, the quality of the metric decays. However, since the proposed method does not require labeled generative factors, it is robust to label noise. \input{tables/noisydsprites} \input{tables/downstreamtasks} \textbf{Correlation with downstream task performance. } It has been shown that learning (e.g. classifiers or RL agents) with disentangled representations~\cite{watters2019,locatello2019b} is easier in some sense\textemdash i.e.
online decision making can be done more efficiently (with respect to sample complexity) when the state-space is disentangled~\cite{watters2019}. Here, we demonstrate that our method can be used to identify VAEs that are useful for downstream classification tasks where training data efficiency is important. More precisely, we evaluate efficiency as the number of steps needed to achieve 90\% accuracy on a clustering task.\looseness=-1 The agent is provided with a pre-trained encoder trained with $\beta \in \{0, 0.01, 0.1, 1\}$, an exploration policy and a transition model. The goal is to learn a reward predictor to cluster shapes by various generative factors. We use $5$ random initializations of the reward predictor for each possible MONet model, and train them to perform the clustering task detailed in \citet{watters2019}. We additionally evaluate our method according to its fairness as defined in~\citet{locatello2019b}: $$ \text{unfairness}(\hat{y}) = \frac{1}{|S|}\sum_{s\in S} D_\text{TV}\left(p(\hat{y}),\, p(\hat{y} \mid S = s)\right), \quad \text{for each target } y. $$ We adopted a setup similar to that described in \citet{locatello2019b}. A gradient boosted classifier is trained over $10000$ labelled examples. The overall fairness score is computed by taking the mean of the total-variation fairness scores across all targets and all sensitive variables. In Table~\ref{tab:downstream}, we see that our method exhibits high Spearman correlation with the fairness score and superior correlation with sample efficiency on the reinforcement learning task. \section{Introduction} Generative models provide accurate models of data without expensive manual annotation~\cite{bengio09, kingma2014}. However, in contrast to classifiers, fully unsupervised model selection within the class of generative models is far from a solved problem~\cite{locatello2019a}. For example, simply computing and comparing likelihoods can be a challenge for some families of recently proposed models~\cite{goodfellow14,li15}. Given two models that exhibit similar loss values on a held-out dataset, there is no computationally friendly way to determine whether one likelihood is significantly higher than the other. Permutation testing and other generic strategies are often computationally prohibitive, and it is unclear whether likelihood correlates with desirable qualities of generative models. We are motivated to address this problem. In this work, we focus on the problem of unsupervised model selection for Variational Autoencoders (VAEs) aimed at \emph{disentanglement}~\cite{higgins17}. In this context, model selection without full or partial supervision of the ground truth generative process and/or attribute labels is currently an open problem, and existing metrics exhibit high variance, even for models with the same hyperparameters trained on the same datasets~\cite{locatello2019a, locatello2019b}. Since ground truth generative factors are unknown or expensive to provide in most real-world tasks, it is important to develop efficient unsupervised methods. To address the aforementioned issues, we propose a simple and flexible method for fully unsupervised model selection for VAE-based disentangled representation learning. Our approach is inspired by recent findings that attempt to explain why VAEs disentangle~\cite{rolinek19}. We characterize disentanglement quality by performing pairwise comparisons between the training dynamics exhibited by models during gradient descent.
We validate our approach using baselines discussed in \citet{locatello2019a, locatello2019b} and demonstrate that the model rankings produced by our approach correlate well with performance on downstream tasks. \subsection{Contributions} Our contributions can be summarized as follows: \begin{enumerate} \item We design a novel method for model selection for disentanglement based on the activation dynamics of the decoder observed throughout training. \item Notably, our method is fully unsupervised\textemdash our method does not rely on class labels, training supervised models, or ground-truth generative factors. \item We evaluate our proposed metric by demonstrating strong correlation with supervised baselines on the dSprites dataset~\cite{dsprites17} and downstream performance on reinforcement learning and classification tasks~\cite{watters2019,locatello2019b}. \end{enumerate} \section{Comparing VAEs via Learning Dynamics} \label{sec:method} The two results mentioned above imply that stability of the activation dynamics of the decoder with respect to different initializations may correlate with disentanglement. In this section, we propose a method for computing a similarity score between two decoders according to their activation dynamics. We hypothesize that realizations of a particular specification of a VAE (its architecture and various hyperparameters) which encourages disentangled representation learning will exhibit similar activation dynamics during training, regardless of initialization. \subsection{Finding a Common Representation} To compare the dynamics of multiple VAEs, we define a \emph{multislice} kernel on the per-epoch activations of a fixed set of samples. \begin{figure}[t] \centering \begin{subfigure}[b]{0.24\linewidth} \includegraphics[width=\linewidth]{figures/mphate1_5_1.png} \subcaption{} \label{fig:mphate1} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[width=\linewidth]{figures/mphate1_5_2.png} \subcaption{} \label{fig:mphate2} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[width=\linewidth]{figures/mphate1_5_3.png} \subcaption{} \label{fig:mphate3} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[width=\linewidth]{figures/mphate1_5_all.png} \subcaption{} \label{fig:mphate4} \end{subfigure} \caption{(\textbf{\subref{fig:mphate1}\textemdash\subref{fig:mphate3}.}) $2$d embeddings of training dynamics of individual network realizations. (\textbf{\subref{fig:mphate4}.}) Joint $2$d embeddings of training dynamics. $x$ and $y$ coordinates are integers between $0$ and $64$. Pairs of coordinates are mapped to single values via row-major order\textemdash i.e. samples are colored according to the value $x+y\cdot64$.} \label{fig:mphate_alignment} \end{figure} \textbf{The Multislice Kernel.} We construct a \emph{Multislice Kernel}~\cite{gigante19} defined over a fixed set of \emph{trace} samples. The entries of the kernel\textemdash i.e. the similarities between input samples\textemdash are computed according to the \emph{intermediate activations} exhibited by the decoder. Following the notation of \citet{gigante19}, the time trace $\mathbf{T}$ of the decoder is an $n\times m\times p$ tensor encoding the activations, at each epoch $\tau \in [1,n]$, of the $p$ hidden units of $\text{Dec}_{\theta_\tau}$ with respect to each of the $m$ trace samples.
A pair of kernels are constructed using $\mathbf{T}$: $\mathbf{K}_{\text{intraslice}}$, which encodes the affinity between pairs of trace samples indexed by $i$ and $j$ at epoch $\tau$ according to the activation patterns they induce when encoded and passed to $\text{Dec}_{\theta_\tau}$, and $\mathbf{K}_{\text{interslice}}$, which encodes the ``self-affinity'' between a sample $i$ at time $\tau$ and itself at time $\nu$: \begin{align*} &\mathbf{K}_{\text{intraslice}}^{(\tau)}(i,j) =\exp(-||\mathbf{T}(\tau,i) - \mathbf{T}(\tau,j)||^\alpha_2 / \sigma^2_{(\tau,i)}) \\ &\mathbf{K}_{\text{interslice}}^{(i)}(\tau,\nu) = \exp(-||\mathbf{T}(\tau,i) - \mathbf{T}(\nu,i)||^2_2 / \epsilon^2), \end{align*} where $\sigma_{(\tau,i)}$ and $\epsilon$ correspond to intraslice and interslice kernel bandwidth parameters. The multislice kernel matrix $\mathbf{K}$ and its symmetrization $\mathbf{K}'$ are then defined: \begin{align*} &\mathbf{K}((\tau,i), (\nu,j)) = \begin{cases} \mathbf{K}_{\text{intraslice}} & \text{if } \tau=\nu\\ \mathbf{K}_{\text{interslice}} & \text{if } i=j\\ 0 & \text{otherwise} \end{cases} \quad \mathbf{K}' = \frac{1}{2} (\mathbf{K} + \mathbf{K}^\top) \end{align*} $\mathbf{K}'$ is row-normalized to obtain $\mathbf{P} = \mathbf{D}^{-1}\mathbf{K'}$. The row-stochastic matrix $\mathbf{P}$ represents a random walk over the samples across all epochs, where propagating from $(\tau, i)$ to $(\nu, j)$ is conditional on the transition probabilities between epochs $\tau$ and $\nu$~\citep{gigante19}. Powers of this matrix, the \emph{diffusion kernel} $\mathbf{P}^t$, represent running the chain forward $t$ steps. \citet{gigante19} define a distance based on $\mathbf{P}^t$ and a corresponding distance-preserving embedding. \textbf{Joint embeddings. } It is important to note that \citet{gigante19} originally proposed and applied the Multislice Kernel to characterize and differentiate the behavior of different classifiers by constructing visualizations of the network's \emph{hidden units} based on their activations on a fixed set of training samples. In contrast, we propose to apply the Multislice Kernel to directly compare variational models according to the activation response of the decoder on a fixed set of \emph{trace samples}. In other words, in the work of \citet{gigante19}, the entries of $\mathbf{K'}$ correspond to similarities between \emph{hidden units}, but in our method, the entries of $\mathbf{K'}$ correspond to similarities between \emph{trace samples}. This subtle difference is key and implies a simple and direct method to compare VAEs with \emph{different} architectures by comparing their associated multislice kernels computed on the \emph{same} set of trace samples. We accomplish this by concatenating the rows of the diffusion kernels associated with each set of realizations per specification and computing the left singular vectors of this tall matrix. Inspired by \citet{gigante19}, we expect that each individual kernel encodes some average sense of affinity across the data, and by extension that the matrix derived by concatenating these kernels is similarly meaningful.
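A minimal \texttt{numpy} sketch of this construction is given below. For brevity, we assume fixed bandwidths and squared-exponential affinities; the adaptive per-point bandwidths $\sigma_{(\tau,i)}$ and the $\alpha$-decay of \citet{gigante19} are omitted.
\begin{verbatim}
# Minimal sketch of the multislice kernel; fixed bandwidths, no
# alpha-decay or adaptive sigma. T has shape (epochs, samples, units).
import numpy as np

def multislice_kernel(T, sigma=1.0, eps=1.0):
    n, m, p = T.shape
    N = n * m                             # one node per (epoch, sample)
    K = np.zeros((N, N))
    for tau in range(n):                  # intraslice: within an epoch
        D = np.linalg.norm(T[tau][:, None] - T[tau][None, :], axis=-1)
        K[tau*m:(tau+1)*m, tau*m:(tau+1)*m] = np.exp(-D**2 / sigma**2)
    for i in range(m):                    # interslice: sample across epochs
        D = np.linalg.norm(T[:, i][:, None] - T[:, i][None, :], axis=-1)
        for tau in range(n):
            for nu in range(n):
                if tau != nu:
                    K[tau*m + i, nu*m + i] = np.exp(-D[tau, nu]**2 / eps**2)
    K = 0.5 * (K + K.T)                   # symmetrization K'
    return K / K.sum(axis=1, keepdims=True)  # P = D^{-1} K'
\end{verbatim}
The diffusion kernel $\mathbf{P}^t$ is then simply \texttt{np.linalg.matrix\_power(P, t)}.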
In Fig.~\ref{fig:mphate_alignment}, we plot the embeddings associated with single realizations (initializations) of a given model specification (Fig.~\ref{fig:mphate_alignment}~\subref{fig:mphate1}\textemdash\subref{fig:mphate3}) and the aligned embeddings (left singular vectors of the concatenated kernels, Fig.~\ref{fig:mphate_alignment}~\subref{fig:mphate4}). Note that each embedding is a slight perturbation of the others, but sample embeddings in the joint space roughly align according to a generative factor that explains a significant amount of sample variance (coordinate position). \subsection{Maximum Mean Discrepancy (MMD)} Recall that we propose to consider the cumulative kernel similarity between independent realizations of a VAE specification as a proxy for disentanglement. To compute a similarity between joint embeddings, we apply the Maximum Mean Discrepancy (MMD) test statistic~\cite{gretton12a} to the left singular vectors of the matrix formed by concatenating the diffusion kernels. \begin{definition}{Maximum Mean Discrepancy (MMD;~\citet{gretton12a}) } Let $\mathcal{F}$ be a Reproducing Kernel Hilbert Space (RKHS), with continuous feature mapping $\phi(x) \in \mathcal{F}$ for each $x \in \mathcal{X}$, such that the inner product between the features is given by the kernel function $k(x,x'):= \langle \phi(x), \phi(x') \rangle$. Then the squared population MMD is \begin{align*} \text{MMD}^2 (\mathcal{F}, P_x, P_y) = \mathbb{E}_{x,x'}[k(x,x')] - 2\,\mathbb{E}_{x,y}[k(x,y)] + \mathbb{E}_{y,y'}[k(y,y')]. \end{align*} \end{definition} To summarize, distances between distributions are represented as distances between mean embeddings of features characterized by the map $\phi$. \iffalse The following theorem describes an unbiased quadratic-time estimate of the MMD. In particular, exploiting the fact that $\phi$ maps to a general RKHS facilitates application of the kernel trick and related benefits\textemdash namely the fact that many kernels lead to the MMD being zero if and only if the distributions are identical. \begin{theorem}{(\citet{gretton12a} Lemma 6) Unbiased empirical estimate of $\text{MMD}_u^2$ is a sum of two U-statistics and a sample average} Define observations $X_m := \{x_1, \ldots, x_m\}$ and $Y_n := \{y_1, \ldots, y_n\}$ drawn independently and identically distributed (i.i.d.) from $P_x$ and $P_y$, respectively. An unbiased empirical estimate of $\text{MMD}^2 (\mathcal{F}, P_x, P_y)$ is a sum of two $U$-statistics and a sample average, \begin{align*} \text{MMD}^2_u (\mathcal{F}, P_x, P_y)= &\frac{1}{m(m-1)}\sum_{i=1}^m \sum_{j\not = i}^m k(x_i, x_j) + \\ &\frac{1}{n(n-1)}\sum_{i=1}^n \sum_{j\not = i}^n k(y_i, y_j) - \\ &\frac{2}{mn}\sum_{i=1}^m \sum_{j=1}^n k(x_i, y_j) \end{align*} \end{theorem} \fi \subsection{Unsupervised Model Selection for Disentanglement} As mentioned previously, we are motivated by the observation that networks which disentangle well exhibit ``stable'' learning dynamics under different initializations. We approximate this stability with the similarity between learning dynamics for networks that differ \emph{only} in their initial weights.
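Concretely, we quantify this similarity with an empirical estimate of the MMD between joint embeddings. A minimal sketch, assuming an RBF kernel and the biased (V-statistic) form (the unbiased $U$-statistic of \citet{gretton12a} additionally removes the diagonal terms), is:
\begin{verbatim}
# Biased (V-statistic) empirical MMD^2 with an RBF kernel; X and Y
# are two sets of joint-embedding coordinates, one row per point.
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() - 2.0 * k(X, Y).mean() + k(Y, Y).mean()
\end{verbatim}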
We characterize the learning dynamics according to the principles proposed by \citet{gigante19}. Our method consists of the four steps below, illustrated in Fig.~\ref{fig:workflow}.\looseness=-1 \begin{enumerate} \item Train $k\times n$ different models ($k$ different ``specifications'': architectures / hyperparameters, $n$ different random ``realizations'' per specification) \item Jointly embed each group of $n$ models using the left singular vectors of the concatenated multislice kernels \item For each group, calculate the pairwise MMD metric between each pair of the $n$ models \item Report the average of the MMD metric over each group as the score for the corresponding specification \end{enumerate} An example of the above algorithm applied to a set of VAEs which differ in architecture and regularization weight $\beta$ is provided in Fig.~\ref{fig:score_dimension}~\subref{fig:mmd_heatmap}. Note that the scores are smallest for networks with a latent space whose dimension is equal to the number of generative factors ($4$), and, for fixed dimension, the scores generally increase as the regularization weight increases\textemdash agreeing with previous work~\cite{higgins17}. In Fig.~\ref{fig:score_dimension}~\subref{fig:mphate1ct}\textemdash\subref{fig:mphate3ct} we provide the $2$-d restriction of our algorithm to a set of networks with fixed $\beta=1$ and latent dimension chosen from $\{4,8,16\}$.
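For completeness, a schematic Python sketch of the four-step procedure is given below; \texttt{train\_vae} and \texttt{trace\_activations} are placeholders for model training and time-trace extraction, while \texttt{multislice\_kernel} and \texttt{mmd2\_rbf} refer to the sketches given earlier.
\begin{verbatim}
# Schematic sketch of the four-step scoring procedure; train_vae and
# trace_activations are placeholders, multislice_kernel and mmd2_rbf
# are the sketches given earlier.
from itertools import combinations
import numpy as np

def joint_embed(kernels, dim=2):
    """Left singular vectors of the row-concatenated diffusion kernels."""
    stacked = np.concatenate(kernels, axis=0)
    U, S, _ = np.linalg.svd(stacked, full_matrices=False)
    coords = U[:, :dim] * S[:dim]
    N = kernels[0].shape[0]              # rows per realization
    return [coords[i*N:(i+1)*N] for i in range(len(kernels))]

def score_specifications(specs, n, train_vae, trace_activations):
    scores = {}
    for spec in specs:                                   # step 1
        models = [train_vae(spec, seed=s) for s in range(n)]
        Ks = [multislice_kernel(trace_activations(mdl)) for mdl in models]
        embs = joint_embed(Ks)                           # step 2
        mmds = [mmd2_rbf(embs[i], embs[j])               # step 3
                for i, j in combinations(range(n), 2)]
        scores[spec] = float(np.mean(mmds))              # step 4
    return sorted(scores, key=scores.get)                # lower = more stable
\end{verbatim}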
\section{Introduction} In multi-label classification the task is to automatically assign an object to multiple categories based on its characteristics. Each object of our interest is described by a feature vector $\field{x}$ belonging to a $p$-dimensional space and a vector of $K$ labels $\field{y}=(y_1,\ldots,y_K)'$. In this paper we consider binary labels such that $y_k=1$ indicates that the considered object belongs to the $k$-th category or has the $k$-th property. The issue has recently attracted significant attention, motivated by an increasing number of applications such as image and video annotation (assigning metadata to digital images or videos), music categorization (assigning various emotions to songs), text categorization (assigning various subjects to documents), direct marketing (predicting which products will be purchased), functional genomics (determining the functions of genes and proteins) and medical diagnosis (predicting types of diseases). The key problem in multi-label learning is how to utilize label dependencies to improve the classification performance; motivated by this, a number of multi-label algorithms have been proposed in recent years (see \cite{Madjarov2012} for an extensive comparison of several methods). A naive approach, called binary relevance (BR), which is based on building separate classification models for each label and which does not take into account any dependencies between labels, is deficient in many applications. A natural approach to tackle the issue is to model the joint conditional probability $P(y_1,\ldots,y_K|\field{x})$ and then to choose the most probable set of labels for a given $\field{x}$, i.e. the mode of the joint distribution $\field{y}^{*}(\field{x})=\arg\max_{\field{y}\in\{0,1\}^K}p(\field{y}|\field{x})$. This approach minimizes the subset 0/1 loss (\cite{Dembczynskietal2012}), which generalizes the well-known 0/1 loss from the conventional to the multi-label setting. Direct modelling of the joint distribution is usually problematic. The classifier chain method (\cite{Readetal2011}, \cite{Dembczynskietal2010}) fixes this problem by using the chain probability rule \begin{equation} \label{chain1} P(\field{y}|\field{x})=P(y_1,\ldots,y_K|\field{x})=\prod_{k=1}^{K}P(y_k|y_1,\ldots,y_{k-1},\field{x}), \end{equation} which allows one to estimate the conditional probabilities in (\ref{chain1}) using single-label classifiers in which $y_k$ is treated as the response variable, whereas $y_1,\ldots,y_{k-1},\field{x}$ are treated as explanatory variables. This gives approximations of $P(y_k|y_1,\ldots,y_{k-1},\field{x})$ and thus also an approximation of the joint distribution, which will be denoted by $\hat{P}(\field{y}|\field{x})$. Classifier chains are commonly assigned to the group of problem-transformation methods (\cite{TsoumakasandKatakis2007}), as in this case the multi-label problem is transformed into several single-label problems. One of the main difficulties of classifier chains is their sensitivity to the ordering of labels when building the models. Although the order of conditioning in (\ref{chain1}) is not relevant, the order in which the models are trained can significantly affect the estimation accuracy. In this paper we consider classifier chains combined with logistic model fitting, i.e. logistic models are used to estimate the conditional probabilities in (\ref{chain1}). Our motivation for studying this particular combination is that the logistic model is one of the most commonly used classifiers in practice.
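As a concrete illustration of this construction (a minimal sketch, not the implementation used in our experiments), a logistic classifier chain with the joint mode found by exhaustive enumeration over $\{0,1\}^K$ (feasible only for small $K$) can be written as follows.
\begin{verbatim}
# Minimal sketch of a logistic classifier chain; joint mode found by
# exhaustive enumeration over {0,1}^K, feasible for small K only.
from itertools import product
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_chain(X, Y):
    """Fit one logistic model per label, in the column order of Y,
    conditioning each model on x and the preceding labels."""
    models = []
    for k in range(Y.shape[1]):
        Z = np.hstack([X, Y[:, :k]])
        models.append(LogisticRegression(max_iter=1000).fit(Z, Y[:, k]))
    return models

def joint_mode(models, x):
    """argmax over y in {0,1}^K of the estimated joint distribution."""
    best_y, best_logp = None, -np.inf
    for y in product([0, 1], repeat=len(models)):
        logp = 0.0
        for k, mdl in enumerate(models):
            z = np.hstack([x, y[:k]]).reshape(1, -1)
            p1 = mdl.predict_proba(z)[0, 1]  # assumes classes_ == [0, 1]
            logp += np.log(p1 if y[k] == 1 else 1.0 - p1)
        if logp > best_logp:
            best_y, best_logp = y, logp
    return np.array(best_y)
\end{verbatim}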
The combination of classifier chains and logistic regression has already been used by many authors, in various applications, e.g. in \cite{Kumaretal2013}, \cite{Dembczynskietal2010}, \cite{Dembczynskietal2012}, \cite{Montanes2014}. However, there are no theoretical results on the large-sample properties of such procedures. This paper intends to fill this gap. Moreover, the logistic model has a strong theoretical background (\cite{FahrmeirKaufmann1985}, \cite{Fahrmeir1987}), which allows one to prove some asymptotic results for classifier chains as well. First, we study the asymptotic properties of classifier chains. We impose conditions on the distribution of the feature vector $\field{x}$ and the number of labels $K$ under which the estimated mode $\hat{\field{y}}(\field{x})=\arg\max_{\field{y}\in\{0,1\}^K}\hat{P}(\field{y}|\field{x})$ converges in probability to the true mode $\field{y}^{*}(\field{x})$. The above property can be proved provided that there exists an ``optimal'' ordering of labels, such that the consecutive conditional probabilities in (\ref{chain1}) are of the logistic form. However, in practice, the ``optimal'' ordering is usually unknown, or it may not exist at all (this is the case when the data generation scheme is different from the logistic one). This leads to further important questions discussed in this paper. What is the influence of the ordering of labels on the estimation of the joint distribution (and thus on the estimation of the mode)? How accurately can we estimate the conditional probabilities in (\ref{chain1}) when the ordering of labels is wrong or the logistic model is incorrectly specified? It turns out that in this case we attempt to estimate the parameters of the assumed logistic model which is closest to the true one in the sense of the average Kullback-Leibler distance. Moreover, we propose a procedure for finding the optimal ordering of labels, which uses measures of correct specification. The procedure allows one to determine the ordering such that the consecutive conditional probabilities in (\ref{chain1}) are specified as well as possible. Although the following results are obtained for the logistic model, we believe that some ideas can be extended to the case when other base classifiers are used. The problem of label ordering has been discussed in the literature; however, its direct influence on the joint mode estimation has not been investigated. \cite{Kumaretal2013} proposed to use the log-likelihood function as a measure of correct ordering and beam search to reduce the large number of possible orderings. To eliminate the effect of the ordering of labels, \cite{Readetal2009} proposed to average the multi-label predictions over a randomly chosen set of permutations. This extension, called ensembled classifier chains (ECC), improves on the results of a single chain; however, it does not answer the question of how to find the optimal ordering. The rest of the paper is organized as follows. In Section \ref{Classifier chain model} we describe the logistic classifier chain model (LCC), which is motivated by (\ref{chain1}). In Section \ref{Consistency of the joint mode estimation} we state a theorem concerning the consistency of the joint mode estimation. The problem of ordering the labels is discussed in Section \ref{Ordering of labels in classifier chain}. In Section \ref{Ordering of labels in classifier chain} we also review the measures of correct specification, and the procedure for determining the optimal ordering is presented.
In Section \ref{Inference in classifier chain model}, the problem of inference in the classifier chain model is described. Finally, the results of selected numerical experiments are reported in Section \ref{Empirical evaluation}. Section \ref{Conclusions} contains final conclusions and \ref{Proofs} contains the proofs. \section{Logistic classifier chain (LCC) model} \label{Classifier chain model} In multi-label learning each object of our interest is described by a pair $(\field{x},\field{y})$, where $\field{x}\in R^{p}$ is the random vector of $p$ explanatory variables (features) and $\field{y}=(y_1,\ldots,y_{K_n})'$ is the vector of $K_n$ binary responses (labels). The first coordinate of $\field{x}$ is equal to $1$, which corresponds to the intercept. We assume that the number of responses depends on the number of observations, which reflects the common situation of a large number of labels. Moreover, this assumption can correspond to the situation of sparse labels (equal to zero for the majority of cases): a new label is added at the time its first non-zero value appears. We assume the following data generation scheme. First, we assume that there exists a permutation/ordering of labels $\pi^{*}(1),\ldots,\pi^{*}(K_n)$ such that the probability of $y_{\pi^{*}(k)}=1$, given $\field{x}$ and the previous labels $y_{\pi^{*}(1)},\ldots,y_{\pi^{*}(k-1)}$, is always of the logistic form, i.e. \begin{equation} \label{marg_distr0} P(y_{\pi^{*}(k)}=1|\field{x},y_{\pi^{*}(1)},\ldots,y_{\pi^{*}(k-1)})=\sigma(\field{z}_{k}(\pi^*)'\theta_{k}(\pi^*)),\quad\textrm{for all } k=1,\ldots,K_n, \end{equation} where $\sigma(s)=1/(1+\exp(-s))$ is the logistic function; $\field{z}_{k}(\pi^*)=(\field{x}',y_{\pi^{*}(1)},\ldots,y_{\pi^{*}(k-1)})'$ is a combined vector of features and labels with indices $\pi^{*}(1),\ldots,\pi^{*}(k-1)$; $\theta_{k}(\pi^*)\in R^{p+k-1}$ is the corresponding vector of parameters. In total we have $pK_n+K_n(K_n-1)/2$ unknown parameters. For notational simplicity we will assume that the ordering of labels for which (\ref{marg_distr0}) holds is the identity permutation $\pi^{*}(k)=k$, and in this case we will write in brief $\field{z}_k$ and $\theta_k$ instead of $\field{z}_{k}(\pi^*)$ and $\theta_{k}(\pi^*)$. So, in this case, we can simply write \begin{equation} \label{marg_distr} P(y_{k}=1|\field{x},y_{1},\ldots,y_{k-1})=\sigma(\field{z}_k'\theta_k),\quad\textrm{for all } k=1,\ldots,K_n. \end{equation} The joint distribution is of the form \begin{equation} \label{joint_distr} P(\field{y}|\field{x})=P(y_1,\ldots,y_{K_n}|\field{x})=\prod_{k=1}^{K_n}P(y_k|\field{x},y_1,\ldots,y_{k-1})=\prod_{k=1}^{K_n}\sigma(\field{z}_k'\theta_k)^{y_k}[1-\sigma(\field{z}_k'\theta_k)]^{1-y_k}. \end{equation} The joint distribution corresponding to the BR model, with conditional distributions of the logistic form, can be written in a form analogous to (\ref{joint_distr}), with the only difference that $\field{z}_k=\field{x}$. The diagrams in Figure \ref{fig:nets} represent the networks corresponding to the LCC model and the BR model. \begin{figure}[ht!] \begin{center}$ \begin{array}{ccc} \includegraphics[scale=0.8]{Rysunki/siec1.pdf} & \includegraphics[scale=0.8]{Rysunki/siec3.pdf} \\ (a) & (b)\\ \end{array}$ \end{center} \caption{Diagrams representing the data generation schemes in the LCC model (a) and the BR model (b), in the case of $K_n=3$ labels.
The following notation is used: $\theta_k=(a_k',b_k')'$, where $a_k\in R^p$ is the subvector of parameters corresponding to $\field{x}$ and $b_k=(b_{k,1},\ldots,b_{k,k-1})'\in R^{k-1}$ is the subvector of parameters corresponding to the $k-1$ first labels in the chain.} \label{fig:nets} \end{figure} As the true joint distribution is usually unknown, it is estimated based on training examples $\mathcal{D}=(\field{x}^{(i)},\field{y}^{(i)})$, $i=1,\ldots,n$ ($\field{x}^{(i)}\in R^p$, $\field{y}^{(i)}\in\{0,1\}^{K_n}$) which are assumed to be generated from (\ref{marg_distr}). Observe that the conditional distribution (\ref{marg_distr}) can be estimated by fitting a logistic regression model (on the set $\mathcal{D}$) in which $y_k$ is the response variable and $\field{z}_k=(\field{x}',\field{y}_{1:(k-1)}')'$ is the vector of explanatory variables. By $\mathcal{M}(y_k,\field{z}_k)$ we will denote a logistic model fitted on the set $\mathcal{D}$, in which $y_k$ is the response variable and $\field{z}_k$ is the vector of explanatory variables. Let $l_{y_k,\field{z}_k}(\cdot)$ be the log-likelihood function calculated for model $\mathcal{M}(y_k,\field{z}_k)$ using $\mathcal{D}$ and let $\hat{\theta}_k=\arg\max_{\theta}l_{y_k,\field{z}_k}(\theta)$ be the maximum likelihood estimator of $\theta_k$. By building $K_n$ independent logistic models $\mathcal{M}(y_1,\field{z}_1),\ldots,\mathcal{M}(y_{K_n},\field{z}_{K_n})$, one can estimate the parameters $\theta_1,\ldots,\theta_{K_n}$ of the consecutive models in the chain and thus also the joint distribution \begin{equation} \label{joint_distr_est} \hat{P}(\field{y}|\field{x})=\prod_{k=1}^{K_n}\sigma(\field{z}_k'\hat{\theta}_k)^{y_k}[1-\sigma(\field{z}_k'\hat{\theta}_k)]^{1-y_k}. \end{equation} \section{Consistency of the joint mode estimation} \label{Consistency of the joint mode estimation} In the following we will assume that the true ordering of labels $(\pi^{*}(1),\ldots,\pi^{*}(K_n))=(1,\ldots,K_n)$, for which (\ref{marg_distr0}) holds, is known, and that the logistic models $\mathcal{M}(y_1,\field{z}_1),\ldots,\mathcal{M}(y_{K_n},\field{z}_{K_n})$ are fitted according to this order based on the training set $\mathcal{D}$. Denote by $\field{y}^{*}(\field{x})=\arg\max_{\field{y}\in\{0,1\}^{K_n}}P(\field{y}|\field{x})$ the true joint mode of the distribution (\ref{joint_distr}), for some new observation $\field{x}\in R^p$, and by $\hat{\field{y}}(\field{x})=\arg\max_{\field{y}\in\{0,1\}^{K_n}}\hat{P}(\field{y}|\field{x})$ the estimated mode for the point $\field{x}$, calculated based on the set $\mathcal{D}$. As in multi-label problems we are usually interested in finding the true joint mode, it is worthwhile to approximate it accurately. In this section we will give assumptions under which the true joint mode can be estimated consistently. Let $s_{y_k,\field{z}_k}(\theta)=\frac{\partial}{\partial\theta}l_{y_k,\field{z}_k}(\theta)$ be the gradient and $H_{y_k,\field{z}_k}(\theta)=-\frac{\partial^2}{\partial\theta\partial\theta'}l_{y_k,\field{z}_k}(\theta)$ be the negative Hessian matrix, based on model $\mathcal{M}(y_k,\field{z}_k)$ and data $\mathcal{D}$. Let $Z_k$ be the matrix whose $i$-th row is $\field{z}_{k}^{(i)}=(\field{x}^{(i)'},\field{y}^{(i)'}_{1:(k-1)})'$, for $i=1,\ldots,n$. For the logistic model we have \begin{equation*} s_{y_k,\field{z}_k}(\theta)=Z_k'(\field{w}_k-E(\field{w}_k|Z_k)), \end{equation*} where $\field{w}_k=(y_k^{(1)},\ldots,y_{k}^{(n)})'$ is the response vector corresponding to the $k$-th label.
To prove our main theorem, we use the following lemma, which is an important technical tool. The lemma follows from \cite{Zhang2009}, after noting that a binary random variable $w$ satisfies $E\exp(t(w-Ew))\leq \exp(t^2/8)$ and taking $\sigma=1/2$, $\eta=(\epsilon+1/2)^2$ in his Proposition 10.2. \begin{Lemma} \label{Lemma5} Let $\field{w}$ be an $n$-dimensional vector consisting of independent, not necessarily identically distributed, binary variables and let $Z$ be an $n\times q$ fixed matrix. Then for any $\eta>1$ we have \begin{equation*} P[||Z'(\field{w}-E\field{w})||^2\geq tr(Z'Z)\eta]\leq \exp(-\eta/20). \end{equation*} \end{Lemma} The inequality in Lemma \ref{Lemma5} also holds for random $Z$, which can be easily seen by conditioning \begin{equation*} P[||Z'(\field{w}-E\field{w})||^2\geq tr(Z'Z)\eta]=E\{P[||Z'(\field{w}-E(\field{w}|Z))||^2\geq tr(Z'Z)\eta|Z]\}\leq \exp(-\eta/20), \end{equation*} where the first expectation in the above formula is taken with respect to $Z$. It is seen that Lemma \ref{Lemma5} can be used to bound $||s_{y_k,\field{z}_k}(\theta)||$ from above, which is an important part of the proof of Theorem \ref{Theorem1}. In the following theorem, we impose conditions under which the true mode $\field{y}^{*}(\field{x})$ is estimated consistently. For the proof, see Section \ref{Proof of Theorem1}. \begin{Theorem} \label{Theorem1} Let $\field{x}$ be a fixed point at which the joint mode is calculated. Assume that: \begin{enumerate} \item there exists $\epsilon>0$, such that $P[\field{y}^{*}(\field{x})|\field{x}]>P[\field{y}|\field{x}]+\epsilon$, for all $\field{y}\neq \field{y}^{*}(\field{x})$, \item $E||\field{x}^{(i)}||^2<\infty$, \item $K_n^{4}\log(K_n)/n\to 0$ and $K_n$ is a monotonic function of $n$, \item there exists a constant $c_1$ such that \begin{equation*} P\left[\min_{1\leq k\leq K_n}\lambda_{\min}[H_{y_k,\field{z}_k}(\theta_k)/n]\geq c_1\right]\to 1, \end{equation*} where $\lambda_{\min}(\cdot)$ denotes the minimal eigenvalue. \end{enumerate} Then $\hat{\field{y}}(\field{x})$ is consistent, i.e. $P[\hat{\field{y}}(\field{x})= \field{y}^{*}(\field{x})]\to 1$, as $n\to\infty$. \end{Theorem} The first assumption ensures that the true mode $\field{y}^{*}(\field{x})$ is unique for $\field{x}$. The second assumption, on the distribution of $\field{x}^{(i)}$, is rather mild and is satisfied for the majority of common distributions. The third one indicates that the number of possible labels should be small in comparison with the number of training examples $n$. It can be verified that if for almost every $\field{y}$, $|\{k:y_k= 1\}|\leq C<\infty$, with probability tending to $1$, then the third assumption can be weakened to $K_n^{3}\log(K_n)/n\to 0$. This would reflect the situation in which the number of possible labels is large; however, only a few of them can be equal to $1$ simultaneously. The last assumption is a regularity condition, analogous to assumption (A4) used in \cite{ChenChen2012} to prove the consistency of the Extended Bayesian Information Criterion (EBIC) for the logistic regression model. Conditions similar to our last assumption are used in many papers devoted to asymptotic results in logistic regression (see e.g. \cite{QianField2002} and \cite{SzymanowskiMielniczuk2015}).
Observe that \begin{equation*} \lambda_{\min}[H_{y_k,\field{z}_k}(\theta_k)/n]\geq \min_{i}[\sigma(\field{z}_{k}^{(i)'}\theta_k)(1-\sigma(\field{z}_{k}^{(i)'}\theta_k))]\lambda_{\min}(Z_k'Z_k/n) \end{equation*} and thus condition 4 is implied by the following two assumptions: \begin{equation} \label{cond11} P\left[\min_{1\leq k\leq K_n}\min_{i}[\sigma(\field{z}_{k}^{(i)'}\theta_k)(1-\sigma(\field{z}_{k}^{(i)'}\theta_k))]\geq c_{11}\right]\to 1, \end{equation} and \begin{equation} \label{cond12} P\left[\min_{1\leq k\leq K_n}\lambda_{\min}[Z_k'Z_k/n]\geq c_{12}\right]\to 1, \end{equation} for some constants $c_{11}$ and $c_{12}$. The above condition (\ref{cond11}) indicates that the conditional variance of $y_k$ given $\field{x},y_1,\ldots,y_{k-1}$ must be bounded away from $0$, for all observations and all labels. The convergence in (\ref{cond12}) is a regularity condition on the design matrices. Note that (\ref{cond12}) is weaker than the assumption that $Z_k'Z_k/n$ converges to some positive definite matrix, which is a commonly used assumption in regression analysis (see e.g. \cite{Nishii1984}). Using the proof of Theorem 1, we can also show the consistency of the parameter estimators. Let ${\mathbf \theta}=(\theta_1,\ldots,\theta_{K_n})'\in R^{pK_n+K_n(K_n-1)/2}$ be the vector of parameters corresponding to all models in the chain and let $\hat{{\mathbf\theta}}=(\hat{\theta}_1,\ldots,\hat{\theta}_{K_n})'\in R^{pK_n+K_n(K_n-1)/2}$ be the corresponding vector of estimators. The following Corollary holds (see Section \ref{Proof of Remark1} for the proof). \begin{Corollary} \label{Remark1} Assume that conditions 2, 3, 4 from Theorem \ref{Theorem1} hold. Then for any $\epsilon>0$ \[ P(||\hat{{\mathbf\theta}}-{\mathbf\theta}||>\epsilon K_n^{-1})\to 0, \] as $n\to\infty$. \end{Corollary} \section{Ordering of labels in classifier chain} \label{Ordering of labels in classifier chain} In practice, the true ordering of labels $(\pi^{*}(1),\ldots,\pi^{*}(K_n))=(1,\ldots,K_n)$, for which (\ref{marg_distr0}) holds, is unknown. Moreover, it may happen that an ordering for which the consecutive conditional probabilities are of the logistic form does not exist at all. In this section we study the influence of the ordering of labels on the estimation of the joint distribution (\ref{joint_distr}). To illustrate the problem, consider the following example on the real dataset \texttt{emotions} (\cite{Trohidisetal2008}), having $6$ binary labels and $72$ features. To obtain a convenient visualization, we initially replaced the features by their first principal component, which explains about $88\%$ of the variance of the original features. Figure \ref{fig1} shows the estimated joint distributions (for $4$ selected labellings) as functions of the first principal component. To estimate the joint distributions we use classifier chains with logistic regression and two orders of fitting the models: $(1,2,3,4,5,6)$ and $(6,5,4,3,2,1)$. Obviously, in this case the true ordering of labels is unknown; however, it is clearly seen that the order of fitting the models in the chain affects the estimate of the joint probability significantly. \begin{figure}[ht!] \centering \includegraphics[scale=0.5]{Rysunki/example2.pdf} \caption{Estimated joint distributions (for $4$ combinations of labels) as functions of the 1st principal component, for the \texttt{emotions} dataset.} \label{fig1} \end{figure} Let $\pi^0$ be the assumed ordering of labels and recall that $\field{z}_{k}(\pi^0)=(\field{x}',y_{\pi^{0}(1)},\ldots,y_{\pi^{0}(k-1)})'$.
We consider the situation when the logistic models $\mathcal{M}(y_{\pi^{0}(k)},\field{z}_{k}(\pi^0))$, for $k=1,\ldots,K_n$, are fitted according to $\pi^{0}$, whereas the data is generated according to the true ordering $\pi^{*}$. Unfortunately, in this case we may encounter the problem of incorrect model specification: we fit a logistic model whereas the true function from which the data is generated is not necessarily a logistic one. To assess the problem, consider the simple case of $2$ labels and one feature $x\in R$. Assume that the true ordering is $\pi^{*}=(1,2)$ and the labels are generated as $P(y_1=1|x)=\sigma(x)$ and $P(y_2=1|x,y_1)=\sigma(x+ay_1)$, where $a\in R$ is a parameter. The joint distribution in this case is of the form $P(y_1,y_2|x)=\sigma(x)^{y_1}(1-\sigma(x))^{1-y_1}\sigma(x+ay_1)^{y_2}(1-\sigma(x+ay_1))^{1-y_2}$. Assume now that we fit logistic models according to the reverse ordering $\pi^{0}=(2,1)$, so we use a logistic model to approximate $P(y_2=1|x)$, which is \begin{equation*} P(y_2=1|x)=P(y_1=0,y_2=1|x)+P(y_1=1,y_2=1|x)=\sigma(x)+\sigma(x)[\sigma(x+a)-\sigma(x)]. \end{equation*} It is easy to verify that there do not exist parameters $v,w\in R$ such that for all $x$, $P(y_2=1|x)=\sigma(v+wx)$, so we cannot accurately approximate the true distribution $P(y_2=1|x)$ using a logistic model. Recall that $\hat{\theta}_{k}(\pi^0)$ is the maximum likelihood estimator from the fitted model $\mathcal{M}(y_{\pi^{0}(k)},\field{z}_{k}(\pi^0))$. We show what the limit of $\hat{\theta}_{k}(\pi^0)$ is in the general situation, when the assumed ordering $\pi^0$ is not necessarily the true one. \cite{HjortPollard1993} studied the performance of the logistic model under incorrect model specification. See also \cite{CzadoSantner1992} and \cite{Hjort1988}, where various implications for statistical inference are discussed. Using their methodology (see Section 5B in \cite{HjortPollard1993}) it can be shown that for any $\epsilon>0$ \begin{equation} \label{conv_proj} P(|\hat{\theta}_{k}(\pi^0)-\tilde{\theta}_{k}(\pi^0)|>\epsilon)\to 0, \end{equation} as $n\to\infty$, where \begin{equation*} \tilde{\theta}_{k}(\pi^0):=\arg\min_{\gamma\in R^{p+k-1}}E\left\{KL\left[P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0)),\sigma(\field{z}_{k}(\pi^0)'\gamma)\right]\right\}, \end{equation*} and $KL(q_1,q_2):=q_1\log[q_1/q_2]+(1-q_1)\log[(1-q_1)/(1-q_2)]$ is the Kullback-Leibler divergence. The expectation in (\ref{conv_proj}) is taken with respect to the random vector $\field{z}_{k}(\pi^0)$. So the limit $\tilde{\theta}_{k}(\pi^0)$ minimizes the expected KL divergence from the true distribution $P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0))$ to the postulated one, $\sigma(\field{z}_{k}(\pi^0)'\tilde{\theta}_{k}(\pi^0))$. The convergence in (\ref{conv_proj}) can be interpreted as follows: we find the most accurate approximation of the true probability $P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0))$, with respect to the KL divergence, within the space of logistic models. When $\pi^0=\pi^*$, the true probability $P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0))$ is of the logistic form and then $\tilde{\theta}_{k}(\pi^0)=\theta_{k}(\pi^0)$. Moreover, we state a proposition which indicates that the expectations of the true distribution and the postulated one, with parameter $\tilde{\theta}_{k}(\pi^0)$, coincide.
\begin{Proposition} The following equalities hold \begin{equation} \label{moment1} E[P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0))\field{z}_{k}(\pi^0)]=E[\sigma(\field{z}_{k}(\pi^0)'\tilde{\theta}_{k}(\pi^0))\field{z}_{k}(\pi^0)] \end{equation} and \begin{equation} \label{moment2} E[P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0))]=E[\sigma(\field{z}_{k}(\pi^0)'\tilde{\theta}_{k}(\pi^0))]. \end{equation} \end{Proposition} \begin{proof} Using the fact that $\partial\sigma(\field{z}_{k}(\pi^0)'\gamma)/\partial\gamma=\sigma(\field{z}_{k}(\pi^0)'\gamma)(1-\sigma(\field{z}_{k}(\pi^0)'\gamma))\field{z}_{k}(\pi^0)$, it is easy to see that \begin{eqnarray*} && E\left[\frac{\partial KL\left[P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0)),\sigma(\field{z}_{k}(\pi^0)'\gamma)\right]}{\partial\gamma}\right]= \cr && E[-P(y_{\pi^0(k)}=1|\field{z}_{k}(\pi^0))\field{z}_{k}(\pi^0)+ \sigma(\field{z}_{k}(\pi^0)'\gamma)\field{z}_{k}(\pi^0)], \end{eqnarray*} which, evaluated at the minimizer $\gamma=\tilde{\theta}_{k}(\pi^0)$, where the gradient vanishes, yields (\ref{moment1}). Equality (\ref{moment2}) follows from (\ref{moment1}) after noting that the first coordinate of $\field{z}_{k}(\pi^0)$, corresponding to the intercept, is equal to $1$. \qed \end{proof} In what follows, we propose a method which aims to find an ordering such that the consecutive logistic models in the chain are correctly specified. We propose to use a forward procedure such that at each stage we choose the label corresponding to the best-specified logistic model. Note that we are more interested in evaluating the goodness of specification than the goodness of fit. In order to build such a procedure, it is necessary to define a measure which indicates whether a given model is correctly specified or not. \subsection{Goodness of specification measures} \label{Goodness of specification tests} For convenience, in this section we will focus on the link functions, which relate the posterior probabilities to the linear combinations of the features. For the logistic model, the link function is the inverse of $\sigma(\cdot)$. We discuss the situation of fitting (based on a response variable $y$ and some input features $\field{z}$) a logistic model having link function $g_{0}(\mu)=\log(\frac{\mu}{1-\mu})$ when, in fact, the correct link function is $g_{*}(\mu)$. The idea is to define a more general family of link functions $\mathcal{L}:=\{g(\mu;\alpha):\alpha\in R\}$, depending on some parameter $\alpha$ (or more parameters), in such a way that both the assumed link and the correct link are members of $\mathcal{L}$. Having such a family, we may write $g_{0}(\mu)=g(\mu;\alpha_0)$ and $g_{*}(\mu)=g(\mu;\alpha_*)$, where $\alpha_0$ is known whereas $\alpha_*$ is unknown. Using the first-order Taylor expansion about the assumed link we have the approximation \begin{equation*} g_{*}(\mu)\approx g_{0}(\mu)+(\alpha_*-\alpha_0)\frac{\partial}{\partial\alpha}g(\mu;\alpha)|_{\alpha=\alpha_{0}}. \end{equation*} Observe that $g_{*}(\mu)=\field{z}'\theta$, where $\field{z}$ is the vector of input variables and $\theta$ is the vector of unknown parameters associated with the unknown link, and thus the assumed link can be approximated by \begin{equation} \label{approx1} g_{0}(\mu)\approx \field{z}'\theta + w(\alpha_0-\alpha_*), \end{equation} where $w:=\frac{\partial}{\partial\alpha}g(\mu;\alpha)|_{\alpha=\alpha_{0}}$ is the so-called carrier variable and $\gamma:=(\alpha_0-\alpha_*)$ is an unknown parameter. The variable $w$ is unknown, but we can approximate it in the following way. We initially fit the model with the logistic link $g_{0}(\mu)$ and estimate its parameters using the maximum likelihood method.
This yields the maximum likelihood estimate $\hat{\theta}$ and thus also $\hat{\mu}$, from which we can compute the approximation $\hat{w}:=\frac{\partial}{\partial\alpha}g(\hat{\mu};\alpha)|_{\alpha=\alpha_{0}}$. Now, as a measure of correct link specification, we can use the deviance statistic, defined as \begin{equation} \label{Deviance} D_{\hat{w}}(y,\field{z}):=2\{l_{y,(\field{z},\hat{w})}(\hat{\theta}_{e},\hat{\gamma}_{e})-l_{y,\field{z}}(\hat{\theta})\}, \end{equation} where $\hat{\theta}_{e},\hat{\gamma}_{e}$ denote the estimators based on the extended model with the additional variable $\hat{w}$. This merely requires refitting the original logistic model with the additional variable $\hat{w}$. Observe that $\gamma=0$ if the link is correctly specified. A small value of the deviance indicates that the link is correctly specified, whereas a significant departure from $0$ suggests incorrect specification. The above reasoning can easily be generalized to the case of more than one carrier; it simply requires refitting the original logistic model with $2,3,\ldots$ additional carriers. The crucial point in the above approach is the choice of the family $\mathcal{L}$. Usually the proposed families are parametrized by one or two parameters. In the literature, various approaches have been explored. To the best of our knowledge, the first attempt was made by \cite{Preigbon1980}, who uses the family defined by \begin{equation*} \mathcal{L}:=\left\{\frac{(\mu/n)^{\alpha-\delta}-1}{\alpha-\delta}-\frac{(1-\mu/n)^{\alpha+\delta}-1}{\alpha+\delta}:\alpha, \delta\in R\right\}, \end{equation*} where the logit link is recovered as $g_{0}(\mu)=\lim_{\alpha,\delta\to 0}g(\mu;\alpha,\delta)$, which can be seen by applying L'H\^{o}spital's rule twice. In the Preigbon family, $\alpha$ controls the symmetry of the distribution of the latent variable associated with a given model from $\mathcal{L}$, while the heaviness of the tails of this distribution is parametrized by $\delta$. In this case the carriers are $w_1=0.5(\log^{2}(\hat{\mu}/n)-\log^{2}(1-\hat{\mu}/n))$ and $w_2=-0.5(\log^{2}(\hat{\mu}/n)+\log^{2}(1-\hat{\mu}/n))$. A comprehensive list of other families can be found in \cite{Stukel1988}. The carriers used in our tests are summarized in Table \ref{tab1}. \begin{table} \scriptsize \centering \caption{Carriers for computing deviance $D(y,\field{z})$.} \label{tab1} \begin{tabular}{ll} \hline Method & Carriers\\ \hline Preigbon& $w_1=0.5(\log^{2}(\hat{\mu}/n)-\log^{2}(1-\hat{\mu}/n))$; $w_2=-0.5(\log^{2}(\hat{\mu}/n)+\log^{2}(1-\hat{\mu}/n))$ \\ Stukel& $w_1=0.5(\field{z}'\hat{\theta})^2\field{I}(\field{z}'\hat{\theta}\geq 0)$; $w_2=-0.5(\field{z}'\hat{\theta})^2\field{I}(\field{z}'\hat{\theta}< 0)$\\ Prentice& $w_1=-(1-\hat{\mu})^{-1}\log(\hat{\mu})$; $w_2=-\hat{\mu}^{-1}\log(1-\hat{\mu})$\\ Guerrero-Johnson& $w_1=0.5(\field{z}'\hat{\theta})^2$\\ Morgan& $w_1=(\field{z}'\hat{\theta})^3$\\ Aranda (asymmetric) & $w_1=1+\hat{\mu}^{-1}\log(1-\hat{\mu})$\\ \hline \end{tabular} \end{table}
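In the logistic case the whole check reduces to two calls to a standard GLM routine. The following sketch (in \texttt{R}, using the Stukel carriers from Table \ref{tab1}; the function name \texttt{spec\_deviance} is ours) is one possible implementation of the statistic (\ref{Deviance}).
\begin{verbatim}
# Deviance-based specification check: y is a binary response, Z a
# model matrix whose first column is the intercept.
spec_deviance <- function(y, Z) {
  fit0 <- glm(y ~ Z - 1, family = binomial())  # assumed logit link
  eta  <- as.vector(Z %*% coef(fit0))
  w1   <-  0.5 * eta^2 * (eta >= 0)            # Stukel carriers,
  w2   <- -0.5 * eta^2 * (eta <  0)            # built from fit0
  fit1 <- glm(y ~ Z + w1 + w2 - 1, family = binomial())
  fit0$deviance - fit1$deviance                # the statistic D
}
\end{verbatim}
The difference of the two residual deviances equals twice the gain in log-likelihood from adding the carriers, i.e. the statistic defined in (\ref{Deviance}). \subsection{Procedure for finding the order in classifier chains} \label{Procedure for finding the order in classifier chains} The methodology described in the previous section can be used to determine the optimal ordering in LCC. We propose to use a forward procedure in which, at each stage, we choose the label corresponding to the best-specified logistic model. A single step can be characterized in the following way.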
We fit logistic models in which the features $\field{x}$ and the labels found in the previous steps are treated as input variables, whereas the candidate labels are treated as response variables. We select the label corresponding to the best-specified model. The overall scheme is shown in Algorithm \ref{alg1}. \begin{algorithm}[ht!] \caption{Pseudo-code of the procedure for finding the order in classifier chains.} \SetKwInOut{Input}{Input} \SetKwInOut{Initialize}{Initialize} \SetKwInOut{Output}{Output} \KwData{training set $\mathcal{D}=(\field{x}^{(i)},\field{y}^{(i)})$, $i=1,\ldots,n$} \Initialize{ $\pi:=\emptyset$ $I:=(1,\ldots,K_n)$ $\field{z}_{\textrm{act}}:=\field{x}$ } \For{k $\leftarrow$ $1$ \KwTo $K_n$}{ $\hat{k}=\arg\min_{j\in I}D_{\hat{w}}(y_j,\field{z}_{\textrm{act}})$, where $D_{\hat{w}}$ is defined in (\ref{Deviance}) $\field{z}_{\textrm{act}}\leftarrow\field{z}_{\textrm{act}}\cup \{y_{\hat{k}}\}$ $I\leftarrow I \setminus \{\hat{k}\}$ $\pi\leftarrow\pi\cup \{\hat{k}\}$ } \KwOut{ordering of labels $\pi$} \label{alg1} \end{algorithm} In the following, we give a heuristic justification of the above procedure. Consider the general situation in which we would like to test whether a given variable $w$ is significant in the fitted logistic model. Assume that $P(y=1|\field{z},w)=\sigma(\field{z}'\theta+w\gamma)$, where $(\field{z}',w)'$ is a vector of explanatory variables and $\theta,\gamma$ are unknown parameters. Let \begin{equation} \label{Deviance_true} D_{w}(y,\field{z}):=2\{l_{y,(\field{z},w)}(\hat{\theta}_{e},\hat{\gamma}_{e})-l_{y,\field{z}}(\hat{\theta})\} \end{equation} be the deviance statistic, where $\hat{\theta}$ is the estimator based on the smaller model (with the variable $w$ omitted), whereas $\hat{\theta}_{e},\hat{\gamma}_{e}$ denote the estimators based on the extended model with the additional variable $w$. Let $D_{\gamma=0}$ denote the deviance (\ref{Deviance_true}) corresponding to the case $\gamma=0$ and let $D_{\gamma\neq 0}$ be the analogous quantity corresponding to the case $\gamma\neq 0$. \begin{Theorem} \label{Theorem2} Assume that: \begin{enumerate} \item $E||\field{z}||^2<\infty$, \item there exists a constant $c_1$ such that \begin{equation*} P\left[\lambda_{\min}[H_{y,(\field{z}',w)'}(\theta,\gamma)/n]\geq c_1\right]\to 1 \end{equation*} ($\lambda_{\min}$ denotes the minimal eigenvalue). \end{enumerate} Then, $P[D_{\gamma\neq 0}>D_{\gamma=0}]\to 1$, as $n\to\infty$. \end{Theorem} For the proof, see Section \ref{Proof of Theorem2}. The part of the proof concerning the asymptotic distribution of $D_{\gamma=0}$ follows from \cite{Fahrmeir1987}, whereas the divergence of $D_{\gamma\neq 0}$, under the above conditions, was proved in \cite{Teisseyre2013}. Observe that assumptions 1 and 2 in the above theorem are analogous to assumptions 2 and 4 in Theorem \ref{Theorem1}. The above theorem is formulated for the case when there is only one additional variable $w$, but it can readily be extended to the case of several additional variables. The theorem aims to justify the procedure described in Algorithm \ref{alg1}: namely, the deviance $D_{\gamma=0}$ corresponds to correct specification, whereas $D_{\gamma\neq 0}$ corresponds to incorrect specification.
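In practice, the forward search is a thin loop on top of the specification check. A possible sketch in \texttt{R} (reusing \texttt{spec\_deviance} from the previous sketch; \texttt{X} is the feature matrix and \texttt{Y} the $n\times K_n$ binary label matrix) reads:
\begin{verbatim}
# Sketch of Algorithm 1: forward selection of the chain ordering.
find_order <- function(X, Y) {
  K   <- ncol(Y)
  I   <- seq_len(K)          # candidate labels
  ord <- integer(0)          # the ordering pi
  Z   <- cbind(1, X)         # active inputs z_act
  for (k in seq_len(K)) {
    d  <- sapply(I, function(j) spec_deviance(Y[, j], Z))
    kh <- I[which.min(d)]    # best-specified label
    Z  <- cbind(Z, Y[, kh])  # append the chosen label
    I  <- setdiff(I, kh)
    ord <- c(ord, kh)
  }
  ord
}
\end{verbatim}
Note, however, that there are small differences between the situation described in Theorem \ref{Theorem2} and the setting described in Section \ref{Goodness of specification tests} (and applied in Algorithm \ref{alg1}).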
First, observe that the extended model in (\ref{Deviance}) is not exactly the true one (as in (\ref{Deviance_true})), because it is based on the approximation given in (\ref{approx1}). Secondly, observe that the additional variable $\hat{w}$ in (\ref{Deviance}) depends on the estimator $\hat{\theta}$ calculated for the smaller model, which is not the case in (\ref{Deviance_true}). \section{Inference in classifier chain model} \label{Inference in classifier chain model} The classifier chain model allows one to estimate the joint probability distribution. Having estimated the joint distribution, one usually wants to draw inferences from it, i.e. to make a prediction for some new instance $\field{x}$. The Bayes optimal rule is the mode of the joint distribution $\field{y}^{*}(\field{x})=\arg\max_{\field{y}\in\{0,1\}^{K_n}}P(\field{y}|\field{x})$, which is estimated by $\hat{\field{y}}(\field{x})=\arg\max_{\field{y}\in\{0,1\}^{K_n}}\hat{P}(\field{y}|\field{x})$, based on the training set. In Theorem \ref{Theorem1} we have shown that for the classifier chain model, $\hat{\field{y}}(\field{x})$ converges to $\field{y}^{*}(\field{x})$ in probability, even in the case of a large number of labels, provided that the "true" ordering of labels is known and certain assumptions on the design matrix and the number of labels are satisfied. However, from the practical point of view, the computation of $\hat{\field{y}}(\field{x})$ is problematic, as it requires the evaluation of $2^{K_n}$ possible labellings for a single new instance $\field{x}$. This approach is often referred to as "exhaustive inference". The "exhaustive inference" is limited to data sets with a small to moderate number of labels, say, not more than about 15. In the multi-label community, the method which combines the classifier chains with "exhaustive inference" is called Probabilistic classifier chains (PCC); see \cite{Dembczynskietal2010} and \cite{Readetal2011}. The chain structure of the model suggests using "greedy inference", i.e. successively choosing the most probable label according to each of the classifiers' predictions: when the assumed order of labels is $\pi^0$, we apply the following procedure \begin{itemize} \item find $\hat{y}_{\pi^{0}(1)}=\arg\max_{y\in\{0,1\}}\hat{P}(y_{\pi^{0}(1)}=y|\field{x})$, \item find $\hat{y}_{\pi^{0}(2)}=\arg\max_{y\in\{0,1\}}\hat{P}(y_{\pi^{0}(2)}=y|\hat{y}_{\pi^{0}(1)},\field{x})$, \item $\ldots$ \item find $\hat{y}_{\pi^{0}(K_n)}=\arg\max_{y\in\{0,1\}}\hat{P}(y_{\pi^{0}(K_n)}=y|\hat{y}_{\pi^{0}(1)},\ldots,\hat{y}_{\pi^{0}(K_n-1)},\field{x})$, \end{itemize} which requires $K_n$ operations instead of $2^{K_n}$. This approach, introduced originally in \cite{Readetal2009}, is usually simply referred to as classifier chains (CC). Since the naming can be a little confusing, we stress that the learning stage is the same in both PCC and CC; the difference lies in the inference stage. Note that "greedy inference" does not guarantee that the mode will be identified correctly, even when the true joint distribution is known. Consider an example with two binary labels $y_1$ and $y_2$ and the ordering $\pi^0=(1,2)$. Let $P(y_1=1|\field{x})=0.6$, $P(y_1=0|\field{x})=0.4$, $P(y_2=1|\field{x},y_1=1)=P(y_2=0|\field{x},y_1=1)=0.5$ and $P(y_2=1|\field{x},y_1=0)=0.9$. Then "greedy inference" yields the labelling $\field{y}=(1,1)'$ (or $\field{y}=(1,0)'$), each having joint probability $0.3$, whereas the true mode is $\field{y}^*=(0,1)'$, with joint probability $0.4\cdot 0.9=0.36$.
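The greedy scheme above is straightforward to implement. A minimal sketch in \texttt{R} (assuming \texttt{models[[k]]} holds the logistic model fitted for the $k$-th position of the chain under the ordering \texttt{pi0}) could read as follows.
\begin{verbatim}
# Sketch of "greedy inference": follow a single path down the tree.
greedy_infer <- function(x, models, pi0) {
  z <- c(1, x)                           # intercept + features
  y <- integer(length(pi0))
  for (k in seq_along(pi0)) {
    p <- 1 / (1 + exp(-sum(z * coef(models[[k]]))))
    y[pi0[k]] <- as.integer(p >= 0.5)    # most probable value
    z <- c(z, y[pi0[k]])                 # condition on prediction
  }
  y
}
\end{verbatim}
The other problem with "greedy inference", addressed in \cite{Sengeetal2012}, is error propagation along the chain.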
Since each classifier relies on its predecessors, a false prediction may be propagated down the chain. As a consequence, labels placed at the end of the chain carry the largest errors. The problem of inference from the joint distribution can be visualised using a binary tree, in which nodes represent labels, edges represent conditional probabilities and leaves represent particular labellings. Figure \ref{fig_tree} shows an example tree for two labels. The inference task is equivalent to finding the optimal path in a rooted, complete binary tree of height $K_n$. The "exhaustive inference" requires the evaluation of all $2^{K_n}$ paths in the tree, whereas "greedy inference" corresponds to a single chosen path. \begin{figure}[ht!] \centering \includegraphics[scale=0.8]{Rysunki/graf.pdf} \caption{Binary tree used for inference with two labels.} \label{fig_tree} \end{figure} To overcome the time complexity of "exhaustive inference" on the one hand, and the possibly lower accuracy of "greedy inference" on the other, some authors have developed approximate inference schemes that trade off accuracy against efficiency in a reasonable way. For example, \cite{Kumaretal2013} proposed using the beam search algorithm to explore the binary tree during inference. The idea is to keep $b$ candidate solutions at each level of the tree, where $b$ is a user-defined parameter known as the beam width; these candidates represent the best partial solutions seen so far. The tree is then explored in a breadth-first fashion using these solutions. "Beam search inference" requires $bK_n$ operations and allows one to explore more paths in the binary tree than "greedy inference". Of course, as in the case of "greedy inference", this approach may also fail to find the optimal label vector. Observe that "beam search inference" with $b=1$ is equivalent to "greedy inference", whereas $b=\infty$ is equivalent to "exhaustive inference". Another simple solution is to restrict the inference to the label combinations that appear in the training set. Since the inference problem is not the main focus of this paper, we limit our empirical experiments to the two basic methods: "exhaustive inference" and "greedy inference". \section{Empirical evaluation} \label{Empirical evaluation} \subsection{Performance of goodness of specification measures} \label{Performance of goodness of specification measures} The aim of the first experiment is to compare the goodness of specification tests described in Section \ref{Goodness of specification tests}. We use 12 models generated in the following way. We first generate the feature vector $\field{x}$ from the uniform distribution on $[-4,4]$ and then the consecutive labels $y_1,\ldots,y_{K_n}$ are generated using the classifier chain model (\ref{marg_distr}) according to the order $\pi^{*}(k)=k$. The parameters corresponding to the considered models are summarized in Table \ref{tab2}. The first two simple models match the one discussed at the beginning of Section \ref{Ordering of labels in classifier chain}, with $a=3$ and $a=5$, respectively. The structure of the models was chosen so as to investigate the influence of the number of labels, the number of features, the values of the parameters corresponding to the features, and the values of the parameters corresponding to the labels. We use the procedure described in Section \ref{Procedure for finding the order in classifier chains} in combination with the methods summarized in Table \ref{tab1}.
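The data-generating mechanism is simple to reproduce. A sketch in \texttt{R} for the two-label model (M1) (the function name \texttt{gen\_data} is ours; the remaining models extend the same pattern) could read:
\begin{verbatim}
# Sketch of the data-generating process for model (M1):
# x ~ U[-4,4], then y1 | x and y2 | x, y1 follow the chain.
gen_data <- function(n, a = 3) {
  sigma <- function(t) 1 / (1 + exp(-t))
  x  <- runif(n, -4, 4)
  y1 <- rbinom(n, 1, sigma(x))            # theta_1 = (0, 1)'
  y2 <- rbinom(n, 1, sigma(x + a * y1))   # theta_2 = (0, 1, a)'
  data.frame(x, y1, y2)
}
\end{verbatim}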
For each $n=50,100,\ldots$, we repeat the following $200$ times: generate data with $n$ instances and apply the procedure for finding the optimal ordering. This enables us to estimate the probability of choosing the correct ordering. Figure \ref{fig1} shows the results. The dashed line corresponds to the random choice of the ordering. We also tested using the negative log-likelihood (this measure was used in \cite{Kumaretal2013}) instead of $D_{\hat{w}}(y_k,\field{z}_{\textrm{act}})$ in Algorithm \ref{alg1}. This gives very poor results (comparable with the random choice), so we do not present them in the figures. The structure of the model affects the probability of choosing the correct ordering. Observe that, in all cases, the results are significantly better than the baseline (random choice). The method proposed by Morgan performs poorly in the majority of cases. The methods which use two carriers (Preigbon, Stukel, Prentice) usually outperform those which use only one carrier. This is clearly seen for models (M4) and (M6). This phenomenon can be explained by the fact that the methods from the first group are able to represent a wider family of possibly misspecified models. In the case of model (M12), the probabilities are relatively low (about $0.01$ for $n=4000$ and the Preigbon method), but they are still much larger than for the random choice, whose success probability is $1/K_n!\approx 10^{-7}$. On the other hand, in this case the use of the Preigbon method significantly improves the probability of correct mode selection (see the next section). Note also that the probability of choosing the correct ordering decreases when the number of labels increases (compare models (M3) and (M4) or (M5) and (M10)) and when the absolute values of the parameters corresponding to the labels decrease (compare models (M8) and (M9)). \begin{figure}[ht!] \begin{center}$ \begin{array}{ccc} \includegraphics[scale=0.28]{Rysunki/order_example1.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example2.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example3.pdf} \\ \includegraphics[scale=0.28]{Rysunki/order_example4.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example5.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example6.pdf} \\ \includegraphics[scale=0.28]{Rysunki/order_example7.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example8.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example9.pdf} \\ \includegraphics[scale=0.28]{Rysunki/order_example10.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example11.pdf} & \includegraphics[scale=0.28]{Rysunki/order_example12.pdf} \\ \end{array}$ \end{center} \caption{Probabilities of finding the correct ordering with respect to the number of training cases.
The scale for model (M12) is much smaller than for the other models.} \label{fig1} \end{figure} \begin{table} {\scriptsize \caption{Characteristics of the considered models.} \label{tab2} \begin{tabular}{|l|l||l|l|} \hline Model & Parameters & Model & Parameters\\ \hline M1 & \begin{tabular}{ll} $\theta_1$&$=(0,1)'$ \\ $\theta_2$&$=(0,1,3)'$ \end{tabular} & M7 & \begin{tabular}{ll} $\theta_1$&$=(2, -2, 1)'$ \\ $\theta_2$&$=(2, -2, 1, 5)' $ \\ $\theta_3$&$=(2, -2, 1, 5, -5)' $ \\ $\theta_4$&$=(2, -2, 1, -5, 5, -5)' $ \\ \end{tabular} \cr \hline M2 & \begin{tabular}{ll} $\theta_1$&$=(0, 1)'$ \\ $\theta_2$&$=(0, 1, 5)' $ \end{tabular} & M8 & \begin{tabular}{ll} $\theta_1$&$=(2, -2, 1)'$ \\ $\theta_2$&$=(2, -2, 1, 2)' $ \\ $\theta_3$&$=(2, -2, 1, 2, -2)' $ \\ $\theta_4$&$=(2, -2, 1, -2, 2, -2)' $ \\ \end{tabular} \cr \hline M3 & \begin{tabular}{ll} $\theta_1$&$=(2, -2, 1)'$ \\ $\theta_2$&$=(2, -2, 1, 5)' $ \\ $\theta_3$&$=(2, -2, 1, 5, -5)' $ \\ $\theta_4$&$=(2, -2, 1, -5, 5, -5)' $ \\ $\theta_5$&$=(2, -2, 1, 5, -5, 5, -5)' $ \\ $\theta_6$&$=(2, -2, 1, 5, -5, 5, -5, 5)' $ \\ \end{tabular} & M9 & \begin{tabular}{ll} $\theta_1$&$=(2, -2, 1)'$ \\ $\theta_2$&$=(2, -2, 1, 10)' $ \\ $\theta_3$&$=(2, -2, 1, 10, -10 )' $ \\ $\theta_4$&$=(2, -2, 1, -10, 10, -10)' $ \\ \end{tabular} \cr \hline M4 & \begin{tabular}{ll} $\theta_1$&$=(2, -2, 1)'$ \\ $\theta_2$&$=(2, -2, 1, 5)' $ \\ $\theta_3$&$=(2, -2, 1, 5, -5)' $ \\ $\theta_4$&$=(2, -2, 1, -5, 5, -5)' $ \\ $\theta_5$&$=(2, -2, 1, 5, -5, 5, -5)' $ \\ \end{tabular} & M10 & \begin{tabular}{ll} $\theta_1$&$=(5, -5, 2)'$ \\ $\theta_2$&$=(5, -5, 2, 5)' $ \\ $\theta_3$&$=(5, -5, 2, 5, -5)' $ \\ $\theta_4$&$=(5, -5, 2, -5, 5, -5)' $ \\ \end{tabular} \cr \hline M5 & \begin{tabular}{ll} $\theta_1$&$=\mathbf{a}$ \\ $\theta_2$&$=(\mathbf{a}', 5)' $ \\ $\theta_3$&$=(\mathbf{a}', 5, -5)' $ \\ $\theta_4$&$=(\mathbf{a}', -5, 5, -5)' $ \\ $\mathbf{a}$&$=(1, -1, 1, -1, 1, -1, 1, -1, 1, -1)'$ \end{tabular} & M11 & \begin{tabular}{ll} $\theta_1$&$=\mathbf{a}$ \\ $\theta_2$&$=(\mathbf{a}', -8)'$ \\ $\theta_3$&$=(\mathbf{a}', 1, 3)'$ \\ $\theta_4$&$=(\mathbf{a}', 0.5, 5, 10)'$ \\ $\mathbf{a}$&$=(1, -1, 1, -1, 1, -1, 1, -1, 1, -1)'$ \end{tabular} \cr \hline M6 & \begin{tabular}{ll} $\theta_1$&$=(1, -3, 0.5)'$ \\ $\theta_2$&$=(1.5, -2.5, 1, 5)' $ \\ $\theta_3$&$=(2, -2, 1.5, 5, -5)' $ \\ $\theta_4$&$=(2.5, -1.5, 2, -5, 5, -5)' $ \\ \end{tabular} & M12 & \begin{tabular}{ll} $\theta_1$&$=\mathbf{a}$ \\ $\theta_2$&$=(\mathbf{a}', 5)'$ \\ $\theta_3$&$=(\mathbf{a}', 5, -5)'$ \\ $\theta_4$&$=(\mathbf{a}', -5, 5, -5 )'$ \\ $\theta_5$&$=(\mathbf{a}', 5, -5, 5, -5 )'$ \\ $\theta_6$&$=(\mathbf{a}', 5, -5, 5, -5, 5 )'$ \\ $\theta_7$&$=(\mathbf{a}', 5, -5, 5, -5, 5, -5 )'$ \\ $\theta_8$&$=(\mathbf{a}', 5, -5, 5, -5, 5, -5, 5 )'$ \\ $\theta_9$&$=(\mathbf{a}', 5, -5, 5, -5, 5, -5, 5, -5 )'$ \\ $\theta_{10}$&$=(\mathbf{a}', 5, -5, 5, -5, 5, -5, 5, -5, 5 )'$ \\ $\mathbf{a}$&$=(1, -1, 1, -1, 1, -1, 1, -1, 1, -1)'$ \end{tabular} \cr \hline \end{tabular} } \end{table} \subsection{Consistency of the joint mode selection} In the second experiment, we illustrate the theoretical result of Theorem \ref{Theorem1} concerning the consistency of the joint mode selection. In this experiment we use the same models (M1)-(M12) as in the previous section. The models are generated as described previously and, in addition, we generate a test set containing $200$ observations. For each observation in the test set we check whether the correct mode was selected.
The whole procedure is repeated $200$ times, which yields estimates of the probability of correct mode selection. As the correct ordering is unknown in practical applications, we use the procedure described in Algorithm \ref{alg1}, combined with the Preigbon method. Figure \ref{fig2} shows the probabilities of correct mode selection with respect to the number of observations in the training set for the correct ordering, the selected ordering and the wrong ordering (the reversed correct order). To assess the direct effect of the ordering on the joint mode estimation, we use "exhaustive inference" for all models except model (M12), for which "greedy inference" was used due to computational costs. First of all, it is seen that the ordering of labels may affect the probability of correct mode selection, although for some models (e.g. (M3) and (M4)) the differences are not significant. Secondly, it is seen that Algorithm \ref{alg1}, combined with the Preigbon method, performs well in practice: the results are very close to those for the correct ordering (for models (M1), (M2), (M5) they practically coincide). In the case of model (M12), the differences between the selected ordering and the wrong one are significant, although it is difficult to determine the true ordering exactly (compare Figure \ref{fig1}, model (M12)). \begin{figure}[ht!] \begin{center}$ \begin{array}{ccc} \includegraphics[scale=0.28]{Rysunki/results_model1.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model2.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model3.pdf} \\ \includegraphics[scale=0.28]{Rysunki/results_model4.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model5.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model6.pdf} \\ \includegraphics[scale=0.28]{Rysunki/results_model7.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model8.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model9.pdf} \\ \includegraphics[scale=0.28]{Rysunki/results_model10.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model11.pdf} & \includegraphics[scale=0.28]{Rysunki/results_model12.pdf}\\ \end{array}$ \end{center} \caption{Probabilities of correct mode selection with respect to the number of training cases.} \label{fig2} \end{figure} \subsection{Experiments on benchmark data sets} To investigate the performance of the proposed approach, we also carried out experiments on benchmark datasets. The datasets are publicly available at \url{http://mulan.sourceforge.net/datasets-mlc.html} (at this website one can also find a detailed description of each data set, as well as the references). The details of the data sets are summarized in Table \ref{tab3}. \begin{table} \caption{Basic statistics for the benchmark datasets.} \label{tab3} \begin{tabular}{ccccc} \hline Dataset&Domain&$\#$observations&$\#$features&$\#$labels\\ \hline cal500 &music& 502& 68& 174\\ emotions & music& 593& 72& 6\\ flags &images (toy)& 194 & 19& 7\\ mediamill &video& 10000& 120& 101\\ scene& images& 2407& 294& 6\\ yeast& biology& 2417& 103& 14 \\ \hline \end{tabular} \end{table} We compare the following methods. \begin{itemize} \item (BR) The binary relevance method, in which we build a separate classifier for each label ($K_n$ classifiers in total). The logistic model is used as the base classifier. \item (CC EX) Classifier chain with the logistic model as the base classifier and "exhaustive inference". The order of fitting the models in the chain corresponds to the original ordering of labels in the dataset.
\item (CC PREIGBON EX) Classifier chain with the logistic model as the base classifier and "exhaustive inference". The order of fitting the models in the chain is determined using Algorithm \ref{alg1} combined with the Preigbon method. \item (CC GR) Classifier chain with the logistic model as the base classifier and "greedy inference". The order of fitting the models in the chain corresponds to the original ordering of labels in the dataset. \item (CC PREIGBON GR) Classifier chain with the logistic model as the base classifier and "greedy inference". The order of fitting the models in the chain is determined using Algorithm \ref{alg1} combined with the Preigbon method. \end{itemize} All the above methods were implemented in the \texttt{R} system \cite{Rsystem} for the purpose of this paper. The common building block of all the considered methods is the logistic model. To reduce the variance of the estimators, we used regularized versions of the maximum likelihood estimators, with an $l_2$ penalty and a small value of the penalty parameter, $\lambda=0.001$. Let $\field{y}$ be the vector of true labels and $\hat{\field{y}}$ the predicted vector of labels for a given test instance. We consider the following evaluation measures \begin{equation*} \textrm{Hamming}(\field{y},\hat{\field{y}})=\frac{1}{K_n}\sum_{k=1}^{K_n}\field{I}[y_k=\hat{y}_k], \end{equation*} \begin{equation*} \textrm{Subset accuracy}(\field{y},\hat{\field{y}})=\field{I}[\field{y}=\hat{\field{y}}], \end{equation*} \begin{equation*} \textrm{Recall}(\field{y},\hat{\field{y}})=\frac{|\{k:y_k=1,\hat{y}_k=1\}|}{|\{k:y_k=1\}|}, \end{equation*} \begin{equation*} \textrm{Precision}(\field{y},\hat{\field{y}})=\frac{|\{k:y_k=1,\hat{y}_k=1\}|}{|\{k:\hat{y}_k=1\}|}, \end{equation*} \begin{equation*} \textrm{F measure}(\field{y},\hat{\field{y}})=2\cdot\frac{\textrm{Precision}(\field{y},\hat{\field{y}})\cdot\textrm{Recall}(\field{y},\hat{\field{y}})}{\textrm{Precision}(\field{y},\hat{\field{y}})+\textrm{Recall}(\field{y},\hat{\field{y}})}. \end{equation*} The subset accuracy is maximized by methods which allow one to find the mode of the joint distribution (e.g. classifier chains). On the other hand, the Hamming measure is maximized by methods which find the marginal modes (BR). The marginal and joint modes coincide when the labels are conditionally independent given the feature vector $\field{x}$, but this is usually not the case. For further discussion of different evaluation measures and loss functions see \cite{Dembczynskietal2012}. Since in many multi-label problems we are interested in predicting the presence of certain properties of the objects (the presence of the $k$-th property is usually coded as $y_k=1$), it is worthwhile to consider the additional measures: Recall, Precision and F-measure. For example, in text categorization, $y_k=1$ indicates that the given text has been assigned to the $k$-th topic. Recall indicates how many labels are correctly predicted as $1$ among those equal to $1$, precision shows how many labels are correctly predicted as $1$ among those predicted as $1$, and the F-measure is the harmonic mean of recall and precision. To assess the considered methods, we apply $5$-fold cross-validation. In each cross-validation loop, the models are built on the four training folds and the prediction is made on the remaining test fold. The above evaluation measures are averaged over the observations in the test fold. Finally, the reported results are averaged over all cross-validation folds.
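For concreteness, the five measures for a single test instance can be computed as follows (a sketch in \texttt{R}; the function name \texttt{eval\_measures} is ours, and conventions for empty denominators are omitted for brevity).
\begin{verbatim}
# Evaluation measures for one test instance; y and yhat are binary
# vectors of length K_n (true and predicted labels, respectively).
eval_measures <- function(y, yhat) {
  tp   <- sum(y == 1 & yhat == 1)
  prec <- tp / sum(yhat == 1)
  rec  <- tp / sum(y == 1)
  c(Hamming   = mean(y == yhat),
    Subset    = as.numeric(all(y == yhat)),
    Recall    = rec,
    Precision = prec,
    F         = 2 * prec * rec / (prec + rec))
}
\end{verbatim}
We performed two experiments.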
In the first one, we consider the datasets with a small or moderate number of labels: emotions, flags and scene. For the remaining datasets (yeast, mediamill, cal500) we have limited the number of labels to 10 in such a way that we keep the most frequent labels (and remove all instances having only relevant or only irrelevant labels). This approach was also used in \cite{Dembczynskietal2010} to test the PCC method. The limited number of labels allows us to use "exhaustive inference". Thus we eliminate the effect associated with the type of prediction method and can directly assess how the ordering of labels affects the results. The results for the first experiment are shown in Tables \ref{emotions}-\ref{mediamill}. The bold font corresponds to the maximal value in the given column. It is notable that the application of Algorithm \ref{alg1}, combined with the Preigbon method, improves the subset accuracy relative to the original ordering of labels for $5$ out of $6$ datasets. This occurs for both exhaustive and greedy inference. Moreover, in the majority of cases, CC PREIGBON EX outperforms the other methods with respect to all measures except the Hamming measure. The "exhaustive inference" outperforms "greedy inference", which is expected, as in the latter case we explore only a limited number of possible labellings. The highest values of the Hamming measure are obtained for the BR method, which is consistent with the observations of other authors and also confirms the theoretical results concerning the Hamming measure given in \cite{Dembczynskietal2012}. The second experiment was performed for the datasets yeast, mediamill and cal500, with the original number of labels (which varies from $14$ to $174$). For this setting, we only used 3 methods: BR, CC GR and CC PREIGBON GR, as the other methods are infeasible for such a large number of labels. The results for the second experiment are shown in Tables \ref{yeastall}-\ref{mediamillall}. It is seen that also in this case the application of Algorithm \ref{alg1}, combined with the Preigbon method, improves the subset accuracy and recall for all three datasets, and improves the F-measure for two of them. \begin{table}[ht!] \footnotesize \centering \caption{Results for emotions data set.} \label{emotions} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & {\bf 0.7917} (0.0062) & 0.2478 (0.0092) & 0.6276 (0.015) & 0.6456 (0.0174) & 0.6027 (0.0125) \\ CC EX & 0.7791 (0.0031) & 0.2917 (0.011) & 0.6619 (0.0124) & 0.6416 (0.0106) & 0.6247 (0.0084) \\ CC PREIGBON EX & 0.7889 (0.0036) & {\bf 0.3052} (0.0083) & {\bf 0.6844} (0.0128) & {\bf 0.6627} (0.0103) & {\bf 0.6448} (0.0072) \\ CC GR & 0.776 (0.0051) & 0.2731 (0.0094) & 0.6431 (0.0164) & 0.6405 (0.015) & 0.6126 (0.0139) \\ CC PREIGBON GR & 0.7839 (0.0051) & 0.29 (0.0099) & 0.6661 (0.0148) & 0.6531 (0.0104) & 0.6303 (0.0108) \\ \hline \end{tabular} \end{table} \begin{table}[ht!]
\footnotesize \centering \caption{Results for scene data set.} \label{scene} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & {\bf 0.8948} (9e-04) & 0.5272 (0.0055) & 0.6597 (0.0032) & 0.6217 (0.0043) & 0.6279 (0.0033) \\ CC EX & 0.8912 (0.0027) & 0.6315 (0.0056) & 0.7035 (0.008) & 0.706 (0.0085) & 0.6967 (0.0078) \\ CC PREIGBON EX & 0.8945 (0.0023) & {\bf 0.6386} (0.0069) & {\bf 0.7145} (0.006) & {\bf 0.7121} (0.0069) & {\bf 0.7051} (0.0063) \\ CC GR & 0.8905 (0.0032) & 0.6278 (0.0079) & 0.701 (0.0104) & 0.7021 (0.0102) & 0.6934 (0.0098) \\ CC PREIGBON GR & 0.8922 (0.0028) & 0.6298 (0.0085) & 0.7062 (0.0078) & 0.7034 (0.0089) & 0.6966 (0.0082) \\ \hline \end{tabular} \end{table} \begin{table}[ht!] \footnotesize \centering \caption{Results for flags data set.} \label{flags} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & {\bf 0.7297} (0.0124) & 0.139 (0.0098) & 0.6757 (0.0191) & {\bf 0.695} (0.0124) & 0.6723 (0.0117) \\ CC EX & 0.7173 (0.0126) & 0.237 (0.0316) & {\bf 0.6785} (0.0154) & 0.6733 (0.0124) & 0.6736 (0.0134) \\ CC PREIGBON EX & 0.7179 (0.0056) & {\bf 0.2625} (0.029) & 0.6749 (0.0138) & 0.6774 (0.0096) & {\bf 0.6741} (0.0111) \\ CC GR & 0.6982 (0.0118) & 0.2012 (0.0156) & 0.6559 (0.0199) & 0.6535 (0.0141) & 0.652 (0.0168) \\ CC PREIGBON GR & 0.7144 (0.0145) & 0.2421 (0.0408) & 0.6733 (0.0166) & 0.6771 (0.0088) & 0.673 (0.0131) \\ \hline \end{tabular} \end{table} \begin{table}[ht!] \footnotesize \centering \caption{Results for yeast data set (top $10$ labels).} \label{yeast} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & {\bf 0.7461} (0.004) & 0.1589 (0.0077) & 0.6175 (0.0042) & {\bf 0.6972} (0.0076) & 0.626 (0.0052) \\ CC EX & 0.7352 (0.0044) & {\bf 0.2453} (0.008) & {\bf 0.6572} (0.0074) & 0.6578 (0.0066) & {\bf 0.6345} (0.0069) \\ CC PREIGBON EX & 0.7328 (0.0061) & 0.2379 (0.0098) & 0.6499 (0.0095) & 0.661 (0.0077) & 0.631 (0.0076) \\ CC GR & 0.7255 (0.0047) & 0.2214 (0.0057) & 0.6208 (0.0063) & 0.6428 (0.0083) & 0.6056 (0.007) \\ CC PREIGBON GR & 0.7313 (0.006) & 0.2143 (0.0099) & 0.6343 (0.0094) & 0.6601 (0.0086) & 0.6202 (0.0072) \\ \hline \end{tabular} \end{table} \begin{table}[ht!] \footnotesize \centering \caption{Results for cal500 data set (top $10$ labels).} \label{cal500} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & 0.6052 (0.0075) & 0.004 (0.0024) & 0.7193 (0.0075) & 0.6626 (0.0081) & 0.6713 (0.007) \\ CC EX & 0.6016 (0.0087) & 0.0159 (0.0051) & 0.7065 (0.0115) & 0.661 (0.0087) & 0.6649 (0.0091) \\ CC PREIGBON EX & 0.6044 (0.0085) & {\bf 0.016} (0.0051) & 0.7121 (0.0106) & 0.6631 (0.0085) & 0.6679 (0.0089) \\ CC GR & 0.5954 (0.0091) & 0.0139 (0.0074) & 0.6998 (0.009) & 0.6552 (0.0098) & 0.6579 (0.0076) \\ CC PREIGBON GR & 0.608 (0.0083) & 0.01 (0.0055) & {\bf 0.7141} (0.0082) & {\bf 0.6679} (0.0096) & 0.6722 (0.0075) \\ \hline \end{tabular} \end{table} \begin{table}[ht!] 
\footnotesize \centering \caption{Results for mediamill data set (top $10$ labels).} \label{mediamill} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & {\bf 0.8277} (0.0013) & 0.1642 (0.0052) & 0.5844 (0.0035) & {\bf 0.7456} (0.0012) & {\bf 0.6246} (0.0026) \\ CC EX & 0.8165 (0.0023) & 0.2152 (0.0058) & 0.6049 (0.0066) & 0.6836 (0.0049) & 0.6094 (0.0054) \\ CC PREIGBON EX & 0.8186 (0.0021) & {\bf 0.2168} (0.007) & 0.6033 (0.0075) & 0.6873 (0.0035) & 0.6103 (0.0057) \\ CC GR & 0.8199 (9e-04) & 0.2046 (0.004) & 0.6007 (0.005) & 0.703 (0.0017) & 0.6143 (0.0034) \\ CC PREIGBON GR & 0.8211 (0.0025) & 0.2098 (0.0048) & 0.5772 (0.0083) & 0.7087 (0.0052) & 0.6032 (0.0055) \\ \hline \end{tabular} \end{table} \begin{table}[ht!] \footnotesize \centering \caption{Results for yeast data set (all labels).} \label{yeastall} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & {\bf 0.7959} (0.0032) & 0.1411 (0.0061) & 0.5871 (0.0052) & {\bf 0.6963} (0.0075) & {\bf 0.6083} (0.0051) \\ CC GR & 0.7708 (0.0102) & 0.175 (0.0265) & 0.6033 (0.0091) & 0.6221 (0.0156) & 0.5867 (0.0059) \\ CC PREIGBON GR & 0.7808 (0.0044) & {\bf 0.1812} (0.0052) & {\bf 0.6059} (0.0117) & 0.6526 (0.007) & 0.6017 (0.0077) \\ \hline \end{tabular} \end{table} \begin{table}[ht!] \footnotesize \centering \caption{Results for cal500 data set (all labels).} \label{call500all} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & 0.7676 (0.0037) & 0 (0) & 0.3339 (0.008) & 0.2719 (0.0045) & 0.2937 (0.0033) \\ CC GR & {\bf 0.7825} (0.0042) & 0 (0) & 0.3431 (0.0109) & {\bf 0.3017} (0.0087) & {\bf 0.3146} (0.0082) \\ CC PREIGBON GR & 0.7773 (0.0074) & 0 (0) & {\bf 0.3454} (0.0131) & 0.295 (0.0088) & 0.3109 (0.0048) \\ \hline \end{tabular} \end{table} \begin{table}[ht!] \footnotesize \centering \caption{Results for mediamill data set (all labels).} \label{mediamillall} \begin{tabular}{llllll} \hline method & Hamming & Subset accuracy & Recall & Precision & F-measure \\ \hline BR & {\bf 0.9441} (0.0046) & 0 (0) & 0.4538 (0.0045) & {\bf 0.3749} (0.0404) & {\bf 0.3837} (0.0205) \\ CC GR & 0.8989 (0.0026) & 0 (0) & 0.4632 (0.0097) & 0.2175 (0.0063) & 0.2769 (0.0043) \\ CC PREIGBON GR & 0.9118 (0.0068) & {\bf 0.0028} (0.0016) & {\bf 0.4887} (0.0128) & 0.2744 (0.0276) & 0.3191 (0.0182) \\ \hline \end{tabular} \end{table} \section{Conclusions} \label{Conclusions} In this paper we studied the large sample properties of the logistic classifier chain (LCC) model. We found conditions under which the estimated joint mode is consistent. In particular, it follows from Theorem \ref{Theorem1} that the number of labels should be relatively small compared with the number of training examples. Secondly, we have shown that when the ordering in which the models are built is incorrect, the consecutive probabilities in the chain are approximated by logistic models as closely as possible with respect to the Kullback-Leibler divergence. We have shown that the ordering of labels may significantly affect the probability of correct joint mode selection. The proposed procedure for ordering the labels, based on measures of correct specification, performs well in practice, which is confirmed by experiments on artificial and real datasets.
\section*{Acknowledgements} Research of Pawe{\l} Teisseyre was supported by the European Union from resources of the European Social Fund within project 'Information technologies: research and their interdisciplinary applications' POKL.04.01.01-00-051/10-00. \newpage \section{Proofs} \label{Proofs} \subsection{Proof of Theorem \ref{Theorem1}} \label{Proof of Theorem1} We present the proof for $K_n\to\infty$; the proof for constant $K_n$ goes along the same lines, but requires some simple modifications. For simplicity, we will write $l_k$, $s_k$ and $H_k$ instead of $l_{y_k,\field{z}_k}$, $s_{y_k,\field{z}_k}$, $H_{y_k,\field{z}_k}$, respectively. We will show that $P[\hat{\y}(\x)\neq \y^{*}(\x)]\to 0$, as $n\to\infty$. Recall that $\field{z}_k=(\field{x}',\field{y}_{1:(k-1)}')'$. Using Lemmas \ref{Lemma1} and \ref{Lemma2} we have \begin{eqnarray} \label{T1_eq1} && P[\hat{\y}(\x)\neq \y^{*}(\x)]\leq P\left[\max_{\field{y}\in\{0,1\}^{K_n}}|\hat{P}(\field{y}|\field{x})-P(\field{y}|\field{x})|>\frac{\epsilon}{2}\right]\leq \cr && P\left[\max_{\field{y}\in\{0,1\}^{K_n}}\sum_{k=1}^{K_n}\left|\sigma(\field{z}_k'\hat{\theta}_k)^{y_k}[1-\sigma(\field{z}_k'\hat{\theta}_k)]^{1-y_k}-\sigma(\field{z}_k'\theta_k)^{y_k}[1-\sigma(\field{z}_k'\theta_k)]^{1-y_k}\right|>\frac{\epsilon}{2}\right]\leq \cr && P\left[\max_{\field{y}\in\{0,1\}^{K_n}}\max_{1\leq k\leq K_n}\left|\sigma(\field{z}_k'\hat{\theta}_k)^{y_k}[1-\sigma(\field{z}_k'\hat{\theta}_k)]^{1-y_k}-\sigma(\field{z}_k'\theta_k)^{y_k}[1-\sigma(\field{z}_k'\theta_k)]^{1-y_k}\right|>\frac{\epsilon}{2K_n}\right]\leq \cr && \sum_{k=1}^{K_n}P\left[\max_{\field{y}\in\{0,1\}^{K_n}}\left|\sigma(\field{z}_k'\hat{\theta}_k)-\sigma(\field{z}_k'\theta_k)\right|>\frac{\epsilon}{2K_n}\right]. \end{eqnarray} Using the mean value theorem and the Schwarz inequality, the probability under the above sum can be bounded by \begin{eqnarray*} && P\left[\max_{\field{y}\in\{0,1\}^{K_n}}|\field{z}_k'\hat{\theta}_k-\field{z}_k'\theta_k|>\frac{\epsilon}{2K_n}\right]\leq P\left[||\hat{\theta}_k-\theta_k||>\frac{\epsilon}{4K_{n}^{3/2}}\right], \cr \end{eqnarray*} where the last inequality follows from the fact that $||\field{z}_k||^2\leq 4K_n$ for sufficiently large $n$ (as the first $p$ coordinates of $\field{z}_k$ correspond to the fixed point $\field{x}$). Denote $\epsilon_n:=\frac{\epsilon}{4K_{n}^{3/2}}$. Define $\theta_{k,u}:=\theta_k+u\epsilon_n/2$, for $u\in R^p$ such that $||u||=1$. Using the Taylor expansion and the concavity of $l_k$, we have, for some point $\theta_k^{*}$ between $\theta_{k,u}$ and $\theta_k$, \begin{eqnarray} \label{T1_eq3} && P\left[||\hat{\theta}_k-\theta_k||>\epsilon_n\right]\leq P\left[\exists u: ||u||=1, l_{k}(\theta_{k,u})-l_{k}(\theta_{k})>0 \right]= \cr && P\left[\exists u: ||u||=1, u's_{k}(\theta_k)>\epsilon_n u'H_{k}(\theta_k^{*})u/4 \right]. \end{eqnarray} Using the Schwarz inequality we have \begin{eqnarray} \label{T1_eq4} && \max_{1\leq k\leq K_n}\max_{1\leq i\leq n}|\field{z}^{(i)'}(\theta_k-\theta^{*}_k)|^2\leq \max_{1\leq i\leq n}||\field{z}^{(i)}_{K_n}||^{2}\epsilon_n^2\leq \max_{1\leq i\leq n}||\field{x}^{(i)}_{K_n}||^{2}\epsilon_n^2+K_n\epsilon_n^2\xrightarrow{P} 0, \end{eqnarray} where the last convergence follows from Lemma \ref{Lemma6}, the Markov inequality and assumption 2.
In view of (\ref{T1_eq4}), the assumption of Lemma \ref{Lemma4} is satisfied (with probability tending to one) for all $1\leq k\leq K_n$ and thus the probability in (\ref{T1_eq3}) can be bounded from above by \begin{eqnarray} \label{T1_eq5} && P\left[\exists u: ||u||=1, u's_{k}(\theta_k)>\epsilon_n c u'H_{k}(\theta_k)u/4\right]\leq \cr && P\left[||s_{k}(\theta_k)||>\epsilon_n c \lambda_{\min}[H_{k}(\theta_k)]/4\right]\leq P\left[||s_{k}(\theta_k)||>\epsilon_n c c_1 n/4 \right], \end{eqnarray} where the second inequality follows from assumption 4. Let $r_n:=p+K_n-1$. Using Lemma \ref{Lemma5} and the fact that, for some constant $c_2$ and with probability tending to $1$, we have \begin{equation*} tr(Z_k'Z_k)\leq r_n c_2 n, \end{equation*} the probability in (\ref{T1_eq5}) can be further bounded by \begin{eqnarray} \label{T1_eq6} && P\left[||s_k(\theta_k)||^2>\frac{c^2 c_1^2 \epsilon_n^2 n^2 r_n c_2}{16r_n c_2}\right]\leq P\left[||s_k(\theta_k)||^2>\frac{c^2 c_1^2}{16c_2}\frac{\epsilon_n^2 n}{r_n}tr(Z_k'Z_k)\right]\leq \cr && \exp\left[-\frac{1}{20}\frac{c^2 c_1^2}{16 c_2}\frac{\epsilon_n^2 n}{r_n}\right]. \end{eqnarray} It follows from (\ref{T1_eq6}) and assumption 3 that the probability in (\ref{T1_eq1}) converges to zero, which ends the proof. \qed \subsection{Proof of Corollary \ref{Remark1}} \label{Proof of Remark1} The probability in Corollary \ref{Remark1} can be bounded from above as \begin{eqnarray*} && P(||\hat{{\mathbf\theta}}-{\mathbf\theta}||^2>\epsilon K_n^{-2})= P\left[\sum_{k=1}^{K_n}||\hat{\theta}_k-\theta_k||^2>\epsilon K_n^{-2}\right]\leq P(\max_{1\leq k \leq K_n}||\hat{\theta}_k-\theta_k||^2>\epsilon K_n^{-3})\leq \cr && \sum_{k=1}^{K_n}P(||\hat{\theta}_k-\theta_k||>\sqrt{\epsilon} K_n^{-3/2})\to 0, \end{eqnarray*} where the convergence to zero follows from the proof of Theorem \ref{Theorem1} (see (\ref{T1_eq3})-(\ref{T1_eq6})). \qed \subsection{Proof of Theorem \ref{Theorem2}} \label{Proof of Theorem2} Consider first the term $D_{\gamma=0}$. It follows from Lemma 4.2.7 and Remark 4.2.3 in \cite{Teisseyre2013} that the assumptions of Theorem \ref{Theorem2} imply the assumptions of Theorem 1 in \cite{Fahrmeir1987}, which states that $D_{\gamma=0}\xrightarrow{d} \chi^2_{1}$, where $\xrightarrow{d}$ denotes convergence in distribution and $\chi^2_{1}$ is a random variable distributed according to the chi-squared distribution with $1$ degree of freedom. Consider now the term $D_{\gamma\neq 0}$. It follows from Proposition 4.2.10 in \cite{Teisseyre2013} that, under the assumptions of Theorem \ref{Theorem2}, $P[D_{\gamma\neq 0}>w_n]\to 1$, as $n\to\infty$, for some sequence $w_n\to\infty$. This implies the assertion. \qed \subsection{Auxiliary facts} \begin{Lemma} \label{Lemma1} Assume that $P(\field{y}^{*}(\field{x})|\field{x})>P(\field{y}|\field{x})+\epsilon$, for all $\field{y}\neq \field{y}^{*}(\field{x})$ and some $\epsilon>0$. Then $\hat{\y}(\x)\neq\y^{*}(\x)$ implies that \begin{equation*} \max_{\field{y}\in\{0,1\}^{K_n}}|\hat{P}(\field{y}|\field{x})-P(\field{y}|\field{x})|>\frac{\epsilon}{2}. \end{equation*} \end{Lemma} \begin{proof} If $|P(\y^{*}(\x)|\field{x})-\hat{P}(\y^{*}(\x)|\field{x})|>\frac{\epsilon}{2}$, then the assertion holds. Consider the case $|P(\y^{*}(\x)|\field{x})-\hat{P}(\y^{*}(\x)|\field{x})|\leq\frac{\epsilon}{2}$. Then we have $\hat{P}(\hat{\y}(\x)|\field{x})\geq \hat{P}(\y^{*}(\x)|\field{x})\geq P(\y^{*}(\x)|\field{x})-\frac{\epsilon}{2}>P(\hat{\y}(\x)|\field{x})+\frac{\epsilon}{2}$, where the last inequality follows from the assumption; this implies $|P(\hat{\y}(\x)|\field{x})-\hat{P}(\hat{\y}(\x)|\field{x})|>\frac{\epsilon}{2}$. This ends the proof. \qed \end{proof} \begin{Lemma} \label{Lemma2} Let $|a_k|\leq 1$, $|b_k|\leq 1$, for $k=1,\ldots,K_n$.
Then \begin{equation*} \left|\prod_{k=1}^{K_n}a_k-\prod_{k=1}^{K_n}b_k\right|\leq \sum_{k=1}^{K_n}|a_k-b_k|. \end{equation*} \end{Lemma} \begin{proof} Using the following inequality \begin{eqnarray*} && \left|\prod_{k=1}^{K_n}a_k-\prod_{k=1}^{K_n}b_k\right|= \left|\prod_{k=1}^{K_n}a_k-b_{K_n}\prod_{k=1}^{K_n-1}a_k+b_{K_n}\prod_{k=1}^{K_n-1}a_k-\prod_{k=1}^{K_n}b_k\right|= \cr && \left|(a_{K_n}-b_{K_n})\prod_{k=1}^{K_n-1}a_k+b_{K_n}\left(\prod_{k=1}^{K_n-1}a_k-\prod_{k=1}^{K_n-1}b_k\right)\right|\leq |a_{K_n}-b_{K_n}|+\left|\prod_{k=1}^{K_n-1}a_k-\prod_{k=1}^{K_n-1}b_k\right| \end{eqnarray*} and induction, the assertion follows. \qed \end{proof} Lemmas \ref{Lemma4} and \ref{Lemma6} below are proved in \cite{MielniczukTeisseyre2015}. \begin{Lemma} \label{Lemma4} If $\max_{1\leq i\leq n}|\field{z}^{(i)'}(\gamma-\theta)|\leq c$ then for any vector $u\in R^p$ we have \begin{equation*} \exp(-3c)u'H_k(\theta)u\leq u'H_k(\gamma)u \leq \exp(3c)u'H_k(\theta)u. \end{equation*} \end{Lemma} \begin{Lemma} \label{Lemma6} The convergence $\max_{1\leq i\leq n}||\field{x}^{(i)}||\epsilon_n\xrightarrow{P} 0$ is equivalent to $||\field{x}^{(n)}||\epsilon_n\xrightarrow{P} 0$ for a non-increasing sequence $\epsilon_n\to 0$. \end{Lemma}
1,108,101,565,386
arxiv
\section{Introduction} \subsection{} This paper is intended to supplement the two earlier papers \cite{BW13, BW18} of the first two authors on canonical bases arising from quantum symmetric pairs (QSP) and applications to super Kazhdan-Lusztig theory; it extends some principal results therein to proper generalities. \subsection{} As a generalization of the works of Lusztig and Kashiwara, a theory of canonical bases arising from a QSP $(\bold{U}, {\bold{U}^{\imath}})$ of finite type is systematically developed in \cite{BW18}. For any finite-dimensional based $\bold{U}$-module $(M, \bold{B})$ \`a la Lusztig \cite[\S27]{Lu94}, we constructed in \cite[Theorem 5.7]{BW18} a new bar involution $\psi_{\imath}$ on $M$ and a $\psi_{\imath}$-invariant basis $\bold{B}^\imath$ of $M$ such that $(M, \bold{B}^\imath)$ forms a based ${\bold{U}^{\imath}}$-module. Examples of such based $\bold{U}$-modules (and hence based ${\bold{U}^{\imath}}$-modules) include any finite-dimensional simple $\bold{U}$-module or a tensor product of several such simple $\bold{U}$-modules. By definition of a QSP $(\bold{U}, {\bold{U}^{\imath}})$, ${\bold{U}^{\imath}}$ is a coideal subalgebra of $\bold{U}$ \cite{Le99}. Hence $M\otimes N$ is a ${\bold{U}^{\imath}}$-module for any ${\bold{U}^{\imath}}$-module $M$ and $\bold{U}$-module $N$. In the first main theorem (see Theorem \ref{thm:1}) we show that, for a based ${\bold{U}^{\imath}}$-module $(M, \bold{B}^\imath)$ and a based $\bold{U}$-module $(N, \bold{B})$, there exists a $\psi_{\imath}$-invariant basis $\bold{B}^\imath \diamondsuit_{\imath} \bold{B}$ on the ${\bold{U}^{\imath}}$-module $M\otimes N$ such that $(M\otimes N, \bold{B}^\imath \diamondsuit_{\imath} \bold{B})$ is a based ${\bold{U}^{\imath}}$-module. The construction of the new bar involution $\psi_{\imath}$ on $M\otimes N$ above uses a certain element $\Theta^\imath$ in a completion of ${\bold{U}^{\imath}}\otimes \bold{U}^+$, which was due to \cite{BW13} for quasi-split QSP of type AIII/IV and then established by Kolb \cite{Ko17} in full generality with an elegant new proof. The integrality of $\Theta^\imath$ relies on the integrality of the quasi-$\mathcal R$ matrix in \cite{Lu94} and the integrality of the quasi-$\mathcal K$ matrix in \cite{BW18}. To construct the $\imath$-canonical basis on $M\otimes N$, we use a partial order different from the one used in \cite{BW18}, even when $M$ is a $\bold{U}$-module. Another simple but useful observation for our purpose is that $\Theta^\imath$ has leading term $1\otimes 1$. \subsection{} For the quasi-split QSP $(\bold{U}, {\bold{U}^{\imath}})$ of type AIII/AIV, the quantum group $\bold{U}$ is of type A; we let $\mathbb{V}$ and $\mathbb{W}$ denote the natural representation of $\bold{U}$ and its dual. In this case, the $\imath$-canonical basis on a based $\bold{U}$-module was first constructed in \cite{BW13} for a certain parameter value $\kappa=1$ (also see \cite{Bao17} for the parameter $\kappa=0$). The super Kazhdan-Lusztig theory for the {\em full} BGG category $\mathcal O_{\bf b}$ of modules of integer or half-integer weights over an ortho-symplectic Lie superalgebra $\mathfrak g$ of type B was formulated in \cite{BW13} (for type D see \cite{Bao17}) via the $\imath$-canonical basis on a mixed tensor $\bold{U}$-module with $m$ copies of $\mathbb{V}$ and $n$ copies of $\mathbb{W}$, where the order of the tensor product depends on the choice of a Borel subalgebra $\bf b$ in $\mathfrak g$.
As a consequence, a Kazhdan-Lusztig theory for parabolic categories $\mathcal O^{\mathfrak l}_{\bf b}$, where the Levi subalgebra $\mathfrak l$ in $\mathfrak g$ is a product of Lie algebras of type A, can be formulated and established via the $\imath$-canonical basis on a tensor product $\bold{U}$-module $\mathbb T$ of various exterior powers of $\mathbb{V}$ and of $\mathbb{W}$. Note, however, that not all parabolic categories of $\mathfrak g$-modules arise in this way. Theorem \ref{thm:1}, when specialized to the QSP $(\bold{U}, {\bold{U}^{\imath}})$ of quasi-split type AIII/AIV, provides an $\imath$-canonical basis on the tensor product ${\bold{U}^{\imath}}$-module of the form $\wedge^{a} \mathbb{V}_- \otimes \mathbb T$. Here $\mathbb T$ is a tensor product $\bold{U}$-module of various exterior powers of $\mathbb{V}$ and of $\mathbb{W}$, while $\wedge^{a} \mathbb{V}_-$ (for $a>0$) is a ``type $B$'' exterior power, which is a simple ${\bold{U}^{\imath}}$-module but not a $\bold{U}$-module. These $\imath$-canonical bases are used to formulate the super Kazhdan-Lusztig theory for an {\em arbitrary} parabolic BGG category $\mathcal O$ of the ortho-symplectic Lie superalgebras of type B and D; see Theorem~\ref{thm:2}. \subsection{} This paper is organized as follows. Theorem~\ref{thm:1} and its proof are presented in Section~\ref{sec:based}, and we shall follow notations in \cite{BW18} throughout Section~\ref{sec:based}. The formulation of Theorem~\ref{thm:2} is given in Section~\ref{sec:KL}; its proof basically follows the proof for the Kazhdan-Lusztig theory for the full category $\mathcal O$ in \cite[Part 2]{BW13} once we have Theorem~\ref{thm:1} available to us. We shall follow notations in \cite{BW13} throughout Section~\ref{sec:KL}. To avoid much repetition, we refer precisely and freely to the two earlier papers \cite{BW13, BW18}. \vspace{.3cm} {\bf Acknowledgement.} The research of WW is partially supported by an NSF grant DMS-1702254. The research of HW is supported by JSPS KAKENHI grant 17J00172. \section{Tensor product modules as based ${\bold{U}^{\imath}}$-modules} \label{sec:based} \subsection{} We shall follow the notations in \cite{BW18} throughout this section. Let $\bold{U}$ denote a quantum group of finite type over the field $\mathbb{Q}(q)$ associated to a root datum of type $(\mathbb{I}, \cdot)$, and let $\Delta$ denote its comultiplication as in \cite{Lu94}. We denote the bar involution on $\bold{U}$ or its based module by $\psi$. Let ${\bold{U}^{\imath}} \subset \bold{U}$ be a coideal subalgebra associated to a Satake diagram such that $(\bold{U},{\bold{U}^{\imath}})$ forms a quantum symmetric pair \cite{Le99}. Let $\mathcal{A} := {\mathbb Z}[q,q^{-1}]$. Let $\dot{\bold{U}}^{\imath}$ be the modified version of ${\bold{U}^{\imath}}$ and ${}_\mA\dot{\bold{U}}^{\imath}$ its $\mathcal{A}$-form; see \cite[\S3.7]{BW18}. Let $\psi_{\imath}$ be the bar involution on ${\bold{U}^{\imath}}$, $\dot{\bold{U}}^{\imath}$ and ${}_\mA\dot{\bold{U}}^{\imath}$. Let $X_\imath$ be the $\imath$-weight lattice \cite[(3.3)]{BW18}. A weight (i.e., $X_\imath$-weight) module of ${\bold{U}^{\imath}}$ can be naturally regarded as a $\dot{\bold{U}}^{\imath}$-module. We introduce based ${\bold{U}^{\imath}}$-modules generalizing \cite[\S27.1.2]{Lu94}. Let $\mathbf{A} = \mathbb{Q}[[q^{-1}]] \cap \Q(q)$. We write $-\otimes- = - \otimes_{\Q(q)} -$ whenever the base ring is $\Q(q)$.
\begin{definition}\label{ad:def:1} Let $M$ be a finite-dimensional weight ${\bold{U}^{\imath}}$-module over $\Q(q)$ with a given $\Q(q)$-basis $\bold{B}^\imath$. The pair $(M, \bold{B}^\imath)$ is called a based ${\bold{U}^{\imath}}$-module if the following conditions are satisfied: \begin{enumerate} \item $\bold{B}^\imath \cap M_{\nu}$ is a basis of $M_{\nu}$, for any $\nu \in X_\imath$; \item The $\mathcal{A}$-submodule ${}_{\mathcal{A}}M$ generated by $\bold{B}^\imath$ is stable under ${}_{\mathcal{A}}\dot{\bold{U}}^{\imath}$; \item The $\mathbb{Q}$-linear involution $\psi_{\imath}: M \rightarrow M$ defined by $\psi_{\imath}(q)= q^{-1}, \psi_{\imath}(b) = b$ for all $b \in \bold{B}^\imath$ is compatible with the $\dot{\bold{U}}^{\imath}$-action, i.e., $\psi_{\imath}(um) = \psi_{\imath}(u) \psi_{\imath}(m)$, for all $u\in \dot{\bold{U}}^{\imath}, m\in M$; \item Let $L(M)$ be the $\mathbf{A}$-submodule of $M$ generated by $\bold{B}^\imath$. Then the image of $\bold{B}^\imath$ in $L(M)/ q^{-1}L(M)$ forms a $\mathbb{Q}$-basis in $L(M)/ q^{-1}L(M)$. \end{enumerate} \end{definition} We shall denote by $\mathcal L(M)$ the ${\mathbb Z}[q^{-1}]$-span of $\bold{B}^\imath$; then $\bold{B}^\imath$ forms a ${\mathbb Z}[q^{-1}]$-basis for $\mathcal L(M)$. (There are similar constructions, in similar notations, for a based $\bold{U}$-module.) \subsection{} \label{subsec:Up} Let $\Upsilon =\sum_\mu \Upsilon_\mu$ (with $\Upsilon_0=1$ and $\Upsilon_\mu \in \bold{U}^+_\mu$) be the intertwiner (also called quasi-$\mathcal K$ matrix) of the quantum symmetric pair $(\bold{U}, {\bold{U}^{\imath}})$ introduced in \cite[Theorem 2.10]{BW13} (for full generality, see \cite[Theorem 6.10]{BK18} and \cite[Theorem 4.8, Remark 4.9]{BW18}). It follows from \cite[Theorem~5.7]{BW18} (also cf. \cite[Theorem 4.25]{BW13}) that a based $\bold{U}$-module $(M,\bold{B})$ becomes a based ${\bold{U}^{\imath}}$-module with a new basis $\bold{B}^\imath$ (which is uni-triangular relative to $\bold{B}$) with respect to the involution $\psi_{\imath} := \Upsilon \circ \psi$. Let $\widehat{\bold{U} \otimes \bold{U}}$ be the completion of the $\Q(q)$-vector space $\bold{U} \otimes \bold{U}$ with respect to the descending sequence of subspaces \[ \bold{U} \otimes \bold{U}^- \bold{U}^0 \big(\sum_{\text{ht}(\mu) \geq N}\bold{U}_{\mu}^+ \big) + \bold{U}^+ \bold{U}^0 \big(\sum_{\text{ht}(\mu) \geq N}\bold{U}_{\mu}^- \big) \otimes \bold{U}, \text{ for }N \ge 1, \mu \in {\mathbb Z}\mathbb{I}. \] We have the obvious embedding of $\bold{U} \otimes \bold{U}$ into $\widehat{\bold{U} \otimes \bold{U}}$. By continuity the $\mathbb{Q}(q)$-algebra structure on $\bold{U} \otimes \bold{U}$ extends to a $\mathbb{Q}(q)$-algebra structure on $ \widehat{\bold{U} \otimes \bold{U}}$. We know the quasi-$\mathcal R$ matrix $\Theta$ lies in $\widehat{\bold{U} \otimes \bold{U}}$ by \cite[Theorem~4.1.2]{Lu94}. It follows from \cite[Theorem~4.8]{BW18} and \cite[Theorem~6.10]{BK18} that $\Upsilon^{-1} \otimes \text{id}$ and $\Delta(\Upsilon)$ are both in $\widehat{\bold{U} \otimes \bold{U}}$. We define \begin{equation}\label{ad:eq:1} \Theta^\imath = \Delta (\Upsilon) \cdot \Theta \cdot (\Upsilon^{-1} \otimes \text{id}) \in \widehat{\bold{U} \otimes \bold{U}}. \end{equation} We can write \begin{equation} \label{eq:Thetamu} \Theta^\imath = \sum_{\mu \in {\mathbb N}\mathbb{I}}\Theta^\imath_{\mu}, \qquad \text{ where } \Theta^\imath_{\mu} \in \bold{U} \otimes \bold{U}^+_\mu. 
\end{equation} The following result first appeared in \cite[Proposition~3.5]{BW13} for the quantum symmetric pairs of (quasi-split) type AIII/AIV. \begin{lem}\cite[Proposition~3.6]{Ko17}\label{ad:lem:1} We have $\Theta^\imath_{\mu} \in {\bold{U}^{\imath}} \otimes \bold{U}^+_\mu$, for all $\mu$. \end{lem} Another basic ingredient which we shall need is the integrality property of $\Theta^\imath$. \begin{lem} \label{lem:Z} We have $\Theta^\imath_{\mu} \in {}_\mA{\bold{U}} \otimes_{\mathcal{A}} {}_\mA{\bold{U}}^+_\mu$, for all $\mu$. \end{lem} \begin{proof} By a result of Lusztig \cite[24.1.6]{Lu94}, we have that $\Theta =\sum_{\nu \in{\mathbb N}\mathbb{I}} \Theta_\nu$ is integral, i.e., $\Theta_\nu \in {}_\mA{\bold{U}}^-_\nu \otimes_{\mathcal{A}} {}_\mA{\bold{U}}^+_\nu$. By \cite[Theorem~5.3]{BW18} we have that $\Upsilon=\sum_{\mu \in{\mathbb N}\mathbb{I}} \Upsilon_\mu$ is integral, i.e., $\Upsilon_\mu \in {}_\mA{\bold{U}}^+_\mu$ for each $\mu$; it follows that $\Upsilon^{-1} = \psi(\Upsilon)$ is integral too, thanks to \cite[Corollary~4.11]{BW18}. It is well known that the comultiplication $\Delta$ preserves the $\mathcal{A}$-form, i.e., $\Delta ({}_\mA{\bold{U}}) \subset {}_\mA{\bold{U}} \otimes_{\mathcal{A}} {}_\mA{\bold{U}}$. The lemma now follows from the definition of $\Theta^\imath$ in \eqref{ad:eq:1}. \end{proof} \subsection{} Define a partial order $<$ on $X$ by setting $\mu'<\mu$ if $\mu' - \mu \in {\mathbb N} \mathbb{I}\setminus\{0\}$. We write $|b|=\mu$ if an element $b$ of a $\bold{U}$-module is of weight $\mu$. Now we are ready to prove the first main result of this paper. \begin{thm}\label{thm:1} Let $(M, \bold{B}^\imath)$ be a based ${\bold{U}^{\imath}}$-module and $(N, \bold{B})$ be a based $\bold{U}$-module. \begin{enumerate} \item For $b_1 \in \bold{B}^\imath, b_2\in \bold{B}$, there exists a unique element $b_1\diamondsuit_\imath b_2$ which is $\psi_{\imath}$-invariant such that $b_1\diamondsuit_\imath b_2\in b_1\otimes b_2 +q^{-1}{\mathbb Z}[q^{-1}] \bold{B}^\imath \otimes \bold{B}$. \item We have $b_1\diamondsuit_\imath b_2 \in b_1\otimes b_2 +\sum\limits_{(b'_1,b'_2) \in \bold{B}^\imath \times \bold{B}, |b_2'| < |b_2|} q^{-1}{\mathbb Z}[q^{-1}] \, b_1' \otimes b_2'$. \item $\bold{B}^\imath \diamondsuit_\imath \bold{B} :=\{b_1\diamondsuit_\imath b_2 \mid b_1 \in \bold{B}^\imath, b_2\in \bold{B} \}$ forms a $\Q(q)$-basis for $M \otimes N$, an $\mathcal{A}$-basis for ${}_\mathcal{A} M \otimes_{\mathcal{A}} {}_\mathcal{A} N$, and a ${\mathbb Z}[q^{-1}]$-basis for $\mathcal L(M) \otimes_{{\mathbb Z}[q^{-1}]} \mathcal L(N)$. (This is called {\em the $\imath$-canonical basis} for $M\otimes N$.) \item $(M\otimes N, \bold{B}^\imath \diamondsuit_\imath \bold{B})$ is a based ${\bold{U}^{\imath}}$-module. \end{enumerate} \end{thm} \begin{proof} It follows from Lemma~\ref{ad:lem:1} that the element $\Theta^\imath$ gives rise to a well-defined operator on the tensor product $M \otimes N$. Following \cite[(3.17)]{BW13}, we define a new bar involution on $M\otimes N$ (still denoted by $\psi_{\imath}$) by letting \[ \psi_{\imath} := \Theta^\imath \circ (\psi_{\imath} \otimes \psi) : M \otimes N \longrightarrow M \otimes N. \] Recall from \cite{Lu94} that $\Delta(E_i) = E_i \otimes 1 + \widetilde{K}_i \otimes E_i$. It follows that \[ \Delta (\Upsilon) \in \Upsilon \otimes 1 + \sum_{0\neq \mu\in {\mathbb N}\mathbb{I}} \bold{U}\otimes \bold{U}^+_\mu. 
\] Recalling \eqref{eq:Thetamu}, we have \begin{equation}\label{ad:eq:theta} \Theta^\imath_0 = (\Upsilon \otimes 1) \cdot (1 \otimes 1) \cdot (\Upsilon^{-1} \otimes 1)=1 \otimes 1. \end{equation} Let $b_1 \in \bold{B}^\imath$ and $b_2 \in \bold{B}$. By \eqref{ad:eq:theta} and Lemma~\ref{lem:Z}, we have \begin{equation} \label{ad:eq:order} \psi_{\imath} (b_1 \otimes b_2) \in b_1 \otimes b_2 + \sum_{\substack{(b'_1,b'_2) \in \bold{B}^\imath \times \bold{B} \\ |b_2'| < |b_2|}} \mathcal{A} \, b'_1 \otimes b'_2. \end{equation} Applying \cite[Lemma~24.2.1]{Lu94}, there exists a $\psi_{\imath}$-invariant element $b_1 \diamondsuit_\imath b_2 \in M \otimes N$ such that \[ b_1 \diamondsuit_\imath b_2 \in b_1 \otimes b_2 + \sum_{\substack{(b'_1,b'_2) \in \bold{B}^\imath \times \bold{B} \\ |b_2'| < |b_2|}} q^{-1} {\mathbb Z}[q^{-1}]\, b'_1 \otimes b'_2. \] This proves (2), and Part (3) follows immediately. A by now standard argument shows the uniqueness of $b_1 \diamondsuit_\imath b_2$ as stated in (1); note that a weaker condition than (2) is used in (1). It remains to see that $(M\otimes N, \bold{B}^\imath \diamondsuit_\imath \bold{B})$ is a based ${\bold{U}^{\imath}}$-module. Item (3) in Definition~\ref{ad:def:1} is proved in the same way as \cite[Proposition~3.13]{BW13}, while the remaining items are clear. This completes the proof. \end{proof} \begin{rem} \begin{enumerate} \item An elementary but key new ingredient in Theorem~\ref{thm:1} above is the use of a (coarser) partial order $<$, which is different from the partial order $<_{\imath}$ used in \cite[(5.2)]{BW18}. \item Assume that $M$ is a based $\bold{U}$-module. Then the $\imath$-canonical basis for $M\otimes N$ in Theorem~\ref{thm:1} coincides with the one in \cite[Theorem 5.7]{BW18}, thanks to the uniqueness in Theorem~\ref{thm:1}(1). \end{enumerate} \end{rem} \begin{rem} \label{rem:general} Theorem~\ref{thm:1} would be valid whenever we can establish the (weaker) integrality of $\Theta^\imath$ acting on $M\otimes N$. This might occur when we consider more general parameters for ${\bold{U}^{\imath}}$ than in \cite{BW18} or when we consider quantum symmetric pairs of Kac-Moody type in a forthcoming work of the first two authors. \end{rem} \section{Applications to super Kazhdan-Lusztig theory} \label{sec:KL} \subsection{} In this section, we shall apply Theorem~\ref{thm:1} to formulate and establish the (super) Kazhdan-Lusztig theory for an arbitrary parabolic category $\mathcal{O}$ of modules of integer or half-integer weights for ortho-symplectic Lie superalgebras, generalizing \cite[Part~2]{BW13} (also see \cite{Bao17}). We shall present only the details for an arbitrary parabolic category $\mathcal{O}$ consisting of modules of integer weights for the Lie superalgebra $\mathfrak{osp}(2m+1\vert 2n)$. \subsection{} All relevant notations throughout this section shall be consistent with \cite[Part 2]{BW13}. In particular, we use a comultiplication for $\bold{U}$ different from \cite{Lu94}; this leads to a version of the intertwiner $\Upsilon =\sum_\mu \Upsilon_\mu$ with $\Upsilon_\mu \in \widehat{\bold{U}}^-$ (compare with \S\ref{subsec:Up}), and a version of Theorem~\ref{thm:1} in which the opposite partial order and the lattice ${\mathbb Z}[q]$ are used. To further match notations with \cite{BW13} in this section, we denote the $\mathcal{A}$-form of any based $\bold{U}$- or ${\bold{U}^{\imath}}$-module $M$ as $M_{\mathcal{A}}$ (instead of ${}_\mathcal{A} M$).
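For orientation, we spell out the form that Theorem~\ref{thm:1} takes under these modified conventions; this is only a restatement, not a new assertion. For $b_1 \in \bold{B}^\imath$ and $b_2\in \bold{B}$, there exists a unique $\psi_{\imath}$-invariant element \[ b_1\diamondsuit_\imath b_2 \in b_1\otimes b_2 +\sum_{(b'_1,b'_2) \in \bold{B}^\imath \times \bold{B},\; |b_2|-|b_2'| \in {\mathbb N}\mathbb{I}\setminus\{0\}} q\,{\mathbb Z}[q] \, b'_1 \otimes b'_2, \] where the weight now drops in the second tensor factor because $\Upsilon_\mu \in \widehat{\bold{U}}^-$.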
We consider the infinite-rank quantum symmetric pair $(\bold{U}, {\bold{U}^{\imath}})$ as defined in \cite[Section~8]{BW13} (where the parameter is chosen to be $\kappa=1$ in the notation of \cite{BW18}). It is a direct limit of quantum symmetric pairs of type AIII, $(\bold{U}(\mathfrak{sl}_N), {\bold{U}^{\imath}}(\mathfrak{sl}_N) )$, for $N$ even. We denote by $\mathbb{V}$ the natural representation of $\bold{U}$, and by $\mathbb{W}$ the restricted dual of $\mathbb{V}$. Associated to any given $0^m1^n$-sequence ${\bf b} = (b_1,\ldots,b_{m+n})$ starting with $0$, we have a fundamental system of $\mathfrak{osp}(2m+1\vert 2n)$, denoted by $\Pi_{\bf{b}} = \{ -\epsilon_1^{b_1}, \epsilon_i^{b_i} - \epsilon_{i+1}^{b_{i+1}} \mid 1 \leq i \leq m+n-1 \}$; here $\epsilon_i^0 = \epsilon_x$ for some $1 \leq x \leq m$ and $\epsilon_j^1 = \epsilon_{\bar{y}}$ for some $1 \leq y \leq n$ so that $\epsilon_1^{b_1},\ldots,\epsilon_{m+n}^{b_{m+n}}$ is a permutation of $\{\epsilon_a, \epsilon_{\bar{b}} \mid 1\le a \le m, 1\le b \le n\}$. \subsection{} Let $W_{B_{s}}$ and $W_{A_{s-1}}$ be the Weyl groups of type $B_{s}$ and type $A_{s-1}$ with unit $e$, respectively. We denote their corresponding Hecke algebras by $\mathcal{H}_{B_{s}}$ and $\mathcal{H}_{A_{s-1}}$, with Kazhdan-Lusztig bases $\{\underline{H_{w}} \vert w \in W_{B_s}\}$ and $\{\underline{H_{w}} \vert w \in W_{A_{s-1}}\}$, respectively. Both algebras act naturally on the right on $\mathbb{V}^{\otimes s}$ and $\mathbb{W}^{\otimes s}$; cf. \cite[Section 5]{BW13}. We define \begin{align*} \wedge^{s} \mathbb{V}_- &= \mathbb{V}^{\otimes s} \Big / \sum_{e \neq w \in W_{B_{s}}}\mathbb{V}^{\otimes s} \cdot \underline{H_w},\\ \wedge^{s} \mathbb{V}&= \mathbb{V}^{\otimes s} \Big / \sum_{e \neq w \in W_{A_{s-1}}} \mathbb{V}^{\otimes s} \cdot \underline{H_w}. \end{align*} We similarly define $\wedge^{s} \mathbb{W}_-$ and $\wedge^{s} \mathbb{W}$. Note $\wedge^{s} \mathbb{V}_-, \wedge^{s} \mathbb{V}, \wedge^{s} \mathbb{W}_-$ and $\wedge^{s} \mathbb{W}$ are all based ${\bold{U}^{\imath}}$-modules by \cite[Theorem~5.8]{BW13}. We shall denote $$ \mathbb{V}^{c} := \begin{cases} \mathbb{V} \quad & \text{ if } c = 0, \\ \mathbb{W} \quad & \text{ if } c = 1. \end{cases} $$ The following corollary is a direct consequence of Theorem~\ref{thm:1}. \begin{cor}\label{cor:1} Let $c_1,\ldots,c_k \in \{ 0,1 \}$ and $a_0, a_1,\ldots,a_k \in {\mathbb N}$. Then (a suitable completion of) the tensor product \[ \mathbb{T}^{\bf{b}, \mathfrak{l}} = \wedge^{a_0} \mathbb{V}_- \otimes \wedge^{a_1} \mathbb{V}^{c_1} \otimes \cdots \otimes \wedge^{a_k} \mathbb{V}^{c_k} \] is a based ${\bold{U}^{\imath}}$-module. \end{cor} The completion above arises since we deal with quantum symmetric pairs of infinite rank, and it is a straightforward generalization of the $B$-completion studied in \cite[Section~9]{BW13}. Note that the $\imath$-canonical basis lives in $\mathbb{T}^{\bf{b}, \mathfrak{l}}$ (instead of its completion) by Theorem~\ref{thm:positivity} below. \subsection{} Associated to the fundamental system $\Pi_{\bf{b}}$ are the root system $\Phi_{\mathbf{b}}$, the set of positive roots $\Phi^+_{\mathbf{b}}$, and the Borel subalgebra $\mathfrak b_{\bf b}$ of $\mathfrak{osp}(2m+1 \vert 2n)$. Let $\Pi_{\mathfrak{l}} \subset \Pi_{\bf{b}}$ be a subset of even simple roots.
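To fix ideas, consider the smallest case $m=n=1$ with ${\bf b}=(0,1)$: then $\Pi_{\bf b}=\{ -\epsilon_1, \epsilon_1-\epsilon_{\bar{1}} \}$, where $-\epsilon_1$ is even and $\epsilon_1-\epsilon_{\bar{1}}$ is odd (isotropic), so the admissible choices are $\Pi_{\mathfrak l}=\emptyset$ or $\Pi_{\mathfrak l}=\{ -\epsilon_1 \}$.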
We introduce the corresponding Levi subalgebra $\mathfrak l$ and parabolic subalgebra $\mathfrak p$ of $\mathfrak{osp}(2m+1 \vert 2n)$: \begin{align*} \mathfrak{l} = \mathfrak h_{m|n} \oplus \bigoplus_{\alpha \in \mathbb{Z}\Pi_{\mathfrak{l}} \cap \Phi_{\mathbf{b}}} \mathfrak{osp}(2m+1\vert 2n)_{\alpha}, \qquad\quad \mathfrak{p} = \mathfrak{l} + \mathfrak b_{\bf b}. \end{align*} Recall from \cite[\S7]{BW13} the weight lattice $X(m|n) = \sum^{m}_{i=1}{\mathbb Z}\epsilon_{i} + \sum^n_{j=1}{\mathbb Z}\epsilon_{\overline j}.$ We denote \[ X_{\mathbf{b}}^{\mathfrak l,+} = \{\lambda \in X(m|n) \mid (\lambda \vert \alpha) \ge 0, \forall \alpha \in \Pi_{\mathfrak{l}}\}. \] Let $L_{\mathfrak{l}}(\lambda)$ be the irreducible $\mathfrak{l}$-module with highest weight $\lambda$, which is extended trivially to a $\mathfrak{p}$-module. We form the parabolic Verma module \[ M^{\mathfrak{l}}_{{\bf b}}(\lambda) :=\text{Ind}^{\mathfrak{osp}(2m+1|2n)}_{\mathfrak{p}}L_{\mathfrak{l}}(\lambda). \] \begin{definition} Let $\mathcal{O}^{\mathfrak{l}}_{\mathbf{b}}$ be the category of $\mathfrak{osp}(2m+1\vert 2n)$-modules $M$ such that \begin{itemize} \item[(i)] $M$ admits a weight space decomposition $M=\bigoplus\limits_{\mu \in X(m|n)}M_\mu$, and $\dim M_\mu<\infty$; \item[(ii)] $M$ decomposes over $\mathfrak{l}$ into a direct sum of modules $L_\mathfrak{l}(\lambda)$ with $\lambda \in X_{\mathbf{b}}^{\mathfrak l,+}$; \item[(iii)] there exist finitely many weights ${}^1\lambda,{}^2\lambda,\ldots,{}^k\lambda\in X_{\mathbf{b}}^{\mathfrak l,+}$ (depending on $M$) such that if $\mu$ is a weight in $M$, then $\mu\in{{}^i\lambda}-\sum_{\alpha\in{\Pi_{{\bf b}}}}{\mathbb N}\alpha$, for some $i$. \end{itemize} The morphisms in $\mathcal{O}^{\mathfrak{l}}_{\mathbf{b}}$ are all (not necessarily even) homomorphisms of $\mathfrak{osp}(2m+1|2n)$-modules. \end{definition} For $\lambda \in X_{\mathbf{b}}^{\mathfrak l,+}$, we shall denote by $L^{\mathfrak{l}}_{{\bf b}}(\lambda)$ the simple quotient of the parabolic Verma module $M^{\mathfrak{l}}_{{\bf b}}(\lambda)$ in $\mathcal{O}^{\mathfrak{l}}_{\mathbf{b}}$ with highest weight $\lambda$. Following \cite[Definition~7.4]{BW13}, we can define the tilting modules $T^{\mathfrak{l}}_{{\bf b}}(\lambda)$ in $\mathcal{O}^{\mathfrak{l}}_{\mathbf{b}}$, for $\lambda \in X_{\mathbf{b}}^{\mathfrak l,+}$. We denote by $\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}$ the full subcategory of $\mathcal{O}^{\mathfrak{l}}_{\mathbf{b}}$ generated by all modules possessing finite parabolic Verma flags. \subsection{} Recall the bijection $X(m \vert n) \leftrightarrow I^{m+n}$ \cite[\S8.4]{BW13}, where an element $f \in I^{m+n}$ is understood as a $\rho$-shifted weight. We consider the restriction $X_{\mathbf{b}}^{\mathfrak l,+} \leftrightarrow I_{\mathfrak{l}, +}^{m+n}$, where the index set $I_{\mathfrak{l}, +}^{m+n}$ is defined as the image under the bijection. Let $W_{\mathfrak{l}}$ be the Weyl group of $\mathfrak{l}$ with the corresponding Hecke algebra $\mathcal{H}_{\mathfrak{l}}$. Recall that $\Pi_{\mathfrak{l}} \subset \Pi_{\bf{b}}$ is a subset of even simple roots. Hence we have the natural right action of $\mathcal{H}_\mathfrak{l}$ on the $\mathcal{A}$-module $\mathbb{T}_{\mathcal{A}}^{\bf{b}} := \mathbb{V}_{\mathcal{A}}^{b_1} \otimes_{\mathcal{A}} \cdots \otimes_{\mathcal{A}} \mathbb{V}_{\mathcal{A}}^{b_{m+n}}$ with a standard basis $M^{\bf{b}}_{f} \in \mathbb{T}^{\bf{b}}_{\mathcal{A}}$, for $f \in I^{m+n}$; cf. \cite[\S8.2]{BW13}.
We define \[ \mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}} = \mathbb{T}^{\bf{b}}_{\mathcal{A}} \Big / \sum_{e \neq w \in W_{\mathfrak{l}}} \mathbb{T}_{\mathcal{A}}^{\bf{b}} \cdot \underline{H_w}. \] The quotient space is an $\mathcal{A}$-form $\mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}}$ of the $\Q(q)$-space $\mathbb{T}^{\bf{b}, \mathfrak{l}}$ appearing in Corollary~\ref{cor:1}: \begin{equation} \label{eq:T-} \mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}} = \wedge^{a_0} \mathbb{V}_{-,\mathcal{A}} \otimes_{\mathcal{A}} \wedge^{a_1} \mathbb{V}_\mathcal{A}^{c_1} \otimes_{\mathcal{A}} \cdots \otimes_{\mathcal{A}} \wedge^{a_k} \mathbb{V}_\mathcal{A}^{c_k}, \qquad \text{ for } c_i \in \{0, 1\}, \quad a_i \in \mathbb{N}, \end{equation} where $c_i$ and $a_i$ are determined as follows. Let $W'$ denote a subgroup of the Weyl group of $\mathfrak{osp}(2m+1 \vert 2n)$, $W' = W_{B_{m}} \times S_{n} = \langle s_0,s_1,\ldots,s_{m-1}, s_{m+1}, \ldots, s_{m+n-1} \rangle$, where $s_i =s_{\alpha_i}$, and $\alpha_0 = -\epsilon_1^{b_1}$, $\alpha_i = \epsilon_{i}^{b_i} - \epsilon_{i+1}^{b_{i+1}}$ for $1 \leq i \leq m+n-1$. Then, $W_{\mathfrak{l}}$ is the parabolic subgroup of $W'$ generated by $\{ s_i \mid \alpha_i \in \Pi_{\mathfrak{l}} \}$. Let us write $\{ 0,1,\ldots,m+n \} \setminus \{ i \mid \alpha_i \in \Pi_{\mathfrak{l}} \} = \{ j_1 < j_2 <\cdots < j_{k+1} \}$. Then $a_i = j_{i+1} - j_{i}$ for $0 \le i \le k$ and $c_i = b_{j_i + 1}$ for $1 \le i \le k$, where it is understood that $j_0 = 0$. For any standard basis element $M^{\bf{b}}_{f} \in \mathbb{T}^{\bf{b}}_{\mathcal{A}}$ with $f \in I_{\mathfrak{l}, +}^{m+n}$, we denote by $M^{\bf{b}, \mathfrak{l}}_{f}$ its image in $\mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}}$. Then $\{M^{\bf{b}, \mathfrak{l}}_{f} \vert f \in I_{\mathfrak{l}, +}^{m+n} \}$ forms an $\mathcal{A}$-basis of $\mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}}$. Let \[ \mathbb{T}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}} = \mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}} \otimes_{\mathcal{A}} {\mathbb Z} \] be the specialization of $\mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}}$ at $q=1$. Let $\widehat{\mathbb{T}}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}}$ be the $B$-completion of $\mathbb{T}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}}$ following \cite[Section~9]{BW13}. It follows from Corollary~\ref{cor:1} that the space $\widehat{\mathbb{T}}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}}$ admits the $\imath$-canonical basis $\{T^{\bf{b}, \mathfrak{l}}_f\vert f \in I_{\mathfrak{l}, +}^{m+n}\}$. We can similarly define the dual $\imath$-canonical basis $\{L^{\bf{b}, \mathfrak{l}}_f \vert f \in I_{\mathfrak{l}, +}^{m+n}\}$ of $\widehat{\mathbb{T}}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}}$ following \cite[Theorem~9.9]{BW13}. \subsection{} We denote by $[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]$ the Grothendieck group of the category $\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}$. We have the following isomorphism of ${\mathbb Z}$-modules: \begin{align*} \Psi:[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]&\longrightarrow \mathbb{T}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}}\\ [M^{\mathfrak{l}}_{\bf{b}}(\lambda)]&\mapsto M^{{\bf b},{\mathfrak{l}}}_{f^{{\bf b}}_{\lambda}}(1), \quad \quad \text{ for } \lambda \in X_{\mathbf{b}}^{\mathfrak l,+}.
\end{align*} We define $[[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]]$ as the completion of $[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]$ such that the extension of $\Psi$, \[ \Psi: [[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]] \longrightarrow \widehat{\mathbb{T}}_{{\mathbb Z}}^{\bf{b}, \mathfrak{l}}, \] is an isomorphism of ${\mathbb Z}$-modules. The following proposition is a reformulation of the Kazhdan-Lusztig theory for the parabolic category $\mathcal{O}$ of the Lie algebra $\mathfrak{so}(2m+1)$ (the theorems of Brylinski--Kashiwara and Beilinson--Bernstein). \begin{prop} Let ${\bf b} = (0^m)$ (that is, $n=0$). The isomorphism $\Psi: [[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]] \longrightarrow {\mathbb{T}}_{{\mathbb Z}}^{\bf{b}, \mathfrak{l}}$ sends \[ \Psi([L^{\mathfrak{l}}_{{\bf b}}(\lambda)]) = L^{{\bf b},{\mathfrak{l}}}_{f^{{\bf b}}_{\lambda}}(1), \quad \quad \quad \Psi([T^{\mathfrak{l}}_{{\bf b}}(\lambda)]) = T^{{\bf b},{\mathfrak{l}}}_{f^{{\bf b}}_{\lambda}}(1), \quad \quad \text{ for } \lambda \in X_{\mathbf{b}}^{\mathfrak l,+}. \] \end{prop} Note $\widehat{\mathbb{T}}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}} = {\mathbb{T}}^{\bf{b}, \mathfrak{l}}_{{\mathbb Z}}$ in this case, i.e., no completion is needed. \begin{proof} Thanks to \cite[Theorem~5.8]{BW13}, the $\imath$-canonical basis on $\mathbb{T}^{\bf{b}}$ can be identified with the Kazhdan-Lusztig basis (of type B) on $\mathbb{T}^{\bf{b}}$. Note by \cite[Theorem~5.4]{BW13} that $\mathbb S^{\mathfrak l} := \sum_{e\neq w \in W_{\mathfrak{l}}} \mathbb{T}^{\bf{b}} \cdot \underline{H_w}$ is a ${\bold{U}^{\imath}}$-submodule of $\mathbb{T}^{\bf{b}}$, and it is actually a based ${\bold{U}^{\imath}}$-submodule of $\mathbb{T}^{\bf{b}}$ with its Kazhdan-Lusztig basis. Therefore the $\imath$-canonical basis on $\mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}}$ in Theorem~\ref{thm:1} can be identified with the basis in the based quotient $\mathbb{T}^{\bf{b}} / \mathbb S^{\mathfrak l}$, which is exactly the parabolic Kazhdan-Lusztig basis. The proposition now follows from the classical Kazhdan-Lusztig theory (cf. \cite{BW13}). \end{proof} Now we can formulate the super Kazhdan-Lusztig theory for $\mathcal{O}^{\mathfrak{l}}_{\bf b}$. \begin{thm} \label{thm:2} The isomorphism $\Psi: [[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]] \longrightarrow \widehat{\mathbb{T}}_{{\mathbb Z}}^{\bf{b}, \mathfrak{l}}$ sends \[ \Psi([L^{\mathfrak{l}}_{{\bf b}}(\lambda)]) = L^{{\bf b},{\mathfrak{l}}}_{f^{{\bf b}}_{\lambda}}(1), \quad \quad \quad \Psi([T^{\mathfrak{l}}_{{\bf b}}(\lambda)]) = T^{{\bf b},{\mathfrak{l}}}_{f^{{\bf b}}_{\lambda}}(1), \quad \quad \text{ for } \lambda \in X_{\mathbf{b}}^{\mathfrak l,+}. \] \end{thm} \begin{proof} Let us briefly explain the idea of the proof from \cite{BW13}. The crucial new ingredient of this paper (cf. Remark~\ref{rem:Ui} below) is the existence of the $\imath$-canonical basis and dual $\imath$-canonical basis on $\widehat{\mathbb{T}}^{\bf{b}, \mathfrak{l}}$ thanks to Theorem~\ref{thm:1}. Here the dual $\imath$-canonical basis refers to a version of the canonical basis where the lattice ${\mathbb Z}[q]$ is replaced by ${\mathbb Z}[q^{-1}]$; see \cite{BW13}. We have already established the version of the theorem for the full category $\mathcal{O}$ of the Lie superalgebra $\mathfrak{osp}(2m+1 \vert 2n)$ in \cite[Theorem~11.13]{BW13}.
We have the following commutative diagram of ${\mathbb Z}$-modules: \[ \xymatrix{[[\mathcal{O}^{\mathfrak{l}, \Delta}_{\mathbf{b}}]] \ar[r] \ar[d]&\widehat{\mathbb{T}}_{{\mathbb Z}}^{\bf{b}, \mathfrak{l}} \ar[d] \\ [[\mathcal{O}^{\Delta}_{\mathbf{b}}]] \ar[r]& \widehat{\mathbb{T}}_{{\mathbb Z}}^{\bf{b}} } \] (Note that the vertical arrow on the right is not a based embedding of ${\bold{U}^{\imath}}$-modules.) Then the theorem follows from a comparison of characters entirely similar to that in \cite[\S11.2]{BW13}. Note that this comparison uses only the classical Kazhdan-Lusztig theory. \end{proof} \begin{rem} \label{rem:Ui} In the case of the full category $\mathcal O$ (i.e., $\mathfrak l$ is the Cartan subalgebra), the theorem goes back to \cite[Theorem 11.13]{BW13}. Following \cite[Remark 11.16]{BW13}, the Kazhdan-Lusztig theory for the parabolic category $\mathcal{O}^{\mathfrak{l}}_{\bf b}$ with $\alpha_0 \notin \Pi_{\mathfrak l}$ was a direct consequence of \cite[Theorem 11.13]{BW13}, via the $\imath$-canonical basis in \cite[Theorem~4.25]{BW13} in the ${}_\mA{\bold{U}}$-module $ \mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}}$ in \eqref{eq:T-} with $a_0=0$. When $a_0> 0$ (which corresponds to the condition $\alpha_0 \in \Pi_{\mathfrak l}$ on the Levi $\mathfrak l$), the space $ \mathbb{T}^{\bf{b}, \mathfrak{l}}_{\mathcal{A}}$ in \eqref{eq:T-} is a ${}_\mathcal{A}{\bold{U}^{\imath}}$-module but not a ${}_\mA{\bold{U}}$-module, and hence Theorem~\ref{thm:1} is needed. \end{rem} Denote \[ T^{{\bf b}, \mathfrak{l}}_f = M^{{\bf b}, \mathfrak{l}}_f + \sum_{g} t^{{\bf b}, \mathfrak{l}}_{gf}(q) M^{{\bf b}, \mathfrak{l}}_g, \qquad \text{ for } t^{{\bf b}, \mathfrak{l}}_{gf}(q) \in {\mathbb Z}[q]. \] The following positivity and finiteness results generalize \cite[Theorem 9.11]{BW13} and follow by the same proof. \begin{thm} \label{thm:positivity} {\quad} \begin{enumerate} \item We have $t^{{\bf b}, \mathfrak{l}}_{gf}(q) \in {\mathbb N} [q]$. \item The sum $T^{{\bf b}, \mathfrak{l}}_f = M_f^{{\bf b}, \mathfrak{l}} + \sum_{g}t^{{\bf b}, \mathfrak{l}}_{gf}(q)M^{{\bf b}, \mathfrak{l}}_g$ is finite, for all $f$. \end{enumerate} \end{thm} \begin{rem} \label{rem:halfZ} To formulate a super Kazhdan-Lusztig theory for the parabolic category $\mathcal{O}$ consisting of modules of half-integer weights for $\mathfrak{osp}(2m+1\vert 2n)$, we use the quantum symmetric pair $(\bold{U}, {\bold{U}^{\imath}})$ which is a direct limit of $(\bold{U}(\mathfrak{sl}_N), {\bold{U}^{\imath}}(\mathfrak{sl}_N) )$ for $N$ odd; cf. \cite[Sections 6, 12]{BW13}. Theorem~\ref{thm:2} holds again in this setting. \end{rem} \begin{rem} Following \cite{Bao17}, a simple conceptual modification allows us to formulate a super (type D) Kazhdan-Lusztig theory for the parabolic category $\mathcal{O}$ consisting of modules of integer (respectively, half-integer) weights for $\mathfrak{osp}(2m\vert 2n)$. To that end, we use the $\imath$-canonical basis of the module \eqref{eq:T-} for the quantum symmetric pair $(\bold{U}, {\bold{U}^{\imath}})$, where the parameter is now chosen to be $\kappa=0$ in the notation of \cite{BW18}. Theorem~\ref{thm:2} holds again in this setting, where the cases new to this paper correspond to the cases $a_0 > 0$. \end{rem}
\section{Introduction} Recent advances in measure theory have stimulated the resurgence of Maharam's classical work \cite{ma42}, leading to the formulation of the ``saturation'' of measure spaces. The notion of saturation is rooted in the study of Loeb spaces. A refinement of this notion appeared in \cite{fk02,fr12,hk84} and the current definition is formulated in \cite{ks09}. Saturated measure spaces possess an essentially uncountably generated $\sigma$-\hspace{0pt}algebra and saturation is a strengthened notion of nonatomicity. As a consequence, saturation remedies a well-known failure of the Lyapunov convexity theorem in infinite-dimensional Banach spaces and guarantees its validity, even in nonseparable locally convex spaces; see \cite{gp13,ks13,ks15,ks16a}. Earlier remedies for the failure of the Lyapunov convexity theorem in infinite dimensions were given in \cite{su92,su97} for the case of Banach spaces with the Radon--Nikodym property in the setting of nonatomic Loeb spaces. The recovery of the Lyapunov convexity theorem in infinite dimensions is undoubtedly useful, especially in variational analysis. It is noteworthy that saturation is not only sufficient, but also necessary for the Lyapunov convexity theorem in separable Banach spaces, as shown in \cite{ks13,ks15,ks16a}. Furthermore, it is also necessary and sufficient for the convexity of the Bochner and the Gelfand integrals of a multifunction taking values in separable Banach spaces and their dual spaces (see \cite{po08,sy08}), and for Fatou's lemma in the Bochner and the Gelfand integral settings; see \cite{ks14a,kss16}. In this sense, saturation is the best possible structure on measure spaces for dealing with vector measures taking values in infinite-dimensional vector spaces. Based on these findings, in \cite{ks14b} the interplay of the Lyapunov convexity theorem and the bang-bang principle in separable Banach spaces was established in the Bochner integral setting, and the relaxation and purification techniques for nonconvex variational problems were given a full-fledged treatment under the saturation hypothesis. See also \cite{ls06,ls09,po09} for earlier results on the purification principle. This work details a further step toward the equivalence results on saturation along the lines of the aforementioned literature. The purpose of the paper is twofold. First, we formulate the bang-bang and purification principles in dual spaces of a separable Banach space with Gelfand integrals and provide a complete characterization of the saturation property of finite measure spaces. To this end, we make the best use of ``relaxed controls'' developed in \cite{mc67,wa72,yo69}. In particular, we provide the equivalence of saturation and the existence of solutions to nonconvex variational problems with Gelfand integral constraints. This is a novel aspect not pursued in the author's previous work \cite{ks14b}, and we refer to it as the ``minimization principle'' for saturation. Second, we present an application of the relaxation technique to large economies with infinite-dimensional commodity spaces, where the space of agents is modeled as a finite measure space following \cite{au64,au66}. We propose a ``relaxation'' of large economies along the lines of \cite{ba08,ks16b}, which is regarded as a reasonable convexification of the original economy. We introduce the notion of relaxed Pareto optimality and derive the existence of Pareto optimal allocations of the original economy under the saturation hypothesis.
The relaxation and purification techniques enable us to prove the existence of Pareto optimal allocations without convexity assumptions. In the following section, we provide a brief overview of Gelfand integrals for functions and multifunctions taking values in dual spaces of a Banach space and derive the compactness property of the set of Gelfand integrable selectors of a multifunction. Thereafter, the paper proceeds to address the two purposes described above. \section{Preliminaries} \subsection{Gelfand Integrals of Functions} Let $(T,\Sigma,\mu)$ be a finite measure space and $E$ be a real Banach space with the dual system $\langle E,E^* \rangle$, where $E^*$ is the norm dual of $E$. A function $f:T\to E^*$ is \textit{weakly$^*\!$ scalarly measurable} if the scalar function $\langle f(\cdot),x \rangle$ on $T$ is measurable for every $x\in E$. We say that weakly$^*\!$ scalarly measurable functions $f$ and $g$ are \textit{weakly$^*\!$ scalarly equivalent} if $\langle f(t)-g(t),x \rangle=0$ for every $x\in E$ a.e.\ $t\in T$ (the exceptional $\mu$-\hspace{0pt}null set may depend on $x$). Denote by $\mathrm{Borel}(E^*,\mathit{w}^*)$ the Borel $\sigma$-\hspace{0pt}algebra of $E^*$ generated by the weak$^*\!$ topology. If $E$ is a separable Banach space, then $E^*$ is separable with respect to the weak$^*\!$ topology (see \cite[Lemma I.3.4 of Part II]{sc73}) and it is a locally convex Suslin space under the weak$^*\!$ topology; see \cite[p.\,67]{th75}. Hence, under the separability of $E$, a function $f:T\to E^*$ is weakly$^*\!$ scalarly measurable if and only if it is measurable with respect to $\mathrm{Borel}(E^*,\mathit{w}^*)$; see \cite[Theorem 1]{th75}. We say that a weakly$^*\!$ scalarly measurable function $f:T\to E^*$ is \textit{weakly$^*\!$ scalarly integrable} if the scalar function $\langle f(\cdot),x \rangle$ is integrable for every $x\in E$. A weakly$^*\!$ scalarly measurable function $f$ is \textit{Gelfand integrable} over $A\in \Sigma$ if there exists $x^*_A\in E^*$ such that $$ \langle x^*_A,x \rangle=\int_A\langle f(t),x \rangle d\mu \quad\text{for every $x\in E$}. $$ The element $x^*_A$ is called the \textit{Gelfand integral} (or the \textit{weak$^*\!$ integral}) of $f$ over $A$ and denoted by $\int_Afd\mu$. Every weakly$^*\!$ scalarly integrable function is Gelfand integrable; see \cite[Theorem 11.52]{ab06}. Denote by $G^1(\mu,E^*)$ (abbreviated to $G^1_{E^*}$) the space of equivalence classes of Gelfand integrable functions with respect to weak$^*\!$ scalar equivalence, normed by $$ \| f \|_{\mathit{G}^1}=\sup_{x\in B_E}\int_T|\langle f(t),x \rangle |d\mu, $$ where $B_E$ is the closed unit ball in $E$. This norm is called the \textit{Gelfand norm}, whereas the normed space $(G^1(\mu,E^*), \|\cdot \|_{\mathit{G}^1})$, in general, is not complete. \subsection{The Topology of Pointwise Convergence on $G^1_{E^*}$} Let $L^\infty(\mu)$ be the space of $\mu$-essentially bounded measurable functions on $T$ with the essential sup norm. Denote by $B(L^\infty(\mu)\times E)$ the space of bilinear forms on the product space $L^\infty(\mu)\times E$. For each pair $(\varphi,x)\in L^\infty(\mu)\times E$, the linear functional $\varphi\otimes x$ on $B(L^\infty(\mu)\times E)$ is defined by $(\varphi\otimes x)(M)=M(\varphi,x)$, $M\in B(L^\infty(\mu)\times E)$. The \textit{tensor product} $L^\infty(\mu)\otimes E$ of $L^\infty(\mu)$ and $E$ is the subspace of the algebraic dual of $B(L^\infty(\mu)\times E)$ spanned by these elements $\varphi\otimes x$.
Thus, a typical \textit{tensor} $f^*$ in $L^\infty(\mu)\otimes E$ has a (not necessarily unique) representation $f^*=\sum_{i=1}^n\varphi_i\otimes x_i$ with $\varphi_i\in L^\infty(\mu)$, $x_i\in E$, $i=1,\dots,n$. A bilinear form on $G^1(\mu,E^*)\times (L^\infty(\mu)\otimes E)$ is given by $$ \langle f^*,f \rangle=\sum_{i=1}^n\int_T\varphi_i(t)\langle f(t),x_i \rangle d\mu=\sum_{i=1}^n\left\langle \int_T\varphi_i(t)f(t)d\mu,x_i \right\rangle $$ for $f\in G^1(\mu,E^*)$ and $f^*=\sum_{i=1}^n\varphi_i\otimes x_i\in L^\infty(\mu)\otimes E$, where $\int\varphi_ifd\mu$ denotes the Gelfand integral of $\varphi_if\in G^1(\mu,E^*)$. The pair of these spaces $\langle G^1(\mu,E^*),L^\infty(\mu)\otimes E \rangle$ equipped with this bilinear form is a dual system. The coarsest topology on $G^1(\mu,E^*)$ such that the linear functional $f\mapsto \langle f^*,f \rangle$ is continuous for every $f^*\in L^\infty(\mu)\otimes E$, denoted by $\sigma(G^1_{E^*},L^\infty\otimes E)$, is the \textit{topology of pointwise convergence} on $L^\infty(\mu)\otimes E$, generated by the family of seminorms $\{ p_{f^*}\mid f^*\in L^\infty\otimes E \}$, where $p_{f^*}(f)=|\langle f^*,f \rangle|$, $f\in G^1(\mu,E^*)$. Thus, $G^1(\mu,E^*)$ endowed with the $\sigma(G^1_{E^*},L^\infty\otimes E)$-\hspace{0pt}topology is a locally convex space. Let $L^1(\mu)$ be the space of $\mu$-integrable functions with the $L^1$-norm. A net $\{ f_\alpha \}$ in $G^1(\mu,E^*)$ converges to $f\in G^1(\mu,E^*)$ for the $\sigma(G^1_{E^*},L^\infty\otimes E)$-\hspace{0pt}topology if and only if for every $x\in E$ the net $\{ \langle f_\alpha(\cdot),x \rangle \}$ in $L^1(\mu)$ converges weakly to $\langle f(\cdot),x \rangle \in L^1(\mu)$. It is evident that the $\sigma(G^1_{E^*},L^\infty\otimes E)$-\hspace{0pt}topology is coarser than the weak topology $\sigma(G^1_{E^*},(G^1_{E^*})^*)$. \subsection{Gelfand Integrals of Multifunctions} \label{subsec} Let $\Gamma:T\twoheadrightarrow E^*$ be a multifunction. (By \textit{multifunction} we always mean a set-valued mapping with nonempty values.) Denote by $\overline{\mathrm{co}}^{\mathit{\,w}^*}\Gamma:T\twoheadrightarrow E^*$ the multifunction defined by the weakly$^*\!$ closed convex hull of $\Gamma(t)$ and by $\mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma:T\twoheadrightarrow E^*$ the multifunction defined by the extreme points of $\overline{\mathrm{co}}^{\mathit{\,w}^*}\Gamma(t)$. Let $s(\cdot, C):E\to \mathbb{R}\cup \{+\infty \}$ be the \textit{support function} of a set $C\subset E^*$ defined by $s(x, C)=\sup_{x^*\in C}\langle x^*,x \rangle$. A multifunction $\Gamma$ is \textit{weakly$^*\!$ scalarly measurable} if the scalar function $s(x,\Gamma):T\to \mathbb{R}\cup\{ +\infty \}$ is measurable for every $x\in E$; it is \textit{weakly$^*\!$ scalarly integrable} if $s(x, \Gamma)$ is integrable for every $x\in E$. It follows from $s(x,\Gamma)=s(x,\overline{\mathrm{co}}^{\mathit{\,w}^*}\Gamma)=s(x,\mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma)$ for every $x\in E$ that $\Gamma$ is weakly$^*\!$ scalarly measurable (resp.\ weakly$^*\!$ scalarly integrable) if and only if so is $\overline{\mathrm{co}}^{\mathit{\,w}^*}\Gamma$ (resp.\ $\mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma$). The \textit{graph} of $\Gamma$ is defined by the set $\mathrm{gph}\,\Gamma=\{ (t,x^*)\in T\times E^*\mid x^*\in \Gamma(t) \}$. A multifunction $\Gamma$ is \textit{integrably bounded} if there exists $\varphi\in L^1(\mu)$ such that $\sup_{x^*\in \Gamma(t)}\| x^* \|\le \varphi(t)$ for every $t\in T$.
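As a simple illustration of these notions, fix a nonnegative $\varphi\in L^1(\mu)$ and consider the multifunction $\Gamma(t)=\varphi(t)B_{E^*}$, where $B_{E^*}$ is the closed unit ball of $E^*$, a weakly$^*\!$ compact convex set by the Alaoglu theorem. By the definition of the dual norm, $s(x,\Gamma(t))=\varphi(t)\| x \|$ for every $x\in E$, so $\Gamma$ is weakly$^*\!$ scalarly integrable; it is also integrably bounded with $\varphi$ itself as the bound.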
If $\Gamma$ has weakly$^*\!$ compact convex values, then $\Gamma$ is weakly$^*\!$ scalarly measurable if and only if $\mathrm{gph}\,\Gamma\in \Sigma\otimes \mathrm{Borel}(E^*,\mathit{w}^*)$ whenever $(T,\Sigma,\mu)$ is complete and $E$ is separable; see \cite[Theorem III.30]{cv77}. A function $f:T\to E^*$ is a \textit{selector} of a multifunction $\Gamma$ if $f(t)\in \Gamma(t)$ a.e.\ $t\in T$. Denote by $\mathcal{S}^1_\Gamma$ the set of Gelfand integrable selectors of $\Gamma$. If $E$ is separable, then $E^*$ is Suslin with respect to the weak$^*\!$ topology, and hence, any multifunction $\Gamma$ with $\mathrm{gph}\,\Gamma\in \Sigma\otimes \mathrm{Borel}(E^*,\mathit{w}^*)$ admits a measurable selector whenever $(T,\Sigma,\mu)$ is complete; see \cite[Theorem III.22]{cv77}. Since any measurable selector from an integrably bounded multifunction $\Gamma$ is Gelfand integrable, $\mathcal{S}^1_\Gamma$ is nonempty for every integrably bounded multifunction $\Gamma$ with measurable graph whenever $E$ is separable. The Gelfand integral of $\Gamma$ is defined by $$ \int_T\Gamma(t)d\mu=\left\{ \int_Tf(t)d\mu \mid f\in \mathcal{S}^1_\Gamma \right\}. $$ For later use, we need the following result. \begin{lem} \label{lem1} Let $(T,\Sigma,\mu)$ be a complete finite measure space, $E$ be a separable Banach space, and $\Gamma:T\twoheadrightarrow E^*$ be an integrably bounded, weakly$^*\!$ closed convex-\hspace{0pt}valued multifunction with $\mathrm{gph}\,\Gamma\in \Sigma\otimes \mathrm{Borel}(E^*,\mathit{w}^*)$. Then $\mathcal{S}^{1}_\Gamma$ is nonempty, $\sigma(G^1_{E^*},L^\infty\otimes E)$-\hspace{0pt}compact, and convex. \end{lem} \begin{proof} The nonemptiness and convexity of $\mathcal{S}^1_\Gamma$ are obvious. Let $L^1(\mu)^E$ be the space of functions from $E$ to $L^1(\mu)$ endowed with the product topology $\tau_p$ induced by the weak topology of $L^1(\mu)$. This means that the convergence of a net $\{ v_\alpha \}$ in $L^1(\mu)^E$ is characterized by the pointwise convergence: $v_\alpha\to v$ in the $\tau_p$-\hspace{0pt}topology if and only if $v_\alpha(x) \to v(x)$ weakly in $L^1(\mu)$ for every $x\in E$. For each $f\in G^1(\mu,E^*)$ define $v_f\in L^1(\mu)^E$ by $v_f(x)=\langle f(\cdot),x \rangle$. Then the convex set ${\mathcal{V}}=\{ v_f\mid f\in \mathcal{S}^1_\Gamma \}$ is $\tau_p$-\hspace{0pt}compact in $L^1(\mu)^E$; see \cite[Proposition 2.3, Theorem 4.5, and Condition $(\alpha)$ of p.\,885]{ckr11}. Define $\Psi:{\mathcal{V}}\to G^1(\mu,E^*)$ by $\Psi v_f=f$. (Since $f\mapsto v_f$ is one-to-one, $\Psi$ is well-defined.) To demonstrate the continuity of $\Psi$, let $v_{f_\alpha}\to v_f$ in ${\mathcal{V}}$ for the $\tau_p$-\hspace{0pt}topology. Since $\langle f_\alpha(\cdot),x \rangle=v_{f_\alpha}(x)\to v_f(x)=\langle f(\cdot),x \rangle$ weakly in $L^1(\mu)$ for every $x\in E$, we have $\int \varphi(t)\langle (\Psi v_{f_\alpha})(t),x \rangle d\mu\to \int \varphi(t)\langle (\Psi v_f)(t),x \rangle d\mu$ for every $(\varphi,x)\in L^\infty(\mu)\times E$. Thus, $\Psi$ is continuous with respect to the $\tau_p$-\hspace{0pt}topology on $L^1(\mu)^E$ and the $\sigma(G^1_{E^*},L^\infty\otimes E)$-\hspace{0pt}topology on $G^1(\mu,E^*)$. Hence, $\Psi({\mathcal{V}})=\mathcal{S}^1_\Gamma$ is $\sigma(G^1_{E^*},L^\infty\otimes E)$-\hspace{0pt}compact. \end{proof} \section{Relaxation and Purification in Saturated Measure Spaces} \subsection{The Bang-Bang Principle} In the sequel, we always assume the completeness of $(T,\Sigma,\mu)$.
A finite measure space $(T,\Sigma,\mu)$ is said to be \textit{essentially countably generated} if its $\sigma$-\hspace{0pt}algebra can be generated by a countable number of subsets together with the null sets; $(T,\Sigma,\mu)$ is said to be \textit{essentially uncountably generated} whenever it is not essentially countably generated. Let $\Sigma_S=\{ A\cap S\mid A\in \Sigma \}$ be the $\sigma$-\hspace{0pt}algebra restricted to $S\in \Sigma$. Denote by $L^1_S(\mu)$ the space of $\mu$-integrable functions on the measurable space $(S,\Sigma_S)$ whose elements are restrictions of functions in $L^1(\mu)$ to $S$. An equivalence relation $\sim$ on $\Sigma$ is given by $A\sim B \Leftrightarrow \mu(A\triangle B)=0$, where $A\triangle B$ is the symmetric difference of $A$ and $B$ in $\Sigma$. The collection of equivalence classes is denoted by $\Sigma(\mu)=\Sigma/\sim$ and its generic element $\widehat{A}$ is the equivalence class of $A\in \Sigma$. We define the metric $\rho$ on $\Sigma(\mu)$ by $\rho(\widehat{A},\widehat{B})=\mu(A\triangle B)$. Then $(\Sigma(\mu),\rho)$ is a complete metric space (see \cite[Lemma 13.13]{ab06}) and $(\Sigma(\mu),\rho)$ is separable if and only if $L^1(\mu)$ is separable (see \cite[Lemma 13.14]{ab06}). The \textit{density} of $(\Sigma(\mu),\rho)$ is the smallest cardinal number of the form $|\mathcal{U}|$, where $\mathcal{U}$ is a dense subset of $\Sigma(\mu)$. \begin{dfn} A finite measure space $(T,\Sigma,\mu)$ is \textit{saturated} if $L^1_S(\mu)$ is nonseparable for every $S\in \Sigma$ with $\mu(S)>0$. We say that a finite measure space has the \textit{saturation property} if it is saturated. \end{dfn} Saturation implies nonatomicity and several equivalent definitions for saturation are known; see \cite{fk02,fr12,hk84,ks09}. One simple characterization of the saturation property is as follows. A finite measure space $(T,\Sigma,\mu)$ is saturated if and only if $(S,\Sigma_S,\mu)$ is essentially uncountably generated for every $S\in \Sigma$ with $\mu(S)>0$. The saturation of finite measure spaces is also synonymous with the uncountability of the density of $\Sigma_S(\mu)$ for every $S\in \Sigma$ with $\mu(S)>0$; see \cite[331Y(e)]{fr12}. An inceptive notion of saturation already appeared in \cite{ka44,ma42}. The significance of the saturation property lies in the fact that it is necessary and sufficient for the weak$^*\!$ compactness and the convexity of the Gelfand integral of a multifunction as well as the Lyapunov convexity theorem; see \cite{ks13,ks15,ks16a,po08,sy08}. We present here the relevant result from \cite[Theorems 3.3 and 3.6]{ks15} with a slight extension for later use. \begin{prop}[Lyapunov convexity theorem] \label{lyap1} Let $(T,\Sigma,\mu)$ be a finite measure space and $E$ be a sequentially complete, separable, locally convex space. If $(T,\Sigma,\mu)$ is saturated, then for every $\mu$-continuous vector measure $m:\Sigma\to E$, its range $m(\Sigma)$ is weakly compact and convex. Conversely, if every $\mu$-continuous vector measure $m:\Sigma\to E$ has a weakly compact convex range, then $(T,\Sigma,\mu)$ is saturated whenever $E$ is an infinite-\hspace{0pt}dimensional locally convex space such that some infinite-dimensional Banach space $X$ admits an injective continuous linear operator into $E$. \end{prop} \begin{proof} We show only the converse implication.
If $(T,\Sigma,\mu)$ is not saturated, then for every infinite-dimensional Banach space $X$ there is a $\mu$-continuous vector measure $n:\Sigma\to X$ such that its range $n(\Sigma)$ is not a weakly compact convex subset of $X$; see \cite[Lemma 4.1 and Theorem 4.2]{ks13}. Let $\Phi:X\to E$ be an injective continuous linear operator and define the $\mu$-continuous vector measure $m:\Sigma\to E$ by $m=\Phi\circ n$. By construction, $m$ does not possess a weakly compact convex range in $E$. \end{proof} \begin{cor} \label{lyap2} Let $(T,\Sigma,\mu)$ be a finite measure space and $E$ be a separable Banach space. If $(T,\Sigma,\mu)$ is saturated, then for every $\mu$-continuous vector measure $m:\Sigma\to E^*$, its range $m(\Sigma)$ is weakly$^*\!$ compact and convex. Conversely, if every $\mu$-continuous vector measure $m:\Sigma\to E^*$ has a weakly$^*\!$ compact convex range, then $(T,\Sigma,\mu)$ is saturated whenever $E$ is infinite dimensional. \end{cor} \begin{proof} Since $E^*$ is sequentially complete with respect to the weak$^*\!$ topology, which is consistent with the duality $\langle E^*,E \rangle$ (see \cite[Corollary 2.6.21]{me98}), the range $m(\Sigma)$ is weakly$^*\!$ compact and convex by Proposition \ref{lyap1}. For the converse implication, let $\imath_{E^*}: (E^*,\| \cdot \|)\to (E^*,\mathit{w}^*)$ be the identity map on $E^*$, which is an injective continuous linear operator. Therefore, $E^*$ satisfies the hypothesis in Proposition \ref{lyap1}. \end{proof} The following result is the \textit{bang-bang principle} in infinite dimensions, an analogue of \cite[Theorem 4.1]{ks14b} in the dual space setting. \begin{thm}[bang-bang principle] \label{BBP} Let $(T,\Sigma,\mu)$ be a saturated finite measure space, $E$ be a separable Banach space, and $\Gamma:T\twoheadrightarrow E^*$ be an integrably bounded, weakly$^*\!$ closed-valued multifunction with $\mathrm{gph}\,\Gamma\in \Sigma\otimes \mathrm{Borel}(E^*,\textit{w}^*)$. Then \begin{equation} \label{bbp} \int_T\Gamma(t)d\mu=\int_T\mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma(t)d\mu. \tag{BBP} \end{equation} \end{thm} \begin{proof} The saturation property guarantees that $\int\Gamma d\mu$ is weakly$^*\!$ compact and convex with equality $\int\Gamma d\mu=\int \overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma d\mu$ by \cite[Theorem 4]{po08} and \cite[Proposition 1]{sy08}. It thus suffices to show that $\int \overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma d\mu=\int \mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma d\mu$. Define the integration operator $I:G^1(\mu,E^*)\to E^*$ by $I(f)=\int fd\mu$. It is easy to see that $I$ is continuous in the $\sigma(G^1_{E^*},L^\infty\otimes E)$- and the weak$^*\!$ topologies. Take a point $x^*\in I(\mathcal{S}_{\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1)=\int \overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma d\mu$ arbitrarily. Since $\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma$ is integrably bounded and weakly$^*\!$ scalarly measurable, by Lemma \ref{lem1}, the set $I^{-1}(x^*)\cap \mathcal{S}_{\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1$ is a $\sigma(G^1_{E^*},L^\infty\otimes E)$-\hspace{0pt}compact, convex subset of $G^1(\mu,E^*)$, and hence, it has an extreme point $\hat{f}$ in view of the Krein--Milman theorem; see \cite[Corollary 7.68]{ab06}. It suffices to show that $\hat{f}\in \mathcal{S}_{\mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1$.
Since $\hat{f}$ is a measurable selector of $\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma$, the set $\{ t\in T\mid \hat{f}(t)\in \overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma(t)\setminus \mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma(t) \}$ is measurable; see \cite[p.\,108]{cv77}. Hence, if $\hat{f}\not\in \mathcal{S}_{\mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1$, then there exists $S\in \Sigma$ with $\mu(S)>0$ such that $\hat{f}(t)\not\in \mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma(t)$ for every $t\in S$. By the integrable boundedness of $\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma$ and \cite[Theorem IV.14]{cv77}, there exists $g\in G^1(\mu,E^*)$ such that $\hat{f}\pm g\in \mathcal{S}_{\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1$ and $g\ne 0$ on $S$. Define the vector measure $m:\Sigma\to E^*$ by $m(A)=\int_Agd\mu$ for $A\in \Sigma$. By Corollary \ref{lyap2}, the range of $m$ is weakly$^*\!$ compact and convex in $E^*$. Take $B\in \Sigma$ with $B\subset S$ and $m(B)=\frac{1}{2}m(S)$, and define $\hat{g}:T\to E^*$ by $$ \hat{g}(t)= \begin{cases} \hspace{0.35cm} g(t) & \text{if $t\in S\setminus B$}, \\ -g(t) & \text{if $t\in B$}, \\ \quad 0 & \text{otherwise}. \end{cases} $$ Then $\hat{f}=\frac{1}{2}(\hat{f}+\hat{g})+\frac{1}{2}(\hat{f}-\hat{g})$ and $\hat{f}\pm \hat{g}\in \mathcal{S}_{\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1$. A simple calculation yields $\int \hat{g}\,d\mu=0$, and hence, $\hat{f}\pm \hat{g}\in I^{-1}(x^*)\cap \mathcal{S}_{\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1$. This contradicts the fact that $\hat{f}$ is an extreme point of $I^{-1}(x^*)\cap \mathcal{S}_{\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma}^1$. \end{proof} Theorem \ref{BBP} means that the Gelfand integral of any Gelfand integrable selector $f$ from $\Gamma$ is realized as that of some Gelfand integrable selector $g$ from $\mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma$ in the sense that $\int fd\mu=\int gd\mu$. The saturation property of finite measure spaces guarantees the bang-bang principle for every integrably bounded, weakly$^*\!$ closed-valued multifunction with measurable graph whenever $E$ is separable. Furthermore, the converse of Theorem \ref{BBP} is also true. \begin{thm} \label{nec1} Let $(T,\Sigma,\mu)$ be a finite measure space and $E$ be an infinite-\hspace{0pt}dimensional separable Banach space. If \eqref{bbp} holds for every integrably bounded, weakly$^*\!$ closed-valued multifunction $\Gamma:T\twoheadrightarrow E^*$ with $\mathrm{gph}\,\Gamma\in \Sigma\otimes \mathrm{Borel}(E^*,\textit{w}^*)$, then $(T,\Sigma,\mu)$ is saturated. \end{thm} \begin{proof} Suppose, to the contrary, that $(T,\Sigma,\mu)$ is not saturated. Then there exists a Bochner (and hence Gelfand) integrable function $f\in G^1(\mu,E^*)$ such that the range of the indefinite Bochner integral $R_f=\{ \int_Afd\mu\in E^*\mid A\in \Sigma \}$ is not convex; see \cite[Lemma 4]{po08} or \cite[Remark 1(2)]{sy08}. Since the essential range of Bochner integrable functions is separable (see \cite[Theorem II.1.2]{du77}), we may assume that $f$ takes values in a separable subspace $V$ of $E^*$. Let $\Gamma_f:T\twoheadrightarrow V$ be a multifunction defined by $\Gamma_f(t)=\overline{\mathrm{co}}\{ 0,f(t) \}$, where the closed convex hull is taken with respect to the dual norm. Then $\Gamma_f$ is an integrably bounded, weakly$^*\!$ compact, convex-valued multifunction with $\mathrm{gph}\,\Gamma_f\in \Sigma\otimes \mathrm{Borel}(E^*,\textit{w}^*)$.
Since $\mathcal{S}^1_{\Gamma_f}$ is convex in $G^1(\mu,E^*)$, the Gelfand integral $\int \Gamma_f d\mu$ is convex in $E^*$. Moreover, since the Gelfand integrable selectors from $\Gamma_f$ precisely coincide with the Bochner integrable selectors, we have $\mathrm{ex}\,\Gamma_f(t)=\{ 0,f(t) \}$ and $\mathcal{S}^1_{\mathrm{ex}\,\Gamma_f}=\{ f\chi_A \mid A\in \Sigma \}$. Since $\overline{\mathrm{co}}\{ 0,f(t) \}=\overline{\mathrm{co}}^{\,\mathit{w}^*}\{ 0,f(t) \}$ (see \cite[Theorem 5.98]{ab06}), if $\Gamma_f$ satisfies \eqref{bbp}, then $\int \Gamma_f d\mu=\int \mathrm{ex}\,\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma_fd\mu=\int \mathrm{ex}\,\overline{\mathrm{co}}\,\Gamma_fd\mu=R_f$, an obvious contradiction to the convexity of $\int \Gamma_f d\mu$. \end{proof} \begin{rem} The equivalence of saturation and the Lyapunov convexity theorem was established in \cite{ks13} for the case with separable Banach spaces and in \cite{gp13} for the case with dual spaces of a separable Banach space. Proposition \ref{lyap1} covers the abovementioned results and improves \cite{ks15}. The equivalence of saturation and the ``convexity principle'' in the sense that $\int\Gamma d\mu=\int\overline{\mathrm{co}}^{\,\mathit{w}^*}\Gamma d\mu$ for every integrably bounded, weakly$^*\!$ closed-valued multifunction $\Gamma$ with measurable graph was established in \cite{po08,sy08}; see also the earlier work by \cite{su97} in the setting of nonatomic Loeb spaces (which form a special class of saturated measure spaces). Theorems \ref{BBP} and \ref{nec1} provide another characterization of saturation in terms of \eqref{bbp}. See \cite{ks14b} for a characterization of saturation in terms of the bang-bang principle with Bochner integrals in separable Banach spaces and \cite{ks16a} for that in terms of the bang-bang principle with Bourbaki--Kluv\'anek--Lewis integrals in separable locally convex spaces. \end{rem} \subsection{The Purification Principle} We denote by $\Pi(X)$ the set of probability measures on a Hausdorff topological space $X$ furnished with the Borel $\sigma$-algebra $\mathrm{Borel}(X)$. We endow $\Pi(X)$ with the \textit{topology of weak convergence} of probability measures (also called the \textit{narrow topology}), which is the coarsest topology on $\Pi(X)$ for which the integral functional $P\mapsto \int u dP$ on $\Pi(X)$ is continuous for every bounded continuous function $u:X\to \mathbb{R}$. Then $\Pi(X)$ is a Suslin space if and only if $X$ is a Suslin space; see \cite[Theorem 2.7 of Appendix in Part II]{sc73}. If $X$ is a Polish space, then the Borel $\sigma$-algebra on $\Pi(X)$ is the smallest $\sigma$-algebra for which the real-valued function $P\mapsto P(A)$ on $\Pi(X)$ is measurable for every $A\in \mathrm{Borel}(X)$; see \cite[Theorem 7.25]{bs78}. By $\mathcal{M}(T,X)$ we denote the space of measurable functions from $T$ to $X$ and by $\mathcal{R}(T,X)$ the space of measurable functions from $T$ to $\Pi(X)$. Each element in $\mathcal{M}(T,X)$ is called a \textit{control} and that in $\mathcal{R}(T,X)$ is called a \textit{relaxed control} (a \textit{Young measure}, a \textit{stochastic kernel}, or a \textit{transition probability}), which is a probability measure-valued control. If $X$ is a Polish space, then for every function $\lambda:T\to \Pi(X)$, the real-\hspace{0pt}valued function $t\mapsto \lambda(t)(C)$ is measurable for every $C\in \mathrm{Borel}(X)$ if and only if $\lambda$ is measurable.
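As an elementary illustration, let $X=\{ 0,1 \}$. A relaxed control $\lambda\in \mathcal{R}(T,X)$ is then completely described by the measurable function $t\mapsto \lambda(t)(\{ 1 \})$ with values in $[0,1]$, whereas a control $f\in \mathcal{M}(T,X)$ corresponds to the case where this function is $\{ 0,1 \}$-valued; relaxation thus replaces the two-point control set by its convexification $[0,1]$.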
By $\Delta(X)$ we denote the set of Dirac measures on $X$; that is, $\Delta(X)$ consists of the measures $\delta_x$ with $x\in X$ given by $\delta_x(C)=1$ if $x\in C$ and $\delta_x(C)=0$ otherwise, for $C\in \mathrm{Borel}(X)$. If $X$ is a Polish space, then each control $f\in \mathcal{M}(T,X)$ is identified with the Dirac measure-valued control $\delta_{f(\cdot)}\in \mathcal{R}(T,X)$ satisfying $\delta_{f(t)}\in \Delta(X)$ for every $t\in T$. Given a multifunction $U:T\twoheadrightarrow X$, we say that $\lambda\in \mathcal{R}(T,X)$ is \textit{concentrated} on $U$ if $\lambda(t)(U(t))=1$ a.e.\ $t\in T$. We say that a function $\Phi:T\times X\to E^*$ is \textit{integrably bounded} if the multifunction $t\mapsto \Phi(t,X)\subset E^*$ is integrably bounded, i.e., there exists $\varphi\in L^1(\mu)$ such that $\| \Phi(t,x) \|\le \varphi(t)$ for every $(t,x)\in T\times X$. Recall that the real-valued function $\varphi:T\times X\to \mathbb{R}$ is a \textit{Carath\'eodory function} if $t\mapsto \varphi(t,x)$ is measurable for every $x\in X$ and $x\mapsto \varphi(t,x)$ is continuous for every $t\in T$. If $X$ is a Polish space, then the Carath\'eodory function $\varphi$ is jointly measurable; see \cite[Lemma 4.51]{ab06}. For the proof of the following result, see \cite[Lemma 2.1]{ks14b}. \begin{lem} \label{lem2} Let $(T,\Sigma,\mu)$ be a probability space and $C$ be a weakly$^*\!$ closed convex subset of the dual space $E^*$ of a Banach space $E$. If $f:T\to E^*$ is a Gelfand integrable function with $f(T)\subset C$, then $\int fd\mu\in C$. \end{lem} The next result is the \textit{purification principle} in infinite dimensions, an analogue of \cite[Theorem 5.1]{ks14b} in the dual space setting. \begin{thm}[purification principle] \label{PP1} Let $(T,\Sigma,\mu)$ be a saturated finite measure space, $E$ be a separable Banach space, and $X$ be a Suslin space. If $\Phi:T\times X\to E^*$ is an integrably bounded measurable function such that $\Phi(t,\cdot):X\to E^*$ is continuous in the weak$^*\!$ topology of $E^*$ for every $t\in T$ and $U:T\twoheadrightarrow X$ is a compact-\hspace{0pt}valued multifunction with $\mathrm{gph}\,U\in \Sigma\otimes \mathrm{Borel}(X)$, then for every $\lambda\in \mathcal{R}(T,X)$ concentrated on $U$ there exists $f \in \mathcal{M}(T,X)$ with $f(t)\in U(t)$ a.e.\ $t\in T$ such that \begin{equation} \label{pp1} \int_T \int_X\Phi(t,x)\lambda(t,dx)d\mu=\int_T \Phi(t,f(t))d\mu. \tag{PP} \end{equation} \end{thm} \begin{proof} Define the multifunction $\Gamma:T\twoheadrightarrow E^*$ by $\Gamma(t)=\overline{\mathrm{co}}^{\,\mathit{w}^*}\Phi(t,U(t))$. Then $\Gamma$ is integrably bounded with weakly$^*\!$ compact convex values by the hypotheses on $\Phi$ and $U$. Since $\Phi(t,\cdot)$ is bounded and Borel measurable for every $t\in T$, it is also weakly$^*\!$ scalarly integrable as a function from $X$ to $E^*$. Thus, we can define the Gelfand integral of $\Phi(t,\cdot)$ by $g_\lambda(t)=\int \Phi(t,x)\lambda(t,dx)$ for any $\lambda\in \mathcal{R}(T,X)$ concentrated on $U$. It is evident that the function $g_\lambda:T\to E^*$ is Gelfand integrable. Applying Lemma \ref{lem2} for every $t\in T$ to the probability space $(X,\mathrm{Borel}(X),\lambda(t))$, the weakly$^*\!$ closed convex set $\Gamma(t)\subset E^*$, and the Gelfand integrable function $\Phi(t,\cdot)$, we have $g_\lambda\in \mathcal{S}_\Gamma^1$.
It follows from $\int_T\Phi(t,U(t))d\mu=\int_T \overline{\mathrm{co}}^{\,\mathit{w}^*}\Phi(t,U(t))d\mu=\int_T\Gamma(t)d\mu$ (see \cite[Theorem 4]{po08} and \cite[Proposition 1]{sy08}) that there exists a Gelfand integrable selector $g$ of the multifunction $t\mapsto \Phi(t,U(t))$ such that $\int gd\mu=\int g_\lambda d\mu$. In view of $g(t)\in \Phi(t,U(t))$ and Filippov's implicit function theorem (see \cite[Theorem III.38]{cv77}), there exists a measurable function $f:T\to X$ such that $g(t)=\Phi(t,f(t))$ and $f(t)\in U(t)$ a.e.\ $t\in T$. This $f$ is the desired control. \end{proof} Theorem \ref{PP1} means that any ``relaxed'' control system $t\mapsto\hat{\Phi}(t,\lambda(t)):=\int\Phi(t,x)\lambda(t,dx)$ operated by $\lambda\in \mathcal{R}(T,X)$ consistent with the control set $U(t)$ is realized by adopting a ``purified'' control system $t\mapsto\Phi(t,f(t))$ operated by $f\in \mathcal{M}(T,X)$ with the feasibility constraint $f(t)\in U(t)$ in such a way that its Gelfand integral over $T$ is preserved with $\int\hat{\Phi}(t,\lambda(t))d\mu=\int\Phi(t,f(t))d\mu$. The converse of Theorem \ref{PP1} is as follows. \begin{thm} \label{nec2} Let $(T,\Sigma,\mu)$ be a finite measure space, $E$ be an infinite-dimensional separable Banach space, and $X$ be an uncountable compact Polish space. If for every integrably bounded measurable function $\Phi:T\times X\to E^*$ such that $\Phi(t,\cdot):X\to E^*$ is continuous in the weak$^*\!$ topology of $E^*$ for every $t\in T$ and for every $\lambda\in \mathcal{R}(T,X)$ there exists $f \in \mathcal{M}(T,X)$ satisfying \eqref{pp1}, then $(T,\Sigma,\mu)$ is saturated. \end{thm} \begin{proof} It follows from \cite[Theorem 3]{po09} that if $(T,\Sigma,\mu)$ is not saturated, then there exist an integrably bounded Carath\'eodory function $\varphi:T\times X\to \mathbb{R}$ and $\lambda\in \mathcal{R}(T,X)$ such that no $f\in \mathcal{M}(T,X)$ satisfies $$ \int_T \int_X\varphi(t,x)\lambda(t,dx)d\mu=\int_T \varphi(t,f(t))d\mu. $$ Let $x^*\in E^*\setminus \{ 0 \}$ be given arbitrarily and define the function $\Phi:T\times X\to E^*$ by $\Phi(t,x)=\varphi(t,x)x^*$. Obviously, $\Phi$ is integrably bounded, (jointly) measurable, and $\Phi(t,\cdot)$ is continuous in the weak$^*\!$ topology of $E^*$ for every $t\in T$, yet no $f\in \mathcal{M}(T,X)$ satisfies \eqref{pp1}. \end{proof} \begin{rem} For the case with $E=E^*=\mathbb{R}^n$, \eqref{pp1} holds under the nonatomicity hypothesis, which is a well-known result in control theory attributed to \cite[Theorem IV.3.14]{wa72}. The equivalence of saturation and the purification principle was established in \cite{ls09,po09} for the case where $\Phi$ takes values in the countable product $\mathbb{R}^\mathbb{N}$ of the real line ($\mathbb{R}^\mathbb{N}$ is a Fr\'echet space). In particular, the observation in \cite[Example 2.7]{ls06} that the purification principle fails without the use of nonatomic Loeb measures when $\Phi$ takes values in $\mathbb{R}^\mathbb{N}$ provides the basis for the necessity of saturation in \cite{ls09}. The equivalence of saturation and the purification principle for the case where $\Phi$ takes values in a separable Banach space is due to \cite{ks14b}. Theorems \ref{PP1} and \ref{nec2} provide another characterization of saturation in terms of \eqref{pp1} in dual spaces of a separable Banach space. \end{rem} \subsection{The Density Property} Let $X$ be a Polish space.
An extended real-\hspace{0pt}valued function $\varphi:T\times X\to \mathbb{R}\cup \{+\infty\}$ is called an \textit{integrand} if it is $\Sigma\otimes \mathrm{Borel}(X)$-\hspace{0pt}measurable. An integrand $\varphi$ is called a \textit{normal integrand} if $\varphi(t,\cdot)$ is lower semicontinuous on $X$ for every $t\in T$. A Carath\'eodory integrand is a normal integrand. Denote by $\mathcal{C}^1(T\times X,\mu)$ the space of integrably bounded Carath\'eodory integrands on $T\times X$. For each integrand $\varphi$, define the integral functional $J_\varphi:\mathcal{R}(T,X)\to \mathbb{R}\cup \{\pm \infty \}$ by $J_\varphi(\lambda)=\iint\varphi(t,x)\lambda(t,dx)d\mu$. The \textit{weak topology} on $\mathcal{R}(T,X)$ is defined as the coarsest topology for which all the integral functionals $J_\varphi$ with $\varphi\in \mathcal{C}^1(T\times X,\mu)$ are continuous. The weak topology of $\mathcal{R}(T,X)$ is also the coarsest topology for which $J_\varphi$ is lower semicontinuous for every nonnegative normal integrand $\varphi$ whenever $X$ is compact; see \cite[Lemma A.2]{ba84a}. If $T$ is a singleton, then the set $\mathcal{R}(T,X)$ coincides with the set $\Pi(X)$. In this case $\mathcal{C}^1(T\times X,\mu)$ coincides with the space $C_b(X)$ of bounded continuous functions on $X$ and the weak topology of $\mathcal{R}(T,X)$ is the topology of weak convergence of probability measures in $\Pi(X)$. A sequence $\{ \lambda_i \}$ in $\mathcal{R}(T,X)$ \textit{converges weakly} to $\lambda$ if for every $\varphi\in \mathcal{C}^1(T\times X,\mu)$, we have $\lim_{i}J_\varphi(\lambda_i)=J_\varphi(\lambda)$. A sequence $\{ \lambda_i \}$ in $\mathcal{R}(T,X)$ \textit{converges narrowly} to $\lambda$ if for every $u\in C_b(X)$ and $A\in \Sigma$, we have $$ \lim_{i\to \infty}\int_A\int_Xu(x)\lambda_i(t,dx)d\mu=\int_A\int_Xu(x)\lambda(t,dx)d\mu. $$ It follows from the definitions that weak convergence implies narrow convergence in $\mathcal{R}(T,X)$. Furthermore, the converse is also true, i.e., weak and narrow convergence in $\mathcal{R}(T,X)$ are equivalent; see \cite[Theorem 4.10 and Remark 3.6]{ba00}. If $X$ is compact, then $\mathcal{R}(T,X)$ is compact and sequentially compact for the weak topology; see \cite[Lemma A.4]{ba84a}. \begin{thm}[density property] \label{dens} Let $(T,\Sigma,\mu)$ be a saturated finite measure space, $E$ be a separable Banach space, $C$ be a weakly$^*\!$ closed subset of $E^*$, and $X$ be a compact Polish space. Suppose that the following conditions are satisfied. \begin{enumerate}[\rm (i)] \item $\Phi:T\times X\to E^*$ is an integrably bounded measurable function such that $\Phi(t,\cdot):X\to E^*$ is continuous in the weak$^*\!$ topology of $E^*$ for every $t\in T$; \item $U:T\twoheadrightarrow X$ is a closed-\hspace{0pt}valued multifunction with $\mathrm{gph}\,U\in \Sigma\otimes \mathrm{Borel}(X)$; \item $\left\{ \int_T\int_X\Phi(t,x)dPd\mu\mid P\in\Pi(X) \right\}\cap C$ is nonempty. \end{enumerate} Further, define the subset of $\mathcal{R}(T,X)$ by $$ {\mathcal{K}}:=\left\{ \lambda\in \mathcal{R}(T,X)\,\Bigl| \begin{array}{l} \int_T\int_X\Phi(t,x)\lambda(t,dx)d\mu\in C \\ \lambda(t)(U(t))=1\ \text{a.e.\ $t\in T$} \end{array} \right\}.
$$ We then have the following equality $$ {\mathcal{K}}=\overline{\left\{ \delta_{f(\cdot)}\in \mathcal{R}(T,X)\,\Bigl| \begin{array}{l} \int_T \Phi(t,f(t))d\mu\in C,\,f\in \mathcal{M}(T,X) \\ f(t)\in U(t) \text{ a.e.\ $t\in T$} \end{array} \right\}}^{\,\mathit{w}} $$ where $\overline{\{ \cdots \}}^{\,\mathit{w}}$ signifies the closure of the set with respect to the weak topology of $\mathcal{R}(T,X)$. \end{thm} \begin{proof} Let $\lambda_0\in {\mathcal{K}}$ be arbitrary and let $\mathcal{U}_0$ be an arbitrary neighborhood of $\lambda_0$. By the definition of the weak topology of $\mathcal{R}(T,X)$, there exist $\varphi_1,\dots,\varphi_k$ in $\mathcal{C}^1(T\times X,\mu)$ with $k\in \mathbb{N}$ such that $|J_{\varphi_i}(\lambda)-J_{\varphi_i}(\lambda_0)|<1$ for $i=1,\dots,k$ implies $\lambda\in \mathcal{U}_0$. Define $\Psi:T\times X\to E^*\times \mathbb{R}^k$ by $\Psi=(\Phi,\varphi_1,\dots,\varphi_k)$. Then $\Psi$ is an integrably bounded measurable function such that $\Psi(t,\cdot):X\to E^*\times \mathbb{R}^k$ is continuous in the weak$^*\!$ topology of $E^*\times \mathbb{R}^k$ for every $t\in T$. Define the subset $D$ of $E^*\times \mathbb{R}^k$ by $$ D=C\times \left\{ \left(\int_T\int_X\varphi_1(t,x)\lambda_0(t,dx)d\mu,\dots,\int_T\int_X\varphi_k(t,x)\lambda_0(t,dx)d\mu \right) \right\}. $$ Applying Theorem \ref{PP1} to the pair $(\Psi,U)$ yields the existence of $f\in \mathcal{M}(T,X)$ with $f(t)\in U(t)$ a.e.\ $t\in T$ such that $\int\Psi(t,f(t))d\mu=\iint\Psi(t,x)\lambda_0(t,dx)d\mu$. This implies that $J_{\varphi_i}(\delta_{f(\cdot)})=J_{\varphi_i}(\lambda_0)$ for $i=1,\dots,k$ and $\int\Phi(t,f(t))d\mu=\iint\Phi(t,x)\lambda_0(t,dx)d\mu\in C$. Therefore, $\delta_{f(\cdot)}\in \mathcal{U}_0$. Since the choice of $\lambda_0$ and $\mathcal{U}_0$ is arbitrary, the claimed density property follows. \end{proof} \begin{rem} \label{rem} The case with $\Phi(t,x)\equiv 0$ and $C\equiv E^*$ means that no constraint in the dual space arises in the control system; in this case, the classical Lyapunov convexity theorem is sufficient for the density property, and Theorem \ref{dens} remains true when $(T,\Sigma,\mu)$ is merely nonatomic rather than saturated; see \cite[Theorem IV.3.10]{wa72}. Moreover, if $U(t)\equiv X$, then no constraint is binding and Theorem \ref{dens} reduces to the well-known result $\mathcal{R}(T,X)=\overline{\mathcal{M}(T,X)}^{\,\mathit{w}}$; see \cite[Theorem IV.2.6]{wa72}. For the density property with finite-dimensional control systems, see, e.g., \cite[Proposition II.7]{bl73} and \cite[Theorem 7 and Corollary 4]{sb78}. \end{rem} \section{Variational Problems with Gelfand Integral Constraints} \subsection{The Minimization Principle} The variational problem under investigation is a general form of the isoperimetric problem; it is an infinite-\hspace{0pt}dimensional analogue of the finite-\hspace{0pt}dimensional setting of \cite{ap65}. The relaxation technique explored here is a Gelfand integral analogue of the Bochner integral setting of \cite{ks14b}. For the existence issue in relaxation and purification in finite-dimensional control systems with integral constraints, see, e.g., \cite{al72,ar98,bl73}. Suppose that an integrand $\varphi:T\times X\to \mathbb{R}\cup \{+\infty\}$ describes a cost function and that a constraint is given by a measurable function $\Phi:T\times X\to E^*$, a multifunction $U:T\twoheadrightarrow X$, and a subset $C$ of $E^*$.
The variational problem under consideration is \begin{equation} \label{vp} \begin{aligned} & \min_{f\in \mathcal{M}(T,X)} \int_T\varphi(t,f(t))d\mu \quad \\ & \text{s.t. $\int_T \Phi(t,f(t))d\mu\in C$ and $f(t)\in U(t)$ a.e.\ $t\in T$}. \end{aligned} \tag{\text{VP}} \end{equation} Denote by $\min\eqref{vp}$ the minimum value of \eqref{vp} if it exists. The relaxed variational problem corresponding to \eqref{vp} is as follows. \begin{equation} \label{rvp} \begin{aligned} & \min_{\lambda\in \mathcal{R}(T,X)}\int_T\int_X\varphi(t,x)\lambda(t,dx)d\mu \quad \\ & \text{s.t. $\int_T\int_X\Phi(t,x)\lambda(t,dx)d\mu\in C$ and $\lambda(t)(U(t))=1$ a.e. $t\in T$}. \end{aligned} \tag{\text{RVP}} \end{equation} Denote by $\min\eqref{rvp}$ the minimum value of \eqref{rvp} if it exists. Since any $f\in \mathcal{M}(T,X)$ is identified with $\delta_{f(\cdot)}\in \mathcal{R}(T,X)$ such that $\int\varphi(t,x)d(\delta_{f(t)})=\varphi(t,f(t))$ and $\int\Phi(t,x)d(\delta_{f(t)})=\Phi(t,f(t))$ for every $t\in T$, and since the transformations $\lambda\mapsto (\int\varphi(t,x)\lambda(t,dx),\int\Phi(t,x)\lambda(t,dx))$ are affine on $\Pi(X)$, \eqref{rvp} is a convexification of \eqref{vp} with $\min\eqref{vp}\ge \min\eqref{rvp}$ whenever solutions to both problems exist. (If the infimum value of \eqref{rvp} happens to be $+\infty$, then so is that of \eqref{vp}, and any feasible solutions to \eqref{vp} and \eqref{rvp} are optimal. Thus, we may innocuously assume that the infimum value of \eqref{rvp} is less than $+\infty$.) \begin{lem} \label{lem3} Let $(T,\Sigma,\mu)$ be a finite measure space, $E$ be a Banach space, and $X$ be a Polish space. If $\Phi:T\times X\to E^*$ is an integrably bounded measurable function such that $\Phi(t,\cdot):X\to E^*$ is continuous in the weak$^*\!$ topology of $E^*$ for every $t\in T$, then the Gelfand integral functional $I_\Phi:\mathcal{R}(T,X)\to E^*$ defined by $$ I_\Phi(\lambda)=\int_T\int_X\Phi(t,x)\lambda(t,dx)d\mu $$ is sequentially continuous in the weak topology of $\mathcal{R}(T,X)$ and the weak$^*\!$ topology of $E^*$. \end{lem} \begin{proof} Let $\{ \lambda_i \}$ be a sequence in $\mathcal{R}(T,X)$ converging weakly to $\lambda$. Take $y\in E$ arbitrarily. We then have \begin{align*} \lim_{i\to \infty}\left\langle \int_T\int_X\Phi(t,x)\lambda_i(t,dx)d\mu,y \right\rangle & =\lim_{i\to \infty}\int_T\left\langle \int_X\Phi(t,x)\lambda_i(t,dx),y \right\rangle d\mu \\ & =\lim_{i\to \infty}\int_T\left[ \int_X\langle \Phi(t,x),y \rangle \lambda_i(t,dx) \right]d\mu \\ & =\int_T\left[ \int_X\langle \Phi(t,x),y \rangle \lambda(t,dx) \right]d\mu \\ & =\left\langle \int_T\int_X\Phi(t,x)\lambda(t,dx)d\mu,y \right\rangle, \end{align*} where the third equality follows from the fact that the function $(t,x)\mapsto \langle \Phi(t,x),y \rangle$ belongs to $\mathcal{C}^1(T\times X,\mu)$ in view of $|\int_X\langle \Phi(t,x),y \rangle \lambda_i(t,dx)|\le \|y\| \psi(t)$ with $\psi\in L^1(\mu)$ for every $i$ and $t\in T$ by the integrable boundedness of $\Phi$, together with the definition of weak convergence in $\mathcal{R}(T,X)$. Therefore, $I_\Phi(\lambda_i)$ converges weakly$^*\!$ to $I_\Phi(\lambda)$ in $E^*$. \end{proof} \begin{thm} \label{exst1} Let $(T,\Sigma,\mu)$ be a finite measure space, $E$ be a separable Banach space, $C$ be a weakly$^*\!$ closed subset of $E^*$, and $X$ be a compact Polish space. Suppose that the following conditions are satisfied.
\begin{enumerate}[\rm (i)] \item $\varphi:T\times X\to \mathbb{R}\cup \{+\infty\}$ is a normal integrand for which there exists $\psi\in L^1(\mu)$ with $\psi(t)\le \varphi(t,x)$ for every $(t,x)\in T\times X$; \item $\Phi:T\times X\to E^*$ is an integrably bounded measurable function such that $\Phi(t,\cdot):X\to E^*$ is continuous in the weak$^*\!$ topology of $E^*$ for every $t\in T$; \item $U:T\twoheadrightarrow X$ is a closed-\hspace{0pt}valued multifunction with $\mathrm{gph}\,U\in \Sigma\otimes \mathrm{Borel}(X)$; \item $\left\{ \int_T\int_X\Phi(t,x)dPd\mu\mid P\in\Pi(X) \right\}\cap C$ is nonempty. \end{enumerate} Then a solution to \eqref{rvp} exists. \end{thm} \begin{proof} Let $\{ \lambda_i \}$ be a minimizing sequence in $\mathcal{R}(T,X)$ for \eqref{rvp}. By the weak compactness of $\mathcal{R}(T,X)$, we can extract a subsequence from $\{ \lambda_i \}$ (which we do not relabel) that converges weakly to some $\lambda\in \mathcal{R}(T,X)$. Since $\{ \lambda_i \}$ converges narrowly to $\lambda\in \mathcal{R}(T,X)$ and each $\lambda_i$ is concentrated on $U$, we conclude that $\lambda$ is concentrated on $U$ as well; see \cite[Lemma 4.11 and Theorem 4.15]{ba00}. It follows from $\iint\Phi(t,x)\lambda_i(t,dx)d\mu\in C$ for each $i$ that $\iint\Phi(t,x)\lambda(t,dx)d\mu\in C$, by Lemma \ref{lem3} and the weak$^*\!$ closedness of $C$. Since $\varphi$ is integrably bounded from below, without loss of generality we may assume for the sake of \eqref{rvp} that $\varphi$ is a nonnegative normal integrand (replacing $\varphi$ by $\varphi-\psi$ shifts the objective value only by the constant $\int_T\psi d\mu$). Since the integral functional $J_\varphi(\nu)=\iint\varphi(t,x)\nu(t,dx)d\mu$ is weakly lower semicontinuous by the definition of the weak topology of $\mathcal{R}(T,X)$, we have $J_\varphi(\lambda)\le \liminf_i J_\varphi(\lambda_i)=\min\text{\eqref{rvp}}$. Hence, $\lambda$ is a solution to \eqref{rvp}. \end{proof} For the existence of solutions to \eqref{rvp}, the saturation assumption on the measure space is unnecessary. Thus, the Lebesgue unit interval, the most fundamental probability space in many applications, is covered by Theorem \ref{exst1}. To ensure that $\min\text{\eqref{vp}}=\min\text{\eqref{rvp}}$ and the existence of solutions to \eqref{vp}, the saturation of the measure space and the continuity of the integrand are sufficient. \begin{cor}[minimization principle] \label{exst2} Let $(T,\Sigma,\mu)$ be a saturated finite measure space, $E$ be a separable Banach space, $C$ be a weakly$^*\!$ closed subset of $E^*$, and $X$ be a compact Polish space. Suppose that the following conditions are satisfied. \begin{enumerate}[\rm (i)] \item $\varphi:T\times X\to \mathbb{R}$ is an integrably bounded Carath\'{e}odory function; \item $\Phi:T\times X\to E^*$ is an integrably bounded measurable function such that $\Phi(t,\cdot):X\to E^*$ is continuous in the weak$^*\!$ topology of $E^*$ for every $t\in T$; \item $U:T\twoheadrightarrow X$ is a closed-\hspace{0pt}valued multifunction with $\mathrm{gph}\,U\in \Sigma\otimes \mathrm{Borel}(X)$; \item $\left\{ \int_T\int_X\Phi(t,x)dPd\mu\mid P\in\Pi(X) \right\}\cap C$ is nonempty. \end{enumerate} Then a solution to \eqref{vp} exists. \end{cor} \begin{proof} Let $\lambda\in \mathcal{R}(T,X)$ be a solution to \eqref{rvp}. Define the function $\Psi:T\times X\to E^*\times \mathbb{R}$ by $\Psi=(\Phi,\varphi)$ and the weakly$^*\!$ closed set $D\subset E^*\times \mathbb{R}$ by $D=C\times \{ \iint\varphi(t,x)\lambda(t,dx)d\mu \}$.
Applying Theorem \ref{PP1} to the pair $(\Psi,U)$ yields the existence of $f\in \mathcal{M}(T,X)$ with $f(t)\in U(t)$ a.e.\ $t\in T$ satisfying $\iint\Psi(t,x)\lambda(t,dx)d\mu=\int\Psi(t,f(t))d\mu$. This means that $\int\varphi(t,f(t))d\mu=\iint\varphi(t,x)\lambda(t,dx)d\mu$ and $\int\Phi(t,f(t))d\mu=\iint\Phi(t,x)\lambda(t,dx)d\mu\in C$. Therefore, $\min\eqref{vp}=\min\eqref{rvp}$ and $f$ is a solution to \eqref{vp}. \end{proof} We say that a quartet $(\varphi,\Phi,U,C)$ fulfills the \textit{minimization principle} (MP) if \eqref{vp} corresponding to $(\varphi,\Phi,U,C)$ has a solution. The converse of Corollary \ref{exst2} takes the following form. \begin{thm} Let $(T,\Sigma,\mu)$ be a nonatomic finite measure space, $E$ be an infinite-\hspace{0pt}dimensional separable Banach space, $C$ be a weakly$^*\!$ closed subset of $E^*$, and $X$ be an uncountable compact Polish space. If every quartet $(\varphi,\Phi,U,C)$ satisfying conditions {\rm (i)} to {\rm (iv)} of Corollary \ref{exst2} fulfills {\rm (MP)}, then $(T,\Sigma,\mu)$ is saturated. \end{thm} \begin{proof} Suppose that the nonatomic finite measure space $(T,\Sigma,\mu)$ is not saturated. By Theorem \ref{nec2}, letting $U(t)\equiv X$ and $C\equiv E^*$ guarantees that there exists a quartet $(\varphi,\Phi,U,C)$ satisfying conditions (i) to (iv) of Corollary \ref{exst2} such that for some $\lambda\in \mathcal{R}(T,X)$ no $f\in \mathcal{M}(T,X)$ satisfies \eqref{pp1} and $J_\varphi(\lambda)=\int\varphi(t,f(t))d\mu$. Hence, the variational problem corresponding to $(\varphi,\Phi,U,C)$ yields the ``relaxation gap'': $\min\eqref{rvp}<\min\eqref{vp}$. Since $\mathcal{R}(T,X)=\overline{\mathcal{M}(T,X)}^{\,\mathit{w}}$ in view of the nonatomicity hypothesis (see Remark \ref{rem}) and $J_\varphi$ is weakly continuous on $\mathcal{R}(T,X)$, there exists a minimizing sequence $\{ f_n \}$ in $\mathcal{M}(T,X)$ such that $J_\varphi(\delta_{f_n(\cdot)})\to \min\text{\eqref{rvp}}$. This means that $J_\varphi(\delta_{f_n(\cdot)})=\int\varphi(t,f_n(t))d\mu<\min\eqref{vp}$ for every sufficiently large $n$, an obvious contradiction. \end{proof} The proof of Corollary \ref{exst2} is based on the ``direct method'' of the calculus of variations via the relaxation technique. For an existence result without the relaxation technique, based on the ``indirect method'' exploiting the duality theory in Asplund spaces in the nonsmooth setting, see \cite{sa15}. As investigated thoroughly in \cite{mo06}, Asplund spaces are a suitable setting for fully exploring subdifferential calculus. Another relevant application of saturation to subdifferential calculus in Asplund spaces for integral functionals is found in \cite{ms18}. \section{Relaxation of Large Economies} \subsection{Existence of Pareto Optimal Allocations} Large economies, introduced in \cite{au64,au66}, model the set of agents as a nonatomic finite measure space in the finite-\hspace{0pt}dimensional framework to show the existence of Walrasian equilibria and the core-Walras equivalence without any convexity hypothesis; see also \cite{hi74} for detailed references to follow-up work up to 1974. We apply the relaxation technique to exchange economies with an infinite-dimensional commodity space and make use of the purification principle to show the existence of Pareto optimal allocations for the original economy. The set of agents is given by a (complete) finite measure space $(T,\Sigma,\mu)$. The commodity space is given by the dual space $E^*$ of a separable Banach space $E$.
The preference relation ${\succsim}(t)$ of each agent $t\in T$ is a complete, transitive binary relation on a common consumption set $X\subset E^*$, which induces the preference map $t\mapsto {\succsim}(t)\subset X\times X$. We denote by $x\,{\succsim}(t)\,y$ the relation $(x,y)\in {\succsim}(t)$. The indifference and strict relations are defined respectively by $x\,{\sim}(t)\,y$ $\Leftrightarrow$ $x\,{\succsim}(t)\,y$ and $y\,{\succsim}(t)\,x$, and by $x\,{\succ}(t)\,y$ $\Leftrightarrow$ $x\,{\succsim}(t)\,y$ and $x\,{\not\sim}(t)\,y$. Each agent possesses an initial endowment $\omega(t)\in X$, which is the value of a Gelfand integrable function $\omega:T\to E^*$. The economy $\mathcal{E}$ consists of the primitives $\mathcal{E}=\{ (T,\Sigma,\mu),X,\succsim,\omega \}$. The standing assumption on $\mathcal{E}$ is described as follows. \begin{assmp} \label{assmp} \begin{enumerate}[(i)] \item $X$ is a weakly$^*\!$ compact subset of $E^*$. \item ${\succsim}(t)$ is a weakly$^*\!$ closed subset of $X\times X$ for every $t\in T$. \item For every $x,y\in X$ the set $\{ t\in T\mid x\,{\succ}(t)\,y \}$ is in $\Sigma$. \end{enumerate} \end{assmp} The preference relation ${\succsim}(t)$ is said to be \textit{continuous} if it satisfies Assumption \ref{assmp}(ii). Since $E$ is separable, the weakly$^*\!$ compact set $X\subset E^*$ is metrizable for the weak$^*\!$ topology (see \cite[Corollary 2.6.20]{me98}), and hence, the common consumption set $X$ is a compact Polish space. It follows from \cite[Proposition 1]{au69} that there exists a Carath\'eodory function $\varphi:T\times X\to \mathbb{R}$ such that \begin{equation} \label{rp1} \forall x,y\in X\ \forall t\in T: x\,{\succsim}(t)\,y \Longleftrightarrow \varphi(t,x)\ge \varphi(t,y). \end{equation} (While \cite{au69} treated the case where $X$ is the nonnegative orthant of a finite-dimensional Euclidean space, the proof is obviously valid as it stands for the case where $X$ is a separable metric space.) Moreover, this representation in terms of Carath\'eodory functions is unique up to strictly increasing, continuous transformations in the following sense: if $F:T\times \mathbb{R}\to \mathbb{R}$ is a function such that $t\mapsto F(t,r)$ is measurable and $r\mapsto F(t,r)$ is strictly increasing and continuous, then $x\,{\succsim}(t)\,y \Leftrightarrow F(t,\varphi(t,x))\ge F(t,\varphi(t,y))$, where $(t,x)\mapsto F(t,\varphi(t,x))$ is a Carath\'eodory function. In the sequel, we may assume without loss of generality that the preference map $t\mapsto {\succsim}(t)$ is represented by a Carath\'eodory function $\varphi$ that is unique up to strictly increasing, continuous transformations. Given a continuous preference ${\succsim}(t)$ on $X$, its continuous affine extension ${\succsim}_\mathcal{R}(t)$ to $\Pi(X)$ is obtained by convexifying (randomizing) the individual utility function $\varphi(t,\cdot)$ in the following way: \begin{equation} \label{rp2} \forall P,Q\in \Pi(X)\ \forall t\in T: P\,{\succsim}_\mathcal{R}(t)\,Q \stackrel{\text{def}}{\Longleftrightarrow} \int_X \varphi(t,x)dP\ge \int_X \varphi(t,x)dQ. \end{equation} The continuous extension ${\succsim}_{\mathcal{R}}(t)$ of ${\succsim}(t)$ from $X$ to the \textit{relaxed consumption set} $\Pi(X)$ is called a \textit{relaxed preference relation} on $\Pi(X)$. Thus, the restriction of ${\succsim_\mathcal{R}}(t)$ to $\Delta(X)$ coincides with ${\succsim}(t)$ on $X$. The indifference relation ${\sim}_\mathcal{R}(t)$ and the strict relation ${\succ}_\mathcal{R}(t)$ are defined analogously to the above.
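As a simple illustration of \eqref{rp2} (our own example, included only for intuition and not drawn from the cited sources), take $x,y,z\in X$ and the two-point lottery $P=\frac{1}{2}\delta_x+\frac{1}{2}\delta_y\in \Pi(X)$. Then $$ P\,{\succsim}_\mathcal{R}(t)\,\delta_z \Longleftrightarrow \frac{1}{2}\varphi(t,x)+\frac{1}{2}\varphi(t,y)\ge \varphi(t,z), $$ so ${\succsim}_\mathcal{R}(t)$ ranks relaxed consumptions by their expected utility under $\varphi(t,\cdot)$.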
The extension formula in \eqref{rp2} conforms to the relaxation technique investigated in Section 3. As observed in \cite{ks16b}, relaxed preferences are also consistent with the axioms for the ``expected utility hypothesis'' and the continuous function $\varphi(t,\cdot)$ corresponds to the ``von Neumann--Morgenstern utility function'' for ${\succsim}_\mathcal{R}(t)$. Denote by $\mathcal{E}_\mathcal{R}=\{ (T,\Sigma,\mu),\Pi(X),{\succsim}_\mathcal{R},\delta_{\omega(\cdot)} \}$ the \textit{relaxed economy} induced by the original economy $\mathcal{E}=\{ (T,\Sigma,\mu),X,\succsim,\omega \}$, where the initial endowment $\omega(t)\in X$ of each agent is identified with the Dirac measure $\delta_{\omega(t)}\in \Delta(X)$, and hence, $\delta_{\omega(\cdot)}\in \mathcal{R}(T,X)$. Let $\imath_X$ be the identity map on $X$. We denote by $\int\imath_XdP$ the Gelfand integral of $\imath_X$ with respect to the probability measure $P\in \Pi(X)$. To deal with Pareto optimality with or without free disposal simultaneously, following \cite[Chapter 8]{mo06}, we introduce ``market constraints'' for the definition of (relaxed) allocations. \begin{dfn} Let $W$ be a nonempty subset of $E^*$. \begin{enumerate}[(i)] \item A Gelfand integrable function $f\in G^1(\mu,E^*)$ is an \textit{allocation} for $\mathcal{E}$ if it satisfies: $$ \int_Tf(t)d\mu-\int_T\omega(t)d\mu\in W \quad\text{and $f(t)\in X$ a.e. $t\in T$}. $$ \item A relaxed control $\lambda\in \mathcal{R}(T,X)$ is a \textit{relaxed allocation} for $\mathcal{E}_\mathcal{R}$ if it satisfies: $$ \int_T\int_X\imath_X(x)\lambda(t,dx)d\mu-\int_T\omega(t)d\mu\in W. $$ \end{enumerate} \end{dfn} \noindent In particular, when $W=\{ 0 \}$, the definition reduces to the (relaxed) allocations ``without'' \textit{free disposal}; when $-W$ is a convex cone and $E^*$ is endowed with the cone order $\le$ defined by $x\le y \Leftrightarrow y-x\in -W$, the definition reduces to the (relaxed) allocations ``with'' free disposal. Denote by $\mathcal{A}(\mathcal{E})$ the set of allocations for $\mathcal{E}$ and by $\mathcal{A}(\mathcal{E}_\mathcal{R})$ the set of relaxed allocations for $\mathcal{E}_\mathcal{R}$. If $\lambda$ is a relaxed allocation for $\mathcal{E}_\mathcal{R}$ such that $\lambda(t)=\delta_{f(t)}\in \Delta(X)$ for every $t\in T$ with some $f\in G^1(\mu,E^*)$, then it reduces to the usual feasibility constraint $\int fd\mu-\int \omega d\mu\in W$ for $\mathcal{E}$. This means that $\mathcal{A}(\mathcal{E})\subset \mathcal{A}(\mathcal{E}_\mathcal{R})$. An immediate consequence of Theorem \ref{dens} is the density property of the set of allocations. \begin{thm} Let $(T,\Sigma,\mu)$ be a saturated finite measure space, $E$ be a separable Banach space, $X$ be a weakly$^*\!$ compact subset of $E^*$, and $W$ be a weakly$^*\!$ closed subset of $E^*$. Then $\mathcal{A}(\mathcal{E}_\mathcal{R})=\overline{\mathcal{A}(\mathcal{E})}^{\mathit{\,w}}$. \end{thm} \begin{proof} Simply apply Theorem \ref{dens} to the case with $\Phi(t,x)\equiv \imath_X(x)$, $U(t)\equiv X$, and $C=\int\omega d\mu+W$. \end{proof} \begin{dfn} \begin{enumerate}[(i)] \item An allocation $f\in \mathcal{A}(\mathcal{E})$ is \textit{Pareto optimal} for $\mathcal{E}$ if there exist no $g\in \mathcal{A}(\mathcal{E})$ and $A\in \Sigma$ of positive measure such that $g(t)\,{\succsim}(t)\,f(t)$ a.e.\ $t\in T$ and $g(t)\,{\succ}(t)\,f(t)$ for every $t\in A$.
\item A relaxed allocation $\lambda\in \mathcal{A}(\mathcal{E}_{\mathcal{R}})$ is \textit{Pareto optimal} for $\mathcal{E}_\mathcal{R}$ if there exist no $\nu\in \mathcal{A}(\mathcal{E}_\mathcal{R})$ and $A\in \Sigma$ of positive measure such that $\nu(t)\,{\succsim}_\mathcal{R}(t)\,\lambda(t)$ a.e.\ $t\in T$ and $\nu(t)\,{\succ}_\mathcal{R}(t)\,\lambda(t)$ for every $t\in A$. \end{enumerate} \end{dfn} \noindent Denote by $\mathcal{P}(\mathcal{E})$ the set of Pareto optimal allocations for $\mathcal{E}$ and by $\mathcal{P}(\mathcal{E}_\mathcal{R})$ the set of Pareto optimal relaxed allocations for $\mathcal{E}_\mathcal{R}$. \begin{thm} \label{exst3} Let $(T,\Sigma,\mu)$ be a finite measure space, $E$ be a separable Banach space, and $W$ be a weakly$^*\!$ closed subset of $E^*$. Then $\mathcal{P}(\mathcal{E}_\mathcal{R})$ is nonempty for every economy $\mathcal{E}$ satisfying Assumption \ref{assmp}. \end{thm} \begin{proof} If the Carath\'{e}odory integrand $\varphi$ in the preference representation \eqref{rp1} happens not to be integrably bounded, then choose any Carath\'{e}odory function $F:T\times \mathbb{R} \to \mathbb{R}$ such that $F(t,\cdot)$ is strictly increasing for every $t\in T$ and there exists $\psi\in L^1(\mu)$ satisfying $|F(t,r)|\le \psi(t)$ for every $(t,r)\in T\times \mathbb{R}$, and consider the transformation $(t,x)\mapsto F(t,\varphi(t,x))$ of the preference representation, which is obviously an integrably bounded Carath\'{e}odory integrand preserving \eqref{rp1}. (For example, letting $\tilde{\varphi}(t):=\max_{x\in X}|\varphi(t,x)|$ and $F(t,r):=e^{-\tilde{\varphi}(t)}r$ yields $|F(t,\varphi(t,x))|\le 1$ for every $(t,x)\in T\times X$.) Thus, without loss of generality we may assume that $\varphi$ is integrably bounded. Consider \eqref{rvp} with $\Phi(t,x)\equiv \imath_X(x)$, $U(t)\equiv X$, and $C=\int\omega d\mu+W$, which reduces to the variational problem with a Gelfand integral constraint \begin{equation} \label{rvp2} \begin{aligned} & \max_{\lambda\in \mathcal{R}(T,X)}\int_T\int_X\varphi(t,x)\lambda(t,dx)d\mu \\ & \text{s.t. }\int_T\int_X\imath_X(x)\lambda(t,dx)d\mu-\int_T\omega(t)d\mu\in W. \end{aligned} \tag{RVP$'$} \end{equation} Let $\lambda$ be a solution to \eqref{rvp2} and suppose that it does not belong to $\mathcal{P}(\mathcal{E}_\mathcal{R})$. Then there exist $\nu\in \mathcal{A}(\mathcal{E}_\mathcal{R})$ and $A\in \Sigma$ of positive measure such that $\nu(t)\,{\succsim}_\mathcal{R}(t)\,\lambda(t)$ a.e.\ $t\in T$ and $\nu(t)\,{\succ}_\mathcal{R}(t)\,\lambda(t)$ for every $t\in A$. Given the preference formula \eqref{rp2}, this is equivalent to $\int\varphi(t,x)\nu(t,dx)\ge \int\varphi(t,x)\lambda(t,dx)$ a.e.\ $t\in T$ and $\int\varphi(t,x)\nu(t,dx)>\int\varphi(t,x)\lambda(t,dx)$ for every $t\in A$. Integrating these inequalities over $T$ (the inequality is strict because $\mu(A)>0$) yields $\iint\varphi(t,x)\nu(t,dx)d\mu>\iint\varphi(t,x)\lambda(t,dx)d\mu$, a contradiction to the fact that $\lambda$ is a solution to \eqref{rvp2}. \end{proof} It should be noted that, to guarantee that $\mathcal{P}(\mathcal{E}_\mathcal{R})$ is nonempty, neither the saturation hypothesis nor even nonatomicity is necessary, nor is any convexity hypothesis. Under the saturation hypothesis, the existence of Pareto optimal allocations for the original economy is guaranteed. \begin{thm} \label{exst4} Let $(T,\Sigma,\mu)$ be a saturated finite measure space, $E$ be a separable Banach space, and $W$ be a weakly$^*\!$ closed subset of $E^*$.
Then $\mathcal{P}(\mathcal{E})$ is nonempty for every economy $\mathcal{E}$ satisfying Assumption \ref{assmp}. \end{thm} \begin{proof} As shown in the proof of Theorem \ref{exst3}, any solution $\lambda\in \mathcal{R}(T,X)$ to \eqref{rvp2} belongs to $\mathcal{P}(\mathcal{E}_\mathcal{R})$. It follows from Theorem \ref{PP1} that there exists $f\in \mathcal{A}(\mathcal{E})$ such that $\iint \varphi(t,x)\lambda(t,dx)d\mu=\int\varphi(t,f(t))d\mu$. If $f$ is not a solution to the variational problem \begin{equation} \label{vp2} \begin{aligned} & \max_{f\in \mathcal{M}(T,X)}\int_T\varphi(t,f(t))d\mu \\ & \text{s.t. }\int_Tf(t)d\mu-\int_T\omega(t)d\mu\in W \end{aligned} \tag{VP$'$} \end{equation} then there exists $g\in \mathcal{A}(\mathcal{E})$ such that $\int\varphi(t,g(t))d\mu>\int\varphi(t,f(t))d\mu$. Since $\delta_{g(\cdot)}\in \mathcal{A}(\mathcal{E}_\mathcal{R})$, the above inequality obviously contradicts the fact that $\lambda\in \mathcal{R}(T,X)$ is a solution to \eqref{rvp2}. Suppose now that the solution $f$ to \eqref{vp2} does not belong to $\mathcal{P}(\mathcal{E})$. Then there exist $g\in \mathcal{A}(\mathcal{E})$ and $A\in \Sigma$ of positive measure such that $g(t)\,{\succsim}(t)\,f(t)$ a.e.\ $t\in T$ and $g(t)\,{\succ}(t)\,f(t)$ for every $t\in A$. Given the preference formula \eqref{rp1}, this is equivalent to $\varphi(t,g(t))\ge \varphi(t,f(t))$ a.e.\ $t\in T$ and $\varphi(t,g(t))>\varphi(t,f(t))$ for every $t\in A$. Integrating these inequalities over $T$ (the inequality is strict because $\mu(A)>0$) yields $\int\varphi(t,g(t))d\mu>\int\varphi(t,f(t))d\mu$, a contradiction to the fact that $f$ is a solution to \eqref{vp2}. \end{proof} The existence of (relaxed) Walrasian equilibria with free disposal is investigated in \cite{ks16b} for the commodity space given by the dual space $L^\infty=(L^1)^*$. The crucial argument in the proof is the nonemptiness of the norm interior of the positive cone of $L^\infty$, which fails to hold for general dual spaces. It is a challenging open question to establish the existence of (relaxed) Walrasian equilibria for general dual spaces. \begin{rem} The role of the weak$^*\!$ compactness of the consumption set $X$ in Assumption \ref{assmp} is twofold. The first role is to guarantee the existence of continuous utility functions. To apply Debreu's celebrated utility representation theorem, $X$ is required to satisfy the second axiom of countability; in particular, it needs to be a separable metric space. Without the weak$^*\!$ compactness assumption, $X$ may fail to be a separable metric space with respect to the weak$^*\!$ topology even if $E$ is separable, which prevents one from obtaining continuous utility functions representing continuous preference relations. The second role is to guarantee the existence of solutions to \eqref{rvp} in Theorem \ref{exst1}. The lack of compactness of $X$ inevitably leads to the noncompactness of $\mathcal{R}(T,X)$ and $\Pi(X)$, and hence, to a possible failure of Theorems \ref{exst3} and \ref{exst4}. \end{rem}
\section{Introduction} \label{sec:intro} Block-wise product (BWP) codes, particularly BWP-BCH codes, have recently received considerable research as well as practical interest \cite{Cho14} -- \cite{Kim16}. In BWP codes, the user data is arranged in a two-dimensional array, composed of rows and columns. Each entry of the array is composed of multiple bits, and is called a {\em block}, or {\em intersecting block}, since it intersects a row and a column. Each row and column is encoded by an error-correcting code, typically a binary BCH code. In this work we only consider binary BCH codes as constituent codes. BWP-BCH codes are also called \emph{block-wise concatenated BCH (BC-BCH) codes}. The decoding of a BWP-BCH code is performed iteratively, such that in each iteration the constituent sensewords in one dimension (say, the rows) are decoded first, and subsequently the words of the opposite dimension (columns) are decoded. The expansion of the intersection from a single bit in conventional product codes to a block in BWP-BCH codes allows for stronger (and fewer) constituent codes for the same overall block length and rate parameters. It turns out that having stronger constituent codes, even though their number is smaller, provides a significant performance boost. A BWP-BCH code with this structure is said to be \emph{parallel concatenated}, since the rows and columns can be encoded in parallel. BCH codes can also be block-wise concatenated in series, by encoding one dimension first (say, the rows), and then encoding the opposite dimension, \emph{including} the parity bits of the first dimension. This is advantageous for classical product codes, but is less effective for BWP codes, wherein the parity-on-parity property no longer holds. An optimization of the construction to improve the BCH parameters was proposed in~\cite{Kim15}. Serial concatenation is investigated in~\cite{Kim16}. The decoding of BWP-BCH codes can be performed in a soft or hard manner. In hard decoding, the decoder receives a binary channel output for each transmitted bit. The hard decoding process simply alternates iteratively between row-wise and column-wise hard decoding using the Berlekamp algorithm. In soft decoding, the decoder receives multi-bit soft information regarding the likelihood of the channel value of each transmitted bit; soft decoding of BCH codes is often referred to as Chase-II decoding \cite{Pyndiah98}. An important challenge in the design of BWP-BCH codes and their decoder is the mitigation of the error floor. An error floor forms in BWP-BCH codes when the noise level is low enough for most constituent words (rows and columns) to be corrected, except for a small number of words whose error count exceeds their correction capability. Three methods were proposed to mitigate the error floor of BWP-BCH codes: soft decoding~\cite{Cho14}, a concatenation of an erasure code over the intersecting blocks~\cite{Yu14}, and collaborative decoding~\cite{Kim15}. In \cite{Cho14}, the decoder obtains additional soft information from the media and uses it to decode the failed BCH codes. \cite{Yu14} considers concatenating an erasure code over the intersecting blocks, such that each block is treated as a symbol of the erasure code. Erroneous blocks are identified by the intersections of failed rows and columns, and are corrected by the outer erasure code. Raptor and Reed-Solomon (RS) codes were explored as erasure codes.
In \cite{Kim15}, the row and column that intersect in a single erroneous block are combined so as to cancel the errors in the intersected block (while adding up the errors in the parities) in an attempt to correct the errors. BWP-BCH coding is a strong contender in solid-state-drive (SSD) controllers \cite{Cho14, Yu14, Kim16}. It particularly appeals to enterprise SSD storage, wherein latency is the most critical metric. A normal read in NAND flash outputs single-bit information without soft reliability, whereas generating soft reliability incurs much longer latency (typically, multiple retry reads are carried out and combined externally into multi-bit soft information, or a dedicated NAND command performs multiple reads and combines them into multi-bit soft information internally). Thus, soft read in flash does not appeal to enterprise storage. The data sector length is prevalently 4K bytes (possibly with extra metadata), while gradually migrating to 8K bytes. The typical parity overhead ranges from 8\% to 16\%. The error floor is required to be below $10^{-16}$. BWP-BCH coding is also proposed for the optical transport network, wherein only hard-decision decoding is considered \cite{ITU}. For modern optical transport networks, it is generally required that the output bit-error-rate (BER) be below $10^{-15}$, and that the codec be able to achieve a very high data rate, e.g., 100 Gbps. Thus, turbo codes and low-density parity-check (LDPC) codes are not well suited. In this work we systematically explore BWP-BCH codes. Our contributions are threefold. Firstly, we devise various new decoding algorithms for BCH codes. The proposed $-1$ decoding algorithm effectively eliminates the fruitless Chien search when decoding is declared unsuccessful. It is often employed in decoding of (block-wise) product BCH codes in order to reduce the BCH miscorrection rate. The refined $+1$ list decoding algorithm exhibits the same complexity order as the state-of-the-art hard decoding algorithms. The proposed $+2$ list decoding algorithm exhibits a desirable computational complexity of $O(n^2)$, where $n$ denotes the code length. Secondly, we design scalable BWP-BCH codes for arbitrarily given data and parity lengths, in an attempt to simultaneously achieve three goals: good scalability, low encoding and decoding complexity, and good waterfall performance with a low error floor. We arrange the input data in a near-square array so that row and column codes share similar parameters, which allows the same circuit to be shared for row-wise and column-wise decoding. It also allows slight increases or decreases in data or parity length to be accommodated with minor architectural changes. We choose extended BCH (eBCH) codes instead of BCH codes as constituents for the purpose of reducing the constituent-wise miscorrection rate while simplifying the decoding process. We use an inner high-rate Reed-Solomon code to lower the error floor while minimizing the extra parity overhead and implementation complexity. Lastly, we investigate efficient hard decoding of the proposed BWP-BCH codes. We aim to minimize three different error events which cause hard decoding failure. The first one is excessive errors in a few data blocks, which causes both row and column decoding failures. The second one is excessive errors in a few eBCH parities, which are not cross protected. The last one is miscorrection of eBCH constituents, due to small minimum distance. We present a novel iterative decoding algorithm which is divided into three phases.
The first phase iteratively applies reduced BCH correction capabilities to correct lightly corrupted rows/columns while suppressing miscorrection, until the process stalls. The second phase iteratively decodes up to the designed correction capabilities, until the process stalls. The last phase iteratively applies the proposed list decoding in a novel manner which effectively determines the correct candidate, as follows. Upon successfully determining a list of candidates from a failed row (column) constituent word, trial-correction is performed on each candidate. Each time, we check whether the crossing column (row) Berlekamp decoding of any previously failed word succeeds. We choose the candidate that results in the largest number of crossing column (row) corrections and make both row and column corrections accordingly. The paper is organized as follows. Section II devises BCH decoding algorithms for reduced-1-bit decoding, extra-1-bit decoding, and extra-2-bit decoding. Section III presents a systematic BWP-BCH code construction that carefully accounts for performance, implementation, and scalability. Section IV describes a novel iterative decoding algorithm for BWP-BCH codes aiming to improve both waterfall and error-floor performance. Section~V describes the evaluation setup and results to validate the proposed decoding algorithm. Section~VI concludes with pertinent remarks. \section{BCH Decoding Algorithms} \label{sec:list} Let $t$ denote the designed error-correction capability of a (possibly shortened) BCH ($n$, $k$) code, defined over a binary extension field $\text{GF}(q=2^m)$. Let $\alpha$ denote a primitive element of $\text{GF}(q)$. The underlying generator polynomial $g(x)$ of the BCH code contains the consecutive roots $\alpha, \alpha^2, \dots, \alpha^{2t}$, such that \begin{equation} g(x)\stackrel{\triangle}{=} \text{LCM}\big(\mu_1(x), \mu_3(x),\ldots, \mu_{2t-1}(x)\big) \end{equation} where $\mu_i(x)$ denotes the minimal binary polynomial of $\alpha^i$ and LCM stands for the least common multiple (cf.\ \cite{MacWilliams}). A counterpart extended BCH (eBCH) code adds an extra root 1, i.e., \begin{equation} {\bar g}(x)\stackrel{\triangle}{=} (x-1)\text{LCM}\big(\mu_1(x), \mu_3(x),\ldots, \mu_{2t-1}(x)\big). \end{equation} Clearly, each eBCH codeword has even Hamming weight. It is shown in \cite[p.~263]{MacWilliams} that \begin{lemma} The error-correction capability $t$ of a BCH $(n, k)$ code satisfies \begin{equation} n-k = mt \end{equation} and \begin{equation} g(x) = \mu_1(x)\mu_3(x)\ldots \mu_{2t-1}(x) \end{equation} provided that \begin{equation} t\leq 2^{\lceil m/2\rceil -1}. \label{linear-t-condition} \end{equation} \end{lemma} Note that BCH codes of practical interest have high code rates; hence, we shall treat $n-k = mt$ in the subsequent analyses. Define the support of a codeword to be the index set of its nonzero entries. The next lemma sheds some light on the codeword distribution. \begin{lemma} \label{LEM-BCH-min-support} There does not exist a nonzero codeword whose support lies within an interval of $n-k$ consecutive positions. \end{lemma} {\em Proof: } Assume otherwise. Let $c(x)$ be such a codeword polynomial. Then, $c(x)$ must be of the form $$c(x)=x^i c^*(x)$$ where $c^*(x)$ is nonzero of degree less than $n-k$ by assumption. By definition, the generator polynomial $g(x)$ divides $c(x)$. Note that $x^i$ is coprime with $g(x)$; thus $g(x)$ must divide $c^*(x)$. This is apparently impossible, since $0\le \deg(c^*(x))<n-k=\deg(g(x))$. This concludes the lemma.
\hfill $\Box$ Consider a (possibly shortened) BCH ($n$, $k$) code with designed error-correction capability $t$. For a binary senseword $r(x)=\sum_{j=0}^{n-1} r_jx^j$, its syndromes are defined as \begin{equation} S_i\stackrel{\triangle}{=} r(\alpha^{i+1}), \hspace{0.2in} i=0, 1, \ldots, 2t-1. \end{equation} The Berlekamp algorithm is a simplified version of the Berlekamp-Massey algorithm for decoding binary BCH codes; it incorporates the special syndrome property \begin{equation} S_{2i+1}=S_{i}^2, \hspace{0.2in} i=0, 1, 2, \ldots \label{square-syndrome} \end{equation} which yields zero discrepancies at even iterations of the Berlekamp-Massey algorithm (cf. \cite{Blahut}). Below is a concisely reformulated Berlekamp algorithm. \vspace{0.1in} {\fbox {\bf ALG-1: Reformulated Berlekamp Algorithm}} {\small \begin{itemize} \item Input: \ ${\bf S}=[S_0, \;\; S_1,\; \;S_2,\; \;\ldots,\;\; S_{2t-1}]$ \item Initialization: \ $\Lambda^{(0)}(x)=1$, ${\mathcal B} ^{(-1)}(x)=x$, $L^{(0)}_\Lambda=0$, $L^{(-1)}_{\mathcal B} =1$ \item For $r=0$, 2, \ldots, $2t-2$, \ do: \begin{itemize} \item Compute $\Delta^{(r+2)}=\sum_{i=0}^{L^{(r)}_\Lambda} \Lambda^{(r)}_i \cdot S_{r-i}$ \item Compute $\Lambda^{(r+2)}(x)=\Lambda^{(r)}(x) -\Delta^{(r+2)} \cdot {\mathcal B} ^{(r-1)}(x)$ \item If $\Delta^{(r+2)} \ne 0$ and $2L^{(r)}_\Lambda\leq r$, \ then \begin{itemize} \item Set ${\mathcal B} ^{(r+1)}(x)\gets (\Delta^{(r+2)})^{-1} \cdot x^2\Lambda^{(r)}(x)$ \item Set $L^{(r+2)}_\Lambda\gets L^{(r-1)}_{\mathcal B} $, \ \ $L^{(r+1)}_{\mathcal B} \gets L^{(r)}_\Lambda+2$ \end{itemize} \item Else \begin{itemize} \item Set ${\mathcal B} ^{(r+1)}(x) \gets x^2 {\mathcal B} ^{(r-1)}(x)$ \item Set $L^{(r+1)}_{\mathcal B} \gets L^{(r-1)}_{\mathcal B} +2$, \ \ $L^{(r+2)}_\Lambda\gets L^{(r)}_\Lambda$ \end{itemize} \end{itemize} \item Output: \ $\Lambda(x)$, \ ${\mathcal B} (x)$, \ $L_\Lambda$, \ $L_{\mathcal B} $ \end{itemize} } In the above algorithm, ${\mathcal B} (x)$ is a shifted version of the polynomial $B(x)$ that is widely used in textbooks (cf. \cite{Blahut}); specifically, ${\mathcal B} (x)\stackrel{\triangle}{=} x^2 B(x)$ (we found ${\mathcal B} (x)$ more concise in the subsequent algorithmic descriptions). It is easily observed from the above algorithm that \begin{equation} L_\Lambda + L_{\mathcal B} = 2t+1. \end{equation} For conventional decoding, the so-called Chien search (p.~164, \cite{Blahut}) is an exhaustive root search among $\{\alpha^{-i}\}_{i=0}^{n-1}$ carried out on $\Lambda(x)$. It is worth noting that, for practical high-rate BCH codes, the Chien search is far more computationally intensive than the Berlekamp algorithm and syndrome computation. If the number of distinct roots equals $L_\Lambda$, then all root indexes correspond to the error locations; otherwise the decoding is declared a failure. We use the term ``$-1$ decoding'' to refer to correcting up to $t-1$ errors under the designed correction capability $t$. In \cite{Al09}, it is shown that performing reduced-1-bit decoding during the first iteration effectively achieves superior decoding performance by reducing the constituent-wise miscorrection rate. In Section IV, we shall also incorporate reduced-1-bit decoding into our proposed iterative decoding of BWP-BCH codes. In the following we present an efficient reduced-1-bit decoding algorithm. \vspace{0.1in} {\fbox {\bf ALG-2: $-1$ Decoding Algorithm}} \begin{itemize} \item Input: \ $S_0$, $S_1$, \ldots, $S_{2t-1}$ \item Apply the Berlekamp algorithm to produce $\Lambda(x)$ and $L_\Lambda$. \item If $L_\Lambda \geq t$, then declare failure.
\item Perform the Chien search to determine all roots. If the number of distinct roots equals $L_\Lambda$, then correct all erroneous bits; otherwise declare failure. \end{itemize} Note that the extra syndrome $S_{2t-1}$ is used for $(t-1)$-error correction. Its advantage is twofold. When the senseword is $(t-1)$-correctable, it guarantees $L_\Lambda<t$; otherwise, $L_\Lambda<t$ occurs only with probability $q^{-1}$, where $q$ denotes the size of the operation field \cite{Blahut, Wu08}, thus precluding the fruitless Chien search. Clearly, when there are $t$ errors, the Berlekamp algorithm results in $L_\Lambda=t$ and thus the Chien search is precluded. We next introduce the list decoding algorithms to correct an extra 1 bit beyond $t$. Define \begin{equation} Q(x)\stackrel{\triangle}{=} \frac{\Lambda(x)}{{\mathcal B} (x)} \label{def-Delta} \end{equation} and, for each $i\in \text{GF}(q)$, the bucket \begin{equation} \delta_i \stackrel{\triangle}{=} \{ j: \; Q(\alpha^{-j}) = i \}. \end{equation} An efficient extra-1-bit list decoding algorithm is given below, with minor modifications from \cite{Wu08}. \vspace{0.1in} {\fbox {\bf ALG-3: $+1$ List Decoding Algorithm}} \begin{enumerate} \item[Input:] \ $\Lambda(x)$, \ ${\mathcal B} (x)$, \ $L_\Lambda$ \item If $L_\Lambda > t+1 $, then declare a decoding failure. \item If $L_\Lambda \leq t $, then determine all distinct roots in $\{\alpha^{-i}\}_{i=0}^{n-1}$. If the number of (distinct) roots is equal to $L_\Lambda$, then return the corresponding unique codeword; otherwise, if $L_\Lambda < t$, declare a decoding failure (this step is identical to the normal Berlekamp algorithm). \item Initialize $\delta_i=\emptyset$, $i=0$, 1, 2, \ldots, $q-1$ \item For $i=0$, 1, 2, \ldots, $n-1$, {\bf do}: \begin{itemize} \item Evaluate $Q_i=\frac{\Lambda(\alpha^{-i})}{{\mathcal B} (\alpha^{-i})}$. \item If $Q_i\ne \infty$, then set $\delta_{Q_i}\gets \delta_{Q_i} \cup \{i\}$. \item If $|\delta_{Q_i}| = t+1$, then flip the bits on the indices in $\delta_{Q_i}$ and output the resulting candidate codeword. \end{itemize} \end{enumerate} Note that a minor correction in Step 2 is made over the original algorithm in \cite{Wu08}. Specifically, the clause ``if $L_\Lambda < t$'' is necessary for an early termination, whereas $L_\Lambda = t$ may still yield valid $(t+1)$-error corrections. On another note, ${\mathcal B} _i=0$ results in $Q_i=\infty$; since ${\mathcal B} (x)$ is not a valid error locator polynomial, its roots are safely ignored. We observe that the buckets $\{\delta_i\}_{i=0}^{q-1}$ may be efficiently implemented by a linked-list structure at a space complexity of $O(q)$. Overall, the computational complexity of the above algorithm remains the same as that of the Berlekamp algorithm, i.e., $O(tn)$, while utilizing a larger space complexity of $O(q)$. Note that the $n$ values $\{Q_i\}_{i=0}^{n-1}$ may contribute to at most $\lfloor\frac{n}{t+1}\rfloor$ groups of $t+1$ identical values. Therefore, the above one-step-ahead algorithm may produce up to $\lfloor\frac{n}{t+1}\rfloor$ candidate codewords. An alternative interpretation is to flip each of the $n$ bits and each time apply the Berlekamp algorithm. This produces at most $n$ codewords (assuming each decoding trial is successful). Note that any codeword is repeated $t+1$ times, i.e., the same codeword is yielded by flipping any of its $t+1$ error bits. However, if the actual minimum distance is at least $2t+3$, particularly for shortened codes, then there is at most a single candidate.
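To make the preceding machinery concrete, the following self-contained Python sketch implements syndrome computation, the reformulated Berlekamp algorithm (ALG-1), and the Chien search. It is merely an illustration under assumed parameters: the toy field $\text{GF}(2^4)$ with primitive polynomial $x^4+x+1$, the (15, 7) BCH code with $t=2$ used in the demo, and all function names are our own choices rather than part of any referenced design.
\begin{verbatim}
# Illustrative sketch of syndrome computation, ALG-1, and the Chien search
# over the toy field GF(2^4); parameters and names are our own choices.

M = 4                      # field dimension m
PRIM = 0b10011             # x^4 + x + 1, primitive over GF(2)
Q = 1 << M                 # field size q = 2^m
EXP = [0] * (2 * (Q - 1))  # antilog table, doubled to skip modular reduction
LOG = [0] * Q
x = 1
for i in range(Q - 1):
    EXP[i] = EXP[i + Q - 1] = x
    LOG[x] = i
    x <<= 1
    if x & Q:
        x ^= PRIM

def gmul(a, b):            # multiplication in GF(2^m)
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def ginv(a):               # multiplicative inverse of a nonzero element
    return EXP[(Q - 1 - LOG[a]) % (Q - 1)]

def poly_eval(poly, z):    # evaluate sum_i poly[i] * z^i
    acc, zp = 0, 1
    for c in poly:
        acc ^= gmul(c, zp)
        zp = gmul(zp, z)
    return acc

def syndromes(r, t):       # S_i = r(alpha^{i+1}), i = 0, ..., 2t-1
    return [poly_eval(r, EXP[i + 1]) for i in range(2 * t)]

def berlekamp(S, t):
    """ALG-1: returns Lambda(x), curly-B(x), L_Lambda, L_B; polynomials are
    coefficient lists (index = degree), and minus equals XOR in char. 2."""
    lam, cb = [1], [0, 1]            # Lambda(x) = 1, curly-B(x) = x
    L_lam, L_cb = 0, 1
    for r in range(0, 2 * t, 2):
        delta = 0                    # discrepancy Delta^{(r+2)}
        for i in range(min(L_lam, r) + 1):
            if i < len(lam):
                delta ^= gmul(lam[i], S[r - i])
        new_lam = [0] * max(len(lam), len(cb))
        for i, c in enumerate(lam):
            new_lam[i] ^= c                     # Lambda^{(r)}(x)
        for i, c in enumerate(cb):
            new_lam[i] ^= gmul(delta, c)        # - Delta * curly-B^{(r-1)}(x)
        if delta != 0 and 2 * L_lam <= r:
            inv = ginv(delta)
            cb = [0, 0] + [gmul(inv, c) for c in lam]   # Delta^{-1} x^2 Lambda
            L_lam, L_cb = L_cb, L_lam + 2
        else:
            cb = [0, 0] + cb                            # x^2 * curly-B
            L_cb += 2
        lam = new_lam
    return lam, cb, L_lam, L_cb

def chien_search(lam, n):  # indices i with Lambda(alpha^{-i}) = 0
    return [i for i in range(n)
            if poly_eval(lam, EXP[(Q - 1 - i) % (Q - 1)]) == 0]

# Demo: (15, 7) BCH code with t = 2; flip two bits of the all-zero codeword.
n, t = 15, 2
word = [0] * n
for e in (3, 11):
    word[e] ^= 1
lam, cb, L_lam, L_cb = berlekamp(syndromes(word, t), t)
print(chien_search(lam, n), L_lam, L_lam + L_cb)   # -> [3, 11] 2 5
\end{verbatim}
The demo injects two errors into the all-zero codeword and recovers their positions; it also illustrates the identity $L_\Lambda+L_{\mathcal B}=2t+1$ upon termination. For some (particularly low-rate) BCH codes, it occurs that $S_{2t+2}$, \ldots, $S_{2t+2\tau}$ ($\tau\geq 1$) are known.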
By sweeping $S_{2t}$ over $\text{GF}(q)$, up to $t+1+\tau$ errors are list decoded with a computational complexity of $O(qnt)$. The algorithm is detailed below. \vspace{0.1in} {\fbox {\bf ALG-4: $+(\tau+1)$ List Decoding with Known $\{S_{2t+2i}\}_{i=1}^\tau$ }} \begin{itemize} \item[Input:] \ $\Lambda^{(2t)}(x)$, \ ${\mathcal B} ^{(2t-1)}(x)$, \ $L^{(2t)}_\Lambda$, \ $L^{(2t-1)}_{\mathcal B} $, \ $\{S_{2t+2i}\}_{i=1}^\tau$ \item For $S_{2t}=0$, 1, 2, \ldots, $q-1$, \ {\bf do}: \begin{itemize} \item For $r=2t$, $2t+2$, \ldots, $2t+2\tau$, \ {\bf do}: \begin{itemize} \item Compute $\Delta^{(r+2)}=\sum_{i=0}^{L^{(r)}_\Lambda} \Lambda^{(r)}_i \cdot S_{r-i}$ \item Compute $\Lambda^{(r+2)}(x)=\Lambda^{(r)}(x) -\Delta^{(r+2)} \cdot {\mathcal B} ^{(r-1)}(x)$ \item If $\Delta^{(r+2)} \ne 0$ and $2L^{(r)}_\Lambda\leq r$, \ then \begin{itemize} \item Set ${\mathcal B} ^{(r+1)}(x)\gets (\Delta^{(r+2)})^{-1} \cdot x^2\Lambda^{(r)}(x)$ \item Set $L^{(r+2)}_\Lambda\gets L^{(r-1)}_{\mathcal B} $, \ \ $L^{(r+1)}_{\mathcal B} \gets L^{(r)}_\Lambda+2$ \end{itemize} \item Else \begin{itemize} \item Set ${\mathcal B} ^{(r+1)}(x) \gets x^2 {\mathcal B} ^{(r-1)}(x)$ \item Set $L^{(r+1)}_{\mathcal B} \gets L^{(r-1)}_{\mathcal B} +2$, \ \ $L^{(r+2)}_\Lambda\gets L^{(r)}_\Lambda$ \end{itemize} \end{itemize} \item Perform the Chien search on $\Lambda(x)$. If the number of distinct roots equals $L_\Lambda$, then output the resulting candidate codeword. \end{itemize} \end{itemize} Consider the (63, 24) BCH code with $t=7$. Its generator polynomial contains the roots $\alpha$, $\alpha^3$, \ldots, $\alpha^{13}$, but also two extra roots $\alpha^{17}$ and $\alpha^{19}$. For this code, up to 3 extra errors, i.e., up to 10 errors in total, can be list decoded by sweeping $S_{14}$. Note that the above algorithm may further incorporate the preceding extra-1-bit decoding to achieve extra-$(\tau+2)$-bit list decoding at the same order of complexity. We proceed to present an efficient extra-2-bit list decoding algorithm. The basic idea is to apply a one-pass Chase decoding \cite{Wu12} and then to follow with the above extra-1-bit decoding. \cite{Wu12} described a one-pass Chase decoding algorithm in which the error locator polynomial associated with flipping a bit can be obtained with a computational complexity of $O(t)$ per flipped bit. The following describes one-pass one-bit-flipping Chase decoding. \vspace{0.1in} {\fbox {\bf ALG-5: One-Pass One-Bit-Flipping Chase Decoding Algorithm}} \begin{itemize} \item[Input:] \ $\Lambda(x)$, \ ${\mathcal B} (x)$, \ $L_\Lambda$, \ $L_{\mathcal B} $ \item For $i=0$, 1, 2, \ldots, $n-1$, \ do: {\small \begin{enumerate} \item Evaluate \ $\Lambda_i\gets \Lambda(\alpha^{-i}), \hspace{0.1in} {\mathcal B} _i\gets {\mathcal B} (\alpha^{-i})$ \item Update polynomials: \begin{itemize} \item {\bf Case 1}: $\Lambda_i=0$ $\vee$ ($\Lambda_i\ne 0$ $\wedge$ ${\mathcal B} _i\ne 0$ $\wedge$ $L_\Lambda \geq L_{\mathcal B} $) \begin{equation*} \left\{ \begin{array}{l} \Lambda^{(i)}(x) \gets {\mathcal B} _i \cdot \Lambda(x) - \Lambda_i \cdot {\mathcal B} (x) \\ {\mathcal B} ^{(i)}(x) \gets (x^2-\alpha^{-2i}) {\mathcal B} (x) \\ L^{(i)}_\Lambda \gets L_\Lambda, \hspace{0.1in} L^{(i)}_{\mathcal B} \gets L_{\mathcal B} +2 \end{array}\right.
\end{equation*} \item {\bf Case 2}: ${\mathcal B} _i=0$ $\vee$ ($\Lambda_i\ne 0$ $\wedge$ ${\mathcal B} _i\ne 0$ $\wedge$ $L_\Lambda<L_{\mathcal B} -1$) \begin{equation*} \left\{ \begin{array}{l} \Lambda^{(i)}(x) \gets (x^2-\alpha^{-2i}) \Lambda(x) \\ {\mathcal B} ^{(i)}(x) \gets {\mathcal B} _i \cdot x^2\Lambda(x) - \alpha^{-2i}\Lambda_i \cdot {\mathcal B} (x) \\ L^{(i)}_\Lambda \gets L_\Lambda+2, \hspace{0.1in} L^{(i)}_{\mathcal B} \gets L_{\mathcal B} \end{array}\right. \end{equation*} \item {\bf Case 3}: $\Lambda_i\ne 0$ $\wedge$ ${\mathcal B} _i\ne 0$ $\wedge$ $L_\Lambda=L_{\mathcal B} -1$ \begin{equation*} \left\{ \begin{array}{l} \Lambda^{(i)}(x) \gets {\mathcal B} _i \cdot \Lambda(x) - \Lambda_i \cdot {\mathcal B} (x) \\ {\mathcal B} ^{(i)}(x) \gets {\mathcal B} _i \cdot x^2\Lambda(x) - \alpha^{-2i}\Lambda_i \cdot {\mathcal B} (x) \\ L^{(i)}_\Lambda \gets L_\Lambda+1, \hspace{0.1in} L^{(i)}_{\mathcal B} \gets L_{\mathcal B} +1 \end{array}\right. \end{equation*} \end{itemize} \end{enumerate} } \end{itemize} For each pair $\left(\Lambda^{(i)}(x), {\mathcal B} ^{(i)}(x)\right)$, $i=0, 1, \ldots, n-1$, we may apply the proposed $+1$ decoding algorithm to determine all candidates within $t+1$ bits of difference (note that the index $i$ is pre-flipped). Thus, combining the above one-pass one-bit-flipping Chase decoding algorithm with the $+1$ decoding algorithm effectively list decodes all codewords up to distance $t+2$. Clearly, the overall computational complexity is $O(n^2t)$, due to $n$ deployments of extra-1-bit decoding. We now explore ways to reduce the complexity. We first note that $\{(\Lambda_i,\; {\mathcal B} _i)\}_{i=0}^{n-1}$ are already evaluated by the above algorithm, so, instead of updating the polynomial pairs $(\Lambda^{(i)}(x), {\mathcal B} ^{(i)}(x))$, $i=0, 1, \ldots, n-1$, it takes $O(n)$ to evaluate each vector pair $\left( \{\Lambda_j^{(i)} \}_{j=0}^{n-1},\;\; \{{\mathcal B} _j^{(i)} \}_{j=0}^{n-1} \right)$ for an index $i$. Consequently, the $+1$ decoding algorithm takes merely $O(n)$ complexity. We further note that, when the $i$-th index is flipped for $+1$ decoding, all candidates of $(t+2)$-bit correction involving at least one bit among the indexes $\{0, 1, 2, \ldots, i-1\}$ have already been listed; therefore, the extra-1-bit decoding associated with the $i$-th bit flip suffices to search through the index subset $\{i+1, i+2, \ldots, n-1\}$. The detailed algorithmic procedure is described below. \vspace{0.1in} \noindent {\fbox {\bf ALG-6: $+2$ List Decoding Algorithm}} \begin{enumerate} \item[Input:] \ $\Lambda(x)$, \ ${\mathcal B} (x)$, \ $L_\Lambda$, \ $L_{\mathcal B} $ \item Evaluate and store \ $\{\Lambda_i\}_{i=0}^{n-1} \gets \{\Lambda(\alpha^{-i})\}_{i=0}^{n-1}, \hspace{0.1in} \{{\mathcal B} _i\}_{i=0}^{n-1} \gets \{{\mathcal B} (\alpha^{-i})\}_{i=0}^{n-1}.$ \item {\bf For} $i=0$, 1, 2, \ldots, $n-t-2$, \ {\bf do}: {\small \begin{enumerate} \item Initialize $\delta_j=\emptyset$, $j=0$, 1, 2, \ldots, $q-1$.
\item For $j= i+1$, $i+2$, \ldots, $n-1$, \ {\bf do}: \begin{itemize} \item Compute \begin{equation*} {\bar Q}_j \gets \begin{cases} \frac{ {\mathcal B} _i \Lambda_j + \Lambda_i {\mathcal B} _j }{ (\alpha^{-2(j-i)}+1){\mathcal B} _j }, & \text{if } \Lambda_i=0 \vee (\Lambda_i\ne 0 \wedge {\mathcal B} _i\ne 0 \wedge L_\Lambda \geq L_{\mathcal B} ) \\ \frac{ (\alpha^{-2j} + \alpha^{-2i}) \Lambda_j }{ \alpha^{-2(j-i)} {\mathcal B} _i \Lambda_j+\Lambda_i {\mathcal B} _j }, & \text{if } {\mathcal B} _i=0 \vee (\Lambda_i\ne 0 \wedge {\mathcal B} _i\ne 0 \wedge L_\Lambda<L_{\mathcal B} -1) \\ \frac{ {\mathcal B} _i \Lambda_j + \Lambda_i {\mathcal B} _j }{ \alpha^{-2(j-i)} {\mathcal B} _i \Lambda_j + \Lambda_i {\mathcal B} _j }, & \text{otherwise} \end{cases} \end{equation*} \item If ${\bar Q}_j\ne \infty$, then set $\delta_{{\bar Q}_j}\gets \delta_{{\bar Q}_j} \cup \{j\}$. \item If $|\delta_{{\bar Q}_j}| = t+1$, then flip the bits on the indices in $\delta_{{\bar Q}_j}\cup \{i\}$ and output the resulting candidate codeword. \end{itemize} \end{enumerate} } \end{enumerate} Note that ${\bar Q}_j$ in the above Step 2.b is scaled by the constant $\alpha^{-2i}$ without altering the result. Clearly, the above algorithm exhibits a computational complexity of $O(n^2)$ and a space complexity of $O(q)$. Assume, in a perfect scenario, that flipping each pair of bits results in a candidate with $t$ errors. There are then up to $\binom{n}{2}$ candidate codewords. Note that each candidate is counted exactly $\binom{t+2}{2}$ times, because flipping any 2 out of the $t+2$ errors leads to the same codeword. Therefore, the number of candidate codewords is bounded by $\frac{n(n-1)}{(t+2)(t+1)}$. However, if the actual minimum distance is at least $2t+3$, particularly in the case of shortened codes, then there exist at most $\lfloor \frac{n}{t+2} \rfloor$ candidates. Moreover, if the actual minimum distance is at least $2t+5$, particularly in the case of highly shortened codes, then there is at most a single candidate. An alternative but less efficient approach to performing extra-2-bit list decoding is to sweep all possibilities of $S_{2t}$, equivalently all possibilities of $\Delta_{2t+2}$, and then to deploy extra-1-bit list decoding over each of the $q$ resulting pairs $\big(\Lambda(x), \; {\mathcal B} (x)\big)$. Its complexity, after appropriate optimization, is reduced to $O(qn)$. This approach is akin to the $t+1$ list decoding algorithm for Reed-Solomon codes \cite{Egorov04}. The proposed $+2$ decoding algorithm is clearly advantageous when $n\ll q$, which often holds true during iterative decoding of BWP-BCH and other product-like BCH codes. The algorithms presented in this section assume the consecutive error locators $\{\alpha^i\}_{i=0}^{n-1}$, as defined for conventionally shortened codes. In the scenario of BWP-BCH decoding, due to partial correction by the cross decoding, the error locators are usually not consecutive. Let $\{\alpha_i\}_{i=0}^{n^*-1}$ denote the set of uncorrected error locators (where $n^*\leq n$). To this end, $\alpha^i$ (likewise $\alpha^j$) in the above algorithms is to be replaced by $\alpha_i$, such that $\alpha^{-i}\to \alpha_i^{-1}$ and $\alpha^{-2i}\to \alpha_i^{-2}$.
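For completeness, the following minimal Python sketch condenses the bucketing of the $+1$ list decoding (ALG-3). It reuses the helpers gmul, ginv, poly_eval, EXP, and Q from the earlier sketch; the function name and structure are again our own illustrative choices. Step 2 of ALG-3 (the case $L_\Lambda \le t$) is the ordinary Berlekamp--Chien path already shown there, so only the one-step-ahead bucketing of Steps 3 and 4 is sketched.
\begin{verbatim}
# Illustrative sketch of ALG-3 bucketing; reuses gmul, ginv, poly_eval,
# EXP, and Q from the earlier sketch. Names are our own choices.

def plus_one_list_decode(lam, cb, L_lam, n, t):
    """Bucket the n values Q_i = Lambda(alpha^-i) / curly-B(alpha^-i);
    every bucket that collects t+1 indices yields one flip-candidate set."""
    if L_lam > t + 1:
        return []                    # Step 1: declare a decoding failure
    candidates, buckets = [], {}
    for i in range(n):
        z = EXP[(Q - 1 - i) % (Q - 1)]           # alpha^{-i}
        num, den = poly_eval(lam, z), poly_eval(cb, z)
        if den == 0:
            continue                 # Q_i = infinity: ignore roots of curly-B
        q_i = gmul(num, ginv(den))
        bucket = buckets.setdefault(q_i, [])
        bucket.append(i)
        if len(bucket) == t + 1:     # flipping these t+1 bits gives a candidate
            candidates.append(list(bucket))
    return candidates
\end{verbatim}
Consistent with the complexity claims above, the $n$ evaluations cost $O(tn)$ field operations and the buckets cost $O(q)$ space; in the non-consecutive-locator setting of BWP-BCH decoding just described, $z$ would range over the uncorrected locator inverses $\{\alpha_i^{-1}\}$ instead.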
\begin{figure}[t] \begin{center} \centering \includegraphics[width=6.8in]{BWP_BCH_CW.pdf} \vspace{-0.5in} \caption{An example illustration of the case-1 BWP-BCH codeword.} \label{FIG:codeword-case1} \end{center} \end{figure} \begin{figure}[t] \begin{center} \centering \includegraphics[width=6.8in]{BWP_BCH_CW2.pdf} \caption{An example illustration of the case-2 BWP-BCH codeword.} \label{FIG:codeword-case2} \end{center} \end{figure} \section{Designing Scalable Block-Wise Product BCH Codes} Instead of following the conventional wisdom of studying a regularly defined BWP-BCH code, we start from scratch with an arbitrarily given message length $K$ and parity length $R$. We leverage the following freedoms to design a ``good'' BWP-BCH code, wherein ``good'' qualitatively means good waterfall performance and a low error floor: \\ $(i)$. block size $b$; \\ $(ii)$. BCH vs. eBCH;\\ $(iii)$. serial vs. parallel concatenation; \\ $(iv)$. square vs. rectangular shape in organizing message blocks; \\ $(v)$. concatenation of an outer or inner erasure code. Specifically, we leverage these freedoms under the following guidelines.\\ $(i)$. The block size $b$ affects the BCH message length, the operation field, and the error-correction capability. $b$ must be relatively large so that the resulting BCH codes exhibit a low miscorrection rate; it is preferred to have $t\geq 4$. Also, it is preferred to choose $b$ a multiple of 8 to facilitate hardware implementation. However, it is unnecessary to force $b$ to divide $K$, in contrast to the literature \cite{Cho14, Kim15}. This is because a partial last message block (wherein $b$ does not divide $K$) can be mitigated at no rate penalty. Specifically, the partial block is padded with $\left\lceil \frac{K}{b} \right\rceil \cdot b -K$ zeros to form a full block for both inner and outer coding, but the padded zeros are not actually transmitted, and the same number of zeros is re-padded at the receiver. Though it is possible to harness the error-free property of the padded zeros to rule out certain miscorrections, we suggest treating the padded block as a regular block, so that all row and column BCH constituent words are treated uniformly. \\ $(ii)$. An eBCH code includes an extra parity bit to enforce even codeword weights, and thus halves the miscorrection rate. Using a parity bit also makes it possible to reduce the decoding complexity. For a given eBCH senseword, either $+1$ list decoding or $+2$ list decoding is applicable, but not both. This is because a parity syndrome of 0 indicates that the number of errors must be even, whereas a parity syndrome of 1 indicates an odd number of errors. In \cite{Chen09}, it is proven that the number of Chase-II decoding trials of eBCH codes can be cut by half without performance loss, enabling more efficient soft-decision decoding of product eBCH codes. Our extensive simulations also indicate that BWP-eBCH codes perform slightly better than the counterpart BWP-BCH codes, attributable to the lower miscorrection rate of eBCH codes. Therefore, we shall use eBCH codes, instead of BCH codes, as constituent codes. \\ $(iii)$. In parallel concatenation, each message block is protected by a row and a column eBCH code, but neither row nor column eBCH parity is protected by the other, whereas in serial concatenation, one dimension of parities is covered by the other dimension of parities. It is known that BWP codes lose the key parity-on-parity property of conventional product codes.
Parity-on-parity in conventional product codes yields a minimum distance equal to the product of the row and column minimum distances, but this is not true for BWP codes, even when one dimension of parities is protected, i.e., under serial concatenation. In short, serial concatenation does not exhibit a conspicuous benefit, such as a product of minimum distances. Therefore, we choose parallel concatenation, wherein row and column decoding are symmetrical. The symmetry effectively enables the same circuit to be designed for row and column decoding. Moreover, parallel concatenation inherently allows for row and column encoding in parallel. \\ Lemma~\ref{LEM-BCH-min-support} sheds some light on minimum weight codewords (associated with the minimum distance). There is no codeword whose support lies only in eBCH parities. Furthermore, if the block size is no greater than the eBCH parity length (which is usually the case), then there does not exist a codeword whose support lies only in one message block. An inner $f$-erasure RS code guarantees at least $f+1$ non-zero blocks, yielding at least $f+1$ uncorrelated non-zero eBCH codewords. \\ $(iv)$. Message blocks are organized in a near square shape so that row and column decoding operations are nearly identical. To enforce code scalability as well as implementation simplicity, all eBCH codes are defined in the same field (even if the last data column has a single block).\\ $(v)$. Concatenation of an erasure code mitigates the error floor at the price of lower BCH correction capabilities. An inner Reed-Solomon (RS) code is incorporated to mitigate the error floor. This is different from \cite{Yu14}, wherein an outer erasure code is concatenated. An outer code has to protect the message blocks and the eBCH parities, along with its own parity; therefore, it demands more parity than an inner code which protects just the message blocks. The rationale behind not protecting the eBCH parities is that, when rows/columns have few errors in their data messages but excessive errors in their parities which result in decoding failure, the errors in the data messages can predominantly be corrected by cross decoding, and thus the errors in the parities can simply be recovered through re-encoding. In fact, our extensive simulations indicate that up to 4 RS parity symbols suffice to reduce the error floor satisfactorily for our codes of interest. Therefore, adopting an inner RS code reserves more redundancy for eBCH parity and also reduces the implementation complexity. Note that two erasure blocks enable recovery from single-row plus two-column failures or single-column plus two-row failures. Likewise, three erasure blocks enable recovery from single-row plus three-column failures or single-column plus three-row failures. Four erasure blocks enable recovery from single-row plus four-column failures, single-column plus four-row failures, or two-row plus two-column failures. When the inner code is not used, i.e., $f=0$, the BWP-BCH code suffers from the dominant failure mechanism of a single uncorrectable row and column \cite{Kim15}. The authors in \cite{Kim15} proposed a collaborative method to forge a new BCH senseword from the failed row and column constituents. However, this approach only succeeds in some cases, while requiring extra effort to forge new syndromes. On the other hand, the single row and column failure is handily thwarted using inner parity coding, at the cost of one parity block of overhead. \\ $(vi)$. The construction easily accommodates multiple message lengths and parity lengths at high granularity.
This is particularly important in data storage, wherein different vendors/customers have slightly different requests. Given the block size $b$, the number of message blocks is $\lceil \frac{K}{b} \rceil$. Assume also that $f$-erasure RS encoding is deployed. The inner RS code length is given by $\eta \stackrel{\triangle}{=} \lceil \frac{K}{b} \rceil + f$. Evidently, the minimum field dimension for RS coding is $\lceil \log_2 \eta \rceil$. Let $p$ satisfy \begin{equation} p(p-1) < \eta \leq p(p+1). \label{block-square-dimension} \end{equation} Note that the set of positive integers, $\mathbb{Z}^+$, is disjointly partitioned into $$\mathbb{Z}^+=\bigcup_{p=1}^\infty \Big(p(p-1), \;\; p(p+1)\Big].$$ Thus, any positive integer belongs to exactly one of the intervals $\big(p(p-1),\; p(p+1)\big]$. To solve for $p$, first let $a$ be the real positive root of the equation $$a(a+1)=\eta.$$ We obtain $$a=\frac{-1+\sqrt{1+4\eta}}{2}.$$ It is easily verified that $p=\lceil a \rceil$, i.e., \begin{equation} p = \left\lceil \frac{-1+\sqrt{1+4\eta}}{2} \right\rceil \end{equation} is the unique solution of \eqref{block-square-dimension}. We further partition into two cases. The first case is such that \begin{equation} p(p-1) < \eta \leq p^2. \end{equation} Then the inner RS code is arranged into a $p\times p$ matrix. There are $2p$ eBCH words, each allocated an average of $\lceil\frac{R-fb}{2p} \rceil$ parity bits. Accordingly, the eBCH code field dimension is determined by \begin{equation} m = \left \lceil \log_2 \left(pb + \left\lceil\frac{R-fb}{2p} \right\rceil \right) \right \rceil, \label{field-dimension} \end{equation} and the base correction capability is given by \begin{equation} t = \left \lfloor \frac{R-fb-2p}{2pm} \right\rfloor, \end{equation} wherein we assume the code rate is high enough to meet the condition of Lemma 1. For lower rate codes where Lemma 1 does not apply, $t$ is determined through computer search. Note that there remains extra correction power of \begin{equation} \theta = \left \lfloor \frac{R-fb-2p}{m} \right \rfloor - 2pt. \end{equation} When $\theta>0$, the $\theta$ longest rows/columns (in the sequel, a row/column always means a block row/column) are assigned correction capability $t+1$, whereas the remaining $2p-\theta$ rows/columns are assigned correction capability $t$. In this case, due to the uneven distribution of eBCH parities, it is necessary to cross-check the validity of \eqref{field-dimension}, such that \begin{equation} pb + (t+1)m+1 < 2^m. \end{equation} Figure~\ref{FIG:codeword-case1} illustrates an example of the above BWP-BCH code construction, wherein the last partial data block is padded with zeros and a single-parity RS code is used. The second case is such that \begin{equation} p^2 < \eta \leq p(p+1). \end{equation} Then the inner RS code is arranged into a $p\times (p+1)$ matrix. There are $2p+1$ eBCH words, each allocated an average of $\lceil\frac{R-fb}{2p+1} \rceil$ parity bits. Accordingly, the eBCH code field dimension is determined by \begin{equation} m = \left \lceil \log_2 \left((p+1)b + \left\lceil\frac{R-fb}{2p+1} \right\rceil \right) \right \rceil, \end{equation} and the base correction capability is given by \begin{equation} t = \left \lfloor \frac{R-fb-(2p+1)}{(2p+1)m} \right \rfloor. \end{equation} The residual correction power is determined by \begin{equation} \theta = \left\lfloor \frac{R-fb-(2p+1)}{m} \right\rfloor - (2p+1)t.
\end{equation} Likewise, the $\theta$ longest rows/columns are assigned correction capability $t+1$, whereas the remaining $2p+1-\theta$ rows/columns are assigned correction capability $t$. When $\theta>0$, it is necessary to validate the field dimension $m$, such that \begin{equation} (p+1)b + (t+1)m+1 < 2^m. \end{equation} Figure~\ref{FIG:codeword-case2} illustrates an example of the above BWP-BCH code construction. Clearly, the foregoing coding configuration is completely determined by the two parameters $b$ and $f$. The block size $b$ serves to optimize the waterfall performance, whereas $f$ is mainly associated with the error floor: a larger $f$ yields a lower error floor, however at the price of a rate penalty. Since the inner blocks are arranged in a (near) square shape, the dominant error event is such that an equal number of rows and columns are uncorrectable, yielding a square number of intersecting blocks. For this reason, it is preferred to choose $f$ to be a square number, say 1 or 4. Our simulations show that $f=4$ achieves a good balance between a low error floor and superior waterfall performance. The failure probability of $i$ row failures and $j$ column failures is extensively investigated in the literature \cite{Cho14, Yu14, Kim15}. Our case also needs to take into account the different correction capabilities among rows (columns). Consider a data storage example wherein the data length is 4K bytes, i.e., $K=32768$, and the parity length is 455 bytes, i.e., $R=3640$. The code rate is 0.9. Assume the block size is $b=32$ and the number of RS parity blocks is $f=4$. The number of inner code blocks is then $\left\lceil \frac{K}{b} \right\rceil + f = 1024+4=1028$. Accordingly, it belongs to case 2, resulting in $p=32$. The inner RS code is organized into 32 rows by 33 columns, wherein the last column has only 4 blocks. The resulting eBCH codes are defined over a field of dimension $m=11$. The base correction capability is given by $t = \left \lfloor \frac{3640-193}{65\times 11} \right \rfloor = 4$. The residual correction power is determined by $\theta = \left\lfloor \frac{3640-193}{11} \right\rfloor - 65\times 4 = 53$. We obtain the complete BWP-BCH configuration as in Table~\ref{Tab:Codemapping1}. \begin{table} \centering \caption{BWP-BCH mapping for $(K=32768, R=3640, b=32, f=4)$. \label{Tab:Codemapping1}} \begin{tabular}{|l|c|c|} \hline Rows/Columns & Inner Blocks & eBCH $t$ \\ \hline 4 rows & 33 & 5 \\ \hline 28 rows & 32 & 5 \\ \hline 21 columns & 32 & 5 \\ \hline 10 columns & 32 & 4 \\ \hline 1 column & 4 & 4 \\ \hline \end{tabular} \end{table} Consider a different block size $b=15$ while keeping $f=4$. The number of inner blocks is now 2189. Accordingly, the inner RS code is organized into a $47\times 47$ block matrix, wherein the last column has 27 blocks. The resulting eBCH codes are defined over a 10-bit field, with the base correction capability $t=\left \lfloor \frac{3640-154}{2\times 47\times 10} \right\rfloor = 3$. The residual correction power is $\theta = \left\lfloor \frac{3640-154}{10} \right\rfloor - 94\times 3 = 66$. Table~\ref{Tab:Codemapping2} shows the detailed BWP-BCH configuration. \begin{table} \centering \caption{BWP-BCH mapping for $(K=32768, R=3640, b=15, f=4)$.
\label{Tab:Codemapping2}} \begin{tabular}{|l|c|c|} \hline Rows/Columns & Inner Blocks & eBCH $t$ \\ \hline 27 rows & 47 & 4 \\ \hline 20 rows & 46 & 4 \\ \hline 19 columns & 47 & 4 \\ \hline 27 columns & 47 & 3 \\ \hline 1 column & 27 & 3 \\ \hline \end{tabular} \end{table} \begin{figure}[h] \begin{center} \centering \includegraphics[width=6.8in]{BWP_BCH_Encoder.pdf} \vspace{-0.5in} \caption{Encoder block diagram for BWP-BCH codes} \label{FIG:Encoder} \end{center} \end{figure} Note that one condition must be satisfied for nontrivial RS coding, \begin{equation} 2^b \geq \eta \stackrel{\triangle}{=} \left\lceil \frac{K}{b} \right\rceil +f, \label{def-eta} \end{equation} where $f>1$ denotes the number of RS parity blocks and $\eta$ denotes the number of inner RS blocks (for short, inner blocks). However, this enforcement is not needed if $f=1$, i.e., in the case of trivial parity coding. For implementation simplicity, we choose the generator polynomial \begin{equation} g_{RS}(x) = (x-1)(x-\beta)\cdots(x-\beta^{f-1}), \end{equation} where $\beta$ denotes a primitive element of the RS operation field. Its erasure-only decoding effectively recovers up to $f$ erased symbols, as briefly described below (cf. \cite{Blahut}). Upon receiving a senseword $y(x)$, its syndromes are computed as follows \begin{equation} {\hat S}_i = y(\beta^i), \;\;\;\; i=0, 1, \ldots, f-1, \end{equation} wherein the notation ${\hat S}$ is to differentiate from the eBCH syndromes $S$. Let $X_0$, $X_1$, \ldots, $X_{e-1}$ ($e\leq f$) be the (known) erasure locators. Then the erasure locator polynomial is given by \begin{equation} {\hat \Lambda}(x) \stackrel{\triangle}{=} (1-X_0x)(1-X_1x)\cdots(1-X_{e-1}x) \end{equation} and the erasure evaluator polynomial by \begin{equation} {\hat \Omega}(x) \stackrel{\triangle}{=} {\hat \Lambda}(x) {\hat S}(x) \pmod{x^f}. \end{equation} Erasure-only decoding is successful if and only if \begin{equation} \deg({\hat \Omega}(x)) < e. \end{equation} If true, then the corresponding erasure values, $\{Y_i\}_{i=0}^{e-1}$, are retrieved by \begin{equation} Y_i = \frac{{\hat \Omega}(X_i^{-1})}{ {\hat \Lambda}_\text{odd}(X_i^{-1}) }, \;\;\; i=0, 1, \ldots, e-1, \end{equation} where ${\hat \Lambda}_\text{odd}(x)$ denotes the odd-term polynomial of ${\hat \Lambda}(x)$. In some cases \eqref{def-eta} is not met, and then RS coding with $f>1$ is infeasible. We next consider relaxing \eqref{def-eta}. Let $[D_{i,j}]$ be the $p\times (p+1)$ data block array, wherein empty array blocks are treated as zero blocks. The RS data vector, $[{\hat D}_0, {\hat D}_1, {\hat D}_2, \ldots, {\hat D}_{2p-2}]$, is produced by XORing $[D_{i,j}]$ reverse-diagonally, i.e., \begin{equation} {\hat D}_i\stackrel{\triangle}{=} \bigoplus_{l+j=i}D_{l, j}, \;\;\; i=0, 1, \ldots, 2p-2, \label{RS-Data-Symbol} \end{equation} where $\oplus$ denotes bit-wise XOR. Note that ${\hat D}_{2p-1}$ is not defined, as $D_{p-1, p}$ is reserved for RS parity. Accordingly, \eqref{def-eta} is relaxed to \begin{equation} 2^b > 2p-1+f. \end{equation} It is worth noting that employing \eqref{RS-Data-Symbol} also renders a simpler implementation. This is because a block can be partitioned into multiple sub-blocks such that each of them is separately protected by an RS code defined over a small operation field. Clearly, failed blocks in single-column plus multiple-row failures or single-row plus multiple-column failures belong to different RS symbols, and thus can be uniquely recovered. However, it may not fully work for $f=4$.
This is because, when a two-row plus two-column failure occurs, two failed blocks may lie on the same reverse diagonal and thus belong to the same RS symbol. In this case, the remaining two failed blocks must belong to different RS symbols and thus are uniquely recovered. Consequently, the remaining two (uncorrelated) blocks are predominantly corrected by decoding row-wise or column-wise. For implementation simplicity, it is desirable to use the above method when $f\leq 4$. We next describe an efficient high-speed encoder. First, note that it is common for the block size to be much greater than the required RS symbol size, i.e., $ b \gg \log_2(\eta)$. Instead of treating a block as a single RS symbol, we divide a block into multiple RS symbols and encode each into a separate RS code. This dramatically reduces the circuit complexity of the finite field multiplier and divider. For example, 10-bit RS coding suffices for $b=40$; thus, it suffices to partition each block into 4 RS symbols and encode them into 4 RS codes, respectively. Secondly, assuming a column-wise block of data is transferred each clock cycle, we may theoretically use one high-speed BCH encoder (cf. \cite{Pei92, Zhang05}) to process $b$ bits per clock. However, it is difficult to offload the parity and immediately switch to the next column encoding (with proper register initializations). To this end, we use two column encoders that ping-pong for the task. On the other hand, a row encoder only needs to process a block of data upon the transfer of $p$ column-wise blocks of data. Our solution is to add a block buffer in front of each row encoder and design a low-performance encoder that processes only $\lceil b/p\rceil $ bits per cycle (the eBCH encoder is halted if it completes in fewer than $p$ cycles). As far as encoding a block is concerned, the row and column eBCH encoders, as well as the RS encoders, follow the same first-in-first-out bit sequence. Figure~\ref{FIG:Encoder} depicts the block diagram of the proposed $b$-bits/clock encoder. Note that the proposed encoder applies to any number of inner blocks within $\big(p(p-1),\; p(p+1)\big]$, while at the same time accommodating different eBCH parities. It is worth pointing out that the enforcement of a single BCH field allows the BCH encoder/decoder to be effectively shared. \section{New Iterative Hard Decoding of BWP-BCH Codes} Evidently, all existing BWP-BCH decoding algorithms are applicable (possibly with minor modification) to the proposed codes. In this section, we explore more efficient hard decoding of the proposed BWP-BCH codes. We aim to lower the error floor by effectively handling three types of dominant error events. The first one is excessive errors in a few data blocks, which cause both row and column decoding failures. The second one is excessive errors in a few eBCH parities, which are not cross protected. The last one is miscorrection of eBCH constituents, due to the small minimum distance. We also aim to boost waterfall performance by intelligently incorporating the proposed extra-1-bit and extra-2-bit list decoding algorithms. We present a novel iterative decoding algorithm for the proposed BWP-BCH codes, in the following three phases. \begin{itemize} \item[I.] Iteratively alternate row and column reduced-1-bit decoding until the process stalls or a pre-determined maximum number of iterations is reached. \item[II.] Iteratively alternate row and column regular decoding until the process stalls or a pre-determined maximum number of iterations is reached. \item[III.]
Iteratively alternate row and column list decoding up to extra-2-bit errors until the process stalls or a pre-determined maximum number of iterations is reached. \end{itemize} The underlying purpose of Phase-I is to reduce the miscorrection rate so as to avoid error amplification. We next dive into the implementation details. We say that the decoding is {\em stalling} if the numbers of failed rows and columns remain unchanged over a full iteration. Upon decoding stalling, the number of failed intersecting blocks is defined as the product of the number of failed rows and the number of failed columns. Upon the completion of each half-iteration, i.e., row-wise or column-wise eBCH decoding, the decoding status is checked; the process is terminated early if decoding success is declared. Herein, decoding success is defined as: {\em the number of failed intersecting blocks is at most $f$, and RS erasure-only decoding is successful.} We give the following examples to clarify this criterion. \begin{itemize} \item If all row (column) eBCH constituents are successfully decoded but not all column (row) eBCH constituents, and the RS syndromes are zero, then the number of failed intersecting blocks is zero and success is declared even without inner RS coding; the failed constituents can simply be corrected by re-encoding. \item If all row (column) eBCH constituents are successfully decoded but not all column (row) eBCH constituents, and the RS syndromes are non-zero, then it is declared unsuccessful. \item If the number of failed intersecting blocks is at most $f$, then success is declared only if erasure-only decoding (but not erasure-and-error decoding) is successful. When erasure-only decoding is successful, re-encoding is deployed to correct the eBCH parities. \item If the number of failed intersecting blocks is greater than $f$, then failure is declared. \item If all eBCH constituents are successfully decoded but the RS syndromes are non-zero, then it is declared unsuccessful. This is because the design purpose of the inner RS coding is erasure recovery, whereas random error correction may result in intractable decoding behavior. \end{itemize} We next present the details of syndrome computation and update. When sequentially receiving the senseword, the (single-bit) parity syndrome and the even-indexed syndromes of each eBCH code (note that the odd-indexed syndromes are not saved but computed on the fly through \eqref{square-syndrome}), as well as the RS syndromes, are computed simultaneously. Each time an eBCH constituent is successfully decoded, corrections are immediately made to the senseword, and the syndromes of the crossing eBCH constituents and the RS code(s) are updated accordingly. When the bits at indices $\{i_l\}_{l=0}^{\iota-1}$ of a $t$-correcting eBCH constituent are corrected by the decoding of crossing eBCH constituents, its syndromes are updated as follows: \begin{equation} S_j \gets S_j + \sum_{l=0}^{\iota-1} \alpha_{i_l}^{j+1}, \;\;\;\; j=0, 2, \ldots, 2t-2. \end{equation} It is worth noting that the parity syndrome is used to eliminate unnecessary decodings. Specifically, if the parity syndrome plus the targeted number of corrections is an odd number, then the corresponding decoding is deemed unsuccessful. In Phase-II, the decoding is limited to failed rows/columns. Upon stalling, if the number of intersecting blocks among failed eBCH words is at most $f$, then RS erasure decoding is called to recover those blocks, and subsequently the senseword is corrected.
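The cross-decoding syndrome update above admits a direct implementation. The following sketch (ours, purely illustrative) assumes a hypothetical $GF(2^m)$ exponentiation helper \texttt{gf\_pow}, and stores only the even-indexed syndromes, consistent with the on-the-fly computation of the odd-indexed ones:
\begin{verbatim}
# Illustrative sketch of S_j <- S_j + sum_l alpha_{i_l}^{j+1} for
# j = 0, 2, ..., 2t-2. Addition in GF(2^m) is bitwise XOR; gf_pow(a, e)
# is a hypothetical field exponentiation helper. S[j // 2] stores S_j.
def update_syndromes(S, t, locators, flipped_indices, gf_pow):
    for j in range(0, 2 * t - 1, 2):   # even indices j = 0, 2, ..., 2t-2
        for i in flipped_indices:      # bit positions corrected crosswise
            S[j // 2] ^= gf_pow(locators[i], j + 1)
    return S
\end{verbatim}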
We remark that the proposed Phase-I and Phase-II decoding is motivated by \cite{Al09}. A minor difference is that the reduced-1-bit iterative decoding is run until stalling in Phase-I, as opposed to being limited to the first iteration as in \cite{Al09}. In addition, the proposed reduced-1-bit decoding algorithm effectively rules out unfruitful Chien searches. Phase-III is different from the previous two phases, as each eBCH list decoding may produce multiple candidates. To reduce the number of candidates as well as the search complexity, the evaluation of $\Delta(x)$, as defined in \eqref{def-Delta}, is limited to the failed intersecting blocks and the parity block. One trivial solution is to randomly pick a candidate. However, this suffers greatly from miscorrection due to the high probability of picking wrongly. Herein we present an alternative approach. Upon successfully determining a list of candidates from a failed row (column) constituent word, we perform trial correction on each candidate. Each time, we check whether the crossing column (row) Berlekamp decoding of any previously failed word becomes successful. We choose the candidate that results in the largest number of crossing column (row) corrections and make both row and column corrections accordingly. We discard all candidates if none results in a crossing column (row) correction. Then the next row (column) is processed in the same manner. This approach effectively takes advantage of cross validation. Also note that this trial-and-error approach is performed in the last phase, wherein only a few rows and columns remain to be corrected, so the complexity increase is at most moderate. Two indicator vectors are exploited to ease implementation, namely, the correction indicator vector and the syndrome update indicator vector. The correction indicator vector tracks the correction status. When an eBCH constituent is corrected, its syndromes are reset to zero, while the setting of its corresponding indicator to 1 is delayed for a whole iteration (the rationale is to exploit cross-decoding validation to reduce miscorrection). The proposed iterative decoding skips a constituent eBCH word if its correction indicator is 1. Moreover, the evaluation of $\Delta(x)$ in Phase-III skips blocks that were corrected in earlier iterations, i.e., whose correction indicators are 1. The syndrome update indicator vector keeps track of syndrome updates. When a correction is made by crossing eBCH constituents, the syndromes are updated accordingly and the corresponding indicator is set to 1. The proposed iterative decoding skips a constituent eBCH word if its syndrome update indicator is 0. When decoding of an eBCH constituent is done (regardless of failure or success), its syndrome update indicator is reset to zero. At the beginning of each phase, the syndrome update indicators corresponding to all rows/columns are initialized to 1. \section{Performance Evaluation} \label{sec:evaluation} \begin{figure} \includegraphics[width=1\textwidth]{sim.pdf} \caption{\label{fig:sim} R is the code rate, FER is the frame error rate, RBER is the raw bit error rate, and SNR is the signal to noise ratio ($2E_b/N_0$). } \end{figure} We simulate the proposed codes and decoding algorithms to evaluate their effectiveness. The size of the user data used for the simulation is 4kB ($K = 32768$). We evaluate code rates of 0.889, 0.9 and 0.93. For each code rate, we simulate decoding with no list decoding, with up to extra-1-bit list decoding, and with up to extra-2-bit list decoding.
We further compare with stand-alone BCH codes of the same rates and lengths. The exact numbers of parity bits are specified in Table~\ref{Tab:Codelenghts}. \begin{table} \centering \caption{Number of parity bits in simulated codes with respect to the fixed data size of $K=32768$. The right column shows the BCH correcting power ($t$). \label{Tab:Codelenghts}} \begin{tabular}{|l|c|c|c|} \hline Rate & BWP & BCH & $t$ (BCH) \\ \hline 0.889 & 4082 & 4088 & 258 \\ \hline 0.9 & 3634 & 3640 & 228 \\ \hline 0.93 & 2463 & 2472 & 155 \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{extra2bit.pdf} \caption{\label{fig:error-floor}Benefit of $+2$ list decoding in the error-floor region. The error-floor plots are numerical estimates, with code rate 0.9, 1 RS parity, and block size $b=20$.} \end{figure} \begin{figure} \centering \includegraphics[width=0.58\textwidth]{1vs4rs.pdf} \caption{\label{fig:rs-parities}Comparison of 1 vs 4 RS parities with unique decoding at code rate 0.9 and block size $b=15$. } \end{figure} \begin{figure} \centering \includegraphics[width=0.58\textwidth]{iterations.pdf} \caption{\label{fig:iterations}Mean number of iterations under code rate $0.9$, 1 RS parity and block size 20. } \end{figure} We use an inner RS code with $f=4$ parities to reduce the error floor. The main additional free parameter is the block size $b$. For each code rate, we simulate various block sizes to identify one with good performance. An accurate optimization of the block size would not be practical, since simulating low frame error rates takes a long time. Further, we observe that the best performing block size for a given raw bit error rate (RBER) and a given decoding algorithm tends also to perform (near) best for slightly different RBERs and slightly different algorithms. To this end, we settle for a well-performing block size for each given code, instead of separately optimizing for each RBER point and each decoding algorithm. The block sizes we find to perform well are $b=20$ for rate 0.889, $b=15$ for rate 0.9, and $b=50$ for rate 0.93. The simulation results are presented in Figure \ref{fig:sim}. The maximum number of Turbo iterations is set to 32. Each simulation runs until 100 BWP-BCH frame failures are observed. The simulation results provide several insights: \begin{enumerate} \item List decoding is significantly superior to unique decoding in this setting. The error floor of unique decoding is very high, while list decoding reduces the error floor below the observable zone. \item At the lower code rates, BWP-BCH codes provide significant gain over stand-alone BCH, with 0.4 dB for rate 0.889. While the gain almost entirely disappears at rate 0.93, it is known that other iteratively decoded codes, such as LDPC codes of similar length, also do not outperform BCH at this rate (under hard decoding). \item Incorporating $+2$ list decoding provides a very small gain over $+1$ list decoding. However, we show next that incorporating $+2$ list decoding is beneficial for reducing the error floor. \end{enumerate} To show the benefit of incorporating $+2$ list decoding over that of $+1$ list decoding, we use a numerical method to estimate the error floor. The method is straightforward, as described in \cite{Cho14}. We evaluate the code at rate 0.9, with a single RS parity and a block size of 20. Figure~\ref{fig:error-floor} shows simulation results along with error floor estimations. The error floor estimation proves to be quite accurate for radii $t$ and $t+1$.
The figure suggests that $+2$ list decoding improves the error floor by 2.5 orders of magnitude over $t+1$ decoding. On another note, at the benchmark frame error rate of $10^{-6}$, the proposed iterative decoding algorithm is shown to be within 1 dB of the Shannon capacity, while gaining 0.35 dB over the BCH performance. Note that the error-floor estimation method from~\cite{Cho14} does not consider list decoding. To use this method with list decoding ($t+1$ and $+2$ list decoding), we assume that the list decoding always returns only the unique (correct) codeword. In other words, we use the extended decoding radius as if it were the actual decoding radius of the BCH component codes. The accuracy of the assumption is supported by the relative accuracy of the estimations for $t$ and $t+1$ decoding. It remains unclear why the $+2$ list decoding improvement is so insignificant in the waterfall region. An investigation of the failure scenarios reveals the reason. In most cases when $+1$ list decoding gets stuck, the number of non-zero BCH syndromes is rather high. In such cases, miscorrections often happen in the BCH decoding, since the hint from the opposite dimension is not so helpful. As a result, the BWP decoding is unable to correct those BCH sensewords. In contrast, in the error-floor region, the number of non-zero BCH syndromes is small, resulting in a good hint for the list decoder. Therefore, $+2$ list decoding resolves most $t+1$ failures. Note that the reasoning above is in principle also valid for the gain of $t+1$ decoding over $t$ decoding in the waterfall region. However, we do see a significant gain from $+1$ list decoding in the waterfall region. The reason for this behavior is that the lists of radius $t+1$ are almost always much smaller than those of radius $t+2$ for the values of $t$ used in the considered simulations (around $5$). Finally, note that the code in Figure~\ref{fig:error-floor} contains a single RS parity, while the codes in Figure~\ref{fig:sim} contain 4 parities. It is instructive to consider the trade-off that governs the choice of the number of RS parities. Intuitively, we would expect a code with fewer RS parities to perform better in the waterfall region, since the spared redundancy is used to strengthen the eBCH codes. On the other hand, a code with more RS parities would perform better in the error-floor region, since the additional RS parities would protect better against errors concentrated in a small number of blocks. The choice of the number of RS parities should be made according to the desired error-floor level. We illustrate the trade-off by the simulation results in Figure~\ref{fig:rs-parities}, at code rate 0.9 and block size $b=15$, with unique decoding. Furthermore, the mean number of iterations is presented in Figure~\ref{fig:iterations}. \section{Concluding Remarks} The paper studies BWP-BCH codes with a focus on flash memory applications. Firstly, novel efficient BCH decoding algorithms are presented, including $-1$ decoding, $+1$ list decoding, and $+2$ list decoding. Secondly, a unified construction framework for BWP-BCH codes is presented, leveraging many design freedoms to balance design scalability, implementation simplicity, and superior performance. A high-speed scalable encoder is described. Finally, a novel iterative decoding algorithm for BWP-BCH codes is presented, which utilizes the proposed BCH decoding algorithms to optimize decoding performance.
Simulation results demonstrate superior waterfall performance and a significantly lowered error floor. Notably, the proposed scheme achieves a 1 dB gap from capacity under the benchmark of 4kB data size, code rate 0.9, and frame error rate $10^{-6}$. There are many problems left to be explored. One interesting problem is to build soft information into the proposed iterative decoding; one wonders how much further gain may be achieved. Another interesting problem is to extend the proposed decoding algorithm to soft-input soft-output decoding. Furthermore, it is easily observed that concatenating an inner RS code significantly improves the minimum distance bound. However, in our view, the bound is still too loose to be meaningful in a straightforward manner. Finally, from the hardware perspective, the proposed $+1$ list decoding employs dynamic grouping and rational function evaluation, which are overly complex. A simplified hardware implementation would certainly render the proposed iterative decoding more practical. \renewcommand{\baselinestretch}{1.0}\normalsize
\section{Introduction} It is a classical result of A.\,Clebsch \cite{clebschPaper} that every planar pentagon $P$ is projectively equivalent to the pentagon $P'$ formed by intersections of diagonals of~$P$. More precisely, if we label the vertices of $P$ and $P'$ in such a way that the $k$'th vertex of $P'$ is opposite to the $k$'th vertex of $P$ (see Figure \ref{Fig:pentagons}), then there is a projective transformation that takes $P$ to $P'$ and respects the labelings. \begin{figure}[h] \centering \begin{tikzpicture}[ scale = 1] \coordinate (A) at (0,1); \coordinate (B) at (1.5,-0.5); \coordinate (C) at (3.2,0); \coordinate (D) at (3,2); \coordinate (E) at (1,2.5); \fill (A) circle [radius=2pt]; \fill (B) circle [radius=2pt]; \fill (C) circle [radius=2pt]; \fill (D) circle [radius=2pt]; \fill (E) circle [radius=2pt]; \node[label={[shift={(0,0)}]left:\small$P$}] at (0.6, 1.95) () {}; \node[label={[shift={(0,0)}]left:\small$P'$}] at (2.2, 1) () {}; \draw [line width=0.3mm] (A) -- (B) -- (C) -- (D) -- (E) -- cycle; \draw [dashed, line width=0.2mm, name path=AC] (A) -- (C); \draw [dashed,line width=0.2mm, name path=BD] (B) -- (D); \draw [dashed,line width=0.2mm, name path=CE] (C) -- (E); \draw [dashed,line width=0.2mm, name path=DA] (D) -- (A); \draw [dashed,line width=0.2mm, name path=EB] (E) -- (B); \node[label={[shift={(0.15,0)}]left:\small${v_1}$}] at (A) () {}; \node[label={[shift={(-0.05,0.15)}]below:\small$v_2$}] at (B) () {}; \node[label={[shift={(0.15,0.1)}]below:\small$v_3$}] at (C) () {}; \node[label={[shift={(-0.2,0.2)}]right:\small$v_4$}] at (D) () {}; \node[label={[shift={(-0.1,-0.15)}]above:\small$v_5$}] at (E) () {}; \path [name intersections={of=AC and BD,by=Ep}]; \path [name intersections={of=BD and CE,by=Ap}]; \path [name intersections={of=CE and DA,by=Bp}]; \path [name intersections={of=DA and EB,by=Cp}]; \path [name intersections={of=EB and AC,by=Dp}]; \fill (Ap) circle [radius=2pt]; \fill (Bp) circle [radius=2pt]; \fill (Cp) circle [radius=2pt]; \fill (Dp) circle [radius=2pt]; \fill (Ep) circle [radius=2pt]; \draw [line width=0.3mm] (Ap) -- (Bp) -- (Cp) -- (Dp) -- (Ep) -- cycle; \node[label={[shift={(-0.15,0.05)}]right:\small$v_1'$}]at (Ap) () {}; \node[label={[shift={(0.1,-0.15)}]above:\small$v_2'$}]at (Bp) () {}; \node[label={[shift={(0.2,0.2)}]left:\small$v_3'$}] at (Cp) () {}; \node[label={[shift={(0.2,-0.15)}]left:\small$v_4'$}] at (Dp) () {}; \node[label={[shift={(0.15,0.15)}]below:\small$v_5'$}] at (Ep) () {}; \end{tikzpicture} \caption{Pentagons $P = v_1v_2v_3v_4v_5$ and $P' = v_1'v_2'v_3'v_4'v_5'$ are projectively equivalent.}\label{Fig:pentagons} \end{figure} Furthermore, as was proved by R.\,Schwartz \cite{schwartz2007poncelet}, Clebsch's theorem is true for all {Poncelet} polygons with an odd number of vertices. Recall that a \textit{Poncelet polygon} is a polygon which is inscribed in a conic and circumscribed about another conic. In particular, any pentagon is Poncelet, while for $n$-gons with $n\geq 6$ being Poncelet is a non-trivial restriction. Poncelet polygons owe their name to J.-V.\,Poncelet and his famous ``{porism}'' which says that {if there exists an $n$-gon inscribed in a conic $C_1$ and circumscribed about a conic $C_2$, then any point of $C_1$ is a vertex of such an $n$-gon} (see Figure \ref{Fig:poncelet}).
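As an aside, Clebsch's theorem is easy to verify numerically. The following sketch (ours, not part of the classical argument; it assumes the \texttt{numpy} library) constructs $P'$ for the pentagon of Figure~\ref{Fig:pentagons} and recovers the projective transformation from four labeled correspondences, checking it on the fifth:
\begin{verbatim}
# Numerical check of Clebsch's theorem (illustrative sketch using numpy).
# In homogeneous coordinates, the line through two points and the meet of
# two lines are both given by cross products.
import numpy as np

V = [np.array([x, y, 1.0]) for x, y in
     [(0, 1), (1.5, -0.5), (3.2, 0), (3, 2), (1, 2.5)]]      # pentagon P

# v'_k is opposite to v_k: the meet of the shortest diagonals
# (v_{k+1} v_{k+3}) and (v_{k+2} v_{k+4}), indices mod 5.
W = [np.cross(np.cross(V[(k+1) % 5], V[(k+3) % 5]),
              np.cross(V[(k+2) % 5], V[(k+4) % 5])) for k in range(5)]

# Solve H v_k ~ w_k for k = 0..3 via the standard DLT linear system,
# then test the remaining correspondence k = 4.
rows = []
for v, w in zip(V[:4], W[:4]):
    z = np.zeros(3)
    rows.append(np.concatenate([z, -w[2] * v, w[1] * v]))
    rows.append(np.concatenate([w[2] * v, z, -w[0] * v]))
H = np.linalg.svd(np.array(rows))[2][-1].reshape(3, 3)

a = H @ V[4]
print(np.allclose(np.cross(a / np.linalg.norm(a),
                           W[4] / np.linalg.norm(W[4])), 0))  # True
\end{verbatim}
With exact arithmetic the final check holds identically; in floating point it holds up to numerical tolerance.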
\begin{figure}[t] \centering \includegraphics[width = 5 cm]{5poncelet.pdf} \caption{Every point of $C_1$ is a vertex of a pentagon inscribed in $C_1$ and circumscribed about $C_2$.}\label{Fig:poncelet} \end{figure} \begin{figure}[b] \centering \begin{tikzpicture}[scale = 1.1] \coordinate (A) at (0,0); \coordinate (B) at (1.5,-0.5); \coordinate (C) at (3,1); \coordinate (D) at (3,2); \coordinate (E) at (1,3); \coordinate (F) at (-0.5,2.5); \coordinate (G) at (-1,1.5); \fill (A) circle [radius=2pt]; \fill (B) circle [radius=2pt]; \fill (C) circle [radius=2pt]; \fill (D) circle [radius=2pt]; \fill (E) circle [radius=2pt]; \fill (F) circle [radius=2pt]; \fill (G) circle [radius=2pt]; \draw [line width=0.3mm] (A) -- (B) -- (C) -- (D) -- (E) -- (F) -- (G) -- cycle; \draw [dashed, line width=0.2mm, name path=AC] (A) -- (C); \draw [dashed,line width=0.2mm, name path=BD] (B) -- (D); \draw [dashed,line width=0.2mm, name path=CE] (C) -- (E); \draw [dashed,line width=0.2mm, name path=DF] (D) -- (F); \draw [dashed,line width=0.2mm, name path=EG] (E) -- (G); \draw [dashed,line width=0.2mm, name path=FA] (F) -- (A); \draw [dashed,line width=0.2mm, name path=GB] (G) -- (B); \path [name intersections={of=AC and BD,by=Fp}]; \path [name intersections={of=BD and CE,by=Gp}]; \path [name intersections={of=CE and DF,by=Ap}]; \path [name intersections={of=DF and EG,by=Bp}]; \path [name intersections={of=EG and FA,by=Cp}]; \path [name intersections={of=FA and GB,by=Dp}]; \path [name intersections={of=GB and AC,by=Ep}]; \fill (Ap) circle [radius=2pt]; \fill (Bp) circle [radius=2pt]; \fill (Cp) circle [radius=2pt]; \fill (Dp) circle [radius=2pt]; \fill (Ep) circle [radius=2pt]; \fill (Fp) circle [radius=2pt]; \fill (Gp) circle [radius=2pt]; \draw [line width=0.3mm] (Ap) -- (Bp) -- (Cp) -- (Dp) -- (Ep) -- (Fp) -- (Gp) -- cycle; \node[label={[shift={(0.2,-0.1)}]left:\small$v_1$}] at (A) () {}; \node[label={[shift={(0.1,0.15)}]below:\small$v_2$}] at (B) () {}; \node[label={[shift={(0.25,0.25)}]below:\small$v_3$}] at (C) () {}; \node[label={[shift={(-0.2,0.15)}]right:\small$v_4$}] at (D) () {}; \node[label={[shift={(0,-0.15)}]above:\small$v_5$}] at (E) () {}; \node[label={[shift={(0.15,0.1)}]left:\small$v_6$}] at (F) () {}; \node[label={[shift={(0.15,0)}]left:\small$v_7$}] at (G) () {}; \node[label={[shift={(0.25,-0.25)}]left:\small$v_1'$}] at (Ap) () {}; \node[label={[shift={(0.1,0.15)}]below:\small$v_2'$}] at (Bp) () {}; \node[label={[shift={(0.25,0.3)}]below:\small$v_3'$}] at (Cp) () {}; \node[label={[shift={(-0.2,0.15)}]right:\small$v_4'$}] at (Dp) () {}; \node[label={[shift={(0,-0.15)}]above:\small$v_5'$}] at (Ep) () {}; \node[label={[shift={(0.2,0.2)}]left:\small$v_6'$}] at (Fp) () {}; \node[label={[shift={(0.1,0)}]left:\small$v_7'$}] at (Gp) () {}; \node at (1,1.5) () {$P'$}; \node at (0,3) () {$P$}; \end{tikzpicture} \caption{A convex polygon $P$ is Poncelet if and only if it is projectively equivalent to $P'$.}\label{Fig:labeling} \end{figure} Schwartz's generalization of Clebsch's theorem is as follows. Let $P$ be an $n$-gon with odd $n$, and let $P'$ be the polygon whose vertices are the intersections of consecutive shortest diagonals of~$P$, i.e. diagonals connecting second nearest vertices. Label the vertices of $P'$ as in Clebsch's theorem: the $k$'th vertex of $P'$ is opposite to the $k$'th vertex of $P$ (see Figure~\ref{Fig:labeling}). Assume that $P$ is Poncelet. 
Then there is a projective transformation that carries $P$ to $P'$ and respects the labelings (a weaker result saying that if $P$ is Poncelet then $P'$ is circumscribed about a conic was known to Darboux, see \cite[Theorem~2.1]{dragovic2014bicentennial}). The goal of the present paper is to show that in the convex setting the converse is also true. More precisely, we prove the converse of Schwartz's theorem for a broader class of \textit{weakly convex} polygons. Weak convexity is a technical condition (see Definition~\ref{def:lcp} below) which in particular holds for truly convex polygons. \begin{customthm}{A}\label{thm1} Let $P$ be a weakly convex closed polygon with an odd number of vertices. Let also $P'$ be the polygon whose vertices are the intersections of consecutive shortest diagonals of~$P$, labeled as in Figure~\ref{Fig:labeling}. Assume that there is a projective transformation that carries $P$ to $P'$ and respects the labelings. Then $P$ (and hence $P'$) is a Poncelet polygon. \end{customthm} Combining Theorem \ref{thm1} with Schwartz's theorem, we get the following characterization of weakly convex Poncelet polygons: \begin{corollary}\label{thm1cor} Let $P$ be a weakly convex closed polygon with an odd number of vertices. Let also $P'$ be the polygon whose vertices are the intersections of consecutive shortest diagonals of~$P$, labeled as in Figure~\ref{Fig:labeling}. Then $P$ is Poncelet if and only if it is projectively equivalent to $P'$. \end{corollary} The map taking the polygon $P$ in Figure \ref{Fig:labeling} to $P'$ is known as the \textit{pentagram map}. It was defined by Schwartz in 1992 \cite{schwartz1992pentagram} but became especially popular in the last decade thanks to the discovery that it is a discrete integrable system \cite{ovsienko2010pentagram, ovsienko2013liouville, soloviev2013integrability}, and also because of its connections with cluster algebras~\cite{GLICK20111019, Gekhtman2016, glick2015, kedem2015t, fock2014loop}. Since the pentagram map commutes with projective transformations, it is usually considered as a dynamical system on the space of polygons modulo projective equivalence. Our result can thus be viewed as a description of fixed points of the pentagram map, which has been an open question since Schwartz's first paper \cite{schwartz1992pentagram}. \begin{remark} Note that the pentagram map can be considered either on labeled polygons (i.e. polygons with labeled vertices), or on unlabeled ones. Theorem~\ref{thm1} describes fixed points of the pentagram map on the space of projective equivalence classes of \textit{labeled} polygons, where the labeling rule is depicted in Figure~\ref{Fig:labeling}. Although this is not the only possible labeling, it is the only one for which the pentagram map commutes with the action of the dihedral group, and hence the most symmetric of all labelings. A more common, non-symmetric labeling is given by the rule $v_k' := (v_{k-1}, v_{k+1}) \cap (v_k, v_{k+2})$. One can easily see that the only fixed points of the pentagram map with this non-symmetric labeling are regular polygons (again, assuming that the number of vertices is odd). The problem of describing the fixed points of the pentagram map for an \textit{arbitrary} labeling can also be approached using the techniques of the present paper, but due to the break of symmetry one should not expect an answer as nice as for the symmetric labeling. \end{remark} Theorem~\ref{thm1} also has an interpretation in terms of billiards. 
Indeed, if the conic $C_1$ circumscribed about a Poncelet polygon $P$ is confocal to the inscribed conic $C_2$ (which can always be arranged by applying a suitable projective transformation), then $P$ can be viewed as a closed trajectory of a billiard ball in the domain bounded by $C_1$. Conversely, any closed billiard trajectory in a conic is a Poncelet polygon. So, Corollary \ref{thm1cor} establishes a correspondence between fixed points of the pentagram map and periodic billiard trajectories in conics. Also note that, as shown in~\cite{levi2007poncelet}, the fact that a closed billiard trajectory in a conic is projectively equivalent to its pentagram image is essentially a corollary of integrability of the corresponding billiard system. At the same time, we show that if $P$ is projectively equivalent to its pentagram image then the vertices of $P$ are contained in a conic. So, one may hope to combine our results with the approach of \cite{levi2007poncelet} to show that for \textit{any} integrable billiard the impact points of a periodic trajectory are contained in a conic, and hence shed some light on the Birkhoff conjecture which says that the only integrable billiards are the ones in conics. It is also an interesting question whether this correspondence between periodic billiard trajectories in conics and fixed points of the pentagram map extends to higher dimensions. There exist numerous generalizations of the pentagram map to higher-dimensional spaces~\cite{Gekhtman2016, khesin2013, khesin2016, felipe2015}, and one may wonder if their fixed points are related to periodic trajectories of billiards in multidimensional quadrics. \begin{remark}\label{ex:complex} We do not know if Theorem \ref{thm1} is true with no convexity-type assumptions, but it is certainly not true over the complex numbers, as demonstrated by the following example. Let $\lambda := \exp({{2\pi\mathrm{i}}/{7}})$, where $\mathrm{i} = \sqrt{-1}$, and let $P$ be a heptagon in $\mathbb{C}^2$ with vertices $v_k := (\lambda^{2k}, \lambda^{3k})$. Then a direct computation shows that there exists a {projective} (in fact, even affine) transformation $\phi$ taking $P$ to $P'$ (see also Remark \ref{rem:crit} below for a conceptual proof). Moreover, for any vertex $v$ of $P$ and $v'$ of $P'$, the map $\phi$ can be chosen to take $v$ to $v'$. This means that the projective equivalence class of $P$ is fixed by the pentagram map, regardless of the labeling convention used to define the map. However, $P$ is not Poncelet. Moreover, it is not even inscribed in a conic. Indeed, the vertices of $P$ lie on a semi-cubical parabola $y^2 = x^3$, which has at most six intersection points with any conic. So, there exists no conic which contains all seven vertices of $P$.\par The fact that Theorem \ref{thm1} is not true over $\mathbb{C}$ is one of the reasons one should not expect it to have any kind of ``elementary'' proof, as such a proof would be valid over any field. Another reason is that the theorem is not true for non-closed polygons (see Remark \ref{rm:nc} below). Again, if there were a local (i.e. involving only a few adjacent vertices) geometric construction producing inscribed and circumscribed conics for $P$ based on the projective equivalence between $P$ and $P'$, such a construction would work no matter whether $P$ is closed or not. \end{remark} We now outline the scheme of the proof of Theorem \ref{thm1}. As a first step, we prove the theorem under an additional assumption that the polygon $P$ is \textit{self-dual}.
Recall that the dual of a polygon is the polygon in the dual projective plane whose vertices are the sides of the initial one. We label the vertices of the dual polygon as shown in Figure \ref{Fig:dual}. A polygon is self-dual if it is projectively equivalent to its dual. \begin{figure}[t] \centering \begin{tikzpicture}[scale = 0.9] \coordinate (A) at (0,0); \coordinate (B) at (1.5,-0.5); \coordinate (C) at (3,1); \coordinate (D) at (3,2); \coordinate (E) at (1,3); \coordinate (F) at (-0.5,2.5); \coordinate (G) at (-1,1.5); \coordinate (Ap) at (2,2.5); \coordinate (Bp) at (0.25,2.75); \coordinate (Cp) at (-0.75,2); \coordinate (Dp) at (-0.5,0.75); \coordinate (Ep) at (0.75,-0.25); \coordinate (Fp) at (2.25,0.25); \coordinate (Gp) at (3,1.5); \draw [line width=0.3mm] (A) -- (B) -- (C) -- (D) -- (E) -- (F) -- (G) -- cycle; \node[label={[shift={(0.2,-0.1)}]left:\small$v_1$}] at (A) () {}; \node[label={[shift={(0.1,0.15)}]below:\small$v_2$}] at (B) () {}; \node[label={[shift={(0.25,0.25)}]below:\small$v_3$}] at (C) () {}; \node[label={[shift={(-0.2,0.15)}]right:\small$v_4$}] at (D) () {}; \node[label={[shift={(0,-0.15)}]above:\small$v_5$}] at (E) () {}; \node[label={[shift={(0.15,0.1)}]left:\small$v_6$}] at (F) () {}; \node[label={[shift={(0.15,0)}]left:\small$v_7$}] at (G) () {}; \node[label={[shift={(0.25,-0.2)}]left:\small$v_1^*$}] at (Ap) () {}; \node[label={[shift={(0.1,0.15)}]below:\small$v_2^*$}] at (Bp) () {}; \node[label={[shift={(0.25,0.3)}]below:\small$v_3^*$}] at (Cp) () {}; \node[label={[shift={(-0.2,0.15)}]right:\small$v_4^*$}] at (Dp) () {}; \node[label={[shift={(0,-0.15)}]above:\small$v_5^*$}] at (Ep) () {}; \node[label={[shift={(0.2,0.2)}]left:\small$v_6^*$}] at (Fp) () {}; \node[label={[shift={(0.1,0)}]left:\small$v_7^*$}] at (Gp) () {}; \fill (A) circle [radius=2pt]; \fill (B) circle [radius=2pt]; \fill (C) circle [radius=2pt]; \fill (D) circle [radius=2pt]; \fill (E) circle [radius=2pt]; \fill (F) circle [radius=2pt]; \fill (G) circle [radius=2pt]; \end{tikzpicture} \caption{The $k$'th vertex of the dual polygon is opposite to the $k$'th vertex of the initial one.}\label{Fig:dual} \end{figure} Under this additional assumption of self-duality, Theorem \ref{thm1} is true in a more general setting of \textit{twisted polygons}, i.e. polygons that are closed only up to a projective transformation. More precisely, a twisted $n$-gon is a sequence $v_k \in \P^2$ such that $v_{k+n} = \psi(v_k)$ for a certain projective transformation~$\psi$, called the \textit{monodromy}. \begin{customthm}{B}\label{thm2} Let $P$ be a weakly convex twisted $n$-gon with odd $n$, and let $P'$ be as in Theorem \ref{thm1}. Assume that $P$ is self-dual and projectively equivalent to $P'$. Then $P$ is a Poncelet polygon. \end{customthm} \begin{remark}\label{rm:nc} Theorem \ref{thm2} is not true without the self-duality assumption (in other words, Theorem~\ref{thm1} is not true for non-closed polygons). As an example, consider a polygon $P$ in $\mathbb{R}^2$ whose vertices are given by $v_k := (4^k, 8^k)$. This is a twisted $n$-gon for any $n$, weakly convex and projectively equivalent to its pentagram image $P'$. However, $P$ is not Poncelet (cf. Remark \ref{ex:complex}). \end{remark} The proof of Theorem \ref{thm2} is based on the theory of commuting difference operators, elliptic curves, and theta functions. Given the result of the theorem, the appearance of elliptic curves is not surprising, as their connection to Poncelet polygons is well-known.
However, we are not given \textit{a priori} that $P$ is Poncelet, so there should be some other source where the elliptic curve is coming from. In our approach, that source is the theory of commuting difference operators. Namely, we show that a twisted polygon $P$ is projectively equivalent to its pentagram image $P'$ if and only if certain associated difference operators commute (see Section \ref{sec:fpcdo}). The general theory then says that the joint spectrum of those operators is a Riemann surface $\Gamma$, called the \textit{spectral curve} (see e.g.~\cite{krichever2003two}). Using further that the operators in question are of special form and, in particular, dual to each other (which is a reflection of self-duality of $P$), we show that the genus of $\Gamma$ is at most $1$, i.e. $\Gamma$ is rational or elliptic (see Section~\ref{ss:genus}). This is one of the few places in the proof where we use weak convexity in an essential way. Without that assumption, one only seems to be able to conclude that the genus is at most $2$. In particular, it seems possible to construct non-weakly-convex counterexamples to Theorems \ref{thm1} and \ref{thm2} using genus $2$ theta functions. \par The next step is to show that our upper bound on the genus of $\Gamma$ implies that $P$ is Poncelet (as a priori there is no connection between the elliptic curve $\Gamma$ and the one that is classically associated with a Poncelet polygon). More precisely, we show that elliptic spectral curves correspond to generic Poncelet polygons, while rational curves correspond to their degenerations (such as the regular polygon). To that end, we express the coordinates of the vertices of $P$ in terms of certain meromorphic functions on the spectral curve $\Gamma$. One ends up with elementary functions or theta functions, depending on whether the curve $\Gamma$ is rational or elliptic. In both cases, using relations between those functions (e.g. Riemann's relation in the elliptic case), one shows that $P$ is a Poncelet polygon, so Theorem \ref{thm2} holds (see Section~\ref{sec:rat} for the rational case and Section \ref{ss:sd} for the elliptic case). Furthermore, in the elliptic case the spectral curve $\Gamma$ turns out to be isogenous to the elliptic curve attached to $P$ due to its Poncelet property. \par After that, we proceed to prove Theorem \ref{thm1}. To that end, we first show that the self-duality assumption of Theorem \ref{thm2} is not too restrictive. Namely, given a polygon that is projectively equivalent to its pentagram image, it can be what is called \textit{rescaled} so that it becomes self-dual. This rescaling (closely related to the notion of \textit{spectral parameter}) is a one-parameter group of transformations of the moduli space of twisted polygons which preserves weak convexity but not closedness. So, starting with a closed polygon as in Theorem \ref{thm1}, we rescale it to a not necessarily closed, but self-dual, weakly convex polygon, which is the setting of Theorem \ref{thm2}. This way, we conclude that a weakly convex closed polygon projectively equivalent to its pentagram image is Poncelet up to rescaling. The last step is to show that this rescaling must actually be trivial. To that end, we show that no non-trivial rescaling of a weakly convex Poncelet polygon is closed. In the rational case, this is proved by an elementary argument (see Section \ref{ss:genrat}), while the elliptic case requires careful analysis of the real part of the spectral curve (see Section~\ref{ss:genell}).
In the latter case, the proof once again essentially relies on weak convexity. In addition to proofs of Theorems \ref{thm1} and \ref{thm2}, the paper contains an appendix (Section \ref{sec:app}) where we establish an auxiliary result on the correspondence between dual difference operators and dual polygons. Although that result seems to be well-known, we could not find a proof in the literature that does not rely on a computation. So, we provide a proof here.\par We tried to make the exposition self-contained. In particular, we do not assume that the reader is familiar with the general theory of integrable systems or commuting difference operators. Only basic knowledge of Riemann surfaces is assumed. \par \smallskip {\bf Acknowledgments.} The author is grateful to Boris Khesin, Valentin Ovsienko, Richard Schwartz, and Sergei Tabachnikov for fruitful discussions. Some of the figures were created with the help of the software package Cinderella. This work was supported by NSF grant DMS-2008021. \section{Background results: polygons, difference operators, and corner invariants} This section is an overview of mostly well-known results on the relation between difference operators and polygons. Namely, we give an introduction to difference operators in Section \ref{sec:primer}, after which we connect them to polygons in Section \ref{sec:polygons}. Note that while the description of the moduli space of polygons in terms of difference operators is well-known, our point of view is slightly different from the standard one. In particular, we identify the space of polygons with a certain \textit{quotient} of the space of third order difference operators, as opposed to the standard approach in which one identifies polygons with a certain \textit{subspace} of that space. In that respect, our approach is close to that of \cite{conley2019lagrangian}. In addition, we provide, in Section \ref{sec:ci}, another description of the polygon space, in terms of so-called corner invariants. Note that while corner invariants per se are not heavily used in the paper, they are needed to define weakly convex polygons and rescaling. Rescaling is also defined in Section \ref{sec:ci}, while weakly convex polygons are discussed in the next Section \ref{sec:lcp}. \par \subsection{A primer on difference operators}\label{sec:primer} This section is a brief introduction to the elementary theory of difference operators. Our terminology mainly follows that of \cite{van1979spectrum}. Let $\mathbb{R}^\infty$ be the vector space of bi-infinite sequences of real numbers, and let $m_- \leq m_+$ be integers. A linear operator $\mathcal D \colon \mathbb{R}^\infty \to \mathbb{R}^\infty$ is called \textit{a difference operator supported in $[m_-,m_+]:=\{m_-, \dots, m_+\}$} if it can be written as \begin{equation}\label{dodef} (\mathcal D\xi)_k = \! \sum_{j = m_-}^{m_+} \! a_k^j \xi_{k+j}, \end{equation} where $a_k^j \in \mathbb{R}$ for every $k \in \mathbb{Z}$ and every $j \in [m_-,m_+]$. In matrix terms, this can be rewritten as \begin{align}\label{infMatrix0} \mathcal D\xi = \left(\begin{array}{ccccccccc} \ddots & & \ddots & & \\ & a_{k-1}^{m_-} & \dots & a_{k-1}^{m_+} & \\ & & a_k^{m_-} & \dots & a_k^{m_+} & & \\ & & & a_{k+1}^{m_-} & \dots & a_{k+1}^{m_+} &\\ & & & & \ddots & &\ddots \end{array}\right)\xi, \end{align} so difference operators can be equivalently described as those whose matrices are {finite band} (i.e. have only finitely many non-zero diagonals).
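For concreteness, the action \eqref{dodef} on a finite window of a sequence can be sketched as follows (an illustration of ours, not part of the theory; the truncation at the window boundary is an artifact of the finite representation):
\begin{verbatim}
# Illustrative sketch: apply a difference operator supported in
# [m_minus, m_plus] to a finite window of a bi-infinite sequence.
# coeffs[k] is a dict {j: a_k^j}; terms referring to entries outside
# the window are dropped.
def apply_difference_operator(coeffs, xi, m_minus, m_plus):
    n = len(xi)
    return [sum(coeffs[k].get(j, 0.0) * xi[k + j]
                for j in range(m_minus, m_plus + 1)
                if 0 <= k + j < n)
            for k in range(n)]
\end{verbatim}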
Furthermore, denoting, for every $j$, the sequence of $a_k^j$'s by $a^j$, formula \eqref{dodef} can be rewritten as \begin{align}\label{genDiffOp} \mathcal D = \!\sum_{j = m_-}^{m_+} a^j T^j, \end{align} where $T \colon \mathbb{R}^\infty \to \mathbb{R}^\infty$ is the left shift operator $(T\xi)_k = \xi_{k+1}$, and each $a^j \in \mathbb{R}^\infty$ acts on $\mathbb{R}^\infty$ by term-wise multiplication. \par The \textit{order} of the difference operator \eqref{dodef} is the number $\ord \mathcal D := m_+ - m_-$. The difference operator~\eqref{dodef} is called \textit{properly bounded} if $a_k^{m_-} \!\neq 0$ and $a_k^{m_+} \!\neq 0$ for every $k \in \mathbb{Z}$. Clearly, for a properly bounded difference operator $\mathcal D$ one has $ \dim \Ker \mathcal D = \ord \mathcal D. $ Sequences $\xi^{1}, \dots, \xi^{d} \in \Ker \mathcal D$, where $d:=\ord \mathcal D$, form a basis in $\Ker \mathcal D$ if and only if the associated \textit{difference Wronskian} $$ W_k := \left|\begin{array}{ccc}\xi^{1}_{k} & \dots & \xi^{d}_{k} \\ \vdots & \ddots & \vdots \\ \xi^{1}_{k+d-1} & \dots & \xi^{d}_{k+d-1} \end{array}\right|, $$ where $|M|$ stands for the determinant of the matrix $M$, is non-vanishing for some $k \in \mathbb{Z}$. This is equivalent to non-vanishing of $W_k$ for every $k$ due to the relation \begin{equation}\label{wrons} W_{k+1} = (-1)^d \frac{a_{k-m_-}^{m_-}}{a_{k-m_-}^{m_+}} W_k. \end{equation} \par Along with $\mathbb{R}^\infty$, difference operators naturally act on the space $(\mathbb{R}^d)^\infty$ of bi-infinite sequences of vectors in $\mathbb{R}^d$. The case $d = \ord \mathcal D$ is of particular interest. Let $V \in (\mathbb{R}^d)^\infty$, where $d = \ord \mathcal D$, be a solution of the difference equation $\mathcal D V = 0$. Define scalar sequences $\xi^{1}, \dots, \xi^{d} \in \mathbb{R}^\infty$ by setting $\xi_k^j$ to be equal to the $j$'th coordinate of $V_k$. We say that $V$ is a \textit{fundamental solution} if the sequences $\xi^{1}, \dots, \xi^{d} \in \mathbb{R}^\infty$ form a basis in $\Ker \mathcal D$. As follows from the Wronskian criterion, a solution of $\mathcal D V = 0$ is fundamental if and only if the vectors $V_k, \dots, V_{k + d -1}$ are linearly independent for some (equivalently, for all) $k \in \mathbb{Z}$. A difference operator $\mathcal D$ is \textit{$n$-periodic} if its coefficients $a_k^j$ are $n$-periodic in the index $k$. This is equivalent to saying that $\mathcal D$ commutes with the $n$'th power of the shift operator: $\mathcal D T^n = T^n\mathcal D$, so the kernel of an $n$-periodic operator $\mathcal D$ is invariant under the action of $T^n$. The finite-dimensional operator $T^n\vert_{\Ker \mathcal D}$ is called the \textit{monodromy} of $\mathcal D$. Note that the eigenvectors of the monodromy are exactly the scalar {quasi-periodic} solutions of the equation $\mathcal D\xi = 0$, i.e. solutions which belong to the space \begin{equation}\label{qspace} \v{z} := \{ \xi \in \mathbb{R}^\infty \mid \xi_{k+n} = z\xi_k\} \end{equation} for some $z \in \mathbb{R}^*$. \par The monodromy can also be understood in terms of fundamental solutions. Namely, notice that any two fundamental solutions $V, V' \in (\mathbb{R}^d)^\infty$ of $\mathcal D$ are related by $V' = AV$, where $A \in \mathrm{GL}_d(\mathbb{R})$ acts on $(\mathbb{R}^d)^\infty$ by term-wise multiplication. Furthermore, if $V$ is a fundamental solution of an $n$-periodic operator $\mathcal D$, then so is $T^nV$, which means that $T^n V = AV$ for some $A \in \mathrm{GL}_d(\mathbb{R})$.
In other words, we have $V_{k+n} = AV_k$ for every $k \in \mathbb{Z}$, which means that the fundamental solution of a periodic difference operator is always quasi-periodic. Furthermore, the matrix $A$ can be easily seen to be the transpose of the monodromy matrix $M$ of $\mathcal D$, written in the basis of $\Ker \mathcal D$ associated with the fundamental solution~$V$. In particular, this implies that the Wronskian of an $n$-periodic operator $\mathcal D$ satisfies $W_{k+n} = (\det M)W_k$. Combined with \eqref{wrons}, the latter formula gives the following expression for the determinant of the monodromy, which will be used several times throughout the paper: \begin{equation}\label{monodet} \det M = (-1)^{nd} \prod_{k=1}^n \frac{ a_k^{m_-}}{a_k^{m_+}}. \end{equation} The \textit{dual} of the operator~\eqref{genDiffOp} is defined by $$ \mathcal D^* := \sum_{j = m_-}^{m_+} T^{-j} a^j = \sum_{j = m_-}^{m_+} \tilde a^jT^{-j}, $$ where $\tilde a^j_k = a_{k-j}^j$. In other words, $\mathcal D^*$ is the formal adjoint of $\mathcal D$ with respect to the $l^2$ inner product on $\mathbb{R}^\infty$, i.e. $ \langle \xi, \mathcal D \eta \rangle = \langle \mathcal D^*\xi, \eta \rangle $ whenever at least one of these inner products is well-defined. In the periodic case, the duality between $\mathcal D$ and $\mathcal D^*$ can also be understood as follows. If $\mathcal D$ is an $n$-periodic operator, then $\mathcal D^*$ is $n$-periodic as well. Furthermore, the formula \begin{equation}\label{pairing} \langle \xi, \eta \rangle := \sum_{k=1}^n \xi_k\eta_k \end{equation} defines an inner product on the space $\v{1}$ of $n$-periodic sequences, and the restrictions of $\mathcal D$ and $\mathcal D^*$ to $\v{1}$ are dual to each other with respect to this inner product. More generally, for every $z \in \mathbb{R}^*$, the restriction of $\mathcal D$ to $\v{z}$ is dual to the restriction of $\mathcal D^*$ to $\v{z^{-1}}$ with respect to the pairing between $\v{z}$ and $\v{z^{-1}}$ given by the same formula \eqref{pairing}. As a corollary, we have $$ \dim \Ker \mathcal D^*\vert_{\v{z^{-1}}} = \dim \Ker \mathcal D\vert_{\v{z}}. $$ In particular, a non-zero number $z \in \mathbb{R}^*$ is an eigenvalue of the monodromy of $\mathcal D$ if and only if $z^{-1}$ is an eigenvalue of the monodromy of $\mathcal D^*$. \par \subsection{Difference operators and polygons}\label{sec:polygons} In this section we describe the space of projective equivalence classes of planar polygons as a certain quotient of the space of third order difference operators. \begin{definition} A \textit{polygon} in $\mathbb{R}\mathbb{P}^2$ is a bi-infinite sequence of points $v_k \in \mathbb{R}\mathbb{P}^2$ satisfying the following \textit{$3$-in-a-row} condition: for every $k \in \mathbb{Z}$ the points $v_{k-1}, v_k, v_{k+1}$ are in general position. \end{definition} Polygons modulo projective transformations can be encoded by means of properly bounded third order difference operators, i.e. operators of the form \begin{equation}\label{o3op} \mathcal D = aT^j+ bT^{j+1} + cT^{j+2} + dT^{j+3}, \end{equation} where $a,b,c,d \in \mathbb{R}^\infty$ are such that $a_k \neq 0$, $d_k \neq 0$ for any $k \in \mathbb{Z}$. \begin{proposition}[cf.
Proposition 4.1 of \cite{ovsienko2010pentagram}]\label{freedom} For any $j \in \mathbb{Z}$, there is a one-to-one correspondence between projective equivalence classes of planar polygons and properly bounded difference operators $\mathcal D$ supported in $[j,j+3]$, considered up to the action $\mathcal D \mapsto \lambda\circ\mathcal D\circ \mu^{-1}$, where $\lambda, \mu \in (\mathbb{R}^*)^\infty$ are sequences of non-zero real numbers, acting on $\mathbb{R}^\infty$ by term-wise multiplication. \end{proposition} \begin{proof} Given a properly bounded difference operator $\mathcal D$ supported in $[j,j+3]$, consider its fundamental solution $V$, which is a sequence of non-zero vectors in $\mathbb{R}^3$ (see Section~\ref{sec:primer}). Each term $V_k$ of that sequence determines a point $v_k \in \mathbb{R}\mathbb{P}^2$ with homogeneous coordinates given by $V_k$. Furthermore, since $V$ is a fundamental solution, the vectors $V_k$, $V_{k+1}$, $V_{k+2}$ are linearly independent, and thus the sequence $v_k \in \mathbb{R}\mathbb{P}^2$ satisfies the {$3$-in-a-row} condition. Notice also that since the fundamental solution $V$ is unique up to a linear transformation $V \mapsto AV$, it follows that the polygon $\{v_k\}$ is well-defined up to projective equivalence. Thus, with each properly bounded difference operator $\mathcal D$ supported in $[j,j+3]$ one can associate a polygon $\{v_k\}$, defined up to a projective transformation. Conversely, given a polygon $\{v_k\}$, one can reverse this construction to obtain a properly bounded difference operator $\mathcal D$ supported in $[j,j+3]$. To that end, one first lifts every point $v_k \in \P^2$ to a vector $V_k \in \mathbb{R}^3$, and then finds an operator $\mathcal D$ whose fundamental solution is given by $V$. Since the lifts $V_k$ of the points $v_k$ are unique up to a transformation $V_k \mapsto \mu_k V_k$, while the choice of an operator $\mathcal D$ with a given fundamental solution $V$ is unique up to $\mathcal D \mapsto \lambda \circ \mathcal D$, where $\lambda\in (\mathbb{R}^*)^\infty$, it follows that the operator $\mathcal D$ corresponding to a given polygon is defined up to the action $\mathcal D \mapsto \lambda \circ \mathcal D \circ \mu^{-1}$, as desired. \end{proof} In what follows, we will be interested in closed and, more generally, twisted polygons. A closed $n$-gon is a polygon satisfying $v_{k+n} = v_k$ for every $k \in \mathbb{Z}$. For such a polygon, the corresponding difference operator $\mathcal D$ can be chosen to be $n$-periodic. The converse is, however, not true: polygons corresponding to periodic operators are, in general, not closed but twisted. Indeed, if $\mathcal D$ is a periodic operator, then its fundamental solution $V \in (\mathbb{R}^3)^\infty$ is in general not periodic but satisfies $ V_{k+n} = AV_k$, where $A \in \mathrm{GL}_3(\mathbb{R})$ is the transposed monodromy of~$\mathcal D$. Therefore, the corresponding polygon satisfies $v_{k+n} = \psi(v_k)$, where $\psi \in \P \mathrm{GL}_3(\mathbb{R})$ is the projective transformation determined by the linear operator~$A$. \begin{definition} A \textit{twisted $n$-gon} in $\mathbb{R}\mathbb{P}^2$ is a polygon $\{v_k\}$ which satisfies $v_{k+n} = \psi(v_k)$ for some projective transformation $\psi \in \P \mathrm{GL}_3$, called the \textit{monodromy of the polygon}.
\end{definition} The above construction (see the proof of Proposition \ref{freedom}) allows one to identify the space of projective equivalence classes of twisted $n$-gons with an appropriate quotient of the space of $n$-periodic properly bounded difference operators supported in $[j,j+3]$. Under this identification, closed polygons correspond to those operators whose monodromy is a scalar multiple of the identity (furthermore, one can arrange that the monodromy of an operator corresponding to a closed polygon is exactly the identity).\par Dual difference operators correspond to projectively dual polygons. Recall that the dual of a polygon is the polygon in the dual projective plane whose vertices are the sides of the initial one. Note that while there is, in general, no canonical way to label the vertices of the dual polygon, polygons with an odd number of vertices admit one particular labeling which is more symmetric than the others. This labeling is depicted in Figure \ref{Fig:dual}. For closed polygons, such a labeling makes projective duality an involution. For twisted polygons, it is only an involution up to the action of the monodromy, but still an actual involution on projective equivalence classes. \begin{definition}\label{def:dual} Let $P$ be a closed or twisted $n$-gon with odd $n$. Then the $k$'th vertex of its \textit{dual polygon} $P^*$ is the side of $P$ which joins the vertices with indices $k+(n-1)/2$ and $k+(n+1)/2$. A polygon $P$ is called \textit{self-dual} if it is projectively equivalent to its dual polygon $P^*$. \end{definition} \begin{remark} Closed self-dual polygons are studied in \cite{fuchs2009self}. In that paper, polygons which are self-dual in the sense of Definition \ref{def:dual} are called $n$-self-dual (where $n$ is the number of vertices). \end{remark} With our definition of duality, we have the following: \begin{proposition}\label{prop:duality} Let $n$ be odd. Consider an $n$-periodic properly bounded difference operator $\mathcal D$ supported in $[(n-3)/2, (n+3)/2]$ and its dual operator $ \mathcal D^*. $ Then the polygons corresponding to $\mathcal D$ and $\mathcal D^*$ are dual to each other in the sense of Definition \ref{def:dual}. \end{proposition} \begin{proof} This follows from the more general Proposition \ref{dualdual} in the appendix. \end{proof} \subsection{Corner invariants and rescaling}\label{sec:ci} Another description of the space of polygons modulo projective transformations is by means of so-called \textit{corner invariants}~\cite{schwartz2008discrete, ovsienko2010pentagram}. To every vertex $v_k$ of a polygon, one associates two cross-ratios $x_k, y_k$, as shown in Figure \ref{CI}. The definition of the cross-ratio that we use is $$ [t_1, t_2,t_3, t_4] := \frac{(t_1 - t_2)(t_3 - t_4)}{(t_1 - t_3)(t_2 - t_4)}.
$$ \begin{figure}[h] \centering \begin{tikzpicture}[scale = 1, rotate = -90] \draw [line width=0.2mm] (0.7,0.4) -- (1,1) -- (2,1.5) -- (3,1) -- (3.3,0.4); \fill (0.7,0.4) circle [radius=2pt]; \coordinate [label={$v_{k+2}$}]() at (0.7, 0.4); \fill (1,1) circle [radius=2pt]; \coordinate [label={45:$v_{k+1}$}]() at (1,1); \fill (2,1.5) circle [radius=2pt]; \coordinate [label={left:$v_{k}$}]() at (2,1.5); \fill (3,1) circle [radius=2pt]; \coordinate [label={-45:$v_{k-1}$}]() at (3,1); \fill (3.3,0.4) circle [radius=2pt]; \coordinate [label={below:$v_{k-2}$}]() at (3.3,0.4); \fill (2,3) circle [radius=2pt]; \coordinate [label={right:$\bar v_{k}$}]() at (2,3); \fill (1.4,1.8) circle [radius=2pt]; \coordinate [label={45:$\hat v_{k}$}]() at (1.4,1.8); \fill (2.6,1.8) circle [radius=2pt]; \coordinate [label={-45:$\tilde v_{k}$}]() at (2.6,1.8); \draw [dashed] (1,1) -- (2, 3); \draw [dashed] (1.4, 1.8) -- (2,1.5) ; \draw [dashed] (2, 3) -- (3,1); \draw [dashed] (2,1.5) -- (2.6,1.8); \node at (1.75,6) () {$x_k = [v_{k-2}, v_{k-1}, \tilde v_k, \bar v_k]$}; \node at (2.25,6) () {$y_k = [\bar v_k, \hat v_k, v_{k+1}, v_{k+2}]$}; \end{tikzpicture} \caption{The definition of corner invariants.}\label{CI} \end{figure} \begin{remark}Note that the definition of $x_k, y_k$ requires somewhat more than the $3$-in-a-row condition. However, we do not need to care about this, since in this paper we will only be dealing with weakly convex polygons (see Definition \ref{def:lcp} below), for which the numbers $x_k, y_k$ are well-defined by definition.\end{remark} Clearly, the sequences $x_k$, $y_k$ only depend on the projective equivalence class of the polygon. Furthermore, in the twisted case these sequences are $n$-periodic, and $\{ x_1, y_1, \dots, x_n, y_n\}$ is a coordinate chart on an open dense subset of the space of twisted $n$-gons modulo projective transformations. Therefore, since the pentagram map preserves twisted polygons, it can be written in terms of the $(x,y)$ coordinates. Explicitly, one has \begin{align}\label{pentFormulas} x_k' = x_k \frac{1 - x_{k-1}y_{k-1}}{1 - x_{k+1}y_{k+1}},\quad y_k' = y_{k+1} \frac{1 - x_{k+2}y_{k+2}}{1 - x_{k}y_{k}}, \end{align} where $x_k,y_k$ are the corner invariants of the polygon $P$, and $x_k', y_k'$ are the corner invariants of its pentagram image $P'$. Here we label $P'$ as in \cite{ovsienko2010pentagram}. The labeling used in Figure \ref{Fig:labeling} leads to the same formulas with a certain shift of indices. Although we will never use these explicit formulas, we will use the following corollary: the pentagram map, with any labeling of vertices, commutes with the one-parameter group of transformations $R_s$ given by \begin{equation}\label{rescaling} R_s \colon x_k \mapsto sx_k, \quad y_k \mapsto s^{-1}y_k. \end{equation} These transformations are known as \textit{rescaling}. They were introduced in \cite{ovsienko2010pentagram} to prove that the pentagram map is a completely integrable system. \par We now discuss the relation between two representations of the space of polygons: in terms of difference operators, and in terms of corner invariants. This relation is most conveniently expressed using the notion of the \textit{cross-ratio of a $2 \times 2$ matrix}, which is by definition the product of the off-diagonal entries divided by the product of the diagonal entries. This quantity is invariant under the left and right actions of invertible diagonal matrices. Therefore, for a matrix of any size, the cross-ratios of its $2 \times 2$ minors are also invariant under such an action.
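\begin{remark} As a quick numerical sanity check of the commutation of the pentagram map with rescaling (ours, purely illustrative; the sample invariants are randomly generated and hypothetical), the following Python sketch implements formulas \eqref{pentFormulas} for $n$-periodic corner invariants together with the rescaling \eqref{rescaling}, and verifies that the two operations commute:
\begin{verbatim}
import numpy as np

# Pentagram map in corner coordinates; indices are mod n, since the
# corner invariants of a twisted n-gon form n-periodic sequences.
def pentagram(x, y):
    n = len(x)
    xy = [x[k] * y[k] for k in range(n)]
    xp = [x[k] * (1 - xy[k - 1]) / (1 - xy[(k + 1) % n])
          for k in range(n)]
    yp = [y[(k + 1) % n] * (1 - xy[(k + 2) % n]) / (1 - xy[k])
          for k in range(n)]
    return xp, yp

def rescale(x, y, s):  # the rescaling R_s: x -> s x, y -> y / s
    return [s * xk for xk in x], [yk / s for yk in y]

# Random weakly convex data: x_k, y_k > 0 and x_k y_k < 1.
rng = np.random.default_rng(0)
n, s = 7, 2.5
x = list(rng.uniform(0.1, 0.9, n))
y = list(rng.uniform(0.1, 0.9, n))
lhs = pentagram(*rescale(x, y, s))    # rescale first, then the map
rhs = rescale(*pentagram(x, y), s)    # the map first, then rescale
print(np.allclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1]))
\end{verbatim}
The printed value is \texttt{True} for any $s \neq 0$: the products $x_ky_k$ entering \eqref{pentFormulas} are themselves invariant under $R_s$, which is precisely why the map commutes with the rescaling. \end{remark}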
Consider now the operator \eqref{o3op} corresponding to a given polygon. This operator can be written as an infinite matrix \begin{align}\label{infMatrix} \left(\begin{array}{ccccccc}\ddots & \ddots & \ddots & \ddots & & & \\ & a_k & b_k & c_k & d_k & & \\ & & a_{k+1} & b_{k+1} & c_{k+1} & d_{k+1} &\\ & & & \ddots & \ddots & \ddots & \ddots\end{array}\right). \end{align} By Proposition \ref{freedom}, the polygon determines this matrix up to the left and right actions of diagonal matrices. Therefore, the cross-ratios of $2 \times 2$ minors of this matrix are invariants of the polygon. Not surprisingly, these cross-ratios coincide with the corner invariants: \begin{proposition}\label{prop:xyviad} Assume we are given a polygon $P$ defined by a difference operator \eqref{o3op} supported in $[j,j+3]$. Then the corner invariants of $P$ are given by the cross-ratios of $2 \times 2$ minors of the corresponding infinite matrix~\eqref{infMatrix}: \begin{equation}\label{xyviad} x_{k+ j + 2} = \frac{c_{k}a_{k+1}}{b_{k} b_{k+1}}, \quad y_{k+ j + 2} = \frac{d_kb_{k+1}}{c_{k} c_{k+1}}. \end{equation} \end{proposition} \begin{proof} The proof is a computation following the lines of the proof of Lemma 4.5 in~\cite{ovsienko2010pentagram}. \end{proof} Formulas \eqref{xyviad} allow one to describe the rescaling operation \eqref{rescaling} in terms of difference operators: \begin{corollary}\label{cor:rescalingDO} In terms of difference operators, rescaling \eqref{rescaling} can be defined as $$ a T^j + bT^{j+1} + cT^{j+2} + dT^{j+3} \mapsto aT^j + bT^{j+1} + s(cT^{j+2} + dT^{j+3}). $$ \end{corollary} \begin{proof} Formulas \eqref{xyviad} show that multiplying the $c$ and $d$ coefficients by $s$ is equivalent to multiplying the $x$ variables by $s$ and the $y$ variables by $s^{-1}$, which is exactly the rescaling \eqref{rescaling}. \end{proof} \begin{remark} Note that since there are many operators corresponding to a given polygon, there are also many different ways to define the rescaling on operators. For example, the following formula defines the same operation on polygons as the formula provided above: $$ a T^j + bT^{j+1} + cT^{j+2} + dT^{j+3} \mapsto a T^j + s^{-{1}/{3}}bT^{j+1} + s^{{1}/{3}}cT^{j+2} + dT^{j+3}. $$ \end{remark} \section{Weakly convex polygons}\label{sec:lcp} In this section we define weakly convex polygons and describe their properties needed to prove Theorems~\ref{thm2} and \ref{thm1}. \begin{definition}\label{def:lcp} A polygon is \textit{weakly convex} if its corner invariants are well-defined and satisfy $$x_k > 0, \quad y_k > 0, \quad x_ky_k < 1.$$ \end{definition} \begin{proposition}\label{xiyi01} Convex polygons are weakly convex. \end{proposition} \begin{proof} For convex polygons, the collinear points $v_{k-2}$, $v_{k-1}$, $\tilde v_k$, $\bar v_k$ in Figure \ref{CI} are distinct and their cyclic order is exactly as shown. So, $x_k$ is well-defined and $0 < x_k< 1$. Likewise, we have $y_k \in (0,1)$. The result follows. \end{proof} \begin{remark} More generally, all corner invariants of a polygon satisfy $x_k, y_k \in (0,1)$ if and only if any five consecutive vertices of that polygon form a convex pentagon (where by a convex pentagon in $\mathbb{R}\mathbb{P}^2$ we mean a pentagon which is convex in a suitable affine chart). So, all polygons satisfying this ``5-in-a-row'' condition are weakly convex. The geometric meaning of the general weak convexity condition is not that clear.
However, it turns out to be really convenient for the purposes of the present paper. \end{remark} The following is an exhaustive list of the properties of weakly convex polygons needed for our purposes: \begin{proposition}\label{alt} Assume that $P$ is a closed or twisted weakly convex $n$-gon, where $n \geq 3$ is odd. Then: \begin{enumerate} \item The corresponding third order $n$-periodic difference operator \eqref{o3op} can be chosen in such a way that for all $k \in \mathbb{Z}$ we have \begin{equation}\label{altCond} a_k, c_k > 0, \quad b_k, d_k < 0. \end{equation} \item For any difference operator \eqref{o3op} corresponding to $P$ and satisfying \eqref{altCond}, consider the operators $\D_\l := a T^j + bT^{j+1}, \D_\r := cT^{j+2} + dT^{j+3}.$ Then the monodromies $z_l, z_r$ of these operators (which are real numbers because the operators are of first order) satisfy $0 < z_l < z_r$. In particular, we have $\Ker \D_\l \cap \Ker \D_\r = 0$. \item Any polygon obtained from $P$ by means of rescaling \eqref{rescaling} with $s > 0$ is weakly convex. \item Any polygon obtained from $P$ by means of rescaling \eqref{rescaling} with $s < 0$ has monodromy with distinct eigenvalues. \end{enumerate} \end{proposition} \begin{proof} To prove the first statement, consider the corner invariants $x_k, y_k > 0$ of $P$, and let $$ \mathcal D := T^j - T^{j+1} + x_{k+j+2}T^{j+2} - x_{k+j+2}y_{k+j+2}x_{k+j+3}T^{j+3}. $$ Then, by Proposition \ref{prop:xyviad}, the polygon associated with $\mathcal D$ is $P$. Furthermore, the signs of the coefficients of $\mathcal D$ satisfy \eqref{altCond}, as needed.\par To prove the second statement, consider an arbitrary operator \eqref{o3op} representing $P$ and satisfying~\eqref{altCond}, along with the associated operators $\D_\r, \D_\l$. Then, by formula~\eqref{monodet}, the monodromies of those operators are given by $$ z_l = -\prod_{k=1}^n \frac{ a_k}{ b_k}, \quad z_r = -\prod_{k=1}^n\frac{ c_k}{ d_k}, $$ so $z_l, z_r > 0$ due to \eqref{altCond} and $n$ being odd. Further, using formulas~\eqref{xyviad}, we get $$ \frac{z_l}{z_r} = \prod_{k=1}^n \frac{a_k d_k}{b_k c_k} = \prod_{k=1}^n x_{k} y_k, $$ where $x_k, y_k$ are the corner invariants of $P$. But since $P$ is weakly convex, we have $x_ky_k < 1$ and thus $z_l < z_r$. This in turn implies $\Ker \D_\l \cap \Ker \D_\r = 0$, because non-zero elements of the kernel of $\D_\l$ are sequences with monodromy $z_l$, while non-zero elements of the kernel of $\D_\r$ are sequences with monodromy $z_r \neq z_l$. Thus, the second statement is proved. \par The third statement is obvious from the definitions of weak convexity and rescaling, so we proceed to the fourth statement. Let $P_{sd}$ be a polygon obtained from $P$ by means of rescaling with $s < 0$. Then the corner invariants $\hat x_k, \hat y_k$ of $P_{sd}$ satisfy $\hat x_k, \hat y_k < 0$, $\hat x_k\hat y_k < 1$. To show that the monodromy of such a polygon has distinct eigenvalues, we use a result from the appendix to \cite{izosimov2016pentagrams} which says that the monodromy of a twisted $n$-gon with corner invariants $\hat x_k, \hat y_k$ is conjugate to the product $L_1\cdot \ldots \cdot L_n$, where \begin{equation}\label{li} L_k := \left(\begin{array}{ccc}1 & 0 & 1 \\1-\hat x_k\hat y_k & 0 & 1 \\0 & -\hat y_k & 0\end{array}\right). \end{equation} Notice that since $\hat y_k < 0$ and $\hat x_k \hat y_k < 1$, the matrices $L_k$ are non-negative. Furthermore, the product of at least three such matrices is positive, so the matrix $M := L_1\cdot \ldots \cdot L_n$ is positive.
Therefore, by the Perron-Frobenius theorem, $M$ has a real positive eigenvalue $z_1$ such that any other eigenvalue $z$ satisfies $|z| < z_1$. Furthermore, since $\hat x_k < 0$ and $n$ is odd, we have $\det M= \hat x_1 \cdot \ldots \cdot \hat x_n(\hat y_1 \cdot \ldots \cdot \hat y_n)^2 < 0$, so the product of the two other eigenvalues $z_2, z_3$ of $M$ is negative, which means that they are real and distinct. The result follows.\end{proof} \section{Polygons fixed by the pentagram map and commuting difference operators}\label{sec:fpcdo}\label{sec:cdo} In this section we show that a closed or twisted polygon $P$ projectively equivalent to its pentagram image $P'$ gives rise to commuting difference operators. This is the first step in the proof of both Theorem \ref{thm1} and Theorem \ref{thm2}. In addition, in the self-dual case (i.e. in the setting of Theorem \ref{thm2}) we show that those commuting operators are negative duals of each other. Let $P = \{v_k\}$ be a closed or twisted $n$-gon with odd $n$, and let $P' = \{v_k'\}$ be the image of $P$ under the pentagram map, labeled as in Figure \ref{Fig:labeling}. Then, as explained in Section \ref{sec:polygons}, one can encode $P'$ by means of a difference operator $\mathcal D$ of the form \begin{equation}\label{diffOp2} \mathcal D = a T^{{(n-3)}/{2}}+ bT^{{(n-1)}/{2}} + cT^{{(n+1)}/{2}} + dT^{{(n+3)}/{2}}. \end{equation} The coefficients of this operator are related to the polygon $P'$ by means of the equation $$ a_k V'_{k + {(n-3)}/{2}}+ b_kV'_{k+(n-1)/2} + c_kV'_{k + (n+1)/2} + d_kV'_{k+(n+3)/2} = 0, $$ where the $V'_k$'s are lifts of the vertices $v_k'$ of $P'$. \begin{proposition}\label{vvp} The vector $$V_k := a_k V'_{k + {(n-3)}/{2}}+ b_kV'_{k+(n-1)/2} = - c_kV'_{k + (n+1)/2} - d_kV'_{k+(n+3)/2}$$ is a lift of the vertex $v_k$ of $P$. \end{proposition} \begin{proof} Indeed, we have $$V_k \in \mathrm{span}(V'_{k + (n-3)/2}, V'_{k+(n-1)/2}) \cap \mathrm{span}(V'_{k + (n+1)/2}, V'_{k+(n+3)/2}),$$ which means that the projection of $V_k$ to $\P^2$ is the intersection point of the lines $(v'_{k + (n-3)/2},v'_{k+(n-1)/2})$ and $(v'_{k + (n+1)/2}, v'_{k+(n+3)/2})$. By the definition of the pentagram map with our labeling convention, this is exactly the vertex $v_k$ of $P$, as desired. \end{proof} Now, as in Proposition \ref{alt}, consider the operators \begin{equation}\label{dldr} \D_\l := a T^{{(n-3)}/{2}}+ bT^{{(n-1)}/{2}}, \quad \D_\r := \mathcal D - \D_\l =cT^{{(n+1)}/{2}} + dT^{{(n+3)}/{2}}. \end{equation} By Proposition \ref{vvp}, these operators take the lifts $V_k'$ of the vertices of $P'$ to the lifts $\pm V_k$ of the vertices of $P$. \begin{proposition}\label{prop:cdo} Assume that a closed or twisted $n$-gon $P$, where $n \geq 5$ is arbitrary, is projectively equivalent to its pentagram image $P'$. Then: \begin{enumerate} \item One can choose the $n$-periodic operator $\mathcal D$ of the form \eqref{diffOp2} associated with $P$ in such a way that the corresponding operators $\D_\l , \D_\r $ given by \eqref{dldr} commute: \begin{equation}\label{dldrcomm} \D_\l \D_\r = \D_\r \D_\l . \end{equation} \item Furthermore, if $P$ is weakly convex and $n$ is odd, then $\mathcal D$ can be chosen to satisfy the alternating signs condition \eqref{altCond}.
\item If, on top of that, $P$ is self-dual, then $\D_\l$, $\D_\r$ may be chosen to be negative duals of each other (up to multiplication by $T^{-n}$): $$\D_\l^* = -T^{-n}\D_\r.$$ Equivalently, the operator $\mathcal D = \D_\l + \D_\r$ can be chosen to be anti-self-dual (again, up to multiplication by $T^{-n}$): $$\mathcal D^* = -T^{-n}\mathcal D.$$ \end{enumerate} \end{proposition} \begin{proof}[Proof of Proposition \ref{prop:cdo}] We begin with the first statement. Take an arbitrary $n$-periodic operator~$\tilde \mathcal D$ of the form \eqref{diffOp2} representing the polygon $P$. Then, since $P'$ is projectively equivalent to $P$, there is a fundamental solution $V'$ of $\tilde \mathcal D$ such that the projection of $V'_k$ to $\P^2$ is the $k$'th vertex of $P'$. Consider the projective transformation taking $P'$ to $P$. Any lift $A \in \mathrm{GL}_3(\mathbb{R})$ of this projective transformation will then take the sequence $V'$ to a sequence of lifts of vertices of $P$. On the other hand, by Proposition \ref{vvp}, lifts of vertices of $P$ are given by the sequence $\tilde \D_\l V'$. So, there is an $n$-periodic sequence $\mu$ of non-zero real numbers such that $ AV' = \mu \tilde \D_\l V', $ where $A$ acts on sequences of vectors by term-wise multiplication. Let $\mathcal D:=\mu \tilde \mathcal D$. Then the operator $\mathcal D$ still satisfies $\mathcal D V'=0$ and hence represents the same polygons $P$ and $P'$. Furthermore, the corresponding operator $\D_\l $ satisfies $ AV' = \D_\l V'. $ Applying the operator $\mathcal D$ to both sides, we get $ \mathcal D \D_\l V' = 0. $ Also taking into account that $ \mathcal D V' = 0$, this can be rewritten as \begin{equation}\label{commutator0} ( \mathcal D \D_\l - \D_\l \mathcal D )V' = 0. \end{equation} At the same time, we have $$ \mathcal D \D_\l - \D_\l \mathcal D = ( \D_\l + \D_\r ) \D_\l - \D_\l ( \D_\l + \D_\r ) = \D_\r \D_\l - \D_\l \D_\r = [\D_\r , \D_\l ], $$ so \eqref{commutator0} gives \begin{equation}\label{commutator} [\D_\r , \D_\l ]\,V' = 0. \end{equation} Now it remains to notice that the commutator $[\D_\r , \D_\l ]$ is of the form $ \alpha T^{n-1} + \beta T^n + \gamma T^{n+1}, $ so equation~\eqref{commutator} is equivalent to $ \alpha_k V'_{k + n -1} + \beta_k V'_{k+n} + \gamma_k V'_{k+n+1} = 0, $ which, in view of the $3$-in-a-row condition for $P'$ (which holds because it holds for $P$), gives $\alpha_k = \beta_k = \gamma_k = 0$ and thus $ [\D_\r , \D_\l ] = 0, $ as desired. \par To prove the second statement, one repeats the same argument, with the only modification that the initial operator $\tilde \mathcal D$ should be chosen to satisfy \eqref{altCond}, which can be done by the first statement of Proposition~\ref{alt}. Then the coefficients $a,b,c,d$ of the operator $\mathcal D = \mu \tilde \mathcal D$ satisfy $\sgn(a_k) = -\sgn(b_k) = \sgn( c_k) = -\sgn(d_k)$. Furthermore, we claim that the $a_k$'s are all of the same sign. Indeed, using the explicit formulas \eqref{dldr} for $\D_\l $ and $\D_\r $ and equating the coefficients of $T^{n-1}$ in \eqref{dldrcomm}, we get \begin{equation}\label{explicitComm} a_k c_{k+(n-3)/2} = c_k a_{k+(n+1)/2}. \end{equation} Furthermore, we have $\sgn(c_j) = \sgn(a_j)$ for every $j$, so taking the signs of both sides of \eqref{explicitComm} we get $$ \sgn(a_{k+(n+1)/2}) = \sgn(c_{k+(n-3)/2}) = \sgn(a_{k+(n-3)/2}), $$ which means that the sequence $\sgn(a_k)$ is $2$-periodic. But since the period $n$ of the sequence $a_k$ is an odd number, it follows that $\sgn(a_k) = \mathrm{const}$.
Now, multiplying $\mathcal D$ by $-1$ if necessary, we can arrange that $a_k > 0$ for all $k$, so that $\mathcal D$ satisfies \eqref{altCond}, as needed. \par To prove the third statement, we consider the operator $\mathcal D$ constructed above and show that if $P$ is self-dual, then $\mathcal D$ can be replaced with another operator, which has all the properties of $\mathcal D$ and is, in addition, anti-self-dual. To that end, observe that if $P$ is self-dual, then the operators $ \mathcal D$ and $ \mathcal D^*$ represent the same polygon, so \begin{equation}\label{sdpop} \mathcal D^* = \alpha T^{-n } \mathcal D \beta^{-1} \end{equation} for certain sequences $\alpha, \beta$ of non-zero real numbers. Taking the duals, we get $$ \mathcal D = \beta^{-1} T^{n } \mathcal D^* \alpha = \alpha \beta^{-1} \mathcal D \alpha\beta^{-1}, $$ which implies $\beta = \pm \alpha$. Further, since $ \mathcal D$ satisfies \eqref{altCond}, the corresponding coefficients of $ \mathcal D^*$ and $T^{-n } \mathcal D$ are of opposite sign, so \eqref{sdpop} implies $\sgn(\alpha_k) = -\sgn(\beta_k) = \mathrm{const}$, and we must have $\beta = -\alpha$. Therefore, $$ \mathcal D^* = -\alpha T^{-n } \mathcal D \alpha^{-1}, $$ where $\sgn(\alpha_k) = \mathrm{const}$ and without loss of generality we can assume $\alpha_k > 0$. Furthermore, since both operators $ \mathcal D^*$, $T^{-n } \mathcal D$ are $n$-periodic, the sequence $\alpha$ is quasi-periodic, i.e. $\alpha_{k+n} = z\alpha_k$ for some $z \in \mathbb{R}^*$ (actually, $z \in \mathbb{R}_+$). Now, let $\gamma_k := \sqrt{\alpha_k}$. Then the sequence $\gamma$ is also quasi-periodic, so the operator $ \mathcal D' := \gamma \mathcal D \gamma^{-1}$ is $n$-periodic. Moreover, it has all the properties of $\mathcal D$ and is anti-self-dual. Thus, the proposition is proved. \end{proof} \begin{remark}\label{rem:crit} It is easy to see from the proof that the converse of the first statement is also true: if $\D_\r $ and $\D_\l $ commute, then $P$ is projectively equivalent to $P'$. For instance, consider the polygon $P$ from Example~\ref{ex:complex}. The vertices of that polygon can be lifted to vectors $V_k := (\lambda^{2k}, \lambda^{3k}, 1)$, where $\lambda := \exp({{2\pi\mathrm{i}}/{7}})$. The sequence $V_k$ is annihilated by a difference operator with constant coefficients, namely by $\mathcal D := a T^{{(n-3)}/{2}}+ bT^{{(n-1)}/{2}} + cT^{{(n+1)}/{2}} + dT^{{(n+3)}/{2}}$, where $a,b,c,d \in \mathbb{C}$ are such that the roots of the corresponding characteristic equation $a + bx + cx^2 + d x^3 = 0$ are $\lambda^2$, $\lambda^3$, and $1$. Therefore, the associated operators $\D_\r $ and $\D_\l $ also have constant coefficients and hence commute. So, the polygon $P$ is indeed projectively equivalent to its pentagram image $P'$. \end{remark} \par \section{The spectral curve}\label{sec:sc} The results of this section are central to the proof of Theorem \ref{thm2} (and will also be used to derive Theorem \ref{thm1} from Theorem \ref{thm2}). Namely, in Section \ref{ss:genus} we consider the joint spectrum of the commuting difference operators $\D_\l$, $\D_\r$ constructed above (see Section \ref{sec:cdo}), the so-called \textit{spectral curve}, and show that the genus $g$ of that curve is at most $1$. We note that this estimate on the genus is not predicted by the general theory of commuting difference operators. It seems that the best bound one can get from the general theory is $g \leq 2$.
Proving the $g \leq 1$ estimate requires a somewhat more careful analysis of the field of meromorphic functions. Also note that even the $g \leq 1$ result is still insufficient to prove Theorem~\ref{thm2}. Another important ingredient of the proof is the so-called \textit{eigenvector function}, which encodes the joint eigenvectors of the commuting operators $\D_\l$, $\D_\r$. We study that function in Section~\ref{ss:evf} and in particular prove that it has very few poles. \par \subsection{The spectral curve and a bound on its genus}\label{ss:genus} In this section, we construct the spectral curve associated with the commuting difference operators $\D_\l$, $\D_\r$ given by Proposition \ref{prop:cdo} and discuss its properties; in particular, we prove that its genus is at most $1$. \begin{remark} Note that instead of defining the spectral curve using commuting difference operators, we could have done this using the Lax representation, as in \cite{soloviev2013integrability}. However, at the end of the day these two definitions turn out to be equivalent to each other (see Remark \ref{rem:scdef} below). Furthermore, even if we defined the spectral curve using the Lax representation, we would still need commuting difference operators to establish the properties of the curve that we need. So, all in all, these two approaches are equivalent, and our choice is just a matter of convenience. \end{remark} \par Assume that $P$ is a weakly convex twisted $n$-gon, self-dual and projectively equivalent to its pentagram image. Then, by Proposition \ref{prop:cdo}, we have an $n$-periodic operator $\mathcal D_l = a T^{{(n-3)}/{2}}+ bT^{{(n-1)}/{2}}$ which commutes with its dual. For notational convenience, we define $$\mathcal D_+:=\mathcal D_l, \quad \mathcal D_- := \mathcal D_l^* = -T^{-n}\mathcal D_r.$$ Periodicity of these operators means that they also commute with $T^n$. Therefore, we have a whole algebra $\mathcal A$ of commuting operators, generated by $\mathcal D_+ $, $\mathcal D_-$, and $T^n$ (to preserve the left-right symmetry, it is natural to include $T^{-n}$ in $\mathcal A$ too, so that $\mathcal A= \mathbb{C}[ \mathcal D_+, \mathcal D_-, T^{\pm n}]$). To such an algebra $\mathcal A$, one can always associate an algebraic curve. This curve may be constructed using any two generic elements of $\mathcal A$. As such elements, we pick the operator $T^n$ and the product $\mathcal D_+\mathcal D_- = \mathcal D_-\mathcal D_+$. This choice is motivated by a particularly simple form of the operator $\mathcal D_+\mathcal D_-$. Namely, that operator is self-dual and is supported in $[-1,1]$: \begin{equation}\label{prodOP} \mathcal D_+\mathcal D_- = T^{-1}\alpha + \beta + \alpha T, \end{equation} where $\alpha, \beta$ are $n$-periodic sequences, and $\alpha_k \neq 0$ for any $k$. The \textit{affine spectral curve} $\Gamma_a$ is defined as the joint spectrum of $T^n$ and $\mathcal D_+\mathcal D_-$: $$ \Gamma_a := \{ (z,w) \in \mathbb{C}^* \times \mathbb{C} \mid \exists\, \xi \in \mathbb{C}^\infty: \xi \neq 0, \,T^n \xi = z\xi, \mathcal D_+\mathcal D_-\xi = w\xi \}, $$ where $\mathbb{C}^\infty$ stands for the space of bi-infinite sequences of complex numbers. In other words, a point $(z,w) \in \mathbb{C}^* \times \mathbb{C}$ is in $\Gamma_a$ if and only if $w$ is an eigenvalue of the restriction of $\mathcal D_+\mathcal D_-$ to the space $\v{z}$ defined by \eqref{qspace} (with sequences now allowed to be complex). \par To obtain an explicit equation of the affine spectral curve $\Gamma_a$, take a basis $e^{1}, \dots, e^{n}$ in $\v{z}$ determined by the condition $e^{j}_k = \delta_k^j$ for $k = 1,\dots, n$.
In this basis, the matrix of the operator $\mathcal D_+\mathcal D_-$ is almost tridiagonal, with two additional elements in the upper-right and bottom-left corners: \begin{equation}\label{todaMatrix} \left(\begin{array}{cccccc}\beta_1 & \alpha_1 & & & \alpha_n z^{-1} \\ \alpha_1 & \beta_2 & \alpha_2 & \\ & \ddots & \ddots & \ddots \\ & & \alpha_{n-2} & \beta_{n-1} & \alpha_{n-1} \\ \alpha_n z & & & \alpha_{n-1} & \beta_n\end{array}\right). \end{equation} The affine spectral curve $\Gamma_a$ is the zero locus of the characteristic polynomial of~\eqref{todaMatrix}, which, up to the factor $ \alpha_1\ldots\alpha_n$, reads \begin{equation}\label{charp} p(z,w) = z^{-1} + q(w) + z \end{equation} for a certain polynomial $q(w)$ of degree $n$. In particular, the spectral curve is algebraic, as predicted by the general theory of commuting difference operators. \begin{proposition}\label{prop:irred} The affine spectral curve $\Gamma_a$ is irreducible. \end{proposition} \begin{proof} This curve is the zero locus of the polynomial \eqref{charp}, which is irreducible whenever $q(w)$ is non-constant; this is the case here, since $\deg q = n$. \end{proof} We now define the \textit{spectral curve} $\Gamma$ as the Riemann surface corresponding to the affine curve $\Gamma_a$. In other words, $\Gamma$ is the unique Riemann surface biholomorphic to $\Gamma_a$ away from a finite number of points. The existence of such a Riemann surface is guaranteed by Riemann's theorem. It can be obtained from $\Gamma_a$ by means of normalization (which we actually explicitly construct in Remark \ref{resolution} below), followed by compactification. Since $\Gamma_a$ is irreducible (Proposition~\ref{prop:irred}), it follows that $\Gamma$ is connected. Furthermore, the Riemann surface $\Gamma$ comes equipped with: \begin{itemize} \item two meromorphic functions $z$ and $w$, obtained from the coordinate functions on $\Gamma_a$, and satisfying the equation $p(z,w) = 0$, with $p$ given by \eqref{charp}; \item a holomorphic involution $\sigma \colon \Gamma \to \Gamma$, coming from the involution $(z,w) \mapsto (z^{-1},w)$ on $\Gamma_a$, and satisfying $\sigma^*w = w$ and $\sigma^*z = z^{-1}$. \end{itemize} \begin{proposition}\label{prop:degrees} The degrees of the functions $w$ and $z$ on $\Gamma$ are equal to $2$ and $n$ respectively. \end{proposition} \begin{proof} Since the polynomial $q(w)$ in \eqref{charp} has degree $n$, the equation $p(z,w) = 0$ has $n$ solutions in terms of $w$ for generic $z$. So, the degree of $z$ on $\Gamma$ is $n$. Likewise, the number of solutions of $p(z,w) = 0$ in terms of $z$ is $2$ for generic $w$, so the degree of $w$ is $2$. \end{proof} Since $w$ is a function of degree $2$, and $\sigma$ is a non-trivial involution preserving $w$, it follows that $\sigma$ interchanges the two points in any level set of $w$. In particular, the fixed point set of $\sigma$ coincides with the set of branch points of $w$, i.e. points where $dw = 0$. At the end of this section we will show that the number of such branch points is at most $4$, which implies, by the Riemann-Hurwitz formula, that the genus of $\Gamma$ is at most $1$. But first we need to discuss in detail the analytic properties of the functions $z$ and $w$, as well as of some other functions on $\Gamma$ which we introduce below. \begin{proposition} The Riemann surface $\Gamma$ is obtained from the normalization of $\Gamma_a$ by adding two points $Z_\pm$, interchanged by the involution $\sigma$. The point $Z_+$ is a zero of order $n$ for the function $z$, while $Z_-$ is its pole of order $n$.
Both points are simple poles of the function $w$. \end{proposition} \begin{proof} Let $\Gamma_n \subset \Gamma$ be the normalization of $\Gamma_a$. This set can be described as the preimage of $\Gamma_a$ under the map $(z,w) \colon \Gamma \to \P^1 \times \P^1$. Also note that the image of $\Gamma$ under the latter map is precisely the closure of $\Gamma_a$ in $ \P^1 \times \P^1$, which consists of $\Gamma_a$ and the points $(0, \infty)$ and $(\infty, \infty)$. So, the image of $\Gamma \setminus \Gamma_n$ under the map $(z,w)$ consists of the two points $(0, \infty)$ and $(\infty, \infty)$. This means, first, that any point in $\Gamma \setminus \Gamma_n$ is a pole of $w$, and second, that there are at least two such points. But since $w$ has degree $2$ (Proposition~\ref{prop:degrees}), it follows that $\Gamma \setminus \Gamma_n$ consists of exactly two points, and that these points are simple poles of $w$. Furthermore, at one of these points, which we denote by $Z_+$, we have $z = 0$, while at the other one, which we call $Z_-$, we have $z = \infty$. Finally, notice that since all points of $\Gamma$ except $Z_\pm$ belong to $\Gamma_n$, it follows that $z$ does not have zeros or poles except for $Z_\pm$. So, $Z_+$ is a zero of $z$ of order $n$, while $Z_-$ is a pole of order $n$, as desired. \end{proof} Denote also by $S_\pm$ the two zeros of the function $w$ on $\Gamma$. A priori, these two points may coincide, but later on we will show that they are distinct (see the proof of Proposition \ref{cor:table}). Table \ref{table} summarizes information about the orders of the functions $z$ and $w$ at the points $Z_\pm$, $S_\pm$ (recall that the \textit{order} of a meromorphic function $f$ at a point $X$ is equal to $m$ if $f$ has a zero of order $m$ at $X$, $-m$ if $f$ has a pole of order $m$ at $X$, and $0$ otherwise). Also note that the order of $z$ and $w$ at any other point of $\Gamma$ is equal to $0$. The table also contains information about the functions $s$ and $\mu_\pm$, which we introduce below. \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|}\hline Function & Degree & Order at $Z_+$ & Order at $Z_-$ & Order at $S_+$ & Order at $S_-$ \\\hline $z$ & $n$ & $n$ & $-n$ & $0$ & $0$ \\\hline $w$ & $2$ & $-1$ & $-1$ & $1$ & $1$ \\\hline $s$ & $3$ & $-2$ & $2$ & $1$ & $-1$ \\\hline $\mu_+$ & $(n-1)/2$ & $(n-3)/2$ & $-(n-1)/2$ & $1$ & $0$\\\hline $\mu_-$ & $(n-1)/2$ & $-(n-1)/2$ & $(n-3)/2$ & $0$ & $1$ \\\hline \end{tabular} \caption{The orders of the functions $z,w,s, \mu_\pm$ at the points $Z_\pm, S_\pm \in \Gamma$. The order of these functions at any other point of $\Gamma$ is zero.}\label{table} \end{table} \begin{proposition}\label{prop:rankOne} The pair $\mathcal D_+\mathcal D_-$, $T^n$ of commuting difference operators is of \textit{rank $1$}, which means that the generic common eigenspace of these operators is $1$-dimensional. \end{proposition} \begin{proof} As follows from the explicit form \eqref{charp} of the characteristic polynomial of the matrix \eqref{todaMatrix}, for generic $z$ that matrix has distinct eigenvalues and hence one-dimensional eigenspaces. \end{proof} For a generic point $(z,w) \in \Gamma_a$, let $\xi \in \mathbb{C}^\infty$ be the corresponding common eigenvector of $\mathcal D_+\mathcal D_-$ and $T^n$, normalized by the condition $\xi_0 = 1$. \begin{proposition}\label{prop:ev} The components $\xi_k$ of $\xi$ extend to meromorphic functions on $\Gamma$.
The corresponding vector-function $\xi$ on $\Gamma$ satisfies the equations \begin{equation}\label{xidefeqns} T^n\xi = z \xi, \quad \mathcal D_+\mathcal D_- \xi = w \xi. \end{equation} \end{proposition} \begin{proof} For $(z,w) \in \Gamma_a$, let $\eta = (\eta_1, \dots, \eta_n)$ be the first row of the comatrix of $L - w\mathrm{Id}$, where $L$ is given by~\eqref{todaMatrix}. Extend $\eta$ to a bi-infinite sequence by the rule $\eta_{k+n} = z \eta_k$. Then $\eta$ is a common eigenvector of $T^n$ and $\mathcal D_+\mathcal D_-$: $ T^n\eta = z \eta$, $\mathcal D_+\mathcal D_- \eta = w \eta. $ Furthermore, the components of $\eta$ are, by construction, rational functions of $z$ and $w$. So, to obtain the desired function $\xi$, it remains to normalize $\eta$: $ \xi_k = {\eta_k}/{\eta_0}. $ Note that $\eta_0 = z^{-1}\eta_n$ does not vanish identically on $\Gamma_a$, because $\eta_n$ is a polynomial in $z,w$ which is linear in $z$ and hence cannot be divisible by the defining polynomial of $\Gamma_a$. So, $\xi$ is a well-defined rational vector-function of $z,w$, and hence a meromorphic function on $\Gamma$. \end{proof} We call the vector-function $\xi$ the \textit{eigenvector function}. Its analytic properties are studied in detail in Section~\ref{ss:evf} below. \begin{remark}\label{rem:holo} Note that at every point $X \in \Gamma \setminus \{Z_\pm\}$, the vector-function $\xi$ is meromorphic in the following strong sense: there exists a local holomorphic function $f$ such that $f\xi$ is holomorphic at $X$. Moreover, the function $f$ can be chosen in such a way that $(f\xi)(X)$ does not vanish. Therefore, the value of the function $\xi$ at any point $X \in \Gamma \setminus \{Z_\pm\}$ determines a direction in the infinite-dimensional projective space $\mathbb P^\infty$, regardless of whether the components of $\xi$ are finite or infinite (note also that this direction does not change if we replace our particular normalization $\xi_0 = 1$ by any other normalization). This is, however, not true at the points $Z_\pm$. At those points, the components $\xi_k$ of $\xi$ are still meromorphic, but the order of the pole of $\xi_k$ is an unbounded function of $k$ (see Proposition \ref{behinf} below), so there exists no $f$ such that $f\xi$ is holomorphic. In particular, the value of $\xi$ at $Z_\pm$ does not determine any direction. \end{remark} We now show that every operator $\mathcal L$ from the commutative algebra $\mathcal A=\langle \mathcal D_+ , \mathcal D_-,T^{\pm n} \rangle$ gives rise to a meromorphic function $f_{\mathcal L}$ on $\Gamma$, which is holomorphic everywhere except possibly the points $Z_\pm$ and satisfies $ \mathcal L \xi = f_{\mathcal L} \xi. $ In particular, the assignment $\mathcal L \mapsto f_{\mathcal L}$ is a homomorphism from $\mathcal A$ to the algebra of meromorphic functions on $\Gamma$ which are holomorphic in $\Gamma \setminus \{Z_\pm\}$. We already have $f_{T^{\pm n}}= z^{\pm 1}$ and $f_{\mathcal D_+\mathcal D_-} = w$, so it remains to construct the functions $f_{\mathcal D_+}$ and $f_{\mathcal D_-}$ (of course, one of them determines the other, since their product must be equal to $w$). We denote these functions by $\mu_+$ and $ \mu_-$: \begin{proposition}\label{musholo} There exist meromorphic functions $\mu_+, \mu_-$ on $\Gamma$ which are holomorphic in $\Gamma \setminus \{Z_\pm\}$ and satisfy $ \mathcal D_\pm \xi = \mu_\pm \xi. $ Furthermore, we have $\mu_+\mu_- = w$.
\end{proposition} \begin{proof} By Proposition~\ref{prop:rankOne}, a generic common eigenspace of $\mathcal D_+\mathcal D_-$ and $T^n$ is one-dimensional, and is therefore generated by the vector $\xi$, evaluated at the corresponding point of the Riemann surface $\Gamma$. For this reason, since the operators $\mathcal D_\pm$ commute with $\mathcal D_+\mathcal D_-$ and $T^n$, at generic points of $\Gamma$ we must have \begin{equation}\label{mupluschar} \mathcal D_\pm \xi = \mu_\pm \xi \end{equation} for certain numbers $\mu_\pm \in \mathbb{C}$ depending on the point of $\Gamma$. Furthermore, since the left-hand side of~\eqref{mupluschar} is a meromorphic vector-function on $\Gamma$, and so is $\xi$, it follows that the functions $\mu_\pm$ also extend to meromorphic functions on the whole of $\Gamma$. Moreover, given a point $X \in \Gamma \setminus \{Z_\pm\}$, renormalizing $\xi$ if necessary we can assume that $\xi(X)$ is finite and non-zero (see Remark \ref{rem:holo}). But then \eqref{mupluschar} implies that the functions $\mu_\pm$ are holomorphic at $X$. Finally, the equation $\mu_+\mu_- = w$ follows directly from \eqref{mupluschar} and the second of equations \eqref{xidefeqns}. \end{proof} \begin{proposition}\label{sigmamumu} We have $\sigma^* \mu_+ = \mu_-$. \end{proposition} \begin{remark} The existence of the involution $\sigma$ on $\Gamma$ is due to the invariance of the algebra $\mathcal A=\langle \mathcal D_+ , \mathcal D_-,T^{\pm n} \rangle$ under operator duality: $\mathcal A = \mathcal A^* := \{ \mathcal L^* \mid \mathcal L \in \mathcal A\}$. So, since $\mathcal D_+^* = \mathcal D_-$, it is only natural that $\sigma^*\mu_+ = \mu_-$. \end{remark} \begin{proof}[Proof of Proposition \ref{sigmamumu}] It suffices to show that $\mu_+(X_+) = \mu_-(X_-)$, where $X_\pm$ is a generic pair of points interchanged by $\sigma$. Since the points $X_\pm$ are generic, one can assume that the vectors $\xi(X_\pm)$ are finite. Under this assumption, we have $ \mathcal D_\pm \xi(X_\pm) = \mu_\pm(X_\pm) \xi(X_\pm). $ Furthermore, we have $ \xi(X_\pm) \in \v{z_+^{\pm 1}}$, where $z_+ := z(X_+) = z(X_-)^{-1}$. So, using the pairing~\eqref{pairing} between $\v{z_+}$ and $\v{z_+^{-1}}$, we get \begin{equation}\label{mumustar} \mu_+(X_+)\left\langle\xi(X_+), \xi(X_-) \right\rangle = \left\langle \mathcal D_+ \xi(X_+), \xi(X_-) \right\rangle = \left\langle \xi(X_+), \mathcal D_- \xi(X_-) \right\rangle = \mu_-(X_-)\left\langle\xi(X_+), \xi(X_-)\right \rangle. \end{equation} So, to complete the proof, it suffices to show that $\left\langle\xi(X_+), \xi(X_-) \right\rangle \neq 0$. To that end, observe that $\xi(X_+)$ belongs to the kernel of the operator $(\mathcal D_+\mathcal D_- - w_0)\vert_{\v{z_+}}$, where $w_0 := w(X_\pm)$, and, in the generic case, spans that kernel. So, the orthogonal complement to $\xi(X_+)$ with respect to the pairing~\eqref{pairing} is the image of the dual operator $$\left((\mathcal D_+\mathcal D_- - w_0)\vert_{\v{z_+\vphantom{z_+^{-1}}}}\right)^* = \left(\mathcal D_+\mathcal D_- - w_0\right)\vert_{\v{z_+^{-1}}}.$$ But for generic $z_+$ the operator $\mathcal D_+\mathcal D_-$ has simple spectrum on $\v{z_+^{-1}}$ and is, therefore, diagonalizable, which in particular implies $$\Im (\mathcal D_+\mathcal D_- - w_0)\vert_{\v{z_+^{-1}}} \cap \Ker (\mathcal D_+\mathcal D_- - w_0)\vert_{\v{z_+^{-1}}} = 0.$$ Therefore, we have $\left\langle\xi(X_+), \xi(X_-) \right\rangle \neq 0$, as desired.
\end{proof} Now, define a meromorphic function $s$ on $\Gamma$ by the formula \begin{equation}\label{SDEF} s:=\frac{\mu_+}{z\mu_-}. \end{equation} This function does not correspond to any difference operator $\mathcal L \in \mathcal A = \mathbb{C}[ \mathcal D_+, \mathcal D_-, T^{\pm n}]$ but can be thought of as corresponding to the pseudo-difference operator $T^{-n}\mathcal D_+\mathcal D_-^{-1}$. Accordingly, the function $s$ satisfies the equation \begin{equation}\label{SEQN} (\mathcal D_+ - sT^n \mathcal D_-)\xi = 0. \end{equation} Recall that the operator on the left-hand side encodes the family of polygons obtained from $P$ by means of rescaling \eqref{rescaling}. \begin{proposition}\label{degIneq} The function $s$ has degree $3$. There exist three distinct points on $\Gamma$ at which $s = -1$. The function $z$ takes three distinct values at those points. \end{proposition} \begin{proof} We first show that $\deg s \leq 3$. Let $X_1, \dots, X_m \in \Gamma$ belong to the level set $s = s_0$. Then, for generic $s_0 \in \mathbb{C}$, these points correspond to distinct points on the affine spectral curve $\Gamma_a$. This, in particular, means that the vectors $\xi(X_1), \dots, \xi(X_m) \in \Ker(\mathcal D_+ - s_0T^n \mathcal D_-)$ are linearly independent (as joint eigenvectors of $T^n$ and $\mathcal D_+\mathcal D_-$ corresponding to distinct eigenvalues). But $$\dim \Ker(\mathcal D_+ - s_0T^n \mathcal D_-) = \ord(\mathcal D_+ - s_0T^n \mathcal D_-)= 3,$$ so $m \leq 3$, and the degree of $s$ is at most $3$, as desired.\par We now show that there exist three distinct points on $\Gamma$ at which $s = -1$, which, in turn, implies that the degree of $s$ is exactly $3$. Since the polygon $P$, given by the operator $\mathcal D_+ - T^n \mathcal D_-$, is weakly convex, it follows from the fourth statement of Proposition \ref{alt} that the monodromy of $\mathcal D_+ + T^n \mathcal D_-$ has simple spectrum. This means that there exist three distinct numbers $z_1, z_2, z_3$ such that the operator $\mathcal D_+ +T^n \mathcal D_-$ has non-trivial (and hence one-dimensional) kernel on $\v{z_k}$. Let $\xi^{k}$ be a generator of that kernel. Then, since the operator $\mathcal D_+ \mathcal D_-$ commutes with $\mathcal D_+ +T^n \mathcal D_-$ and $T^n$, it follows that $\xi^{k} $ is also an eigenvector of $\mathcal D_+ \mathcal D_-$, corresponding to some eigenvalue $w_k$. Then the three points $(z_k,w_k)$ belong to the affine spectral curve $\Gamma_a$, and thus give rise to at least three points $X_1, X_2, X_3 \in \Gamma \setminus \{Z_\pm\}$ with $z(X_k) = z_k$, $w(X_k) = w_k$. We now claim that $s(X_1) = s(X_2) = s(X_3) = -1$. Indeed, the vector $\xi^{k}$ spans the $(z_k,w_k)$ joint eigenspace of $T^n$ and $\mathcal D_+ \mathcal D_-$. Therefore, at each of the points $X_k$, we have $\xi(X_k) = c_k\xi^{k}$, where $c_k \in \mathbb{C}$ (here we assume that the vectors $\xi(X_k)$ are finite, which can always be arranged by multiplying $\xi$ by an appropriate meromorphic function, see Remark \ref{rem:holo}). So, by construction of the vectors $\xi^{k}$ we have $$ (\mathcal D_+ +T^n \mathcal D_-) \xi(X_k) = c_k(\mathcal D_+ +T^n \mathcal D_-) \xi^{k}= 0.
$$ On the other hand, $$ (\mathcal D_+ +T^n \mathcal D_-) \xi(X_k) = (\mu_+ + z\mu_-)\vert_{X_k} \xi(X_k), $$ so \begin{equation}\label{smone}(\mu_+ + z\mu_-)\vert_{X_k} = 0.\end{equation} Notice also that $X_k$ cannot be a common zero of $\mu_+$ and $\mu_-$, because that would imply $\xi(X_k) \in \Ker \mathcal D_+ \cap \Ker \mathcal D_-$, which is not possible by the second statement of Proposition \ref{alt} (the latter applies to $\mathcal D_\pm$ since $\mathcal D_+ = \D_\l$, $\mathcal D_- = -T^{-n} \mathcal D_r$). Furthermore, $\mu_\pm$ cannot have a pole at $X_k$ by Proposition~\ref{musholo}. But then~\eqref{smone} implies $s(X_k) = -1$, as desired. \end{proof} \begin{remark}\label{rem:scdef} It follows from Proposition \ref{degIneq} that the function $s$ has the following meaning. Fix some generic $s_0 \in \mathbb{C}$. Then there are three points $X_1, X_2, X_3$ in $\Gamma$ with $s = s_0$. Furthermore, the vectors $\xi(X_k) \in \v{z(X_k)}$ belong to $ \Ker(\mathcal D_+ - s_0 T^n \mathcal D_-) $. So, $z(X_1), z(X_2), z(X_3)$ is the spectrum of the monodromy of $\mathcal D_+ - s_0T^n \mathcal D_-$. In other words, if we consider a meromorphic mapping $\Gamma \to \mathbb{C}^2$ given by the functions $(z,s)$, then its image belongs to the algebraic curve $$ \Gamma_a' := \{ (z,s) \in \mathbb{C}^* \times \mathbb{C} \mid z \mbox{ is an eigenvalue of the monodromy of } \mathcal D_+ - sT^n \mathcal D_- \}. $$ Using also that $\deg z = n$ and $\deg s = 3$, it is easy to show that the mapping $\Gamma \to \Gamma_a'$ is generically biholomorphic. So, $\Gamma_a'$ is just another affine model of the spectral curve $\Gamma$. This model can be thought of as the joint spectrum of the operators $T^n$ and $T^{-n} \mathcal D_+\mathcal D_-^{-1}$ (the latter is well-defined on a generic eigenspace of $T^n$). Furthermore, since the operator $ \mathcal D_+ - sT^n \mathcal D_-$ corresponds to the polygon $R_s(P)$, where $R_s$ is the rescaling action \eqref{rescaling}, it follows that $\Gamma_a'$ can be regarded as the graph of the spectrum of the monodromy of $R_s(P)$. As explained in \cite{izosimov2016pentagrams}, this definition of the spectral curve coincides with the one used in \cite{soloviev2013integrability} to prove algebraic integrability of the pentagram map. So, as a Riemann surface, our spectral curve is isomorphic to the one of \cite{soloviev2013integrability}. \end{remark} We are now in a position to prove the main result of the section: \begin{proposition}\label{genus2} The genus $g$ of $\Gamma$ satisfies $g \leq 1$. \end{proposition} \begin{proof} The function $w$ on $\Gamma$ is a $2$-fold ramified covering of the Riemann sphere whose branch points coincide with the fixed points of the involution $\sigma$. To estimate the number of such fixed points, notice that from Proposition~\ref{sigmamumu} and formula~\eqref{SDEF} it follows that $\sigma^*s = s^{-1}$. Thus, at each fixed point of $\sigma$ we must have $s = \pm 1$. Furthermore, by Proposition~\ref{degIneq}, the function $z$ takes three distinct values at points of $\Gamma$ where $s = -1$, and since the set $s = -1$ is invariant under the involution $\sigma$ which takes $z$ to $z^{-1}$, it follows that those values must be of the form $\pm 1, z_0, {z_0}^{-1}$, where $z_0 \neq \pm 1$. So, $\sigma$ must have exactly one fixed point at the level set $s = -1$. In addition to that, it may have up to three fixed points at the level set $s = 1$, giving up to four fixed points in total.
Now, the desired inequality for the genus follows from the Riemann-Hurwitz formula: a $2$-fold covering of the sphere with $B$ branch points has genus $B/2 - 1$, so $B \leq 4$ gives $g \leq 1$. \end{proof} \begin{remark}\label{z1} In fact, since the values of $z$ at points where $s = -1$ are the eigenvalues of the monodromy of $\mathcal D_+ +T^n \mathcal D_-$, it follows from formula \eqref{monodet} that they are of the form $-1, z_0,z_0^{-1}$. Another way to see this is to notice that by Proposition \ref{sigmamumu} at fixed points of $\sigma$ we must have $\mu_+ = \mu_-$ and thus $z = s^{-1}$ (here we use that the functions $\mu_\pm$ do not have common zeros and also do not have poles in $\Gamma \setminus \{Z_\pm\}$). This also implies that if $\Gamma$ has genus $1$, then $z = 1$ at points where $s = 1$. In other words, all eigenvalues of the monodromy of the polygon $P$ are equal to $1$. Later on, we will see that this monodromy is in fact the identity. In other words, if the spectral curve is elliptic, then in the setting of Theorem \ref{thm2} the polygon $P$ must be closed (see Remark \ref{psdclosed}). \end{remark} \begin{remark} Note that without weak convexity (used to prove Proposition \ref{degIneq}) we would not be able to say that there is just one fixed point of the involution $\sigma$ at the level $s = -1$. In that case, nothing seems to prevent $\sigma$ from having six fixed points, which means that $\Gamma$ may be a genus $2$ curve. Thus, it should be possible to construct a counterexample to Theorems \ref{thm1} and \ref{thm2} in the non-weakly-convex case using genus $2$ curves and their associated genus $2$ theta functions. \par Another way to obtain the estimate $g \leq 2$ is to use the existence on $\Gamma$ of meromorphic functions of degree $2$ and $3$ (namely, $w$ and $s$). However, this does not guarantee the $g \leq 1$ estimate obtained above. \end{remark} We finish this section with two additional results on the spectral curve which will be useful later on. \begin{proposition}\label{cor:table} The degrees of the functions $\mu_\pm, s$ and their orders at the points $Z_\pm, S_\pm$ are as shown in Table \ref{table}. \end{proposition} \begin{proof} Since for any $\lambda \in \mathbb{C}^*$ the order of the operators $\mathcal D_\pm - \lambda $ is $(n-1)/2$, an argument analogous to the one we used to show that $\deg s \leq 3$ (see the proof of Proposition \ref{degIneq}) gives $\deg \mu_\pm \leq (n-1)/2$. Further, let $d_\pm := \ord_{Z_+}\mu_\pm$. Then, since $\ord_{Z_+}w=-1$ (see Table \ref{table}), the equation $\mu_+\mu_- = w$ implies \begin{equation}\label{sumofks} d_+ + d_- = -1. \end{equation} Furthermore, using that $\ord_{Z_+}z=n$ and equation \eqref{SDEF}, we get \begin{equation}\label{diffofks} d_+ - d_- = n + \ord_{Z_+}s, \end{equation} so \begin{equation}\label{kminfla} d_- = -\frac{1}{2}(n+1) - \frac{1}{2}\ord_{Z_+}s, \end{equation} and since $\deg \mu_- \leq (n-1)/2$, we must have $d_- \geq - (n-1)/2 $, which implies $\ord_{Z_+}s \leq - 2$. On the other hand, we know that $\deg s = 3$, and from \eqref{kminfla} it follows that $\ord_{Z_+}s$ is even. So, we must have $\ord_{Z_+}s = -2$, which, along with \eqref{kminfla}, implies $\ord_{Z_+}\mu_- = d_- = - (n-1)/2$ and thus $ \deg \mu_- = (n-1)/2$. Similarly, adding up \eqref{sumofks} and \eqref{diffofks}, we get $ \ord_{Z_+}\mu_+ = d_+ = (n-3)/2$.
Analogously, replacing the point $Z_+$ with $Z_-$, we find the orders of $\mu_\pm$ and $s$ at $Z_-$, as well as the degree of~$\mu_+$ (one can also use that $\sigma^*\mu_+ = \mu_-$ and $\sigma(Z_+) = Z_-$).\par It now remains to find the orders of the functions $\mu_\pm$ and $s$ at the points $S_\pm$. To that end, we first show that $S_+ \neq S_-$. Assume, for the sake of contradiction, that $S_+ = S_- = S$. Then $S$ is a double zero of the function $w$. Furthermore, we have $\mu_+\mu_- = w$, and both $\mu_+$ and $\mu_-$ are holomorphic and have at worst a simple zero at $S$ (indeed, these functions have degree $ (n-1)/2 $ and zeros of order $(n-3)/2$ at the points $Z_+$ and $Z_-$ respectively). So, both $\mu_+$ and $\mu_-$ must have a simple zero at $S$. But then, from the definition \eqref{SDEF} of the function $s$ it follows that it does not have a zero at $S$. Furthermore, $s$ cannot have zeros at other points of $\Gamma \setminus \{Z_\pm\}$, because the only zero of $\mu_+$ in that domain is the point $S$. But this means that $s$ has just two zeros counting with multiplicities, which is impossible since $\deg s = 3$. Therefore, we must have $S_+ \neq S_-$.\par Now, the relation $\mu_+\mu_- = w$ implies that at both points $S_\pm$ one of the functions $\mu_\pm$ has a simple zero, while the second one does not have a zero or a pole. Without loss of generality, assume that $\mu_+(S_+) = 0$ and thus $\ord_{S_+}\mu_+ = 1$. Then $\ord_{S_+}\mu_- = 0$, and by formula~\eqref{SDEF} we get $\ord_{S_+}s = 1$. Furthermore, from $\sigma^*\mu_+ = \mu_-$ and $\sigma(S_+) = S_-$ it follows that $\ord_{S_-}\mu_+ = 0$, $\ord_{S_-}\mu_- = 1$, and $\ord_{S_-}s = -1$. Finally, notice that the functions $\mu_\pm$ and $s$ do not have zeros or poles other than the points $Z_\pm, S_\pm$, because for each of them the total number of zeros and poles at those points (counting with multiplicities) coincides with the degree. Thus, the proposition is proved. \end{proof} \begin{proposition}\label{prop:nodal} The affine spectral curve $\Gamma_a$ is a nodal curve (i.e. all its singularities are double points). \end{proposition} \begin{proof} The affine spectral curve $\Gamma_a$ is the zero locus of the characteristic polynomial $p(z,w) = z + z^{-1} + q(w)$ of the matrix~\eqref{todaMatrix}. Computing the differential of that polynomial, we get that $(z_0,w_0) \in \Gamma_a$ is singular if and only if $z_0 = \pm 1$ and $w_0$ is a multiple root of $p(z_0,w)$. Furthermore, computing the Hessian, we get that a singular point $(z_0,w_0) \in \Gamma_a$ is a double point if and only if $w_0$ is a double root of $p(z_0,w)$. But for $z_0 = \pm 1$ the matrix~\eqref{todaMatrix} is symmetric (equivalently, the restriction of the operator $\mathcal D_+\mathcal D_-$ to $\v{\pm 1}$ is self-adjoint), so the multiplicity of the root $w_0$ of its characteristic polynomial is equal to the dimension of the corresponding eigenspace, which is $$\dim \Ker(\mathcal D_+\mathcal D_- - w_0)\vert_{\v{\pm 1}} \leq \ord(\mathcal D_+\mathcal D_- - w_0) = 2. $$ So indeed all singular points of $\Gamma_a$ are double points. \end{proof} \begin{remark}\label{nodal} It is easy to see that the genus of the normalization $\Gamma$ of $\Gamma_a$ is equal to $n - d - 1$, where $d$ is the number of double points of $\Gamma_a$. Furthermore, as can be seen from the proof of Proposition~\ref{prop:nodal}, double points of $\Gamma_a$ correspond to double roots of the polynomials $q(w) \pm 2$.
The polynomial $q$ is of degree $n$, so each of those polynomials may have at most $(n-1) / 2$ double roots. Therefore, $\Gamma$ is rational when each of the polynomials $q(w) \pm 2$ has precisely $(n-1) / 2$ double roots (and, in addition, one simple root). Likewise, $\Gamma$ is elliptic when one of the polynomials $q(w) \pm 2$ has $(n-1)/2$ double roots, while the second one has $(n-3)/2$ double roots. Using Remark \ref{z1} one can show that it is the polynomial $q(w) + 2$ that has $(n-3)/2$ double roots. \end{remark} \par \subsection{The eigenvector function}\label{ss:evf} In this section, we study in detail the analytic properties of the meromorphic vector-function $\xi$ constructed in Proposition \ref{prop:ev}. This will allow us to obtain analytic formulas for coordinates of vertices of the polygon $P$ (see Section \ref{sec:rat} for the rational case and Section \ref{ss:sd} for the elliptic case). We keep all the notation of Section \ref{ss:genus}. \begin{proposition}\label{behinf} We have $\mathrm{ord}_{Z_\pm}\xi_k = \pm k$. \end{proposition} \begin{proof} Let $d_k := \mathrm{ord}_{Z_+}\xi_k - k$. We need to show that $d_k = 0$ for every $k \in \mathbb{Z}$. Note that $d_0 = 0$ since $\xi_0 = 1$. So it suffices to show that $d_k$ is a constant sequence. Also note that since $\xi_{k+n} = z\xi_k$ and $z$ has a zero of order $n$ at $Z_+$ (see Table \ref{table}), the sequence $d_k$ is $n$-periodic. So, if it is not constant, then there must exist $k \in \mathbb{Z}$ such that $d_{k-1} > d_k \leq d_{k+1}$. But since $\xi$ is the eigenvector of the operator \eqref{prodOP} with eigenvalue $w$, we have \begin{equation}\label{xirel} \alpha_{k-1}\xi_{k-1} + \alpha_{k}\xi_{k+1} = ( w - \beta_k)\xi_k. \end{equation} Since $\alpha$ is a non-vanishing sequence, the order of the left-hand side at $Z_+$ can be bounded as \begin{equation}\begin{gathered} \mathrm{ord}_{Z_+}( \alpha_{k-1}\xi_{k-1} + \alpha_{k}\xi_{k+1}) \geq \min(\mathrm{ord}_{Z_+}\xi_{k-1},\mathrm{ord}_{Z_+}\xi_{k+1}) \\ = \min(d_{k-1} + k -1, d_{k+1} + k + 1) \geq d_k + k. \end{gathered}\end{equation} On the other hand, since $\mathrm{ord}_{Z_+}w = -1$, the order of the right-hand side of \eqref{xirel} is $d_k + k - 1$, which is strictly smaller than the order of the left-hand side, a contradiction. So, $d_k$ must be a constant sequence, as desired. \end{proof} We now proceed to describe the behavior of $\xi$ away from the points $Z_\pm$. We begin with the following preliminary lemma. \begin{lemma}\label{indep} Assume that $X_\pm \in \Gamma \setminus \{Z_\pm\}$ are distinct points such that $w(X_+) = w(X_-)$ (equivalently, $\sigma(X_+) = X_-$). Then the directions (i.e. points in $\P^\infty$, see Remark \ref{rem:holo}) determined by the values of $\xi$ at $X_\pm$ are distinct from each other. \end{lemma} \begin{proof} Without loss of generality, assume that the vectors $\xi(X_\pm)$ are finite and non-zero (if not, we multiply $\xi$ by an appropriate meromorphic function, see Remark \ref{rem:holo}). One then needs to show that these vectors are linearly independent. To that end, recall that $T^n\xi(X_\pm) = z(X_\pm)\xi(X_\pm)$. So, if $z(X_+) \neq z(X_-)$, then the vectors $\xi(X_\pm)$ are independent as eigenvectors of $T^n$ corresponding to distinct eigenvalues. Therefore, it suffices to consider the case $z(X_+) = z(X_-)$. In that case, we have $$ z(X_+) = z(X_-) = z(\sigma(X_-))^{-1} = z(X_+)^{-1}, $$ so $z(X_\pm) = \pm 1$. Suppose for the sake of contradiction that the corresponding vectors $\xi(X_\pm)$ are linearly dependent.
Then, without loss of generality, we can assume that $\xi(X_+) = \xi(X_-)$ (this can always be arranged by multiplying $\xi$ by an appropriate meromorphic function). Denote $\xi_0:=\xi(X_\pm)$, $z_0 := z(X_\pm) = \pm 1$, $w_0 := w(X_\pm)$. Notice that since $X_+ \neq X_-$ while $w(X_+) = w(X_-)$ and $\deg w = 2$, the differential of $w$ does not vanish at $X_\pm$, so $w$ can be taken as a local parameter near those points. Then, differentiating the relation $(T^n- z)\xi = 0$ with respect to $w$ at $X_\pm$, we get $$ (T^n - z_0)\xi'(X_+) = z'(X_+)\xi_0, \quad (T^n - z_0)\xi'(X_-) = z'(X_-)\xi_0. $$ Taking a linear combination of these equations, we obtain $ (T^n - z_0)\hat \xi = 0, $ where $ \hat \xi := z'(X_+)\xi'(X_-) - z'(X_-)\xi'(X_+). $ In other words, we have $\hat \xi \in \v{z_0}.$ Similarly, using the equation $(\mathcal D_+\mathcal D_- - w)\xi = 0$, we get \begin{equation}\label{Jordan} (\mathcal D_+\mathcal D_- - w_0)\hat \xi =\lambda \xi_0, \end{equation} where $ \lambda :=z'(X_+) -z'(X_-). $ Note also that $\lambda \neq 0$. Indeed, $\lambda = 0$ would mean that the two branches of the curve $\Gamma_a$ given by the functions $z(w)$ near $X_\pm$ are tangent to each other. This is, however, not possible, since $\Gamma_a$ is a nodal curve (Proposition \ref{prop:nodal}). So, since $\lambda \neq 0$ and $\hat \xi \in \v{z_0}$, it follows from~\eqref{Jordan} that the operator $(\mathcal D_+\mathcal D_-)\vert_{\v{z_0}}$ has a non-trivial Jordan block. This is, however, not possible, since $z_0 = \pm 1$, and thus $\mathcal D_+\mathcal D_-$ is self-adjoint on $\v{z_0}$. So it must be that the vectors $\xi(X_\pm)$ are linearly independent, as desired. \end{proof} \begin{remark}\label{resolution} In the elliptic case (i.e. when the genus of $\Gamma$ is $1$), one can also prove Lemma~\ref{indep} as follows. The vectors $\xi(X_\pm)$ are common eigenvectors of the operators $\mathcal D_+$, $\mathcal D_-$, $T^n$, with the corresponding eigenvalues given by the values of the functions $\mu_+$, $\mu_-$, $z$ at the points $X_\pm$. So, it follows that $\xi(X_\pm)$ are independent as long as at least one of the functions $\mu_+, \mu_-, z$ separates $X_+$ from $X_-$. Assume that this is not the case, which means that $\mu_\pm(X_+) = \mu_\pm(X_-)$ and $z(X_+) = z(X_-)$. Then we have $$\mu_-(X_+) = \mu_+(\sigma(X_+)) = \mu_+(X_-) = \mu_+(X_+). $$ Along with $z(X_\pm) = \pm 1$ (established in the proof above), this gives $s(X_\pm) = \pm 1$. But in the elliptic case four of the six points where $s = \pm 1$ are fixed by $\sigma$ (which would force $X_+ = X_-$), while the remaining two points are separated by the function $z$ (see the proof of Proposition \ref{genus2}). So indeed the functions $\mu_\pm, z$ separate any pair of points on $\Gamma$, which proves Lemma \ref{indep}. As a byproduct, we also get the following result: the functions $\mu_\pm, z$ define an embedding $\Gamma \setminus \{Z_\pm\} \hookrightarrow \mathbb{C}^3$. In other words, if we view $\mu_\pm$ and $z$ as rational functions on $\Gamma_a$, then these functions provide a resolution of singularities. \end{remark} Lemma \ref{indep} also admits an infinitesimal version, corresponding to the case when $X_+ = X_- = X$ is a branch point of $w$ (equivalently, a fixed point of $\sigma$). In this case, the role of $\xi(X_\pm)$ is played by the vectors $\xi(X)$, $\xi'(X)$, where the derivative is taken with respect to a local parameter near $X$. Note that upon renormalization of $\xi$, its derivative changes as $\xi' \mapsto f\xi' + f'\xi$, so the direction of $\xi'$ is well-defined modulo the direction of $\xi$.
In particular, linear independence of $\xi$ and $\xi'$ is well-defined. \begin{lemma}\label{indep2} Assume that $X \in \Gamma$ is a branch point of $w$ (equivalently, a fixed point of $\sigma$). Then the directions determined by the values of $\xi$ and $\xi'$ at $X$ are distinct from each other. \end{lemma} \begin{proof} Renormalizing $\xi$ if necessary, we can assume that its value at $X$ is finite and non-zero. Then, differentiating the equation $(T^n - z)\xi = 0$ with respect to a local parameter near $X$, we get \begin{equation}\label{JBlock}(T^n - z(X))\xi'(X) = z'(X) \xi(X).\end{equation} Also note that since $\Gamma_a$ is a nodal curve, it follows that the mapping $(z,w) \colon \Gamma \setminus \{Z_\pm\} \to \mathbb{C}^2$ is an immersion, so at a branch point of $w$ we must have $z' \neq 0$. But then \eqref{JBlock} implies that the vectors $\xi(X)$ and $\xi'(X)$ are linearly independent, as desired. \end{proof} \begin{remark}\label{dersol} Differentiating $(\mathcal D_+\mathcal D_- - w)\xi = 0$ at $X$ and using that $w'(X) = 0$, we get $(\mathcal D_+\mathcal D_- - w(X))\xi'(X) = 0$. So, Lemma \ref{indep2} means that $\xi(X)$ and $\xi'(X)$ form a basis of solutions for the equation $(\mathcal D_+\mathcal D_- - w(X))\xi = 0$. \end{remark} \begin{proposition}\label{gpoles} The function $\xi_1$ has $g$ poles in $\Gamma \setminus \{Z_\pm\} $, where $g \in \{0,1\}$ is the genus of $\Gamma$. \end{proposition} \begin{proof} Let $u \in \bar \mathbb{C}$, and let $X_\pm$ be the two preimages of $u$ under the function $w \colon \Gamma \to \bar \mathbb{C}$. Recall that the \textit{trace of a meromorphic function} $f$ on $\Gamma$ under $w$ is the meromorphic function on $\bar \mathbb{C}$ defined by $(\mathrm{tr}_wf)(u) := f(X_+) + f(X_-) $. Define $$ \zeta(u) := \left|\!\begin{array}{cc}\xi_0(X_+) & \xi_1(X_+) \\ \xi_0(X_-) & \xi_1(X_-)\end{array}\!\right|^2 = (\xi_1(X_+) - \xi_1(X_-))^2. $$ Then $\zeta = 2 \mathrm{tr}_w (\xi_1^2) - (\mathrm{tr}_w \xi_1)^2$, so in particular $\zeta$ is meromorphic (i.e. rational). To understand the behavior of that function, fix a point $u_0 \in \bar \mathbb{C}$. Let $\Sigma := w(\{X \in \Gamma \mid dw(X) = 0\})\subset \bar \mathbb{C}$ be the set of critical values of $w$ (this set contains two or four points depending on the genus of $\Gamma$). Then the following cases are possible: \\ \\ \textbf{Case 1.} $u_0 \notin \Sigma$ is finite, and $\xi_1$ is finite at both preimages $X_\pm$ of $u_0$ under $w$. In this case, $\zeta(u_0)$ is the squared Wronskian of the solutions $\xi(X_\pm)$ of the equation $\mathcal D_+\mathcal D_-\eta = u_0\eta$. By Lemma \ref{indep}, these solutions are independent, so $\zeta(u_0)$ is finite and non-zero. \\ \\ \textbf{Case 2.} $u_0 \notin \Sigma$ is finite, $\xi_1$ has a pole of order $d$ at one of the preimages $X_\pm$ of $u_0$ (say, $X_+$), and is finite at the other preimage. In this case, the function $(u-u_0)^{2d}\zeta(u)$ is finite at $u_0$ and is equal to the squared Wronskian of linearly independent solutions $((w - u_0)^d\xi)({X_+})$, $\xi(X_-)$ of $\mathcal D_+\mathcal D_-\eta = u_0\eta$. So, $\zeta$ has a pole of order $2d$ at $u_0$.\\ \\ \textbf{Case 3.} $u_0 \notin \Sigma$ is finite, and $\xi_1$ has poles at both preimages $X_\pm$ of $u_0$.
This is not possible, since after renormalizing $\xi$ we would get $$ \left(\!\begin{array}{cc}\xi_0(X_+) & \xi_1(X_+) \\ \xi_0(X_-) & \xi_1(X_-)\end{array}\!\right) = \left(\begin{array}{cc}0 & 1 \\ 0& 1\end{array}\right), $$ which would mean that the Wronskian of $\xi(X_\pm)$ vanishes. \\ \\ \textbf{Case 4.} $u_0 = \infty$ (in which case we also have $u_0 \notin \Sigma$). In this case $X_\pm = Z_\pm$, so $\zeta$ has a pole of order $2$ at $u_0$ by Proposition \ref{behinf}. \\ \\ All in all, the function $\zeta$ does not vanish in $\bar \mathbb{C} \setminus \Sigma$, while the number of its poles in that domain is twice the number of poles of $\xi_1$ in $\{X \in \Gamma \mid dw(X) \neq 0\}$ (counting with multiplicities). Now, consider $u_0 \in \Sigma$, and let $X \in \Gamma$ be the unique point such that $w(X) = u_0$. Then there exists a parameter $t$ near $X$ such that the function $w$ can be locally written as $t \mapsto u_0 + t^2$. So $\zeta(u)$ near $u_0$ can be written as $$ \zeta(u) = \left|\!\begin{array}{cc}\xi_0(t) & \xi_1(t) \\ \xi_0(-t) & \xi_1(-t)\end{array}\!\right|^2, $$ where $t = \sqrt{u - u_0}$. Then at $t = 0$ we have \begin{equation}\label{asym} \zeta(u) \sim t^2\left|\!\begin{array}{cc}\xi'_0(0) & \xi'_1(0) \\ \xi_0(0) & \xi_1(0)\end{array}\!\right|^2, \end{equation} up to a constant factor and higher order terms. So, when $u_0 \in \Sigma$, we have the following two cases:\\\\ \textbf{Case 5.} $u_0 \in \Sigma$, and $\xi_1$ is finite at the preimage $X$ of $u_0$. In this case, in view of Remark \ref{dersol}, the determinant in \eqref{asym} is the Wronskian of two independent solutions of $\mathcal D_+\mathcal D_-\eta = u_0\eta$, so $\zeta(u) \sim t^2 = u - u_0$ and thus has a simple zero at $u_0$.\\ \\ \textbf{Case 6.} $u_0 \in \Sigma$, and $\xi_1$ has a pole of order $d$ at the preimage $X$ of $u_0$. In this case, renormalizing $\xi$ as in Case 2, we get that $\zeta$ has a pole of order $2d - 1$ at $u_0$. \\\\ In the latter case, one can regard a pole of order $2d - 1$ as a pole of order $2d$ that collided with a simple zero. With this understanding, the number of zeros of $\zeta$ is equal to the number of branch points of $w$, while the number of poles of $\zeta$ is twice the number of poles of $\xi_1$ (with some zeros and poles possibly cancelling each other out). And since the number of zeros of $\zeta$ is equal to the number of its poles, it follows that the number of poles of $\xi_1$ is half the number of branch points of $w$; the latter number equals $2g + 2$, so $\xi_1$ has $g + 1$ poles in total. Furthermore, since $Z_+$ is not a pole of $\xi_1$, while $Z_-$ is its pole of order $1$ (see Proposition \ref{behinf}), it follows that the number of poles of $\xi_1$ in $\Gamma \setminus \{Z_\pm\} $ is exactly $g$, as desired. \end{proof} \begin{corollary}\label{behfin} In the rational case, all functions $\xi_k$ are holomorphic in $\Gamma\, \setminus \, \{Z_\pm\} $, while in the elliptic case all of them have at worst a simple pole at one and the same point $X_p$, and no other poles. \end{corollary} \begin{proof} Since $\xi_0 = 1$ and $\xi_1$ have these properties, the result follows from \eqref{xirel} by induction. \end{proof} \par \section{Proof of Theorem \ref{thm2}: a self-dual polygon fixed by the pentagram map is Poncelet} In this section we prove Theorem \ref{thm2}: any weakly convex self-dual twisted odd-gon $P$ projectively equivalent to its pentagram image $P'$ is Poncelet.
To that end, we use the results of Section \ref{sec:sc} to obtain explicit formulas for coordinates of vertices of $P$ (see Section~\ref{sec:rat} for the case $g = 0$ and Section~\ref{ss:sd} for the case $g = 1$) and hence show that $P$ is a Poncelet polygon. \subsection{The rational case: degenerate Poncelet polygons}\label{sec:rat} In this section, we prove Theorem \ref{thm2} in the case when the genus of $\Gamma$ is $0$, i.e. when $\Gamma$ is a rational curve. In that case, we will show that $P$ is a degenerate Poncelet polygon in the sense that the corresponding inscribed and circumscribed conics are not in general position. We keep the notation of the previous two sections. \begin{proposition} The set $s^{-1}(1) := \{X \in \Gamma \mid s(X) = 1\}$ consists of either one or three points. \end{proposition} \begin{proof} This set is invariant under the involution $\sigma$ and contains exactly one fixed point of that involution (see the proof of Proposition \ref{genus2}). So, it must contain an odd number of points, and since $\deg s = 3$, it follows that $|s^{-1}(1)| = 1$ or $|s^{-1}(1)| = 3$. \end{proof} We consider the cases $|s^{-1}(1)| = 1$ and $|s^{-1}(1)| = 3$ separately. First, assume that $|s^{-1}(1)| = 3$. Denote the points in $s^{-1}(1)$ by $A,B,C$, where $A$ and $B$ are switched by $\sigma$, while $C$ is fixed by~$\sigma$. \begin{proposition}\label{abcind} The vectors $\xi(A)$, $\xi(B)$, $\xi(C)$ form a basis of $\Ker(\mathcal D_+ -T^n \mathcal D_-)$. \end{proposition} \begin{remark} Note that the vectors $\xi(A)$, $\xi(B)$, $\xi(C)$ are finite because, by Corollary \ref{behfin}, the vector-function $\xi$ is holomorphic in $\Gamma \setminus \{Z_\pm\} $, and $A,B,C \neq Z_\pm$ since $s(Z_+) = \infty$ and $s(Z_-) = 0$ (see Table~\ref{table}). \end{remark} \begin{proof}[Proof of Proposition \ref{abcind}] We have $\xi(A), \xi(B), \xi(C) \in \Ker(\mathcal D_+ -T^n \mathcal D_-)$ by \eqref{SEQN}, so it suffices to show that these vectors are linearly independent. To that end, recall that they are eigenvectors of the operator $\mathcal D_+\mathcal D_-$. Furthermore, the eigenvalue $w(C)$ corresponding to $\xi(C)$ is distinct from the eigenvalue $w(A) = w(B)$ corresponding to the other two vectors. So, it suffices to prove the independence of $\xi(A)$ and $\xi(B)$. But that follows from Lemma~\ref{indep}. \end{proof} Now recall that the polygon corresponding to the operator $\mathcal D_+ -T^n \mathcal D_-$ is $P$. So, by Proposition \ref{abcind} the vertices of $P$ (defined up to a projective transformation) are given by $ (\xi_k(A):\xi_k(B): \xi_k(C)) \in \P^2. $ To explicitly compute the coordinates of vertices, we identify $\Gamma$ with $\bar \mathbb{C}$. Note that since automorphisms of $\bar \mathbb{C}$ act transitively on triples of points, the map $u \colon \Gamma \to \bar \mathbb{C}$ may be chosen in such a way that $u(Z_+ ) = 0$, $u(Z_-) = \infty$, and $u(C) = 1$. Then the involution $\sigma$, written in terms of $u$, is $u \mapsto u^{-1}$, while the points $A,B$ are identified with $r$ and $r^{-1}$, where $r \in \mathbb{C}^*\setminus \{\pm 1\}$. Furthermore, from Proposition \ref{behinf} and Corollary \ref{behfin} we get $ \xi_k(u) = c_k u^k, $ where $c_k \neq 0$ is a constant. Therefore, the vertices of $P$ are given by \begin{equation}\label{fund} v_k = (r^k: r^{-k}: 1).
\end{equation} So the polygon $P$ is inscribed in a conic with homogeneous equation \begin{equation}\label{circConic}x_1x_2 = x_3^2.\end{equation} Furthermore, since $P$ is self-dual, it is also circumscribed, and hence Poncelet. Thus, the proof of Theorem \ref{thm2} in the case when the spectral curve is rational and $|s^{-1}(1)| = 3$ is complete. \begin{remark}A direct calculation shows that the conic inscribed in the polygon~\eqref{fund} is \begin{equation}\label{insConic}x_1x_2 = \left(\frac{1}{2} + \frac{1}{4}(r + r^{-1})\right)x_3^2.\end{equation} The conics \eqref{circConic} and \eqref{insConic} are tangent to each other at two points $(1:0:0)$ and $(0:1:0)$. In particular, they are not in general position (instead of four intersections we have two intersections of multiplicity~$2$). \end{remark} We now consider the case $|s^{-1}(1)| = 1$. To begin with, notice that this case can be thought of as a limit of the case $|s^{-1}(1)| = 3$, with the points $A$, $B$, $C$ colliding together and forming a single point $D \in s^{-1}(1)$. This observation leads to the following version of Proposition \ref{abcind}: \begin{proposition}\label{dind} The vectors $\xi(D)$, $\xi'(D)$, $\xi''(D)$ form a basis of $\Ker(\mathcal D_+ -T^n \mathcal D_-)$, where the derivatives are taken with respect to any local parameter near $D$. \end{proposition} \begin{proof} First, notice that since the set $s^{-1}(1)$ consists of a single point $D$, the latter must be an order two branch point of the function $s$. In other words, we have $s'(D) = s''(D) = 0$. So, differentiating the equation $(\mathcal D_+ -sT^n \mathcal D_-)\xi = 0$ at the point $D$ twice, we get $$ (\mathcal D_+ -T^n \mathcal D_-)\xi'(D) = (\mathcal D_+ -T^n \mathcal D_-)\xi''(D) = 0. $$ Thus, we have $\xi(D), \xi'(D), \xi''(D) \in \Ker(\mathcal D_+ -T^n \mathcal D_-)$, and it suffices to show that these vectors are linearly independent. To that end, we differentiate the equation $(\mathcal D_+\mathcal D_- - w)\xi = 0$ twice at $D$. Using that $D$ is a branch point of $w$ and thus $w'(D) = 0$, we get $$ (\mathcal D_+\mathcal D_- - w(D))\xi'(D) = 0, \quad (\mathcal D_+\mathcal D_- - w(D))\xi''(D) = w''(D)\xi(D). $$ Furthermore, since the degree of the function $w$ is $2$, $D$ is an order $1$ branch point for $w$, so $w''(D) \neq 0$, which means that $\xi(D)$, $\xi'(D)$ are eigenvectors of $\mathcal D_+\mathcal D_-$, while $\xi''(D)$ is not. Furthermore, the vectors $\xi(D)$ and $\xi'(D)$ are linearly independent by Lemma \ref{indep2}. So, $\xi(D)$, $\xi'(D)$, $\xi''(D)$ are indeed independent, as desired. \end{proof} We now find the vertices of the polygon $P$ in the same fashion as in the case $|s^{-1}(1)| = 3$. Namely, choose an identification $u \colon \Gamma \to \P^1$ in such a way that $u(Z_+ ) = 0$, $u(Z_-) = \infty$, and $u(D) = 1$. Then, as in the case $|s^{-1}(1)| = 3$, we get $ \xi_k(u) = c_k u^k, $ where $c_k \neq 0$ is a constant. In particular, at the point $D$ we get $\xi_k = c_k$, $\xi_k' = k c_k$, $\xi_k'' = k(k-1) c_k$, so up to a projective transformation the vertices of $P$ are given by \begin{equation}\label{fundnilp} v_k = (k: k^2: 1). \end{equation} These points belong to the conic \begin{equation}\label{circConic2}x_2x_3 = x_1^2,\end{equation} so $P$ is inscribed in a conic; since $P$ is self-dual, it is also circumscribed, and hence Poncelet. Thus, the proof of Theorem \ref{thm2} in the rational case is complete.
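\begin{remark} For the reader's convenience, here is a sketch of the direct calculation behind \eqref{insConic} (the case of \eqref{insConic2} below is entirely analogous). The side of the polygon \eqref{fund} through $v_k$ and $v_{k+1}$ is the line $$ x_1 + r^{2k+1}x_2 - r^k(1+r)x_3 = 0, $$ as one checks by substituting $v_k = (r^k: r^{-k}: 1)$ and $v_{k+1}$. Eliminating $x_1$ between this equation and $x_1x_2 = cx_3^2$ gives $r^{2k+1}x_2^2 - r^k(1+r)x_2x_3 + cx_3^2 = 0$, so the side is tangent to the conic $x_1x_2 = cx_3^2$ if and only if the discriminant $r^{2k}(1+r)^2 - 4r^{2k+1}c$ vanishes. The latter condition does not depend on $k$ and gives $c = \frac{1}{4}(1+r)^2r^{-1} = \frac{1}{2} + \frac{1}{4}(r + r^{-1})$, in agreement with \eqref{insConic}. \end{remark}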
\begin{remark}A direct calculation shows that the conic inscribed in the polygon \eqref{fundnilp} is \begin{equation}\label{insConic2}x_2x_3 = x_1^2 + \frac{1}{4}x_3^2.\end{equation} This is an even more degenerate case: the conics \eqref{circConic2} and \eqref{insConic2} intersect each other at a single point~$(0:1:0)$, of multiplicity $4$. \end{remark} \subsection{The elliptic case: genuine Poncelet polygons}\label{ss:sd} In this section, we complete the proof of Theorem \ref{thm2} in the case when the spectral curve $\Gamma$ has genus~$1$, i.e. is elliptic. The argument is similar to the rational case, but instead of elementary expressions~\eqref{fund} and \eqref{fundnilp}, we obtain formulas for vertices of $P$ in terms of theta functions. \par Recall that in the elliptic case the involution $\sigma$ on $\Gamma$ has four fixed points, at three of which we have $s = 1$, while at the fourth one we have $s = -1$ (see the proof of Proposition \ref{genus2}). Denote those points by $A, B, C, D$, where $s(D) = -1$, and $s(A) = s(B) = s(C) = 1$. \begin{proposition} The directions in $\P^\infty$ determined by the values of the vector-function $\xi$ at the points $A$, $B$, $C$ (see Remark \ref{rem:holo}) are linearly independent. \end{proposition} \begin{proof} They are eigendirections of the operator $\mathcal D_+\mathcal D_-$ corresponding to distinct eigenvalues $w(A)$, $w(B)$, $w(C)$. \end{proof} As in the rational case, it follows that the vectors $\xi(A)$, $\xi(B)$, $\xi(C)$ form a basis of $\Ker(\mathcal D_+ -T^n \mathcal D_-)$ (as usual, one may need to renormalize these vectors to ensure that they are finite, see Remark \ref{rem:holo}). Therefore, the vertices of the corresponding polygon $P$ (defined up to projective transformation) are given by $ (\xi_k(A):\xi_k(B): \xi_k(C)) \in \P^2. $ \begin{remark}\label{psdclosed} Since $z(A) = z(B) = z(C) = 1$ (see Remark \ref{z1}), it follows that the infinite vectors $\xi(A)$, $\xi(B)$, $\xi(C)$ are $n$-periodic, so the polygon $P$ is closed. \end{remark} To explicitly compute the coordinates of vertices of $P$, we identify $\Gamma$ with $\mathbb{C} \,/\, \Lambda$, where $\Lambda \subset \mathbb{C}$ is a lattice. Without loss of generality, assume that $\Lambda$ is spanned by $1$ and $\tau$, where $\tau$ is in the upper half-plane. Furthermore, one can choose an identification between $\Gamma$ and $\mathbb{C} \,/\, \Lambda$ in such a way that the point $D \in \Gamma$ gets identified with $(1 + \tau)/2$. Then $\sigma$, understood as an involution in $\mathbb{C}\, /\, \Lambda$, is simply $u \mapsto -u$. So, $A,B,C$ must coincide with the remaining points of order dividing $2$ in $\mathbb{C}\, /\, \Lambda$, namely $0$, ${1}/{2}$, ${\tau}/{2}$. Without loss of generality, assume that $A = 0$, $B = {1}/{2}$, $C = {\tau}/{2}$. Once the identification $\Gamma \simeq \mathbb{C} \,/\, \Lambda$ is done, meromorphic functions on $\Gamma$ can be expressed in terms of the \textit{theta function} corresponding to the lattice $\Lambda = \langle 1, \tau \rangle$. Recall (see e.g. \cite{mumford2007tata}) that this function is defined by \begin{equation} \theta(u) := \sum_{k \in \mathbb{Z}}\exp(\pi \mathrm{i}(2ku + k^2 \tau)) \end{equation} where $\mathrm{i} = \sqrt{-1}$, and the dependence of $\theta$ on $\tau$ is suppressed for notational convenience.
It is easily seen from this definition that the theta function is holomorphic in $\mathbb{C}$, even, periodic with period $1$, and quasi-periodic with period $\tau$: \begin{equation*} \theta(-u) = \theta(u), \quad \theta(u+1) = \theta(u), \quad \theta(u+\tau) =\exp(-\pi \mathrm{i}(2u + \tau)) \theta(u). \end{equation*} In addition to that, one can show using the argument principle that the theta function has a unique simple zero at the point $(1 + \tau)/2$, and no other zeros in the fundamental parallelogram spanned by $1$ and $\tau$. These properties allow one to express any meromorphic function on $\mathbb{C}\, /\, \Lambda$ in terms of the theta function. The construction is based on the following well-known result: there exists a meromorphic function with zeros at $p_1, \dots, p_m \in \mathbb{C} \,/\, \Lambda$ and poles at $q_1, \dots, q_m \in \mathbb{C} \,/\, \Lambda$ if and only if $\sum p_k = \sum q_k$ modulo $\Lambda$. So assume that we are given a collection of points with this property. Then the expression \begin{equation}\label{thetaexpr} f(u) := \frac{\prod\limits_{k=1}^m \theta(u - p_k + (1 + \tau)/2)}{\prod\limits_{k=1}^m \theta(u - q_k +(1 + \tau)/2)} \end{equation} defines a meromorphic function on $\mathbb{C}$ which can be easily seen to be periodic with respect to both $1$ and $\tau$ (here we regard $p_k$'s and $q_k$'s as points in $\mathbb{C}$ and assume that they are chosen in such a way that $\sum p_k = \sum q_k$ exactly, and not just modulo $\Lambda$). Therefore, this function can be viewed as a meromorphic function on $\mathbb{C} \,/\, \Lambda$. Furthermore, the only zeros of $f(u)$ in $\mathbb{C} \,/\, \Lambda$ are $p_k$'s, while its only poles are $q_k$'s. Since zeros and poles determine a meromorphic function up to a constant factor, it follows that any meromorphic function on $\mathbb{C} \,/\, \Lambda$ with zeros at $p_1, \dots, p_m$ and poles $q_1, \dots, q_m$ can be written as \eqref{thetaexpr} times a constant. \par To apply formula \eqref{thetaexpr} in our situation, choose complex numbers $x_p, z_\pm \in \mathbb{C}$ whose images in $\mathbb{C}\, /\, \Lambda$ are the points $X_p, Z_\pm \in \Gamma$. Note that the points $Z_\pm$ are interchanged by $\sigma$, so $z_+ + z_- = 0$ modulo $\Lambda$. Therefore, without loss of generality one can assume that $z_+ + z_- = 1 + \tau$. Then, using Proposition~\ref{behinf} and Corollary \ref{behfin}, we get \begin{equation}\label{xiifla} \xi_k(u) = c_k\frac{ \theta^k(u - z_+ + d )\theta(u - x_p + k\delta + d)}{\theta^k(u - z_- + d)\theta(u - x_p + d)}, \end{equation} where $c_k$ is a non-zero constant, $d := (1 + \tau)/2$, $\delta := z_+ - z_-$, and the term containing $\delta$ is found by equating the sum of zeros with the sum of poles. \begin{remark} Note that the functions $\xi_k$ may, but not necessarily do, have poles at $X_p$ (see Corollary~\ref{behfin}). However, formula~\eqref{xiifla} is valid anyway. Indeed, if $\xi_k$ does not have a pole at $X_p$, then its only pole is the point $Z_-$ (which is of order $k$), while its only zero is the point $Z_+$ (which is also of order $k$). So, we must have $kz_+ = kz_-$ modulo $\Lambda$, i.e. $k \delta \in \Lambda$. But then the factor ${\theta(u - x_p + k\delta + d)}/{\theta(u - x_p + d)}$ in \eqref{xiifla} is a non-vanishing holomorphic function, so the analytic properties (i.e. zeros and poles) of the right-hand side of \eqref{xiifla} are the same as for the left-hand side, which means that these functions coincide for a suitable value of $c_k$.
\end{remark} Note that since we are only interested in the direction of the vector $\xi$, we may multiply all $\xi_k$'s by $\theta(u - x_p + d)$, which results in $$ \tilde \xi_k(u) = c_k\frac{ \theta^k(u - z_+ + d )\theta(u - x_p + k\delta + d)}{\theta^k(u - z_- + d)}. $$ These are no longer meromorphic functions on $\mathbb{C}\, /\, \Lambda$, but still meromorphic functions on $\mathbb{C}$. Furthermore, in contrast to $\xi_k$'s, the functions $\tilde \xi_k$ are always finite at the points $0, {1}/{2}, {\tau}/{2} \in \mathbb{C}$ corresponding to $A,B,C \in \Gamma$, so the vertices of $P$ are given by $ (\tilde \xi_k(0):\tilde \xi_k({{1}/{2}}): \tilde \xi_k({{\tau}/{2}})). $ Also notice that the values of the constants $c_k$ do not affect the latter expression, so one can assume that $c_k=1$. Under this assumption, we get $$ \tilde \xi_k(0) =\frac{ \theta^k( d - z_+ )}{ \theta^k( d - z_-)}\theta( k\delta + d - x_p) = \theta( k\delta + d - x_p), $$ where the last equality follows from $d - z_- = -(d - z_+)$ and $\theta(-u) = \theta(u)$. Similarly, we have \begin{align*} \begin{gathered} \tilde \xi_k\left({{1}/{2}}\right) =\frac{ \theta^k({{1}/{2}} + d - z_+ )}{ \theta^k( {{1}/{2}} + d - z_-)}\theta( {{1}/{2}} + k\delta + d - x_p) \\= \frac{ \theta^k(-{{1}/{2}} + d - z_+ )}{ \theta^k( {{1}/{2}} + d - z_-)}\theta( {{1}/{2}} + k\delta + d - x_p) = \theta( \textstyle{{1}/{2}} + k\delta + d - x_p), \end{gathered} \end{align*} where the second-to-last equality follows from $1$-periodicity of $\theta$, and the last one from $\theta(-u) = \theta(u)$. Finally, \begin{align*} \begin{gathered} \tilde \xi_k\left(\textstyle{{\tau}/{2}}\right) = \frac{ \theta^k({{\tau}/{2}} + d - z_+ )}{ \theta^k( {{\tau}/{2}} + d - z_-)} \theta( {{\tau}/{2}} + k\delta + d - x_p) = \frac{ \theta^k({{1}/{2}} + \tau - z_+ )}{ \theta^k( {{1}/{2}} + \tau - z_-)}\theta( {{\tau}/{2}} + k\delta + d - x_p) \\ = \exp( \pi k\mathrm{i} ( 2z_+ - 1- \tau)) \frac{ \theta^k({{1}/{2}} - z_+ )}{\theta^k( {{1}/{2}} + \tau - z_-)}\theta( {{\tau}/{2}} + k\delta + d - x_p) \\ = \exp( \pi k \mathrm{i} \delta)\theta( \textstyle{{\tau}/{2}} + k\delta + d - x_p), \end{gathered} \end{align*} where the second equality uses the definition $d = (1 + \tau)/2$, the third one uses the formula for $\theta(u + \tau)$, while the last one uses that $\theta$ is even along with the relation $\delta = 2z_+ - 1- \tau$. The obtained formulas can be written in a more concise way using \textit{theta functions with (half-integer) characteristics}, defined by \begin{align*} \begin{gathered} \theta_{00}(u) := \theta(u), \quad \theta_{01}(u) := \theta(u + 1/2), \quad \theta_{10}(u) := \exp(\pi \mathrm{i}(u+ \tau / 4)) \theta(u + \tau/2), \\ \theta_{11}(u) := \exp(\pi \mathrm{i}( u+{\tau}/{4} + {1}/{2})) \theta(u + (1+\tau)/2). \end{gathered} \end{align*} Indeed, we have $\tilde \xi_k(0)= \theta_{00}( k\delta + d - x_p)$, $\tilde \xi_k\left(\textstyle{{1}/{2}}\right) = \theta_{01}( k\delta + d - x_p),$ while $\tilde \xi_k\left(\textstyle{{\tau}/{2}}\right) = \theta_{10}( k\delta + d - x_p)$ up to a factor not depending on $k$. Since the latter factor does not affect the projective equivalence class of $P$, one can assume that the vertices of $P$ are given by \begin{equation}\label{verttheta} v_k = ( \theta_{00}( k\delta + d - x_p) : \theta_{01}( k\delta + d - x_p) : \theta_{10}( k\delta + d - x_p)).
\end{equation} Now, to prove that $P$ is Poncelet it suffices to establish the following: \begin{proposition} The image of the map $\Phi \colon \mathbb{C} \to \mathbb{C}\P^2$ given by \begin{equation}\label{phiMap} \Phi(u) := (\theta_{00}(u):\theta_{01}(u):\theta_{10}(u)) \end{equation} is a conic. \end{proposition} \begin{proof} First of all, notice that the functions $\theta_{00}$, $\theta_{01}$, $\theta_{10}$ have no common zeros, so the mapping $\Phi$ is well-defined. Further, following \cite{mumford2007tata}, define the following operators $\mathcal S, \mathcal T$ on holomorphic functions on~$\mathbb{C}$: $$ (\mathcal Sf)(u) := f(u + 1), \quad (\mathcal Tf)(u) := \exp(\pi \mathrm{i}( 2u+\tau)) f(u + \tau). $$ Then \begin{equation}\label{Heis} \mathcal S\theta_{jk} = (-1)^{j}\theta_{jk}, \quad \mathcal T\theta_{jk} = (-1)^{k}\theta_{jk}. \end{equation} In particular, we have $\mathcal S^2\theta_{jk} = \theta_{jk}$, $\mathcal T^2\theta_{jk} = \theta_{jk}$, which means that \begin{equation}\label{charqp} \theta_{jk}(u + 2) = \theta_{jk}(u), \quad \theta_{jk}(u + 2\tau) = \exp(-4\pi \mathrm{i}(u + \tau))\theta_{jk}(u). \end{equation} From the latter it follows that $\Phi$ descends to a holomorphic mapping $ \mathbb{C}\, /\, 2\Lambda \to \mathbb{C}\P^2$, so the image of $\Phi$ is an algebraic curve. To find the degree of that curve, one needs to find the number of its intersections with a generic line. Clearly, that number can be found as ${m}\,/\,{\deg \Phi}$, where $m$ is the number of zeros of a generic linear combination of $\theta_{00}$, $\theta_{01}$, $\theta_{10}$ in the fundamental parallelogram of the lattice $2\Lambda$, while $ \deg \Phi$ is the degree of $\Phi$, when the latter is regarded as a mapping $ \mathbb{C}\, /\, 2\Lambda \to \mathbb{C}\P^2$. The number $m$ can be easily computed using quasi-periodicity relations \eqref{charqp} and the argument principle. That number is equal to $4$. Further, notice that the functions $\theta_{00}$, $\theta_{01}$, $\theta_{10}$ are even, so $\Phi(-x) = \Phi(x)$, which means that $\deg \Phi \geq 2$. Therefore, the degree of the image of $\Phi$ is either $2$ or $1$, i.e. the image of $\Phi$ is a conic or a straight line. However, it cannot be a straight line, because the functions $\theta_{jk}$ are linearly independent by \eqref{Heis}. So, the image of $\Phi$ is a conic. \end{proof} Thus, we conclude that the vertices \eqref{verttheta} of the polygon $P$ lie on a conic. Since $P$ is self-dual, it is also circumscribed about a conic, and hence Poncelet, q.e.d. So, Theorem \ref{thm2} is proved. \begin{remark} One can also explicitly describe the image of the mapping \eqref{phiMap} and hence the conic circumscribed about $P$ using Riemann's relation \begin{equation}\label{rr0} \sum_{j,k \in \{0,1\}} \theta_{jk}(\alpha_1)\theta_{jk}(\alpha_2)\theta_{jk}(\alpha_3)\theta_{jk}(\alpha_4) = 2\,\theta_{00}(\beta_1)\theta_{00}(\beta_2)\theta_{00}(\beta_3)\theta_{00}(\beta_4), \end{equation} where $\beta_1 := (\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4)\,/\,2$, $\beta_2 := (\alpha_1 + \alpha_2 - \alpha_3 - \alpha_4)\,/\,2$, $\beta_3 := (\alpha_1 - \alpha_2 + \alpha_3 - \alpha_4)\,/\,2$, $\beta_4 := (\alpha_1 - \alpha_2 - \alpha_3 + \alpha_4)\,/\,2$.
Taking $\alpha_1 = 0$, $\alpha_2 = u$, $\alpha_3 = v$, $\alpha_4 = u + v$ (so that $\beta_1 = u + v$, $\beta_2 = -v$, $\beta_3 = -u$, $\beta_4 = 0$), using the evenness of $\theta_{00}$, and noting that the $\theta_{11}$-term drops out since $\theta_{11}(0) = 0$, we get the identity \begin{equation}\label{rr1} \begin{gathered} -\theta_{00}(0)\theta_{00}(u)\theta_{00}(v)\theta_{00}(u + v) + \theta_{01}(0)\theta_{01}(u)\theta_{01}(v)\theta_{01}(u + v) \\+\, \theta_{10}(0)\theta_{10}(u)\theta_{10}(v)\theta_{10}(u + v) = 0, \end{gathered} \end{equation} which, after a further substitution $v = 0$, becomes \begin{equation}\label{rr} - \theta_{00}^2(0) \theta_{00}^2(u) + \theta_{01}^2(0) \theta_{01}^2(u) + \theta_{10}^2(0) \theta_{10}^2(u) = 0. \end{equation} So, the conic circumscribed about $P$ is given by \begin{equation}\label{explconic} - \theta_{00}^2(0) x_1^2 + \theta_{01}^2(0) x_2^2 + \theta_{10}^2(0) x_3^2 = 0. \end{equation} Similarly, the conic inscribed in $P$ is \begin{equation}\label{explconic2} - \theta_{00}^2(\delta/2) x_1^2 + \theta_{01}^2(\delta/2) x_2^2 + \theta_{10}^2(\delta/2) x_3^2 = 0. \end{equation} Indeed, let $t_k := k\delta + d - x_p$, $m := k + 1/2$, and $t'_{m} := (t_k + t_{k+1})/2$. Then, as follows from \eqref{rr}, the point \begin{equation}\label{tangentpt} v'_{m} := \left(\frac{\theta_{00}(0)\theta_{00}(t'_m)}{\theta_{00}(\delta/2)} : \frac{\theta_{01}(0)\theta_{01}(t'_m)}{\theta_{01}(\delta/2)} : \frac{\theta_{10}(0)\theta_{10}(t'_m)}{\theta_{10}(\delta/2)}\right) \end{equation} belongs to the conic \eqref{explconic2}. Furthermore, the tangent line to \eqref{explconic2} at $v'_{m}$ passes through the vertices $v_k$ and $v_{k+1}$ of $P$. Indeed, that is equivalent to the relation \begin{align} \begin{gathered} - \theta_{00}(0)\theta_{00}(\delta/2)\theta_{00}(t'_m) \theta_{00}(t_{m \pm 1/2}) + \theta_{01}(0)\theta_{01}(\delta/2)\theta_{01}(t'_m)\theta_{01}(t_{m\pm 1/2}) \\ +\, \theta_{10}(0)\theta_{10}(\delta/2)\theta_{10}(t'_m)\theta_{10}(t_{m \pm 1/2}) = 0, \end{gathered} \end{align} which is a particular case of \eqref{rr1} corresponding to $u = t_{m \pm 1/2} $, $v = \mp \delta/2$. So indeed the polygon $P$ is circumscribed about the conic~\eqref{explconic2}.\par \end{remark} \begin{remark} Note that formula \eqref{verttheta} describes a \textit{family} of polygons, parametrized by $x_p$. Our argument shows that all these polygons are inscribed in one and the same conic~\eqref{explconic} and circumscribed about one and the same conic \eqref{explconic2}. So, polygons~\eqref{verttheta} form what is called a \textit{Poncelet family}, i.e. a family of polygons inscribed in the same conic and circumscribed about the same conic (recall that every Poncelet polygon is a member of such a family by Poncelet's porism). Also note that the expression~\eqref{verttheta} is periodic in $x_p$ with the periods given by the lattice $2\Lambda$. So, the Poncelet family containing our polygon $P$ is parametrized by the elliptic curve $\mathbb{C}\, /\, 2\Lambda$, which is a $4$-to-$1$ covering of the spectral curve $\Gamma = \mathbb{C} \,/\, \Lambda$. As a corollary, the Poncelet family containing $P$ contains four polygons projectively equivalent to $P$: one of those polygons is $P$, while the other three can be obtained from $P$ by replacing $x_p$ in formula~\eqref{verttheta} with $x_p + 1$, $x_p + \tau$, and $x_p + 1 + \tau$. This quadruple of polygons admits a geometric description when the circumscribed conic $C_1$ and inscribed one $C_2$ are confocal.
In this case, these polygons can be obtained from $P$ by means of reflection with respect to the common symmetry axes of $C_1$, $C_2$.\par This argument also shows that the spectral curve is the same for all polygons in a Poncelet family. Using a different approach, this was earlier proved in \cite{schwartz2015pentagram}. Formulas for Poncelet families similar to~\eqref{verttheta} are given in \cite{veselov1988integrable}. \end{remark} \begin{remark} Note that since the polygon $P$ is closed (Remark \ref{psdclosed}), the expression \eqref{verttheta} must be $n$-periodic in $k$. Therefore, we must have $n\delta \in 2\Lambda$. Another way to see this is to consider the function $(s-1)\mu_-$ on $\Gamma$. Using Table \ref{table} and the fact that $s(A) = s(B) = s(C) = 1$, we conclude that this function has simple zeros at $A$, $B$, $C$, a zero of order $(n-3)/2$ at $Z_-$, and a pole of order $(n+3)/2$ at $Z_+$. So, we have $ 0 + {1}/{2} + {\tau}/{2}+ {(n-3)}/{2} \cdot z_- = {(n+3)}/{2} \cdot z_+ \,(\mathrm{mod}\,\Lambda), $ which implies $ {n}/{2} \cdot \delta = {n}/{2} \cdot(z_+ - z_-) = {1}/{2} + {\tau}/{2} - {3}/{2} \cdot (z_+ + z_-) = -2d = 0 \,(\mathrm{mod}\,\Lambda), $ and thus $n\delta \in 2\Lambda$, as desired.\par Also note that formula \eqref{verttheta} still defines a Poncelet polygon if $n\delta \in \Lambda \setminus 2\Lambda$. It is then a \textit{twisted} $n$-gon, which can also be viewed as a closed $2n$-gon. Such twisted Poncelet polygons do not arise in our setting, because they are not fixed points of the pentagram map. \end{remark} \section{Proof of Theorem \ref{thm1}: a closed polygon fixed by the pentagram map is Poncelet} In this section, we derive Theorem \ref{thm1} from Theorem \ref{thm2}. To that end, we first show, in Section \ref{sec:sd}, that the self-duality assumption of Theorem \ref{thm2} is not very restrictive. Namely, any polygon satisfying all the assumptions of the theorem, except possibly self-duality, can be transformed, by means of rescaling~\eqref{rescaling} with $s > 0$, into a self-dual polygon. From that we conclude that a polygon as in Theorem~\ref{thm1} (i.e. weakly convex, closed, and projectively equivalent to its pentagram image) must be Poncelet up to rescaling \eqref{rescaling} with $s > 0$. So, to show that this polygon is actually Poncelet, we need to prove that the rescaling is trivial, i.e. corresponds to $s = 1$. To that end, we show that if a weakly convex Poncelet polygon is rescaled in a non-trivial way, then the resulting polygon cannot be closed. This is done separately in the rational (see Section \ref{ss:genrat}) and elliptic (see Section \ref{ss:genell}) cases. In the rational case we have a very simple explicit description of the corresponding degenerate Poncelet polygons (see Section~\ref{sec:rat}), so in that case the proof is completely elementary. As for the elliptic case, there the proof relies on the study of the real part of the corresponding elliptic curve and the location of various special points within that real part. \par \subsection{Self-duality up to rescaling}\label{sec:sd} \begin{proposition}\label{prop:sd} Assume that a closed or twisted weakly convex polygon $P$ is projectively equivalent to its pentagram image $P'$.
Then one can choose the $n$-periodic operator $\mathcal D$ of the form \eqref{diffOp2} associated with $P$ in such a way that the corresponding commuting operators $\D_\l , \D_\r $ given by \eqref{dldr} satisfy \begin{equation}\label{sde} \D_\r = - s_0 T^{n} \D_\l ^* \end{equation} for a certain $ s_0 \in \mathbb{R}_+$. \end{proposition} \begin{proof} Let $ \mathcal D$ be an $n$-periodic operator corresponding to $P$ such that the corresponding operators $ \D_\l , \D_\r $ commute, and, moreover, the coefficients of $ \mathcal D$ satisfy the alternating signs condition \eqref{altCond} (such $ \mathcal D$ exists by Proposition \ref{prop:cdo}). Then the operator $T^{-n} \D_\l \D_\r$ has the form \begin{equation}\label{symmeOP} T^{-n} \D_\l \D_\r = \alpha T^{-1} + \beta + \gamma T. \end{equation} Moreover, from the alternating signs condition we have $\alpha_k, \gamma_k > 0$ for all $k \in \mathbb{Z}$. Therefore, the operator~\eqref{symmeOP} can be symmetrized. Namely, there exists a positive quasi-periodic sequence $\lambda$ such that the operator $ \lambda T^{-n} \D_\l \D_\r \lambda^{-1} $ is self-dual. That sequence can be found from the equation $ {\lambda_{k+1}}/{\lambda_k} = \sqrt{{\gamma_k}/{\alpha_{k+1}}} $. So, conjugating $\mathcal D$ by $\lambda$ if needed, we may assume that the operator \eqref{symmeOP} is self-dual, meaning that \begin{equation}\label{prodsd} T^{-n} \D_\l \D_\r = T^{n} \D_\r ^* \D_\l ^*. \end{equation} We now show that under that assumption we must have \eqref{sde}. Let $z_l$, $z_r$ be the monodromies of $\mathcal D_l$, $\mathcal D_r$ respectively. Then, by the second statement of Proposition \ref{alt}, we have $0 < z_l < z_r$. Furthermore, since $\D_\l$ and $\D_\r$ commute, it follows that the kernels of both of them are contained in $\Ker \D_\l\D_\r$. So, the spectrum of the monodromy of $\D_\l\D_\r$ is $\{z_l, z_r\}$. Moreover, we have $$ \Ker (\D_\l\D_\r)\vert_{\v{z_l}} = \Ker \D_\l, \quad \Ker (\D_\l\D_\r)\vert_{\v{z_r}} = \Ker \D_\r. $$ Similarly, using that the monodromy of $\D_\l^*$ and $\D_\r^*$ is given by $z_l^{-1}$ and $z_r^{-1}$ respectively, we conclude that the spectrum of the monodromy of $\D_\r ^* \D_\l ^*$ is $\{z_l^{-1}, z_r^{-1}\}$, which, in view of \eqref{prodsd} and the inequality $0 < z_l < z_r$, implies $z_l = z_r^{-1}$. Furthermore, we have $$ \Ker \D_\l^* = \Ker (\D_\r^*\D_\l^*)\vert_{\v{z_l^{-1}}}= \Ker (\D_\l\D_\r)\vert_{\v{z_l^{-1}}} = \Ker (\D_\l\D_\r)\vert_{\v{z_r}} = \Ker \D_\r, $$ so \begin{equation}\label{asde} \D_\l ^* = T^{-n} \mu \D_\r \end{equation} for a certain $n$-periodic sequence $\mu$ of non-zero real numbers. Taking the dual of both sides, we also get $ \D_\r ^* = T^{-n}\D_\l \mu^{-1}, $ so $$ \D_\l ^* \D_\r ^* = T^{-2n}\mu \D_\r \D_\l \mu^{-1} = T^{-2n}\mu \D_\l \D_\r \mu^{-1}. $$ At the same time, we have $$ \D_\l ^* \D_\r ^* = \D_\r ^* \D_\l ^* = T^{-2n} \D_\l \D_\r , $$ so $\mu$ commutes with $ \D_\l \D_\r $. But that is only possible if $\mu$ is a constant sequence $\mu_k = c$. So,~\eqref{asde} implies~\eqref{sde}, with $ s_0 = -c^{-1}$. Furthermore, since the coefficient of the highest degree term in $\mathcal D_r$ is a sequence of negative numbers, while the coefficient of the highest degree term in $\mathcal D_l^*$ is a sequence of positive numbers, equation~\eqref{sde} can only be satisfied for $ s_0 > 0$, as desired. \end{proof} \begin{corollary}\label{cor:rescalingPon} Assume that a closed or twisted weakly convex polygon $P$ is projectively equivalent to its pentagram image $P'$.
Then there exists a polygon $P_{sd}$ with the same properties which is, in addition, self-dual (and hence Poncelet by Theorem \ref{thm2}), such that $P = R_{s_0} (P_{sd})$ where $R_{s_0} $ is the rescaling \eqref{rescaling} with $ s = s_0 > 0$. \end{corollary} \begin{proof} Take the operator $\mathcal D$ provided by Proposition {\ref{prop:sd}}. It has the form $ \mathcal D = \D_\l - s_0 T^n \D_\l ^*, $ where $ s_0 \in \mathbb{R}_+$. Consider also the operator $ \mathcal D_{sd} = \D_\l - T^n \D_\l ^*, $ and the associated polygon $P_{sd}$. Then, by Corollary \ref{cor:rescalingDO}, we have $P = R_{s_0} (P_{sd})$. In particular, $P_{sd}$ is projectively equivalent to its pentagram image (because the pentagram map commutes with rescaling) and weakly convex (by the third statement of Proposition~\ref{alt}). Furthermore, we have $ \mathcal D_{sd}^* = -T^{-n} \mathcal D_{sd}, $ so $P_{sd}$ is self-dual, as desired. \end{proof} \par \subsection{End of proof in the rational case}\label{ss:genrat} Let $P$ be a weakly convex closed polygon projectively equivalent to its pentagram image $P'$, as in Theorem \ref{thm1}. Then, by Corollary \ref{cor:rescalingPon}, there exists a (generally speaking, twisted) polygon $P_{sd}$ such that $P = R_{s_0} (P_{sd})$ for some $s_0 > 0$, and $P_{sd}$ is self-dual. Consider the spectral curve $\Gamma$ associated with $P_{sd}$, constructed in the proof of Theorem \ref{thm2}. In this section, we prove Theorem \ref{thm1} in the case when the genus of $\Gamma$ is $0$, i.e. when $\Gamma$ is rational. To that end, we will show that $s_0 = 1$, so that $P = P_{sd}$ is Poncelet. \par As we know from Section \ref{sec:rat}, in the rational case the vertices of $P_{sd}$ are given by \eqref{fund} or \eqref{fundnilp}. In the case \eqref{fund}, the associated difference operator reads \begin{equation}\label{ccoperator} \mathcal D_{sd} = T^{{(n-3)}/{2}}-aT^{{(n-1)}/{2}} + aT^{{(n+1)}/{2}} -T^{{(n+3)}/{2}}, \end{equation} where $a$ is such that the roots of the corresponding characteristic polynomial $1 - ax + ax^2 - x^3$ are $r, r^{-1}$, and $1$ (note that since the polygon $P_{sd}$ is real, $a$ must be real too, so we must have $|r| = 1$). Indeed, the kernel of such an operator is spanned by the sequences $r^k$, $r^{-k}$, and a constant sequence, so the associated polygon is precisely \eqref{fund}. Likewise, in the case \eqref{fundnilp}, the associated difference operator is also of the form \eqref{ccoperator}, with $a = 3$. So, since the polygon $P_{sd}$ is defined by the operator \eqref{ccoperator}, the polygon $P = R_{s_0}(P_{sd})$ is defined by $$ \mathcal D = T^{{(n-3)}/{2}}-aT^{{(n-1)}/{2}} + s_0(aT^{{(n+1)}/{2}} -T^{{(n+3)}/{2}}). $$ The kernel of this operator is spanned by the sequences $x_1^k$, $x_2^k$, $x_3^k$, where $x_1$, $x_2$, $x_3$ are the roots of the characteristic polynomial $h(x) := 1 - ax + s_0(ax^2 - x^3)$ (note that we do not need to consider the case of multiple roots, because in that case the monodromy of $\mathcal D$ is not diagonalizable, and the polygon $P$ cannot be closed). Moreover, since $P$ is closed, we must have $x_1^n = x_2^n = x_3^n$, so $|x_1| = |x_2| = |x_3| = \lambda$, where $\lambda > 0$ is a real number. So, the roots of the polynomial $h(\lambda x) = 1 - a\lambda x + s_0(a\lambda ^2x^2 - \lambda^3x^3)$ must all have absolute value $1$. Also taking into account that this polynomial is real, and that $s_0 \lambda^3 > 0$, we conclude that the roots of $h(\lambda x)$ are of the form $1, \alpha, \bar \alpha$, where $|\alpha| = 1$.
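For completeness, here is a sketch of the coefficient comparison used in the next step (in this sketch we assume $a \neq 0$; in the case \eqref{fund} one has $a = 1 + r + r^{-1}$, which vanishes only when $r$ is a primitive cube root of unity, while in the case \eqref{fundnilp} one has $a = 3$). Writing $h(\lambda x) = -s_0\lambda^3(x - 1)(x - \alpha)(x - \bar \alpha)$ and using $\alpha \bar \alpha = 1$, comparison of the coefficients of $x^0$, $x$, and $x^2$ on both sides gives $$ 1 = s_0\lambda^3, \qquad a\lambda = s_0\lambda^3(1 + \alpha + \bar \alpha), \qquad s_0 a\lambda^2 = s_0\lambda^3(1 + \alpha + \bar \alpha). $$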
But this yields $s_0 \lambda^3 = 1$ and $s_0 \lambda^2 = \lambda$, so $s_0\lambda = 1$, whence $\lambda^2 = 1$; since $\lambda > 0$, we get $\lambda = 1$ and thus $s_0 = 1$. Therefore, the polygon $P$ coincides with $P_{sd}$ and is hence Poncelet. Thus, the proof of Theorem \ref{thm1} in the rational case is complete. \begin{remark} One can also give a more concrete description of $P$, as follows. Since the vertices of $P$ are given by \eqref{fund} (with \eqref{fundnilp} being impossible due to closedness of $P$), and $P$ is a closed $n$-gon, it follows that $r^n = 1$. So, applying a linear transformation to \eqref{fund}, we get a polygon whose vertices have affine coordinates $ \cos({2\pi mk}/{n})$, $\sin({2\pi mk}/{n}),$ where ${2\pi m}/{n} = \arg r$. In particular, if $m = 1$, then $P$ is a regular $n$-gon. \end{remark} \par \subsection{End of proof in the elliptic case}\label{ss:genell} In this section, we prove Theorem \ref{thm1} in the case when the genus of $\Gamma$ is $1$, i.e. when $\Gamma$ is elliptic. As in the rational case, we show that $s_0 = 1$, so that $P = P_{sd}$ is Poncelet. We keep the notation of Sections \ref{ss:genus} and \ref{ss:sd}.\par Recall that a \textit{real structure} on a Riemann surface $\Gamma$ is an anti-holomorphic involution $\rho \colon \Gamma \to \Gamma$. The \textit{real part} $\Gamma_\mathbb{R}$ of $\Gamma$ (with respect to the real structure $\rho$) is then defined as the set of fixed points of $\rho$: $\Gamma_\mathbb{R} := \{ X \in \Gamma \mid \rho(X) = X\}$. A meromorphic function $f$ on $\Gamma$ is called a \textit{real function} if $\rho^*f = \bar f$. Real functions take real values at real points (i.e. points in $\Gamma_\mathbb{R}$).\par In our case, the spectral curve $\Gamma$ is endowed with a real structure $\rho \colon \Gamma \to \Gamma$ induced by the involution $(z,w) \mapsto (\bar z, \bar w)$ on the affine spectral curve $\Gamma_a$. \begin{proposition}\label{realfunctions} The functions $z$, $w$, $\mu_\pm$, $s$, $\xi$ on $\Gamma$ are real (see Section \ref{ss:genus} for the definition of those functions). \end{proposition} \begin{proof} The functions $z,w$ are real by construction of the real structure $\rho$. To prove that the vector-function $\xi$ is real, notice that it is defined by equations \eqref{xidefeqns} up to a scalar factor. Taking the complex conjugate of those equations and then applying $\rho^*$, we get that $\rho^* \bar \xi = f\xi$ for a certain meromorphic function $f$. But then the normalization condition $\xi_0 = 1$ implies $f = 1$, as desired. Now, the reality of the functions $\mu_\pm$ follows from equation \eqref{mupluschar}, while the reality of $s$ follows from its definition \eqref{SDEF}. \end{proof} \begin{corollary}\label{realpoints} The points $Z_\pm,S_\pm,A,B,C,D \in \Gamma$ are real (see Section \ref{ss:genus} for the definition of $Z_\pm, S_\pm$ and Section \ref{ss:sd} for the definition of $A,B,C,D$). \end{corollary} \begin{proof} Since $z$ is a real function (Proposition \ref{realfunctions}), the involution $\rho$ takes zeros of $z$ to zeros of $z$. But the only zero of $z$ is $Z_+$ (see Table \ref{table}), so the latter must be real. Analogously, $Z_-$ is real as the only pole of $z$, $S_+$ is real as the only simple zero of $s$, $S_-$ is real as the only simple pole of $s$, while $D$ is real as the only point where both $s$ and $z$ are equal to $-1$ (see Remark \ref{z1}). To show that $A, B, C$ are real, observe that they constitute the set of points where $s = 1$, so $\rho$ takes the set $\{A, B, C\}$ to itself.
Further, notice that the values of the function $w$ at $A, B, C$ are eigenvalues of the self-adjoint operator $(\mathcal D_+\mathcal D_-)\vert_{\v{1}}$ and hence real. Furthermore, those values are distinct, because $A,B,C$ are branch points of $w$, while $\deg w = 2$. But if, say, $\rho(A) = B$, then we must have $w(B) = \bar w(A)$, which is not possible since $w(A), w(B)$ are real and distinct. So, $\rho$ cannot permute the points $A, B, C$ nontrivially and thus fixes each of them. \end{proof} \begin{corollary} The real part $\Gamma_\mathbb{R}$ of $\Gamma$ consists of two disjoint circles. \end{corollary} \begin{proof} The real part of any Riemann surface consists of a finite number of disjoint circles (ovals). Furthermore, since the genus of $\Gamma$ is $1$, the number of connected components of $\Gamma_\mathbb{R}$ is at most $2$ by Harnack's theorem. At the same time, the number of connected components is non-zero since the real part $\Gamma_\mathbb{R}$ of $\Gamma$ is not empty (by Corollary \ref{realpoints}). So, it remains to determine whether the number of connected components is $1$ or $2$. These cases can be distinguished by counting the number of real points of order dividing $2$ on $\Gamma$. Namely, if $\Gamma$ is identified with $\mathbb{C} \,/\, \Lambda$ in such a way that $0$ is a real point, then $\Gamma_\mathbb{R}$ is a subgroup of $\Gamma$ isomorphic to $S^1$ if $\Gamma_\mathbb{R}$ is connected, and $S^1 \times \mathbb{Z}_2$ if $\Gamma_\mathbb{R}$ has two components. So the number of real points of order dividing $2$ in $\Gamma_\mathbb{R}$ is $2^{m}$, where $m$ is the number of components of $\Gamma_\mathbb{R}$. Identifying $\Gamma$ with $\mathbb{C} \,/\, \Lambda$ as in Section \ref{ss:sd}, we see that the points of order dividing $2$ are $A,B,C,D$, which are all real. So, $m=2$, q.e.d. \end{proof} This argument also shows that one of the components of $\Gamma_\mathbb{R}$ contains the point $D$ and one of the points $\{A, B, C\}$, while the second component of $\Gamma_\mathbb{R}$ contains the remaining two points. Without loss of generality, assume that $C$ and $D$ are located in the same component. Denote that component by~$\Gamma_\mathbb{R}^0$. \begin{proposition} We have $Z_\pm, S_\pm \in \Gamma_\mathbb{R}^0$. \end{proposition} \begin{proof} The function $z$ is real-valued on $ \Gamma_\mathbb{R}^0$ and satisfies $z(C) = 1$, $z(D) = -1$. So, there must be at least two points on $ \Gamma_\mathbb{R}^0$ where $z$ changes sign. But the only points which have this property are $Z_\pm$ (see Table \ref{table}). Similarly, $s(C) = 1$, $s(D) = -1$, so the function $s$ should also change sign at two points. Moreover, these cannot be the points $Z_\pm$, since at those points $s$ has a zero and a pole of order $2$. So, we must have $S_\pm \in \Gamma_\mathbb{R}^0$, as desired.
\end{proof} \begin{figure}[t] \centering \begin{tikzpicture}[scale = 1] \draw (0,0) circle (1); \coordinate (C) at (0,1); \coordinate (D) at (0,-1); \coordinate (Zp) at (-0.8,0.6); \coordinate (Zm) at (0.8,0.6); \coordinate (Sp) at (-0.8,-0.6); \coordinate (Sm) at (0.8,-0.6); \node[label={[shift={(0,-0.05)}]above:${C}$}] at (C) () {}; \node[label={[shift={(0,0.05)}]below:${D}$}] at (D) () {}; \node[label={[shift={(0,0)}]left:${S_+}$}] at (Zp) () {}; \node[label={[shift={(0,0)}]right:${S_-}$}] at (Zm) () {}; \node[label={[shift={(0,0)}]left:${Z_+}$}] at (Sp) () {}; \node[label={[shift={(0,0)}]right:${Z_-}$}] at (Sm) () {}; \fill (C) circle (0.07); \fill (D) circle (0.07); \fill (Zp) circle (0.07); \fill (Zm) circle (0.07); \fill (Sp) circle (0.07); \fill (Sm) circle (0.07); \end{tikzpicture} \caption{Location of the points $C,D,Z_\pm,S_\pm$ in the component $\Gamma_\mathbb{R}^0$ of the real part of the spectral curve.}\label{Fig:realpart} \end{figure} \begin{proposition}\label{cyclicOrder} The cyclic order of the points $C,D,Z_\pm,S_\pm$ on $ \Gamma_\mathbb{R}^0$ is as shown in Figure \ref{Fig:realpart}. \end{proposition} The proof is based on the following two lemmas. \begin{lemma}\label{flemma2} We have $z(S_+) \in (0,1)$. \end{lemma} \begin{proof} Without loss of generality, assume that the vector $\xi(S_+)$ is finite and non-zero (see Remark~\ref{rem:holo}). Then, using the definition of the function $\mu_+$ and the fact that $\mu_+(S_+) = 0$ (see Table \ref{table}), we get $ \mathcal D_+ \xi(S_+) =\mu_+(S_+) \xi(S_+) = 0. $ Therefore, $ \xi(S_+) $ spans the kernel of the operator $\mathcal D_+$, while $z(S_+)$ is the monodromy of that operator. So, by the second statement of Proposition \ref{alt}, the number $z(S_+)$ is positive and is less than the monodromy of $\mathcal D_r = -T^n\mathcal D_+^*$. But the monodromy of the latter operator is the same as the monodromy of $\mathcal D_+^*$, which is $z(S_+)^{-1}$. So, we get $ 0 < z(S_+) < z(S_+)^{-1}, $ and the result follows. \end{proof} \begin{lemma}\label{flemma1} The only point in $\Gamma_\mathbb{R}^0$ where $z = 1$ is the point $C$. \end{lemma} \begin{proof} Assume that $X \in \Gamma_\mathbb{R}^0$ and $z(X) = 1$. Then the latter condition in particular implies $X \neq Z_\pm$. Therefore, without loss of generality, we may assume that the vector $\xi(X)$ is finite and non-zero (if not, we renormalize $\xi$, see Remark \ref{rem:holo}). Under this assumption, using the inner product \eqref{pairing} on $\v{\pm 1}$, we get $$ \mu_+(X) \left\langle \xi(X), \xi(X) \right\rangle = \left\langle \mathcal D_+\xi(X), \xi(X) \right\rangle = \left\langle \xi(X), \mathcal D_-\xi(X) \right\rangle = \mu_-(X) \left\langle \xi(X), \xi(X) \right\rangle. $$ Furthermore, since the vector $\xi(X)$ is real, it follows that $\left\langle \xi(X), \xi(X) \right\rangle > 0$, and thus $\mu_+(X) = \mu_-(X)$. So, using formula \eqref{SDEF} for the function $s$, we get $s(X) = z(X)^{-1} = 1$ (here we use that the value $\mu_+(X) = \mu_-(X)$ is finite and non-zero, which is true because the functions $\mu_\pm$ do not have common zeros or poles, see Table \ref{table}). Furthermore, recall that the set of points where $s = 1$ consists of the point $C$, plus the points $A$ and $B$, which do not belong to $\Gamma_\mathbb{R}^0$. The result follows.
\end{proof} \begin{proof}[Proof of Proposition \ref{cyclicOrder}] \begin{figure}[b] \centering \begin{tikzpicture}[scale = 0.9] \node at (0,0) () { \begin{tikzpicture}[scale = 0.8] \node at (-1.9,1.3) () {(a)}; \draw (0,0) circle (1); \coordinate (C) at (0,1); \coordinate (D) at (0,-1); \coordinate (Zp) at (-0.8,0.6); \coordinate (Zm) at (0.8,0.6); \coordinate (Sp) at (-0.8,-0.6); \coordinate (Sm) at (0.8,-0.6); \node[label={[shift={(0,-0.1)}]above:${C}$}] at (C) () {}; \node[label={[shift={(0,0.1)}]below:${D}$}] at (D) () {}; \node[label={[shift={(0.1,0)}]left:${S_+}$}] at (Zp) () {}; \node[label={[shift={(-0.1,0)}]right:${S_-}$}] at (Zm) () {}; \node[label={[shift={(0.1,0)}]left:${Z_-}$}] at (Sp) () {}; \node[label={[shift={(-0.1,0)}]right:${Z_+}$}] at (Sm) () {}; \fill (C) circle (0.1); \fill (D) circle (0.1); \fill (Zp) circle (0.1); \fill (Zm) circle (0.1); \fill (Sp) circle (0.1); \fill (Sm) circle (0.1); \end{tikzpicture} }; \node at (4.5,0) () { \begin{tikzpicture}[scale = 0.8] \node at (-1.9,1.3) () {(b)}; \draw (0,0) circle (1); \coordinate (C) at (0,1); \coordinate (D) at (0,-1); \coordinate (Zp) at (-0.8,0.6); \coordinate (Zm) at (0.8,0.6); \coordinate (Sp) at (-0.8,-0.6); \coordinate (Sm) at (0.8,-0.6); \node[label={[shift={(0,-0.1)}]above:${C}$}] at (C) () {}; \node[label={[shift={(0,0.1)}]below:${D}$}] at (D) () {}; \node[label={[shift={(0.1,0)}]left:${Z_+}$}] at (Zp) () {}; \node[label={[shift={(-0.1,0)}]right:${Z_-}$}] at (Zm) () {}; \node[label={[shift={(0.1,0)}]left:${S_+}$}] at (Sp) () {}; \node[label={[shift={(-0.1,0)}]right:${S_-}$}] at (Sm) () {}; \fill (C) circle (0.1); \fill (D) circle (0.1); \fill (Zp) circle (0.1); \fill (Zm) circle (0.1); \fill (Sp) circle (0.1); \fill (Sm) circle (0.1); \end{tikzpicture} }; \node at (9,0) () { \begin{tikzpicture}[scale = 0.8] \node at (-1.9,1.3) () {(c)}; \draw (0,0) circle (1); \coordinate (C) at (0,1); \coordinate (D) at (0,-1); \coordinate (Zp) at (-0.8,0.6); \coordinate (Zm) at (0.8,0.6); \coordinate (Sp) at (-0.8,-0.6); \coordinate (Sm) at (0.8,-0.6); \node[label={[shift={(0,-0.1)}]above:${C}$}] at (C) () {}; \node[label={[shift={(0,0.1)}]below:${D}$}] at (D) () {}; \node[label={[shift={(0.1,0)}]left:${Z_+}$}] at (Zp) () {}; \node[label={[shift={(-0.1,0)}]right:${Z_-}$}] at (Zm) () {}; \node[label={[shift={(0.1,0)}]left:${S_-}$}] at (Sp) () {}; \node[label={[shift={(-0.1,0)}]right:${S_+}$}] at (Sm) () {}; \fill (C) circle (0.1); \fill (D) circle (0.1); \fill (Zp) circle (0.1); \fill (Zm) circle (0.1); \fill (Sp) circle (0.1); \fill (Sm) circle (0.1); \end{tikzpicture} }; \end{tikzpicture} \caption{Impossible locations of the points $C,D,Z_\pm,S_\pm$ on $\Gamma_\mathbb{R}^0$.}\label{Fig:realpartimp} \end{figure} The restriction of the involution $\sigma$ to $\Gamma_\mathbb{R}^0$ preserves the points $C,D$, interchanges $Z_+$ with $Z_-$, and interchanges $S_+$ with $S_-$. For this reason, the only possible locations of those points on $\Gamma_\mathbb{R}^0$ are the one depicted in Figure \ref{Fig:realpart}, as well as the ones depicted in Figure \ref{Fig:realpartimp}. Assume that $C,D,Z_\pm,S_\pm$ are located as in Figure \ref{Fig:realpartimp}a. Then, since $z(S_+) \in (0,1)$ by Lemma \ref{flemma2}, while $Z_-$ is a pole of $z$, there must be a point $X$ in the open arc $(S_+, Z_-)$ such that $z(X) = 1$ or $z(X) = 0$ (here and below $(X,Y)$ denotes an open arc going from $X$ to $Y$ in counter-clockwise direction). 
However, the former is impossible by Lemma \ref{flemma1}, while the latter is impossible since the only zero of $z$ is the point $Z_+$. So, the points cannot be located as in Figure \ref{Fig:realpartimp}a. Further, since $z(D) = -1$, while the only points where $z$ changes sign are $Z_\pm$, in Figures \ref{Fig:realpartimp}b and \ref{Fig:realpartimp}c we must have $z(S_+) < 0$, which is impossible by Lemma \ref{flemma2}. So, the points $C,D,Z_\pm,S_\pm$ are located as in Figure~\ref{Fig:realpart}. \end{proof} Now recall that the elliptic curve $\Gamma$ is associated with a Poncelet $n$-gon $P_{sd}$, and in addition we have a closed $n$-gon $ P = R_{s_0}(P_{sd})$, where $s_0 > 0$. Our aim is to show that $s_0 = 1$. \begin{proposition} There is a point $X_0 \in \Gamma_\mathbb{R}^0$ such that $s(X_0) = s_0$ and $z(X_0) = s_0^{-n/3}$. \end{proposition} \begin{proof} The function $s$ has one simple pole and one double pole in $\Gamma_\mathbb{R}^0$ (see Table \ref{table}). Therefore, the degree of the mapping $s \colon \Gamma_\mathbb{R}^0 \to \mathbb{R}\mathbb{P}^1$ is equal to $\pm 1$ (depending on the orientations). In particular, this mapping is surjective. So there exists $X_0 \in \Gamma_\mathbb{R}^0$ such that $s(X_0) = s_0$. To show that $z(X_0) = s_0^{-n/3}$, recall that the polygon $P$ associated with the operator $\mathcal D_+ -s_0T^n \mathcal D_-$ is closed. Therefore, the monodromy of that operator has the form $\lambda \mathrm{Id}$. At the same time, since $\mathcal D_- = \mathcal D_+^*$, the explicit form of that operator is $$\mathcal D_+ -s_0T^n \mathcal D_- = aT^{(n-3)/2} + bT^{(n-1)/2} - s_0 \tilde b T^{(n+1)/2} - s_0 \tilde aT^{(n+3)/2},$$ where the sequences $\tilde a$, $\tilde b$ coincide with $a$, $b$ up to a shift of indices. So, by formula~\eqref{monodet}, the determinant of the monodromy of this operator is $s_0^{-n}$. Since the monodromy is $\lambda\,\mathrm{Id}$ on the three-dimensional space of solutions, its determinant is $\lambda^3$, so $\lambda^3 = s_0^{-n}$, hence $\lambda = s_0^{-n/3}$, and the result follows. \end{proof} We will now show that $X_0 = C$, which implies $s_0 = 1$ and thus proves Theorem \ref{thm1}. To that end, notice that since $s(X_0) = s_0$ is finite and positive, $X_0$ must be located in the open arc $(S_-, S_+)$ (see Figure~\ref{Fig:realpart}). At the same time, since the function $s$ is equal to $1$ at $C$, has a pole at $S_-$, and does not take the values $0,1, \infty$ in $(S_-, C)$, it follows that $s > 1$ in $(S_-, C)$. Furthermore, the same argument applied to the function $z$ shows that $z > 1$ in $(Z_-, C)$, and, in particular, in $(S_-, C)$. But then $X_0$ cannot belong to $(S_-, C)$, because it is not possible that both $s(X_0) = s_0$ and $z(X_0) = s_0^{-n/3}$ are greater than~$1$. Analogously, $s$ and $z$ are both less than $1$ in $(C, S_+)$, so $X_0$ cannot belong there either. Therefore, we must have $X_0 = C$, which implies $s_0 = 1$. But this means that the polygon $P$ is the same as the polygon $P_{sd}$ and is hence Poncelet. So, Theorem \ref{thm1} is proved. \par \section{Appendix: Duality of difference operators and polygons}\label{sec:app} The goal of this appendix is to prove that polygons corresponding to dual difference operators are dual to each other. This seems to be a well-known result, and it explicitly appears as Proposition~4.4.3 in~\cite{morier2014linear}. Here we give a different proof, based on the interpretation of difference operators as infinite matrices.
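To fix this interpretation in the simplest nontrivial case (our illustration; we assume the shift convention $(T\xi)_k = \xi_{k+1}$, so that the coefficient of $T^j$ occupies the diagonal with label $j$; with the opposite convention the roles of the diagonals are interchanged), consider an operator $\mathcal D = a\,T^{-1} + b + c\,T$ supported in $[-1,1]$, acting on sequences by $(\mathcal D \xi)_k = a_k \xi_{k-1} + b_k \xi_k + c_k \xi_{k+1}$. The corresponding infinite matrix is tridiagonal,
$$
\begin{pmatrix}
\ddots & \ddots & \ddots & & \\
& a_k & b_k & c_k & \\
& & a_{k+1} & b_{k+1} & c_{k+1} \\
& & & \ddots & \ddots
\end{pmatrix},
$$
with the sequence $a$ on the diagonal labeled $-1$, $b$ on the main diagonal, and $c$ on the diagonal labeled $1$.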
\begin{proposition}\label{dualdual} Let $\mathcal D$ be a properly bounded difference operator supported in $[m_-,m_+]$, and let $P = \{v_k\}$ be the corresponding polygon in $\P^{d-1}$, where $d = m_+ - m_-$ is the order of $\mathcal D$. Then the dual operator $\mathcal D^*$ corresponds to a polygon $P^* = \{v_k^*\}$ in the dual space $(\P^{d-1})^*$ whose $k$'th vertex $v_k^*$ is the hyperplane in $\P^{d-1}$ spanned by the vertices $v_{k + m_-+1}, \dots, v_{k + m_+-1}$ of $P$. \end{proposition} \begin{proof} Let $\mathcal D = \sum_{j = m_-}^{m_+} a^j T^j$. Then one can interpret $\mathcal D$ as a finite-band matrix \eqref{infMatrix0} whose non-zero diagonals have labels $m_-, \dots, m_+$. (Here and in what follows, the $k$'th diagonal of an infinite matrix is the collection of its entries $a_{ij}$ such that $j - i = k$. In other words, the diagonals are labeled from southwest to northeast, with the main diagonal labeled by $0$.) Note that even though infinite matrices do not form an algebra, any infinite matrix can be multiplied by a finite-band matrix. \begin{lemma}\label{pseudoinverse} There exists an infinite matrix $\mathcal L$ such that:\begin{enumerate}\item $\mathcal D\mathcal L = \mathcal L\mathcal D = 0$. \item The diagonals of $\mathcal L$ with labels $-m_++1, \dots, -m_- - 1$ vanish. \item None of the entries of $\mathcal L$ on the diagonals with labels $-m_+$ and $-m_-$ vanish. \end{enumerate} \end{lemma} \begin{remark} One can think of infinite matrices as formal Laurent series in the shift operator $T$, with coefficients given by sequences. In this language, Lemma \ref{pseudoinverse} states the existence of $\mathcal L$ of the form $\sum_{j = -\infty}^{-m_+} b^j T^j + \sum_{j = -m_-}^{+\infty} b^j T^j,$ where $b_k^{-m_+} \neq 0$, $b_k^{-m_-} \neq 0$ for any $k \in \mathbb{Z}$. \end{remark} \begin{proof}[Proof of Lemma \ref{pseudoinverse}] The infinite matrix $\mathcal D$ can be regarded as an element of two groups: the group $\mathrm{GL}_{\infty}^+$ of invertible infinite matrices with finitely many non-zero diagonals below the main diagonal, and the group $\mathrm{GL}_{\infty}^-$ of invertible infinite matrices with finitely many non-zero diagonals above the main diagonal. Denote by $\hat{\mathcal D}^{-1}, \check{\mathcal D}^{-1}$ the inverses of $\mathcal D$ in these two groups, and set $ \mathcal L := \hat{\mathcal D}^{-1} - \check{\mathcal D}^{-1}. $ Then we clearly have $\mathcal D\mathcal L = \mathcal L\mathcal D = 0$. To see that $\mathcal L$ is of the desired form, write $\mathcal D$ as $ a^{m_-}T^{m_-}(1 + \dots), $ where the dots denote terms of higher order in $T$. Then the inverse of $(1 + \dots)$ in $\mathrm{GL}^+_{\infty}$ can be computed using the Taylor series $(1+x)^{-1} = 1 - x + \dots$. So, the inverse of $\mathcal D$ in $\mathrm{GL}^+_{\infty}$ reads $ \hat{\mathcal D}^{-1}=\,\, (1 + \dots)^{-1}T^{-m_-}(a^{m_-})^{-1}$ and hence is of the form $\sum_{j = -m_-}^{+\infty} b^j T^j$ with $b_k^{-m_-} \neq 0$. Likewise, $\check{\mathcal D}^{-1}$ is of the form $ \sum_{j = -\infty}^{-m_+} b^j T^j$ with $b_k^{-m_+} \neq 0$. The result follows. \end{proof} We now finish the proof of Proposition \ref{dualdual}. Let $V=\{V_k \in \mathbb{R}^d\}$ be a sequence of lifts of the vertices $v_k$ of $P$ such that $\mathcal D V = 0$. Then any scalar sequence $\xi \in \Ker \mathcal D$ can be obtained from $V$ by means of term-wise application of a linear functional. In particular, since $\mathcal D \mathcal L = 0$, this applies to the columns of the matrix $\mathcal L$.
So, the $j$'th column of $\mathcal L$ is of the form $W_j(V_k)$ for a certain linear functional $W_j \in (\mathbb{R}^d)^*$. Furthermore, since the diagonals of $\mathcal L$ with labels $-m_++1, \dots, -m_- - 1$ vanish, it follows that $W_j$ annihilates $V_{j+m_-+1}, \dots, V_{j + m_+ -1}$. Moreover, since $\mathcal L$ has a non-vanishing diagonal, we have $W_j \neq 0$. Therefore, the projection of $W_j$ to $(\P^{d-1})^* = \P(\mathbb{R}^d)^*$ is exactly the hyperplane spanned by the vertices $v_{j + m_-+1}, \dots, v_{j + m_+-1}$ of $P$. So, to complete the proof, it suffices to show that the sequence of $W_j$'s is annihilated by $\mathcal D^*$. To that end, notice that since $\mathcal L\mathcal D = 0$, the rows of $\mathcal L$ are annihilated by $\mathcal D^*$. But those rows are of the form $W_j(V_k)$, and since the $V_k$'s span $\mathbb{R}^d$, it follows that the sequence $W_j$ is annihilated by $\mathcal D^*$, as desired. \end{proof} \begin{remark} It is also easy to see that the matrix $\mathcal L$ provided by Lemma \ref{pseudoinverse} is unique up to a constant factor. It takes a particularly simple form when the polygon $P$ is closed. To see this, assume for simplicity that $m_- = 0$, so that the operator $\mathcal D$ is supported in $[0,d]$. Furthermore, assume that $\mathcal D$ is $n$-periodic and has trivial monodromy (in particular, the polygon $P$ corresponding to $\mathcal D$ is closed). Then, as shown in \cite{krichever2015commuting}, there exists an $n$-periodic operator $\mathcal R$ supported in $[0,n-d]$ such that $\mathcal R \mathcal D = \mathcal D \mathcal R = 1 - T^n$ (the operator $\mathcal R$ is closely related to the so-called \textit{Gale dual} of $\mathcal D$). Using that, one can find the inverses of $\mathcal D$ in $\mathrm{GL}^\pm_{ \infty}$ as $$ \begin{gathered} \hat{\mathcal D}^{-1} = \mathcal R (\widehat{1 - T^n})^{-1} = \mathcal R (1 + T^n + T^{2n} + \dots),\\ \check{\mathcal D}^{-1} = \mathcal R (\widecheck{1 - T^n})^{-1} = -\mathcal RT^{-n}(\widecheck{1 - T^{-n}})^{-1} = -\mathcal RT^{-n}(1 + T^{-n} + T^{-2n} + \dots) \\ = -\mathcal R(T^{-n} + T^{-2n} + \dots). \end{gathered} $$ As a result, one gets $$ \mathcal L = \hat{\mathcal D}^{-1} - \check{\mathcal D}^{-1} = \mathcal R\sum_{j = -\infty}^{+\infty} T^{jn}. $$ \end{remark} \par\medskip \bibliographystyle{plain}
\section*{Introduction} Experimental studies of the $D_2$ spectrum started just after the discovery of atomic deuterium \cite{Urey1932_1, Urey1932_2}. The first reports were motivated by the problem of the spectroscopic determination of the nuclear spin of deuterium \cite{LewisAshley1933, MurphyJohnston1934} and by the opportunity to obtain more information about the structure of the diatomic molecules $NH$ and $H_2$ from the spectra of their isotopic species $ND$ \cite{DiekeBlue1933} and $HD, D_2$ \cite{DiekeBlue1934, DiekeBlue1935, Dieke1935}. Later on, studies of the spectra and structure of molecular deuterium were stimulated not only by an understandable general interest (an isotopomer of the simplest neutral diatomic molecule is a natural touchstone for theoretical models), but also by their direct practical value in connection with the wide use of $D_2$ in physical experiments and in various applications. However, our knowledge of the optical spectrum of molecular deuterium is still insufficient, in spite of tremendous efforts by spectroscopists over the previous century. Up to now, most of the spectral lines have not been assigned. As an example, in the latest compilation of experimental data \cite{FSC1985}, the working list of 27488 recorded lines (within the wavelength range $\approx309-2780$ nm) contains only 8243 assignments. The band spectrum of the $D_2$ molecule is caused by both singlet--singlet and triplet--triplet radiative electronic--vibro--rotational (rovibronic) transitions\footnote{A well-known and rather important feature of the emission spectra of diatomic hydrogen isotopomers --- the wide ($160-500$ nm) continuum due to spontaneous transitions from vibro--rotational levels of the bound $a^3\Sigma_g^+$ electronic state to the repulsive $b^3\Sigma_u^+$ state --- is outside the scope of the present paper because it cannot be used for the determination of rovibronic term values.}. Intercombination lines have not yet been observed. The most interesting resonance singlet band systems are located in the vacuum ultraviolet (VUV). Measurements of the wavenumbers of separate rovibronic lines and the empirical determination of singlet rovibronic term values are still in progress \cite{RLTBjcp2006, RLTBjcp2007, RIVLTUmol2008, LSJUMchp2010, GJRT2011, DIUROJNTGSKEmol2011}. The precision of wavenumber measurements for the $D_2$ rovibronic lines in the VUV is now close to $0.05-0.1$ cm$^{-1}$ \cite{DIUROJNTGSKEmol2011} by conventional methods, while a laser technique has achieved an unprecedented accuracy of $\approx 0.006$ cm$^{-1}$ \cite{RIVLTUmol2008}. The triplet transitions are responsible for a major part of the light emission of ionized gases and plasma in the near infrared, visible and near ultraviolet\footnote{Bands located in the visible part of the spectrum are especially interesting because they are often used for spectroscopic diagnostics of non-equilibrium plasmas (see e.g. \cite{RDHL2008}).}. All empirical data concerning wavenumbers and rovibronic term values of $D_2$ obtained by means of emission, absorption, Raman and anticrossing spectroscopy were collected, analyzed and reported in \cite{FSC1985}. Since that time, only a few new experimental data on triplet rovibronic transitions of $D_2$ have been obtained, by Fourier transform infrared (FTIR) \cite{DabrHerz}, IR tunable laser \cite{Davies}, and emission \cite{LU2008, LU2009} spectroscopy. It should be noted that, at present, almost all available experimental data on the rovibronic line wavenumbers of the $D_2$ molecule have been obtained by photographic recording of emission or absorption spectra.
Our recent studies \cite{LU2008, LU2009} revealed that the wavenumber values for triplet rovibronic transitions reported in \cite{FSC1985} differ significantly from the values predicted by the Rydberg--Ritz combination principle and from our own data obtained by photoelectric recording \cite{LU2009}\footnote{The same situation was earlier observed for the rovibronic transition wavelength values from \cite{Dieke1972} in the $H_2$ spectrum; see e.g. the spread of experimental points in fig.3 of \cite{ALMU2008}.}. A minority of the differences are caused by misprints and erroneous line assignments. The vast majority, however, are of the order of $0.01 \div 0.1$ cm$^{-1}$ and show a random spread around "synthesized" wavenumber values, calculated as differences of the optimal energy level values from \cite{LU2008}. We suppose that they appear due to the finite precision of reading from photographic plates by microphotometric comparators, round-off errors in calculating the wavenumber values from the wavelengths measured in air, and shifts of the photographic density maxima of blended lines relative to the actual positions of the line centers. This random spread of the available wavenumber values, together with the absence of reliable error bars for each value, seriously limited the determination of rovibronic term values by means of the optimization method \cite{LR2005} when it was applied to the analysis of triplet rovibronic transitions of the $D_2$ molecule in \cite{LU2008}. Therefore we decided to start systematic studies of the visible part of the emission spectrum of the $D_2$ molecule in order to obtain more precise and more reliable wavenumber values of rovibronic transitions. The present paper reports the first results for a limited part of the spectrum. \section*{Experimental} We used the experimental setup described elsewhere \cite{ALMU2008, LMU2011}. The emission of the plasma inside a molybdenum capillary located between the anode and the cathode of a gas discharge tube was used as a light source. The flux of radiation through a hole in the anode was focused by an achromatic lens on the entrance slit of the spectrometer. A detailed description of the self-made high resolution automatic spectrometer and the corresponding software was reported in \cite{LMU2011}. The $2.65$ m Ebert-Fastie spectrograph with a $1800$ grooves per mm diffraction grating was equipped with an additional camera lens (giving an effective focal length $F=6786 \pm 8$ mm) and a computer-controlled CMOS matrix detector ($22.2 \times 14.8$ mm$^2$, $1728 \times 1152$ pixels). The apparatus has a linear dispersion of $0.076 \div 0.065$ nm/mm in the wavelength range $400-700$ nm, a dynamic range of measurable intensities greater than $10^4$ and a maximum resolving power up to $2 \times 10^5$. However, the actual resolving power in our conditions was mainly limited by the Doppler broadening of the $D_2$ spectral lines due to the small reduced mass of the nuclei. It should be emphasized once more that the overwhelming majority of the data on the wavenumbers of rovibronic transitions of the $D_2$ molecule was obtained by photographic recording of spectra. Our way of determining the rovibronic transition wavenumbers, developed in \cite{ALMU2008, LU2009, LMU2011}, is based on the linear response of the CMOS matrix detector to the spectral irradiance and on digital intensity recording. Both features provide an extremely important advantage of our technique over traditional photographic recording with microphotometric comparator reading.
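For orientation, the quoted dispersion and detector geometry already fix the spectral coverage of a single frame and the sampling per pixel. The following short Python sketch (a back-of-the-envelope illustration using only the numbers quoted above; it is not part of the actual data processing) makes these estimates explicit:

\begin{verbatim}
# Back-of-the-envelope estimates from the quoted instrument parameters.
width_mm   = 22.2      # CMOS width along the dispersion direction
n_pix      = 1728      # pixels along the dispersion direction
disp_nm_mm = 0.076     # linear dispersion near 400 nm (0.065 near 700 nm)
F_mm       = 6786.0    # effective focal length

coverage_nm = width_mm * disp_nm_mm     # ~1.7 nm recorded per frame
nm_per_pix  = coverage_nm / n_pix       # ~0.001 nm sampling per pixel
x_over_F    = 0.5 * width_mm / F_mm     # ~1.6e-3, small expansion parameter

print(coverage_nm, nm_per_pix, x_over_F)
\end{verbatim}

The smallness of the last quantity, $x/F \lesssim 2 \times 10^{-3}$, is what justifies the low-degree polynomial calibration described below.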
It not only makes it easier to measure the relative spectral line intensities, but also makes it possible to investigate the shapes of individual line profiles and, in the case of overlapping contours of adjacent lines (so-called blending), to carry out numerically the deconvolution operation (the inverse of the convolution operation) and thus to measure the intensities and wavelengths even of blended lines. As is well known, it is this blending that makes it very hard to analyze the dense multiline spectra of molecular hydrogen isotopomers \cite{Dieke1972, FSC1985}. It is known that, in the case of long-focus spectrometers, the dependence of the wavelength on the coordinate $x$ along the direction of dispersion is close to linear in the vicinity of the center of the focal plane. It can be represented as a power series expansion in the small parameter $x/F$, which in our case does not exceed $2 \times 10^{-3}$.\footnote{The $x$--coordinate actually represents a small displacement from the center of the matrix detector. $F$ is the focal length of the spectrometer mirror.} On the other hand, the dependence of the refractive index of air $n(\lambda)$ on the wavelength is also close to linear inside a small enough part of the spectrum. Thus, when recording narrow spectral intervals, the product $\lambda_{vac}(x) = \lambda(x) n(\lambda(x))$ has the form of a power series of low degree. This circumstance makes it possible to calibrate the spectrometer directly in vacuum wavelengths $\lambda_{vac} = 1 / \nu$, thereby avoiding the technically troublesome problem of accurately measuring the refractive index of air under the various conditions under which measurements are made. Another peculiarity of our calibration technique is the use of experimental vacuum wavelength values from \cite{FSC1985} as standard reference data. We already mentioned above that those data show a small random spread around a smooth curve representing the dependence of the wavelengths on the positions of the corresponding lines in the focal plane of the spectrometer (see e.g. \cite{ALMU2008}). Moreover, those random errors are in good accordance with a normal Gaussian distribution. Thus it is possible, due to smoothing, to obtain a precision for the new wavenumber values better than that of the reference data. To be sure that the data from \cite{FSC1985} are free from systematic errors, we performed special experiments with a capillary--arc lamp analogous to that described in \cite{LSh1979} (capillary diameter $d = 1.5$ mm and current density $j = 30$ A/cm$^2$), but filled with the $H_2+D_2+Ne$ mixture (1:1:2) under a total pressure $P \approx 8$ Torr. For the vacuum wavelength calibration we used bright, blend-free lines of the $D_2$ and $H_2$ molecules as well as $Ne$ spectral lines, with reference data from \cite{FSC1985, Dieke1972, SS2004} respectively. \begin{figure}[!ht] \begin{center} \epsfig{file=fig001.eps, width=0.5\columnwidth,clip} \end{center} \caption{Fragment of the dependences of the vacuum wavelengths $\lambda_{vac}$ (a) and the differences $\Delta \lambda_{vac}$ (b) of the brightest $D_2$, $H_2$ and $Ne$ spectral lines on the coordinate (in pixels) in the focal plane of the spectrometer. Points $1$ --- the values for the $D_2$ molecule from \cite{FSC1985}; Points $2$ --- the values for the $Ne$ atom from \cite{SS2004}; Points $3$ --- the values for the $H_2$ molecule from \cite{Dieke1972}. The solid line represents the approximation of the experimental data.}\label{d2h2ne} \end{figure}
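In practice the calibration reduces to an ordinary polynomial least-squares fit of the reference vacuum wavelengths against the pixel coordinate. A minimal Python sketch is given below; the actual line list is replaced here by synthetic data with a scatter of $\approx 10^{-3}$ nm, since the purpose is only to illustrate the procedure:

\begin{verbatim}
import numpy as np

# Synthetic stand-in for the reference line list: a gently curved
# dispersion relation plus ~1e-3 nm random scatter (cf. the text).
rng = np.random.default_rng(0)
x_pix = np.linspace(0.0, 1727.0, 40)               # line centers, pixels
lam_true = 560.0 + 9.8e-4*x_pix - 3e-9*x_pix**2    # assumed quadratic law
lam_ref = lam_true + rng.normal(0.0, 1e-3, x_pix.size)

# Second-degree fit: a linear law is inadequate, a cubic is excessive.
coeffs = np.polyfit(x_pix, lam_ref, deg=2)
residuals = lam_ref - np.polyval(coeffs, x_pix)
print("rms residual, nm:", residuals.std())        # stays below 2e-3 nm
\end{verbatim}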
As an example, the dependence of the vacuum line wavelength on its position on the CMOS matrix (in pixels) is shown in fig.\ref{d2h2ne}(a) for strong unblended lines, for a wavelength region corresponding to the size of the CMOS matrix. One may see that the dependence of the wavelengths on the coordinate is monotonic and close to linear for most of the lines. The calibration curve of the spectrometer was obtained by polynomial least-squares fitting of the data. Our measurements showed that a linear hypothesis is inadequate and a third-degree polynomial is excessive, while an approximation by a second-degree polynomial provides a calibration accuracy better than $2 \times 10^{-3}$ nm. Such a wavelength calibration allows us to obtain new experimental values for the rovibronic line wavenumbers. The differences $\Delta \lambda_{vac}$ between the new values and the reference data are shown in fig.\ref{d2h2ne}(b). One may see that the differences have a certain spread around the calibration curve, which does not exceed $0.002$ nm. Thus our measurements show that the experimental wavenumber values from \cite{FSC1985, Dieke1972, SS2004} are in good agreement with each other. Therefore, in our studies of the $D_2$ spectrum, the vacuum wavelength values from \cite{FSC1985} were used as the reference data set. Such an "internal reference light source" gave us an opportunity to eliminate the experimental wavenumber errors caused by a shift between the spectrum under study and the reference spectrum of a separate reference lamp, due to different illumination of the grating by the different lamps (see e.g. \cite{RLTBjcp2006}). \section*{Results and discussion} For small regions of the spectrum ($\approx 0.5$ nm wide)\footnote{That corresponds to one third of the matrix: $550-600$ pixels wide.}, the observed spectral intensity distribution was approximated by a superposition of a certain number of Gaussian or Voigt profiles with the linewidth $\Delta \nu_{obs}$\footnote{We use the usual meaning of linewidth, namely the full width at half maximum (FWHM). In our case the observed linewidth $\Delta \nu_{obs}$ includes both the instrumental profile and the broadening in the plasma, mainly due to the Doppler effect.} equal for all the profiles within the spectral region under consideration. Optimal values of the adjustable parameters (line centers, relative intensities and one common value of $\Delta \nu_{obs}$ for the region) were obtained by solving the inverse spectroscopic problem in the framework of the maximum-likelihood principle by means of a special computer program based on the Levenberg--Marquardt algorithm \cite{L1944, M1963}. Determination of the wavenumber values of line centers (wavenumbers of rovibronic transitions) by means of the deconvolution process described above gives us an opportunity to reach a much higher resolution than that predicted by the Rayleigh criterion. This fact may be illustrated by the example shown in fig.\ref{3exp}. It represents experimental intensity distributions (hollow circles) for the same narrow wavenumber range measured under three different conditions: \begin{figure}[!ht] \begin{center} \epsfig{file=fig002.eps, width=0.45\columnwidth,clip} \end{center} \caption{Fragment of the $D_2$ spectrum in the spectral range $17024-17028$ cm$^{-1}$ containing the $R4$, $R5$ and $R6$ spectral lines of the $(1-1)$ band of the $i^3\Pi_g \to c^3\Pi_u$ electronic transition.
Experimental intensity $J$ in counts is shown by open circles. Dotted lines represent the Gaussian (a), (b) and Voigt (c) profiles of the separate lines obtained by deconvolution, while the solid line corresponds to the total intensity obtained by summing over the components. Cases (a), (b), and (c) correspond to experimental conditions (a), (b), and (c) (see text).}\label{3exp} \end{figure} \begin{enumerate}[(a)] \item a hot-cathode capillary--arc discharge lamp LD2-D \cite{GLT1982} with current density $j = 10$ A/cm$^2$ and gas temperature in the plasma $T = 1500 \pm 150$ K\footnote{The temperature was obtained from the intensity distribution in the Q-branch of the (2-2) band of the $d^3\Pi_u^- \to a^3\Sigma_g^+$ electronic transition.} (large Doppler width, $\Delta \nu_D = 0.22$ cm$^{-1}$) and an entrance slit width $\Delta X = 60$ $\mu m$, four times larger than the so-called normal width (large instrumental profile); \item the same plasma conditions as in case (a), but with a slit width $\Delta X = 15$ $\mu m$ close to normal (providing an optimal width of the instrumental profile); \item a cold-cathode glow discharge in a water-cooled quartz tube with $j = 0.4$ A/cm$^2$, $T = 640 \pm 50$ K\footnote{The temperature was obtained both from the intensity distribution in the Q-branch of the (2-2) band of the $d^3\Pi_u^- \to a^3\Sigma_g^+$ electronic transition and from the Doppler broadening of spectral lines.} (smaller Doppler width, $\Delta \nu_D = 0.15$ cm$^{-1}$), and $\Delta X = 15$ $\mu m$. \end{enumerate} The observed profiles of strong unblended lines in cases (a) and (b) were close to Gaussian, except for insignificant far wings. Therefore, the intensity distribution was approximated by a superposition of a certain number of Gaussian profiles. In case (c) the gas temperature is lower, and the observed line profiles were determined by both Doppler and instrumental broadening. Therefore the intensity distributions obtained in experiment (c) were fitted by a superposition of a certain number of Voigt profiles. As a result of the fitting, we obtained the following values of the observed linewidths: $\Delta \nu_{obs} = 0.38$, $0.27$, and $0.18$ cm$^{-1}$ for cases (a), (b), and (c) respectively; the fitting procedure is sketched below. One may see that when the resolution of the optical part of the spectrometer is insufficient (case (a)), only 3 bright lines are distinguished. The decrease of $\Delta \nu_{obs}$ in case (b) (due to the narrower instrumental profile) makes it possible to observe that each of those lines consists of two distinguishable components with the same intensity ratio, close to $2$. A further decrease of $\Delta \nu_{obs}$ in case (c) (due to the smaller Doppler broadening) leads to the same values of the component wavenumbers and of the ratio of their intensities (see table \ref{tabdouplets}). A joint analysis of the splitting of such visible "doublets" (about $0.20$ cm$^{-1}$) and of the relative intensities of their two main components (about $2.0$) shows that they represent the partly resolved fine structure of lines determined by the triplet splitting of the lower rovibronic levels of various $(1s\sigma nl\pi)^3\Lambda_g \to (1s\sigma 2p\pi)^3\Pi_u$ electronic transitions. The present paper reports the results of the two main experiments corresponding to cases (a) and (c). The results for the wavelength range $545 \div 627$ nm are presented in Table \ref{tabnewlines}\footnote{The spectral range of experiment (a) was $400 \div 700$ nm, thus only a part of the wavenumbers obtained in this experiment is presented in Table \ref{tabnewlines}. Therefore we kept the original numbering of the spectral lines of that experiment, and the column $K_1$ does not begin with unity.}.
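The following Python sketch illustrates the deconvolution step for the Gaussian case: a superposition of profiles with one common width is fitted to a measured fragment by the Levenberg--Marquardt method. It is a schematic stand-in for the actual program (synthetic data, arbitrary initial guesses):

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def model(p, x, n):
    # Sum of n Gaussians; p = (c1, A1, ..., cn, An, FWHM), FWHM shared.
    sigma = p[-1] / (2.0*np.sqrt(2.0*np.log(2.0)))
    y = np.zeros_like(x)
    for i in range(n):
        y += p[2*i+1]*np.exp(-0.5*((x - p[2*i])/sigma)**2)
    return y

# Synthetic "doublet": 0.20 cm^-1 splitting, 2:1 intensity ratio (cf. text).
x = np.linspace(17025.0, 17026.0, 400)
truth = np.array([17025.40, 2000.0, 17025.60, 1000.0, 0.27])
y = model(truth, x, 2) + np.random.default_rng(1).normal(0.0, 20.0, x.size)

p0 = np.array([17025.35, 1500.0, 17025.65, 1500.0, 0.30])
fit = least_squares(lambda p: model(p, x, 2) - y, p0, method="lm")
print("centers:", fit.x[0], fit.x[2], " common FWHM:", fit.x[4])
\end{verbatim}

A Voigt profile (case (c)) only changes the model function; the common-width constraint and the minimization are the same.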
One may see from the table that the wavelength values obtained in our experiments differ from those collected in \cite{FSC1985} not only quantitatively but also qualitatively. We observed many more lines, and some of them could be visible components of the fine structure of rovibronic lines. A detailed analysis of the data is now in progress and will be reported elsewhere. The separation of the observed doublets ($0.17 \pm 0.01$ cm$^{-1}$) corresponds to the data obtained by means of FTIR \cite{DabrHerz} and laser \cite{Davies} spectroscopy in the infrared part of the spectrum. The observed intensity ratio of these doublets is close to that calculated by the Burger--Dorgelo--Ornstein sum rule (2.0). The partly resolved fine structure of spectral lines provides an opportunity to expand the existing identification of triplet rovibronic lines by detecting such doublets in experimental spectra. Within the spectral region under study ($545 \div 627$ nm) there are more than 200 pairs of unassigned lines which may represent visible doublets of the partly resolved triplet structure of rovibronic transitions between the $^3\Lambda_g$ and $c^3\Pi_u^-$ electronic states of the $D_2$ molecule. The obtained results reveal new opportunities for identifying the great number of currently unassigned $D_2$ lines in the visible part of the spectrum. The present work was financially supported in part by the Russian Foundation for Basic Research, Grant No. 10-03-00571-a.
\section{Introduction} The picture of the large-scale structures reveals that matter in the Universe forms an intricate and complex system, known as the ``cosmic web'' \citep{Z82,S83,E84,Bond96,a2010}. First attempts at mapping the three-dimensional spatial distribution of galaxies in the Universe \citep{g78,de86,geller89,Sh96}, as well as more recent large galaxy surveys \citep{Col03,sdss,H05}, display a strongly anisotropic morphology. The galactic mass distribution seems to form a rich cosmos containing clumpy structures, such as clusters, sheets and filaments, surrounded by large voids \citep{vdw09}. A similar cosmic network has emerged from cosmological N-body simulations of the Dark Matter distribution \citep{Bond96,aragon2007,HAHN2007}. The large-scale structures are expected to span a range of scales that goes from a few up to hundreds of megaparsec. Despite the many well-established methods to identify clusters and voids, there is not yet a clear characterization of filaments and sheets. Due to their complex shape, there is no common agreement on the definition and the internal properties of these objects \citep{B2010}. Moreover, their detection in observations is extremely difficult due to projection effects. Nevertheless, several automated algorithms for filament and sheet finding, both in 3D and 2D, have been developed \citep{N06,aragon2007,Sou08,B2010}. Several galaxy filaments have been detected by eye \citep{C05,P08}, and Dark Matter filaments have also been detected through their weak gravitational lensing signal \citep{D12}. Powerful methods for cosmic web classification are based on the study of the Hessian of the gravitational potential and of the shear of the velocity field \citep{HAHN2007,hoffman12}. From the qualitative point of view, several elaborate theories have been proposed. The nature of the cosmic web is intimately connected with the gravitational formation process. In the standard model of hierarchical structure formation, the cosmic structure has emerged from the growth of small initial density fluctuations in the homogeneous early Universe \citep{Peebles80,Davis85,WF91}. The accretion process involves matter flowing out of the voids, collapsing into sheets and filaments, and merging into massive clusters. Thus, galaxy clusters are located at the intersections of filaments and sheets, which operate as channels for the matter to flow into them \citep{van93,Colb99}. The innermost part of clusters eventually tends to reach virial equilibrium. As a result of this gravitational collapse, clusters of galaxies are the most recent structures in the Universe. For this reason, they are possibly the easiest large-scale systems to study. The mass measurement of galaxy clusters is of great interest for understanding the large-scale physical processes and the evolution of structures in the Universe \citep{W2010}. Moreover, the abundance of galaxy clusters as a function of their mass is crucial for constraining cosmological models: the cluster mass function is an important tool for the determination of the amount of Dark Matter in the Universe and for studying the nature and evolution of Dark Energy \citep{H01,C09,Allen11}. The oldest method for cluster mass determination is based on the application of the virial theorem to the positions and velocities of the cluster members \citep{Z33}. This method suffers from the main limitation that the estimated mass is significantly biased when the cluster is far from virialization.
More recent and sophisticated techniques also rely strongly on the assumption of hydrostatic or dynamical equilibrium. The cluster mass profile can be estimated, for example, from observations of the density and temperature of the hot X-ray gas, through the application of the hydrostatic equilibrium equation \citep{Ettori02,Bor04,Zappa06,SA07,HH11}. Another approach is based on the dynamical analysis of cluster-member galaxies and involves the application of the Jeans equations for a steady-state spherical system \citep{Gir98,LM03,LW06,mamon10}. Additional cluster mass estimators have been proposed which are independent of the cluster dynamical state. A measurement of the total cluster mass can be achieved by studying the distortion of background galaxies due to gravitational lensing \citep{M10,L11}. The lensing technique is very sensitive to the instrument resolution and to projection effects. The caustic method has been proposed by \cite{D99}. This method requires very large galaxy surveys in order to determine the caustic curve accurately. Therefore, the development of new techniques, and the combination of different independent methods, is extremely useful for providing a more accurate cluster mass measurement. The Coma cluster of galaxies (Abell 1656) is one of the most extensively studied systems of galaxies \citep{Biviano98}, being the most regular, richest and best-observed cluster in our neighborhood. X-ray observations have provided several mass estimates \citep{Hug89,Watt92}, obtained by assuming hydrostatic equilibrium. Dynamical mass measurements with different methods, based on the assumption of dynamical equilibrium, are reported in \citep{The86,LM03}. \cite{Geller99} performed a dynamical measurement of the Coma cluster using the caustic method, and weak lensing mass estimates of Coma have been carried out by \cite{kubo07} and \cite{gavazzi09}. In the present paper we propose a new method for estimating the mass of clusters. We intend to infer the total cluster mass from the knowledge of the kinematics in the outskirts, where the matter has not yet reached equilibrium. The key to our method is the analysis of filamentary and sheet-like structures flowing outside the cluster. We apply our method for the total virial mass estimate to the Coma cluster, and we compare our result with some of the previous ones in the literature. Our method also provides an estimate of the orientation of the structures we find in three-dimensional space. This can be useful to identify a major merging plane, if a sufficient number of structures are detected and at least three of them lie in the same plane. The paper is organized as follows. In section 2 we derive the relation between the velocity profile of galaxies in the outer region of clusters and the virial cluster mass. In section 3 we propose a method to detect filaments or sheets by looking at the observed velocity field. In section 4 we test the method on a simulated cosmological cluster-size halo and we present the result of the mass measurement. In section 5 we present the structures we find around the Coma cluster and the determination of the Coma virial mass. \section{Mass estimate from the radial velocity profile} Galaxy clusters are characterized by a virialized region where the matter is approximately in dynamical equilibrium. The radius that delimits the equilibrated cluster, i.e.
the virial radius $r_{\rm v}$, is defined as the distance from the centre of the cluster within which the mean density is $\Delta$ times the critical density of the Universe $\rho_{c}$. The virial mass is then given by \begin{equation} \label{eqn:vmass} M_{\rm v}=\frac{4}{3}\pi\,r_{\rm v}^{3}\,\Delta\,\rho_{c} \, . \end{equation} The critical density is given by \begin{equation} \label{eqn:vmass2} \rho_{c}=\frac{3\,H^{2}}{8\pi\,G}\, , \end{equation} where $H$ is the Hubble constant and $G$ the universal gravitational constant. The circular velocity $V_{\rm v}$ at $r=r_{\rm v}$, i.e. the virial velocity, is defined as \begin{equation} \label{eqn:vvel} V_{\rm v}^{2}=\frac{G\,M_{\rm v}}{r_{\rm v}}. \end{equation} The immediate environments of galaxy clusters outside the virial radius are characterized by galaxies and groups of galaxies which are falling towards the cluster centre. These galaxies are not part of the virialized cluster, but they are gravitationally bound to it. The region where the infall motion is most pronounced extends up to three or four times the virial radius \citep{Mamon04,W05,Rines06,Cuesta08,Falco2013}. At larger scales, typically beyond $6-10\,r_{\rm v}$, the radial motion of galaxies with respect to the cluster centre is essentially dominated by the Hubble flow. In the transition region between the infall regime and the Hubble regime, the galaxies are flowing away from the cluster, but they are still gravitationally affected by the presence of its mass. At this scale, the gravitational effect of the inner cluster mass is to perturb the simple Hubble motion, leading to a deceleration. The total mean radial velocity of galaxies outside clusters is therefore the combination of two terms: \begin{equation} \label{eqn:vel} \overline v_{\rm r} (r)=H\,r+\overline v_{\rm p} (r)\, , \end{equation} the pure Hubble flow, and a mean negative infall term $\overline v_{\rm p} (r)$, which accounts for the departure from the Hubble relation. Section (\ref{infall}) is dedicated to the characterization of the function $\overline v_{\rm p} (r)$. The mean infall velocity depends on the halo mass, being more significant for larger mass haloes. Therefore, we can rewrite equation~(\ref{eqn:vel}) as \begin{equation} \label{eqn:velvir} \overline v_{\rm r} (r,M_{\rm v})=H\,r+\overline v_{\rm p}(r,M_{\rm v})\, , \end{equation} where we include the dependence on the virial mass $M_{\rm v}$. Therefore, once we know the relation between $\overline v_{\rm p}$ and $M_{\rm v}$, equation~(\ref{eqn:velvir}) can be used to infer the virial mass of clusters. In the next section, we will derive the equation that connects the peculiar velocity of galaxies $\overline v_{\rm p}$ with the virial mass of the cluster $M_{\rm v}$. \subsection{Radial infall velocity profile} \label{infall} Simulations have shown a quite universal trend for the radial mean velocity profile of cluster-size haloes, when normalized to their virial velocities \citep{Prada06,Cuesta08}. This feature can be seen, for example, in Fig.~\ref{fig:3velocities}, where the median radial velocity profile for three samples of stacked simulated haloes is displayed. The units in the plot are the virial velocity $V_{\rm v}$ and the virial radius $r_{\rm v}$. The virial masses of the samples are $M_{\rm v}=0.8\times 10^{14}\, M_{\odot}$ (blue, triple-dot dashed line), $M_{\rm v}=1.1\times 10^{14}\, M_{\odot}$ (green dot-dashed line), and $M_{\rm v}=4.7\times 10^{14}\, M_{\odot}$ (red dashed line). The cosmological N-body simulation we used is described in section~\ref{sec4}. \begin{figure} \centering \includegraphics[width=\hsize]{infall.eps} \caption{Median radial velocity profile for three samples of stacked simulated halos. The virial masses of the samples are $M_{\rm v}=0.8\times 10^{14}\, M_{\odot}$ (blue, triple-dot dashed line), $M_{\rm v}=1.1\times 10^{14}\, M_{\odot}$ (green dot-dashed line), and $M_{\rm v}=4.7\times 10^{14}\, M_{\odot}$ (red dashed line). The black solid line is our simultaneous fit to the three profiles.} \label{fig:3velocities} \end{figure} In order to derive an approximation for the mean velocity profile, the spherical collapse model has been assumed in several works \citep{Pei06,Pei08,KN10,Nas11}. Here we make a more conservative choice. We parametrize the infall profile using only the information that it must reach zero at large distances from the halo centre, and we then fit the universal shape of the simulated halo profiles. Therefore, we do not assume the spherical infall model. In the region where the Hubble flow starts to dominate and the total mean radial velocity becomes positive, a good approximation for the infall term is \begin{equation} \label{eqn:velpec} \overline v_{\rm p} (r)\approx\,-v_0\,\left(\frac{r}{r_{\rm v}}\right)^{-b}\, , \end{equation} with $v_0=a\,V_{\rm v}$, where $V_{\rm v}$ is the virial velocity and $r_{\rm v}$ is the virial radius. We fit equation~(\ref{eqn:velpec}) to the three profiles in Fig.~\ref{fig:3velocities} simultaneously, with $a$ and $b$ as free parameters. The fit is performed in the range $r=3-8\,r_{\rm v}$. The best fit is the black solid line, corresponding to the parameters $a=0.8$ and $b=0.42$. This allows us to fix a universal shape for the mean velocity of the infalling matter as a function of the virial velocity, i.e. of the virial mass, in the outer region of clusters.
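To make these scalings concrete, the short Python sketch below (our own illustration; constants in SI units) evaluates equations~(\ref{eqn:vmass})--(\ref{eqn:vvel}) together with the fitted infall term of equation~(\ref{eqn:velpec}). With $\Delta=93.8$ and $H=73$ km s$^{-1}$ Mpc$^{-1}$, the values used in section~\ref{sec4}, it reproduces the virial quantities of the simulated halo, $r_{\rm v}\simeq 2.0$ Mpc and $V_{\rm v}\simeq 1007$ km s$^{-1}$:

\begin{verbatim}
import numpy as np

G, MPC, MSUN = 6.674e-11, 3.0857e22, 1.989e30   # SI units
H = 73e3 / MPC                                  # Hubble constant, s^-1
DELTA = 93.8                                    # virial overdensity

def virial_quantities(M_msun):
    """r_v [Mpc] and V_v [km/s] from eqs. (1)-(3)."""
    M = M_msun * MSUN
    rho_c = 3.0*H**2 / (8.0*np.pi*G)
    r_v = (3.0*M / (4.0*np.pi*DELTA*rho_c))**(1.0/3.0)
    return r_v/MPC, np.sqrt(G*M/r_v)/1e3

def v_radial(r_mpc, M_msun, a=0.8, b=0.42):
    """Mean radial velocity, eq. (5) with the infall term of eq. (6), km/s."""
    r_v, V_v = virial_quantities(M_msun)
    return 73.0*r_mpc - a*V_v*(r_mpc/r_v)**(-b)

print(virial_quantities(4.75e14))   # ~(2.0 Mpc, 1007 km/s)
\end{verbatim}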
\section{Filaments and sheets around galaxy clusters} The method we propose for measuring the virial cluster mass consists in using only the observed velocities and distances of galaxies which are outside the virialized part of the cluster, but whose motion is still affected by the mass of the cluster. Given the dependence of the infall velocity on the virial mass, we wish to estimate $M_{\rm v}$ by fitting the measured velocities of galaxies moving around the cluster with equations~(\ref{eqn:velvir}) and~(\ref{eqn:velpec}). To this end, we need to select galaxies which are sitting, on average, in the transition region of the mean radial velocity profile. For the fit to be accurate, the galaxies should be spread over several megaparsec in radius. Observations give the two-dimensional map of clusters and their surroundings, namely the projected radius of galaxies on the sky $R$, and the component of the galaxy velocities along the line of sight $v_{\rm los}$. The reconstruction of the radial velocity profile would require the knowledge of the radial position of the galaxies, i.e. the radius $r$. The velocity profile that we infer from observations is also affected by projection effects. If the galaxies were randomly located around clusters, the projected velocities would be quite uniformly distributed, and we would not see any signature of the radial velocity profile. The problem is overcome because of the strong anisotropy of the matter distribution. At several megaparsec away from the cluster centre, we select collections of galaxies bound into systems, such as filaments or sheets.
The presence of such objects can break the spatial degeneracy in the velocity space. In sections~(\ref{sec31}) and~(\ref{sec32}), we explain in detail how such objects can be identified as filamentary structures in the projected velocity space. \subsection{Line of sight velocity profile} \label{sec31} In order to apply the universal velocity profile~(\ref{eqn:velpec}) to observations, we need to transform the 3D radial equation~(\ref{eqn:vel}) into a 2D projected equation. We thus need to compute the line-of-sight velocity profile $v_{\rm los}$ as a function of the projected radius $R$. Let us consider a filamentary structure forming an angle $\alpha$ between the 3-dimensional radial position $r$ of the galaxy members and the 2-dimensional projected radius $R$. Alternatively, let us consider a sheet in 3D space lying on a plane with inclination $\alpha$ with respect to the plane of the sky (see the schematic Fig.~\ref{fig:drawing}). The transformations between quantities in the physical space and in the redshift space are \begin{equation} R=\cos\alpha\,r \end{equation} for the spatial coordinate, and \begin{equation} \label{eqn:vellos} v_{\rm los} (R)=\sin\alpha\,v_{\rm r}(r) \end{equation} for the velocity. By inserting equation~(\ref{eqn:velvir}) in equation~(\ref{eqn:vellos}), we obtain the following expression for the line-of-sight velocity in the general case: \begin{eqnarray} \label{eqn:vellos2} v_{\rm los} (R,\alpha,M_{\rm v})= \sin\alpha\,\left[H\,\frac{R}{\cos\alpha}+v_{\rm p}\left(\frac{R}{\cos\alpha},M_{\rm v}\right)\right] . \end{eqnarray} If we use our model for the infall term, given by equation~(\ref{eqn:velpec}), the line-of-sight velocity profile in equation~(\ref{eqn:vellos2}) becomes \begin{eqnarray} \label{eqn:vellos3} v_{\rm los} (R,\alpha,M_{\rm v}) =\sin\alpha\,\left[H\,\frac{R}{\cos\alpha}-a\,V_{\rm v}\,\left(\frac{R}{\cos\alpha\,r_{\rm v}}\right)^{-b}\right] \, . \end{eqnarray} By using equation~(\ref{eqn:vellos3}), it is, in principle, possible to measure both the virial cluster mass $M_{\rm v}$ and the orientation angle $\alpha$ of the structure. In fact, if we select a sample of galaxies which lie in a sheet or a filament, we can fit their phase-space coordinates ($R,v_{\rm los}$) with equation~(\ref{eqn:vellos3}), where only two free parameters ($\alpha,M_{\rm v}$) are involved. The identification of structures and the accuracy of the mass estimate require a quite dense sample of galaxies observed outside the cluster. \begin{figure} \begin{center} \centering \includegraphics[width=0.7\linewidth]{3d.eps} \caption{Schematic drawing of a filament or a sheet in 3D with inclination $\alpha$ between the radial distance $r$ and the projected radius $R$. The cluster is represented by the red circle in the centre of the frame. The $z$-axis corresponds to the observer's line of sight.} \label{fig:drawing} \end{center} \end{figure} \subsection{Linear structures in the velocity field} \label{sec32} Our interest here is thus in finding groups of galaxies outside clusters that form a bound system with a relatively small dispersion in velocity, and that lie along a preferential direction in 3D space. In particular, we are interested in such objects when they are far enough from the cluster to follow a nearly linear radial pattern in the velocity space, corresponding to a decelerated Hubble flow. We expect these objects to form filament-like structures in the projected velocity space. In fact, if we apply the formula in equation~(\ref{eqn:vellos2}) to galaxies with the same orientation angle $\alpha$, within a small scatter, the radial velocity shape given by equation~(\ref{eqn:velvir}) is preserved. Thus, these galaxies can be identified, as they collect along a line in the observed velocity space.
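As a concrete illustration of equation~(\ref{eqn:vellos3}), the projected model can be coded in a few lines (again a sketch of ours; it reuses the \texttt{virial\_quantities} helper from the previous listing):

\begin{verbatim}
import numpy as np

def v_los_model(R_mpc, alpha, M_msun, a=0.8, b=0.42, H0=73.0):
    """Line-of-sight velocity of eq. (10), km/s.
    R_mpc: projected radius [Mpc]; alpha: inclination [rad]."""
    r_v, V_v = virial_quantities(M_msun)   # from the previous sketch
    r = R_mpc / np.cos(alpha)              # de-projection, eq. (7)
    return np.sin(alpha)*(H0*r - a*V_v*(r/r_v)**(-b))

# A structure seen at cos(alpha) = 0.5 around a 4.75e14 Msun cluster:
print(v_los_model(np.linspace(4.0, 16.0, 4), np.arccos(0.5), 4.75e14))
\end{verbatim}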
Nevertheless, we can look at the structure in the 2D map (the ($x,y$) plane in Fig.~\ref{fig:drawing}). If all the selected galaxies lie on a line, within a small scatter, also in the ($x,y$) plane, they can be defined as a filament. If they are confined to a region within a small angular aperture, they might form a sheet (see Fig.~\ref{fig:drawing}). Complementary papers will analyze the properties of such sheets \citep{Thejs,sparre2013,Wadekar}. We want to point out here that Fig.~\ref{fig:drawing} describes the ideal configuration for filaments and sheets to have a quasi-linear shape in the observed velocity plane. Therefore, not all filaments and sheets will satisfy this requirement, i.e. not all the structures outside clusters can be detected by looking at the velocity field. Our method for identifying these objects is optimized towards structures which are narrow in velocity space, while still containing many galaxies, and which are therefore closer to face-on than edge-on. It consists in selecting a region in the sky and looking for the possible presence of an overdensity in the corresponding velocity space. We describe the method in detail in the next section. \section{Testing the method on a Cosmological Simulation} \label{sec4} As a first test of our method, we apply it to a cluster-size halo from a cosmological N-body simulation of pure Dark Matter (DM). The N-body simulation is based on the $WMAP3$ cosmology. The cosmological parameters are $\Omega_{\rm M}=0.24$ and $\Omega_{\Lambda}=0.76$, and the reduced Hubble parameter is $h=0.73$. The particles are confined in a box of size $160\,h^{-1}$ Mpc. The particle mass is $3.5\times\,10^{8}\,\rm M_\odot$, so there are $1024^{3}$ particles in the box. The evolution is followed from the initial redshift $z=30$, using the MPI version of the ART code \citep{Kra1997,G2008}. The algorithm used to identify clusters is the hierarchical friends-of-friends (FOF) algorithm with a linking length of 0.17 times the mean interparticle distance. The cluster centres correspond to the positions of the most massive substructures found at a linking length eight times shorter than the mean interparticle distance. We define the virial radius of halos as the radius containing an overdensity of $\Delta=93.8$ relative to the critical density of the Universe. More details on the simulation can be found in \citep{Wojtak08}. For our study, we select, at redshift $z=0$, a halo with virial quantities $M_{\rm v}=4.75\times\,10^{14}\, \rm M_\odot$, $r_{\rm v}=2.0\,$Mpc and $V_{\rm v}=1007.3\, \rm km/s$. We treat the DM particles in the halo as galaxies from observations. The first step is to project the 3D halo as we would see it on the sky. We consider three directions as possible lines of sight. For each projection, we include in our analysis all galaxies in the box $x=[-20,20]\,\rm Mpc$ and $y=[-20,20]\,\rm Mpc$, where $x,y$ are the two directions perpendicular to the line of sight. The method described in the next section is applied to all three projections. \subsection{Identification of filaments and sheets from the velocity field} Our goal is to find structures confined in a relatively small area in the $(x,y)$ plane.
To this end, we split the spatial distribution into eight two-dimensional wedges (for example, in Figure~\ref{fig:haloxy} the orange points represent one of the wedges) and we look at each of them in the $(R,v_{\rm los})$-space (for example, in Fig.~\ref{fig:fil1} we look at the orange wedge of Fig.~\ref{fig:haloxy} in the velocity space), where we aim to look for overdensities. We confine the velocity field to the box $v_{\rm los}=[-4000,4000]\,\rm km/s$, $R=[4,20]\,\rm Mpc$, and we divide the box into $50$ cells, $4\, \rm Mpc$ wide and $400\, \rm km/s$ high. For each of the selected wedges, we want to compare the galaxy number density $n_{i}$ in each cell $i$ with the same quantity calculated for the rest of the wedges in the same cell. More precisely, in each cell, we calculate the mean of the galaxy number density over all the wedges but the selected one. This quantity acts as a background for the selected wedge, and we refer to it as $n^{bg}_{i}$. In Fig.~\ref{fig:haloxy}, the wedge under analysis is represented by the orange points, and the background by the green points. We exclude from the background the two wedges adjacent to the selected one (gray points in Fig.~\ref{fig:haloxy}). We need this step because, if any structure is sitting in the selected wedge, it might stretch into the closest wedges. \begin{figure} \centering \includegraphics[width=\hsize]{halosimxy.eps} \caption{Two-dimensional projection of the simulation box, centered on the selected simulated halo. The black triangles represent the particles inside the virial radius of the halo. The orange points belong to one of the eight wedges we select in the $(x,y)$ plane. The background for the selected wedge is given by the green crosses. The two wedges adjacent to the selected wedge, gray diamonds, are excluded from the analysis. In the selected wedge, we identify a sheet, which is represented by the red circles. The blue squares correspond to the total overdensity we find in the wedge, with the method described in the text.} \label{fig:haloxy} \end{figure} The overdensity in cell $i$ is evaluated as \begin{equation} \label{eqn:od} m_{i}= \frac{n_{i}-n^{bg}_{i}}{n^{bg}_{i}} \, , \end{equation} \begin{figure} \centering \includegraphics[width=\hsize]{halofil2.eps} \caption{Line-of-sight velocity $v_{\rm los}$ as a function of the projected distance $R$ from the centre of the simulated halo. \emph{Upper panel}: The background in the analysis is represented by the green crosses. The black triangles are all the particles within the virial radius. \emph{Bottom panel}: The orange points represent our signal, i.e. the selected wedge. The blue points correspond to the overdensity in the wedge. The only almost straight inclined line is shown by the red circles. We identify this filament-like structure as a sheet.} \label{fig:fil1} \end{figure} and we calculate the probability density $p(m_{i})$ for the given wedge. We take only the cells in the top $1\,\sigma$ region of the probability density distribution, i.e. where the integrated probability is above $(100-16.8)\%$, in order to reduce the background noise. Among the galaxies belonging to the selected cells, we take the ones lying on inclined lines within a small scatter, while we remove the unwanted few groups which appear as blobs or as horizontal strips in the $(R,v_{\rm los})$-space. We apply this selection criterion because we are interested in extended structures which have a coherent flow relative to the cluster; a schematic version of this selection is sketched below.
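The cell-based part of the selection can be summarized by the following schematic numpy sketch (the binning and names are ours, and the real analysis differs in detail; in particular, the final cut on inclined, line-like groups is done separately):

\begin{verbatim}
import numpy as np

def overdense_mask(phi, R, v, wedge, n_w=8):
    """Galaxies of one wedge lying in its top-1-sigma cells, cf. eq. (11).
    phi: azimuth in the (x,y) plane [rad]; R [Mpc]; v [km/s]."""
    R_e = np.arange(4.0, 24.0, 4.0)             # 4 Mpc wide cells
    v_e = np.arange(-4000.0, 4400.0, 400.0)     # 400 km/s high cells
    w = (np.floor(phi/(2*np.pi)*n_w) % n_w).astype(int)
    sel = w == wedge                            # selected wedge
    bg = ~sel & (w != (wedge+1) % n_w) & (w != (wedge-1) % n_w)

    n_sel = np.histogram2d(R[sel], v[sel], bins=(R_e, v_e))[0]
    n_bg = np.histogram2d(R[bg], v[bg], bins=(R_e, v_e))[0]/(n_w - 3)

    m = (n_sel - n_bg)/np.where(n_bg > 0, n_bg, np.inf)  # eq. (11)
    hot = m >= np.percentile(m, 100.0 - 16.8)            # top 1-sigma cells

    i = np.clip(np.digitize(R, R_e) - 1, 0, len(R_e) - 2)
    j = np.clip(np.digitize(v, v_e) - 1, 0, len(v_e) - 2)
    return sel & hot[i, j]
\end{verbatim}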
This method leaves us with only one structure inside the wedge in Fig.~\ref{fig:haloxy} (red points). It is a sheet, as it appears as a two-dimensional object on the sky, as opposed to a filament, which should appear one-dimensional. We see such a sheet in only one of the three projections we analyse. The bottom panel of Fig.~\ref{fig:fil1} shows the velocity-distance plot corresponding to all the galaxies belonging to the selected wedge (orange points), while the selected strips of galaxies are shown as blue points. The desired sheet (red points) is an almost straight inclined line crossing zero velocity at roughly 5-10 Mpc and contains 88 particles. The background wedges are displayed in the upper panel of Fig.~\ref{fig:fil1}. \subsection{Analysis and result} \label{sec42} Having identified one sheet around the simulated halo, we can now extract the halo mass using standard Monte Carlo fitting methods. We apply a Markov chain Monte Carlo (MCMC) to the galaxies belonging to the sheet. The model is given by equation~(\ref{eqn:vellos3}), where the free parameters are $(\alpha,\rm M_{\rm vir})$. We set $\Delta=93.8$ and $H=73\,\rm km/(s\,Mpc)$, as these are the values set in the cosmological simulation. We run one chain of $5000$ combinations of parameters and then we remove the burn-in points. \begin{figure} \begin{center} \includegraphics[width=1.1\linewidth]{scatterplotSIMnew.eps} \end{center} \caption{Result of the MCMC applied to the sheet found outside the simulated halo. \emph{Central panel}: Scatter plot of the two free parameters (${\rm cos}(\alpha),M_{\rm vir}$) obtained by the MCMC. \emph{Upper panel}: Probability density function of the virial mass. \emph{Left panel}: Probability density function of the viewing angle. The initial number of points is 5000 and we remove the points of burn-in. The mean values for the virial mass and the cosine of the angle are $M_{\rm vir}=(4.3\pm2.2)\times10^{14}\,\rm M_\odot$ and ${\rm cos}(\alpha)=0.48\pm0.02$, which are comparable to the true halo virial mass $M_{\rm vir}=4.75\times10^{14}\,\rm M_\odot$ and angle ${\rm cos}(\alpha)=0.5$.} \label{fig:scatterplotSIM} \end{figure} In Fig.~\ref{fig:scatterplotSIM} we show the scatter plot on the plane of the two parameters, and the one-dimensional probability distribution functions of the virial mass and the orientation angle. The mean value for the virial mass is $M_{\rm vir}=(4.3\pm2.2)\times10^{14}\,\rm M_\odot$, which is comparable to the true halo virial mass $M_{\rm vir}=4.75\times10^{14}\,\rm M_\odot$. The mean value for the cosine of the angle between $R$ and $r$ is ${\rm cos}(\alpha)=0.48\pm0.02$, corresponding to $\alpha=-1.07\pm0.02$~rad. In Fig.~\ref{fig:sheetSIM} we show the sheet in 3D space (blue points). The best fit for the plane on which the sheet lies is shown as the green plane, and the corresponding angle is $\alpha=-1.05$~rad, giving ${\rm cos}(\alpha)=0.5$. Our estimate is thus consistent, within the statistical error, with the true orientation of the sheet in 3D. Although our method provides the correct halo mass and orientation angle within the errors, the results slightly underestimate the true values for both parameters.
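The fitting procedure described above can be sketched as a minimal Metropolis sampler over $({\rm cos}(\alpha),M_{\rm vir})$, assuming a Gaussian velocity scatter around the model of equation~(\ref{eqn:vellos3}). Since that equation is defined earlier in the paper, it enters here only as a stub \texttt{v\_los\_model}; the scatter \texttt{sigma\_v}, the proposal widths, and the burn-in fraction are illustrative assumptions, not the values used in our analysis.
\begin{verbatim}
import numpy as np

def fit_sheet(R, v_obs, v_los_model, sigma_v=100.0,
              n_steps=5000, seed=0):
    """Metropolis sampler for (cos(alpha), M_vir).
    v_los_model(R, cos_alpha, m_vir) must implement equation
    (eqn:vellos3); sigma_v is an assumed Gaussian scatter [km/s]."""
    rng = np.random.default_rng(seed)

    def log_like(cos_a, m_vir):
        if not (0.0 < cos_a < 1.0) or m_vir <= 0.0:
            return -np.inf
        resid = v_obs - v_los_model(R, cos_a, m_vir)
        return -0.5 * np.sum((resid / sigma_v) ** 2)

    theta = np.array([0.5, 5.0e14])       # starting guess
    lp = log_like(*theta)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, [0.02, 2.0e13])
        lp_prop = log_like(*prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[n_steps // 5:]           # drop burn-in (first 20%)
\end{verbatim}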
Systematic errors on the mass and angle estimation might be due to the non-ideal shape of the structures: the sheet we find has finite thickness, and it is not perfectly straight in 3D space. The closer the detected structure is to an ideal, infinitely thin, and perfectly straight object, the smaller the errors would be. Another problem might reside in the assumption of spherical symmetry. The median radial velocity profile of a stack of haloes might slightly differ from the real velocity profile of each individual halo. Intrinsic scatter in the simulated infall velocity profiles leads to additional systematic errors on the determination of the best-fitting parameters. Our estimate of this inaccuracy yields $50\%$ for the virial mass and $2.5\%$ for the angle. The presence of this systematic is confirmed by Fig.~\ref{fig:ScatterplotHALOreal}. The bottom panel shows the result of the sheet analysis when using a fit to the real mean radial velocity of the halo, which is shown in the upper panel. The best-fit parameters to the radial velocity profile of the halo, with equation~(\ref{eqn:velpec}), are $a=1.5$ and $b=0.89$. In Fig.~\ref{fig:ScatterplotHALOreal}, the black solid line is the fit to the halo velocity profile (red dashed line) and the green dot-dashed line is the universal velocity profile used in the previous analysis. The two profiles overlap in the range $\approx3-5\,r_{\rm v}$, but they slightly differ at larger distances, where our sheet is actually sitting. Replacing the universal radial velocity profile with the true one eliminates the small offset caused by the difference between the two profiles. In the new analysis, the mean value for the virial mass is $M_{\rm vir}=(4.67\pm1.9)\times10^{14}\,\rm M_\odot$, while the mean value for the cosine of the angle between $R$ and $r$ is ${\rm cos}(\alpha)=0.5\pm0.01$. These are in very good agreement with the true values of the parameters, $M_{\rm vir}=4.75\times10^{14}\,\rm M_\odot$ and ${\rm cos}(\alpha)=0.5$. \begin{figure} \begin{center} \centering \includegraphics[width=0.5\textwidth]{SHEETsim.eps} \caption{The sheet we found outside the simulated halo, in three-dimensional space. The $z$-axis corresponds to the line of sight direction. The blue points represent the particles belonging to the sheet, and the green plane is the best fit for the sheet's plane, corresponding to $\alpha=-1.05$~rad (${\rm cos}(\alpha)=0.5$). The red points represent the particles within the virial radius of the halo.} \label{fig:sheetSIM} \end{center} \end{figure} \begin{figure} \centering \begin{minipage}[b]{.5\textwidth} \includegraphics[width=0.8\linewidth]{infallhalo.eps} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=1.0\linewidth]{scatterplotHALOtrue.eps} \caption{ The top figure shows the median radial velocity profile for the simulated halo (red dashed line). The black solid line is our fit to the profile. The green dot-dashed line is the universal radial profile shown in Fig.~\ref{fig:3velocities}. The bottom figure shows the result of the MCMC applied to the sheet found around the simulated halo, using the fit to the mean velocity profile of the halo (top figure). \emph{Central panel}: Scatter plot of the two free parameters (${\rm cos}(\alpha),M_{\rm vir}$) obtained by the MCMC. \emph{Upper panel}: Probability density function of the virial mass. \emph{Left panel}: Probability density function of the viewing angle. The initial number of points is 5000 and we remove the points of burn-in.
The mean value for the virial mass is $M_{\rm vir}=(4.67\pm1.9)\times10^{14}\,\rm M_\odot$, which is very close to the true halo virial mass $M_{\rm vir}=4.75\times10^{14}\,\rm M_\odot$. The mean value for the cosine of the angle is ${\rm cos}(\alpha)=0.5\pm0.01$, in agreement with the real value ${\rm cos}(\alpha)=0.5$.} \label{fig:ScatterplotHALOreal} \end{minipage} \end{figure} \section{Results on the Coma cluster} In this section, we apply our method to real data for the Coma cluster. We search for data in and around the Coma cluster in the SDSS database \citep{aa2009}. We take the galaxy NGC 4874 as the centre of the Coma cluster \citep{kent82}; its coordinates are RA$=12^{\rm h}59^{\rm m}35.7^{\rm s}$, Dec$=+27^{\circ}57'33''$. We select galaxies within 18 degrees of the Coma centre and with velocities between 3000 and 11000 km/s. The sample contains 9000 galaxies. We apply the method for the identification of structures outside clusters to the Coma data, and we detect two galactic sheets in the environment of Coma, which we denote \emph{sheet 1} and \emph{sheet 2}. \begin{figure} \centering \begin{minipage}[b]{.5\textwidth} \includegraphics[width=1.0\linewidth]{comaxy-fil1.eps} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=1.0\linewidth]{comaxy-fil2.eps} \caption{Sky map of the Coma cluster. The top figure shows \emph{sheet 1} and the bottom figure shows \emph{sheet 2}. The black triangles represent the galaxies inside the virial radius of Coma. The orange points belong to one of the eight wedges we select in the $(x,y)$ plane. The background for the selected wedge is given by the green crosses. The two wedges adjacent to the selected wedge, gray diamonds, are excluded from the analysis. In the selected wedge, we identify a sheet that is represented by the red circles. The blue squares correspond to the total overdensity we find in the wedge, with the method described in the text.} \label{fig:Comaxy} \end{minipage} \end{figure} Fig.~\ref{fig:Comaxy} shows the Coma cluster and its environment up to 18 degrees from the cluster centre. The number of galaxies with spectroscopically measured redshifts within $2.5\,$Mpc, which is roughly the virial radius of Coma, is 748. These galaxies are indicated as black triangles. The sheets are shown as red circles. The upper panel refers to \emph{sheet 1}, which contains 51 galaxies. The bottom panel refers to \emph{sheet 2}, which is more extended and contains 228 galaxies. In Fig.~\ref{fig:Comav}, we show the sheets in velocity space. They both appear as inclined straight lines. \emph{Sheet 1} extends from $\approx7\,$Mpc to $\approx14\,$Mpc; as its velocities are negative, this sheet lies between us and Coma. \emph{Sheet 2} extends from $\approx11\,$Mpc to $\approx22\,$Mpc; as its velocities are positive, this sheet lies beyond Coma. \begin{figure} \centering \begin{minipage}[b]{.5\textwidth} \includegraphics[width=1.0\linewidth]{comav-fil1.eps} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=1.0\linewidth]{comav-fil2.eps} \caption{Line of sight velocity $v_{\rm los}$ as a function of the projected distance $R$ from the centre of Coma. The velocities are scaled by the velocity of Coma, $v_{\rm Coma}=4000\,$km/s. The top figure shows \emph{sheet 1} and the bottom figure shows \emph{sheet 2}. \emph{Upper panels}: The background in the analysis is represented by the green crosses. The black triangles are all the galaxies within $r=2.5$ Mpc.
\emph{Bottom panels}: The orange points represent the signal, i.e. the selected wedge. The blue points correspond to the overdensity. The almost straight inclined lines are shown as red circles. We identify these filamentary-like structures as sheets.} \label{fig:Comav} \end{minipage} \end{figure} As we did for the cosmological simulation, we removed by hand the collections of galaxies which appear as horizontal groups in $(R,v_{\rm los})$-space. For example, in the case of \emph{sheet 1} in the upper panel of Fig.~\ref{fig:Comav}, we define the sheet by including only the inclined pattern, therefore excluding the horizontal part of the strip. \begin{figure} \centering \begin{minipage}[b]{.5\textwidth} \includegraphics[width=1.1\linewidth]{scatterplotSHEET1.eps} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=1.1\linewidth]{scatterplotSHEET2.eps} \caption{Result of the MCMC applied to the two sheets found outside the Coma cluster. The top figure refers to \emph{sheet 1} and the bottom figure refers to \emph{sheet 2}. \emph{Central panels}: Scatter plot of the two free parameters (${\rm cos}(\alpha),M_{\rm vir}$) obtained by the MCMC. \emph{Upper panels}: Probability density function of the virial mass. \emph{Right panels}: Probability density function of the viewing angle. The initial number of points is 5000 and we remove the points of burn-in.} \label{fig:ScatterplotCOMA} \end{minipage} \end{figure} We then fit the line-of-sight velocity profiles of the two sheets with equation~(\ref{eqn:vellos3}). We set $\Delta=93.8$ and $H=73\,\rm km/(s\,Mpc)$, as for the cosmological simulation. In Fig.~\ref{fig:ScatterplotCOMA} we show the scatter plot on the plane of the two parameters $({\rm cos}(\alpha),\rm M_{\rm vir})$, and the one-dimensional probability distribution functions of the virial mass and the orientation angle, for both sheets. The angle $\alpha$ can be very different for different sheets, as it only depends on the position of the structure in 3D. By contrast, we expect the results on the cluster mass $M_{\rm vir}$ to be identical, as they refer to the same cluster. \begin{figure} \centering \includegraphics[width=\hsize]{scatterplot2SHEETS.eps} \caption{The probability density function of the Coma virial mass, derived through the sheet technique. The distribution coming from \emph{sheet 2} is the blue one, slightly to the left. The violet, slightly narrower distribution corresponds to \emph{sheet 1}. The best mass estimate based on these measurements is $M_{\rm vir}=(9.2\pm2.4)\times10^{14}\,\rm M_\odot$.} \label{fig:masscoma} \vspace{-0.2in} \end{figure} In Fig.~\ref{fig:masscoma}, we overplot the probability distributions for the virial mass of Coma from the analysis of the two sheets. The two probability distributions are very similar. The mean value of the virial mass is $M_{\rm vir}=(9.7\pm3.6)\times10^{14}\,\rm M_\odot$ for \emph{sheet 1} and $M_{\rm vir}=(8.7\pm3.3)\times10^{14}\,\rm M_\odot$ for \emph{sheet 2}. When applying equation~(\ref{eqn:vmass}), these values give virial radii of $r_{\rm vir}=2.5\,$Mpc and $r_{\rm vir}=2.4\,$Mpc, respectively. The best mass estimate based on the combination of these measurements is $M_{\rm vir}=(9.2\pm2.4)\times10^{14}\,\rm M_\odot$.
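The conversion from virial mass to virial radius is easy to verify. The sketch below assumes that equation~(\ref{eqn:vmass}), which appears earlier in the paper, is the standard relation $M_{\rm vir}=\frac{4}{3}\pi\Delta\rho_{\rm c}r_{\rm vir}^{3}$, with $\Delta=93.8$ and $H=73\,\rm km/(s\,Mpc)$ as adopted above.
\begin{verbatim}
import numpy as np

G = 4.301e-9    # gravitational constant in Mpc (km/s)^2 / M_sun
H = 73.0        # Hubble parameter in km/s/Mpc
Delta = 93.8    # overdensity relative to the critical density

rho_c = 3.0 * H**2 / (8.0 * np.pi * G)   # ~1.5e11 M_sun / Mpc^3

def r_vir(m_vir):
    """Virial radius in Mpc for a virial mass in M_sun,
    assuming M_vir = (4/3) pi Delta rho_c r_vir^3."""
    return (3.0 * m_vir / (4.0 * np.pi * Delta * rho_c)) ** (1.0 / 3.0)

print(r_vir(9.7e14), r_vir(8.7e14))   # -> 2.56 and 2.46 Mpc
\end{verbatim}
consistent with the $r_{\rm vir}\simeq2.5$ and $2.4\,$Mpc quoted above.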
Our result is in good agreement with previous estimates of the Coma cluster mass. \cite{Hug89} obtain a virial mass $M_{\rm vir}=(13\pm2)\times10^{14}\,M_{\odot}$ from their X-ray study. From a galaxy kinematic analysis, \cite{LM03} report a virial mass $M_{100}=(15\pm4.5)\times10^{14}\,M_{\odot}$, corresponding to a density contrast $\Delta=100$, which is very close to our value. \cite{Geller99} find a mass $M_{200}=15\times10^{14}\,M_{\odot}$, corresponding to a density contrast $\Delta=200$. The weak lensing mass estimate of \cite{kubo07} gives $M_{200}=2.7^{+3.6}_{-1.9}\times10^{15}\,M_{\odot}$. The mean value for the cosine of the orientation angle is ${\rm cos}(\alpha)=0.36\pm0.01$, corresponding to $\alpha=-1.2\pm0.01$~rad, for \emph{sheet 1}, and ${\rm cos}(\alpha)=0.64\pm0.02$, corresponding to $\alpha=0.87\pm0.02$~rad, for \emph{sheet 2}. These results are affected by a systematic error of $50\%$ on the mass and $2.5\%$ on the angle, as discussed in Section~\ref{sec42}. The value obtained for the orientation $\alpha$ of a sheet corresponds to the mean angle of all the galaxies belonging to the sheet. Knowing $\alpha$, we can calculate the corresponding coordinate along the line of sight for all the galaxies, and therefore reconstruct the three-dimensional map of the two structures, as shown in Fig.~\ref{fig:COMAsheets}. The sheets we find lie on two different planes. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{SHEETsComa.eps} \caption{The sheets we found outside the Coma cluster, in three-dimensional space. The blue and the green points represent the galaxies belonging to \emph{sheet 1} and \emph{sheet 2}, respectively. The Coma cluster is indicated as a red sphere centered at $(x,y,z)=(0,0,0)$.} \label{fig:COMAsheets} \end{center} \end{figure} \section{Summary and Conclusion} The main purpose of this paper is to propose and test a new method for estimating cluster masses within the virial radius. The idea is to infer the mass only from the kinematical data of structures in the cluster outskirts. In the hierarchical scenario of structure formation, galaxy clusters are located at the intersections of filaments and sheets. The motion of such non-virialized structures is thus affected by the presence of the nearest massive cluster. We found that modeling the kinematic data of these objects leads to an estimate of the neighbouring cluster mass. The gravitational effect of the cluster mass is to perturb the pure Hubble motion, leading to a deceleration. Therefore, the measured departure from the Hubble flow of these structures allows us to infer the virial mass of the cluster. We have developed a technique to detect the presence of structures outside galaxy clusters by looking at the velocity space. We underline that the proposed technique does not aim to map all the objects around clusters, but is limited to finding those structures that are suitable for the virial cluster mass estimation. Our mass estimation method does not require a dynamical analysis of the virialized region of the cluster; therefore, it is not based on the hypothesis of dynamical equilibrium. However, our method relies on the assumption of spherical symmetry of the system, since we assume a radial velocity profile. Moreover, our method is biased by fixing the phenomenological fit to the radial infall velocity profile from simulations as a universal infall profile. From a practical point of view, this technique requires gathering galaxy positions and velocities in the outskirts of galaxy clusters, very far away from the cluster centre.
A fairly dense sample of redshifts is needed in order to identify the possible presence of structures above the background. Once the structures are detected, the fit to their line-of-sight velocity profiles has to be performed. The fitting procedure involves only two free parameters: the virial mass of the cluster and the orientation angle of the structure in 3D. This makes the virial cluster mass relatively easy to estimate. We first analysed a cosmological simulation, in order to test both the technique for identifying structures outside clusters and the method for extracting the cluster mass. We find one sheet outside the selected simulated halo, and we infer the correct halo mass and sheet orientation angle within the errors. We then applied our method to the Coma cluster. We analysed the SDSS data of projected distances and velocities up to $20\,$Mpc from the Coma centre. Our work led to the detection of two galactic sheets in the environment of the Coma cluster. The estimation of the Coma cluster mass through the analysis of the two sheets gives $M_{\rm vir}=(9.2\pm2.4)\times10^{14}\,\rm M_\odot$. This value is in agreement with previous results from the standard methods. We note, however, that our method tends to underestimate the Coma virial mass compared with previous measurements, which assume either equilibrium or sphericity. In the near future, we aim to apply our technique to other surveys where redshifts at very large distances from the cluster centres are available. If a large number of sheets and filaments are found, our method could also provide a tool to deproject the spatial distribution of galaxies outside galaxy clusters into three-dimensional space. \vspace{-0.05in} \section{Acknowledgements} The authors thank Stefan Gottloeber, who kindly agreed for one of the CLUES simulations (http://www.clues-project.org/simulations.html) to be used in the paper. The simulation has been performed at the Leibniz Rechenzentrum (LRZ), Munich. The Dark Cosmology Centre is funded by the Danish National Research Foundation.
\section{Introduction} \label{intsec} NASA's {\it Kepler} mission has been an enormous success, discovering over 3500 planet candidates to date \cp{Borucki2011,Batalha2012}. Among the mission's many accomplishments, one of the most revolutionary is that for the first time we have a robust determination of the relative abundance of different sizes of planets stretching from Earth-sized all the way up to the largest hot Jupiters \cp{Howard2011a,Fressin2013,Petigura2013}. In particular, {\it Kepler} has discovered an abundant new population of $\sim$3 $R_{\mathrm{\oplus}}$ planets \cp{Fressin2013,Petigura2013}. Although smaller than Neptune, these planets are large enough that they must have substantial hydrogen and helium (hereafter H/He) envelopes to explain their radii. Such planets are unlike anything found in our own Solar System, and fundamental questions about their structure and formation remain unanswered. Are these Neptune-like planets that formed beyond the snow-line and contain large amounts of volatile ices \cp{Rogers2011}, or are they scaled-up terrestrial worlds with H/He envelopes that formed close to their current orbits \cp{Hansen2013,Chiang2013}? In an attempt to address these questions, a great deal of effort has been invested in acquiring precise masses for a large number of these transiting planets. In recent years this has generated a much fuller understanding of the mass-radius relation, especially for sub-Neptune and super-Earth sized planets \cp{Weiss2013}. In particular, there are now several multi-planet {\it Kepler} systems like Kepler-11 with masses determined from Transit Timing Variations (TTVs) \cp[e.g.][]{Lissauer2011a,Carter2012,Cochran2011,Lissauer2013}. Although rare, such systems are incredibly valuable because with both a mass and a radius we can estimate a planet's bulk composition using models of interior structure and thermal evolution \cp[e.g.][]{Rogers2010a, Nettelmann2011, Miller2011, Lopez2012, Valencia2013}. Thus far, efforts have been focused on individually determining compositions for this handful of planets, whose paucity stands in stark contrast to the over 3500 {\it Kepler} candidates with only measured radii. Unfortunately, the vast majority of these candidates are in dynamically inactive systems without strong TTVs or around distant stars too faint for radial velocity measurements. Moreover, even with precise masses and radii there are inherent degeneracies which limit one's ability to constrain the bulk compositions of super-Earth sized planets. For $\sim$1-2 $R_{\mathrm{\oplus}}$ planets the densities of water, silicate rocks, and iron (i.e. $\sim$1-10 $\mathrm{g \, cm^{-3}}$) are similar enough that it is impossible to uniquely constrain the relative abundance of these components \cp{Valencia2007,Rogers2010a}. To some extent models of planet collisions can set upper limits on the maximum iron or water mass fractions that are physically achievable \cp{Marcus2009,Marcus2010}, but for a given planet this still allows a wide range of internal compositions. Fortunately, models are still able to set clear and useful constraints on composition. In particular, thermal evolution models can set robust constraints on the fraction of a planet's mass in a H/He envelope. Due to its significantly lower density, even a relatively minor amount of H/He (e.g., $\sim$1\% of total planet mass) has a large impact on planetary radius.
For sub-Neptune sized planets ($\sim$3-4 $R_{\mathrm{\oplus}}$), such an envelope will dominate a planet's size regardless of the abundance of other elements. Moreover, for sub-Neptune sized planets at fixed bulk composition, theoretical mass-radius curves are remarkably flat; i.e., planets with a given H/He abundance have very similar sizes regardless of their mass \cp{Lopez2012}. As a result, there is a remarkably tight relationship between planetary radius and H/He envelope fraction that is independent of planet mass. Critically, this opens up the hope of constraining compositions for the vast population of Neptune and sub-Neptune sized {\it Kepler} candidates without measured masses. This is what we begin to explore in this paper. Whenever possible it is still preferable to obtain a well-measured mass. Planet mass is critical for understanding how volatile-rich planets accrete their initial H/He envelope \cp{Bodenheimer2000,Ikoma2012} and whether they can retain it against X-ray and EUV driven photo-evaporation \cp{Lopez2012,Lopez2013,Owen2012,Owen2013}. Nevertheless, for systems of sub-Neptunes like Kepler-11, even factor of $\sim$2 uncertainties on planet masses are sufficient to tightly constrain composition with precise radii \cp{Lissauer2013}. This fact means that instead of only examining the {\it radius} distribution of {\it Kepler} candidates, we can begin thinking about a {\it composition} distribution. \section{Models} \label{ModelSec} In order to understand how planetary radius relates to planet mass and composition, it is necessary to fully model how a planet cools and contracts due to thermal evolution. For this work, we have used the thermal evolution models presented in \ct{Lopez2012}, where additional model details can be found. Similar models are frequently used to track the evolution of sub-Neptunes and hot Jupiters \cp[e.g.,][]{Miller2011,Nettelmann2011}. Unlike \ct{Lopez2012} and \ct{Lopez2013}, here we do not consider the effects of photo-evaporation. Although photo-evaporation can have a large impact on the composition of a planet \cp[e.g.][]{Baraffe2006,Hubbard2007b,Lopez2012,Owen2012}, the effect on the thermal state of the interior is relatively minor \cp{Lopez2013}. Here we are primarily interested in the relationship between radius and composition as controlled by thermal evolution; as a result, the effects of photo-evaporation can be ignored. In essence, present-day composition determines the radius, but that composition may have been strongly affected by formation and photo-evaporation. At a given age, a model is defined by the mass of its heavy element core, the mass of its H/He envelope, the amount of incident radiation it receives, and the internal specific entropy of its H/He envelope. As a default model, we assume an isothermal rock/iron core with an Earth-like 2:1 rock/iron ratio, using the ANEOS olivine \cp{Thompson1990} and SESAME 2140 Fe \cp{Lyon1992} equations of state (EOS). When determining composition error bars for observed planets, however, we varied this iron fraction from pure rock to the maximum possible iron fraction from the impact models of \ct{Marcus2010}. For the H/He envelope we assume a fully adiabatic interior using the \ct{Saumon1995} EOS. In addition, we consider the possibility of water-worlds and three-component models using the H2O-REOS for water \cp{Nettelmann2008}. Finally, atop the H/He envelope is a relatively small radiative atmosphere, which we assume is isothermal at the equilibrium temperature.
We define a planet's radius at 20 mbar, appropriate for the slant viewing geometry of optical transits \cp{Hubbard2001}. In order to quantitatively evaluate the cooling and contraction of the H/He envelope, we use a grid of model atmospheres over a range of surface gravities and intrinsic fluxes. These grids relate the surface gravity and internal specific entropy to the intrinsic flux emitted for a given model. These one-dimensional radiative-convective models are computed for solar metallicity and for 50$\times$ solar metallicity enhanced opacity atmospheres using the methods described in \ct{Fortney2007} and \ct{Nettelmann2011}. These atmosphere models are fully non-gray, i.e. wavelength dependent radiative transfer is performed rather than simply assuming a single infrared opacity. The atmospheres of Neptune and sub-Neptune sized planets might be significantly enhanced in metals \cp{Fortney2013} or host extended clouds that greatly enhance atmospheric opacity \cp{Morley2013}. Therefore, our two atmosphere grids are a way to make a simplified first estimate of the role of enhanced opacity in planetary thermal evolution. For all runs we use the \ct{Saumon1995} H/He EOS for the envelope. At very early times and very low masses, the models reach gravities beyond the edge of our cooling grid. In such cases we logarithmically extrapolate the intrinsic temperature $T_{\mathrm{int}}$ as a function of gravity. This does not significantly affect our results, however, as the dependence of $T_{\mathrm{int}}$ on gravity is slight and the models are only at such low gravities in the first few Myr. Finally, we include heating from radioactive decay in the rock/iron core and the delay in cooling due to the core's heat capacity. In order to correctly determine the mass-radius-composition relationship, it is vital to include these thermal evolution effects, since they will significantly delay cooling and contraction, particularly for planets less than $\sim$5 $M_{\mathrm{\oplus}}$. As with previous models, we assume that planets initially form with a large initial entropy according to the traditional ``Hot-Start'' model \cp{Fortney2007,Marley2007}. Specifically, we start our models at an age of 1 Myr with a large initial entropy of 10 $k_{\mathrm{b}} \, \mathrm{baryon}^{-1}$. This assumption does not significantly affect any of our results, since hot-start and cold-start models are indistinguishable by the time planets are $\sim$100 Myr old \cp{Marley2007,Lopez2012}. Moreover, \ct{Mordasini2013} recently showed that for planets less massive than Jupiter, gravitational heating due to the settling of heavy elements in the envelope can erase any difference between hot and cold starts. For low-mass planets, the hot-start assumption results in extremely large initial radii $\gtrsim$10 $R_{\mathrm{\oplus}}$. However, as we explore in Section 3.2, such models cool extremely rapidly, such that significant contraction has already occurred within several Myr. In general we present results at ages $>$10 Myr, when our results are insensitive to the initial choice of entropy. \section{A Mass Radius Parameter Study} \label{studysec} \begin{figure}[h!] \begin{center} \includegraphics[width=3.5in,height=6.0in]{Study_MvR_10_21_2013.eps} \end{center} \caption{Here we show model mass-radius relations from 1-20 $M_{\mathrm{\oplus}}$ and how these depend on composition, irradiation, and age, indicated by the colors. Solid lines correspond to enhanced opacity models, while dotted lines correspond to solar metallicity.
The dashed rust-colored lines show the size of bare rocky planets with Earth-like compositions. Our default model is 5\% H/He, 5 Gyr old, and receives $\sim$100 $F_{\mathrm{\oplus}}$. In panel a) we vary the envelope fraction from 0.1-60\% H/He; this has by far the largest impact on planet size. Below $\sim$3\% H/He, radius increases modestly with mass due to the dominance of the rocky core. For larger envelopes, the mass-radius relation is remarkably flat, until for gas giant sized planets it decreases slightly with higher mass due to the increasing self-gravity of the envelope. In panel b) we vary the incident flux a planet receives from 1-1000 $F_{\mathrm{\oplus}}$. Despite varying the irradiation by 4 orders of magnitude, the radius never changes by more than $\sim$30\%. Finally, in panel c) we show a time evolution from 10 Myr to 10 Gyr. At early times low-mass planets are larger than higher-mass planets due to their lower gravities. However, these low-mass planets are able to cool more rapidly, which gradually flattens the mass-radius relation. \label{studyfig}} \end{figure} Planetary radius is an invaluable tool in understanding the nature of low-mass planets; however, without the aid of thermal evolution models like those used here, it can be quite difficult to interpret. In order to better understand the information contained in planet radii, we performed a detailed parameter study of our thermal evolution and structure models for sub-Neptune type planets with rock/iron cores and thick H/He envelopes. As part of this parameter study we ran over 1300 thermal evolution models varying planet mass, incident flux, envelope fraction, and atmospheric metallicity. We covered planets from 1-20 $M_{\mathrm{\oplus}}$, 0.1-1000 $F_{\mathrm{\oplus}}$, and 0.01-60\% H/He, for both solar metallicity and enhanced opacity models. We then recorded planet radius at every age from 10 Myr to 10 Gyr. The results of this study are summarized in Figure \ref{studyfig} and Tables 2-7. Examining Figure \ref{studyfig}, it is immediately clear that iso-composition mass-radius curves are in fact remarkably flat for sub-Neptune or larger planets, at least once they are a few Gyr old. In each panel, we show theoretical mass-radius curves while varying the envelope fraction, incident flux, and age of the model planets. For the parameters that are not varied in each panel, we use representative values of 5\% H/He, 100 $F_{\mathrm{\oplus}}$, and 5 Gyr. Turning to panel a), we see the enormous effect that varying the H/He envelope fraction has on planetary radius. By comparison, any other changes to incident flux, age, or internal structure are secondary. For planets with envelopes $\sim$0.1\% of their total mass, the mass-radius curve does increase slightly, from $\sim$1.5 $R_{\mathrm{\oplus}}$ at 1 $M_{\mathrm{\oplus}}$ to $\sim$2.5 $R_{\mathrm{\oplus}}$ at 20 $M_{\mathrm{\oplus}}$. For envelopes this insubstantial, a planet's size is still dominated by its rock/iron core, and so the mass-radius curves have a similar slope to the bare rock curve shown in Figure \ref{studyfig}. However, as we increase the envelope fraction, the mass-radius curves rapidly flatten, beginning at low masses, until by $\sim$3\% H/He the curves are almost completely flat. By comparison, panel b) in Figure \ref{studyfig} shows the much more modest effect of varying the incident flux.
More irradiated planets tend to be slightly larger because they have larger scale heights in their atmospheres and because the irradiation alters the radiative transfer through their atmospheres, slowing their contraction \cp{Fortney2007}. Nonetheless, despite varying the incident flux by four orders of magnitude, planet radii vary by less than $\sim$30\%. Finally, panel c) shows how these mass-radius curves evolve over time. At early times lower mass planets are significantly larger than higher mass planets due to their similarly large internal energies and lower gravities. Over time, however, these low mass planets are able to cool more rapidly than their more massive relatives, which gradually flattens the mass-radius curves. By the time the planets are $\sim$1 Gyr old we see the characteristically flat mass-radius curves for H/He-rich planets. \begin{figure}[h!] \begin{center} \includegraphics[width=3.0in,height=8.3in]{PowerLaw_MvR_9_15_2013.eps} \end{center} \caption{Four panels showing how the radius of the H/He envelope $R_{\mathrm{env}}=R_{\mathrm{p}}-R_{\mathrm{core}}-R_{\mathrm{atm}}$ varies with planet mass, envelope mass fraction, incident flux, and planet age for representative values. Red dotted lines correspond to solar metallicity atmospheres, while blue dashed lines correspond to enhanced opacity. Solid lines indicate power-law fits as described in equation (\ref{powerlaweq}). Here we use default values of 5 $M_{\mathrm{\oplus}}$, 100 $F_{\mathrm{\oplus}}$, 5\% H/He, and 5 Gyr. \label{powerfig}} \end{figure} \subsection{Describing Radius with Power-Laws} \label{powersec} A quick inspection of Figure \ref{studyfig} makes clear that not all of a planet's properties have an equal impact on planet size. Planet mass and incident flux have only a modest impact on planet size, while planet age has a larger impact, particularly at younger ages. However, by far the largest determinant of a planet's size is the fraction of its mass in a H/He envelope. One way to quantify the relative importance of composition is to construct analytic fits for radius as a function of planet mass $M_{\mathrm{p}}$, envelope fraction $f_{\mathrm{env}}$, incident flux $F_{\mathrm{p}}$, and age. In \ct{Lopez2013} we performed a similar analysis examining planets' vulnerability to photo-evaporative mass loss. Fortunately, the relationships between radius and each of these parameters are all reasonably well described by power-laws, and the effects of each variable are relatively independent. As a result, we can do a reasonably good job of describing the results of our full parameter study with a set of four independent power-laws. The one caveat is that we do not fit for the total planet radius $R_{\mathrm{p}}$, but instead for the radius of the H/He envelope $R_{\mathrm{env}} \approx R_{\mathrm{p}}-R_{\mathrm{core}}$, where $R_{\mathrm{core}}$ is the size of the rock/iron core. We do this because as $f_{\mathrm{env}}$ approaches zero, the planet radius does not approach zero but instead asymptotes to $R_{\mathrm{core}}$. To first order, however, the rock/iron equation of state is very incompressible, and so we can approximate $R_{\mathrm{core}}$ with the mass-radius curve of an envelope-free rocky planet. Assuming an Earth-like composition, $R_{\mathrm{core}}$ is described by equation (\ref{rockeq}) to within $\sim$2\%. If we also allow the iron fraction of the core to vary, then this error rises to $\sim$10\%, but for the qualitative analysis we are attempting here such errors are unimportant.
$M_{\mathrm{core}}$ in equation (\ref{rockeq}) refers to the mass of the rock/iron core, which for sub-Neptune sized planets is approximately the same as the total planet mass $M_{\mathrm{p}}$. \begin{equation}\label{rockeq} R_{\mathrm{core}} = \left(\frac{M_{\mathrm{core}}}{M_{\mathrm{\oplus}}}\right)^{0.25} \approx \left(\frac{M_{\mathrm{p}}}{M_{\mathrm{\oplus}}}\right)^{0.25} \end{equation} Likewise, we must make a small correction to account for the size of the radiative upper atmosphere. To first approximation, this atmosphere is isothermal at the planet's equilibrium temperature $T_{\mathrm{eq}}$. For sub-Neptune sized planets at several Gyr, the radiative-convective boundary is typically at $\sim$100-1000 bar. For transiting planets the broadband optical radius is typically at $\sim$20 mbar, or $\approx$8-10 scale heights higher. Thus the size of the radiative atmosphere is approximately given by equation (\ref{atmeq}), where $g$ is a planet's gravity and $\mu_{\mathrm{H/He}}$ is the mean molecular weight. This correction is typically quite small, $\sim$0.1 $R_{\mathrm{\oplus}}$, except at the very highest levels of irradiation. \begin{equation}\label{atmeq} R_{\mathrm{atm}} \approx \log{\left(\frac{100 \, \mathrm{bar}}{20 \, \mathrm{mbar}}\right)} H \approx 9 \left(\frac{k_{\mathrm{b}} \, T_{\mathrm{eq}} }{g \, \mu_{\mathrm{H/He}}}\right) \end{equation} With equations (\ref{rockeq}) and (\ref{atmeq}) in place, we can now fit for $R_{\mathrm{env}}$, and then simply add $R_{\mathrm{core}}$ and $R_{\mathrm{atm}}$ to get the total radius. The results of these fits are summarized in Figure \ref{powerfig} and equation (\ref{powerlaweq}). Figure \ref{powerfig} compares our power-law fits to the results of our full models for representative values of $M_{\mathrm{p}}$, $f_{\mathrm{env}}$, $F_{\mathrm{p}}$, and age. The error bars in each panel show the 1$\sigma$ scatter about the power-law fits for the full suite of models in our parameter study. Remarkably, this simple power-law description does a reasonable job of reproducing the results of our full model. In general, the analytic formulation in equation (\ref{powerlaweq}) matches our full models to within $\sim$0.1 dex. For the age evolution, we fit separate power-laws for the solar metallicity and enhanced opacity models. The solar metallicity models cool more rapidly initially. As a result, they are already relatively cold by $\sim$100 Myr, and so the subsequent contraction is slower. However, the enhanced opacity models must eventually cool, and by several Gyr any differences are erased. We fit power-laws only to the evolution after 100 Myr. For solar metallicity $R_{\mathrm{env}} \sim t^{-0.11}$, while for enhanced opacity $R_{\mathrm{env}} \sim t^{-0.18}$. Equation (\ref{powerlaweq}) shows the results for the enhanced opacity models. \begin{equation}\label{powerlaweq} \begin{split} R_{\mathrm{env}} = R_{\mathrm{p}}-R_{\mathrm{core}}-R_{\mathrm{atm}} = 2.06 \, R_{\mathrm{\oplus}} \left(\frac{M_{\mathrm{p}}}{M_{\mathrm{\oplus}}}\right)^{-0.21} \\ \times \left(\frac{f_{\mathrm{env}}}{5\%}\right)^{0.59} \left(\frac{F_{\mathrm{p}}}{F_{\mathrm{\oplus}}}\right)^{0.044} \left(\frac{age}{5 \, \mathrm{Gyr}}\right)^{-0.18} \end{split} \end{equation}
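For quick estimates, the fits in equations (\ref{rockeq}) and (\ref{powerlaweq}) are simple enough to evaluate directly. The short Python sketch below implements the enhanced opacity fit, neglecting the small $R_{\mathrm{atm}}$ term of equation (\ref{atmeq}); it is a convenience for readers, not a substitute for the full models.
\begin{verbatim}
def radius_core(mp):
    """Eq. (rockeq): Earth-like rock/iron core.
    mp in Earth masses; returns Earth radii."""
    return mp ** 0.25

def radius_envelope(mp, fenv, flux, age):
    """Eq. (powerlaweq), enhanced opacity fit.
    fenv in percent, flux in F_Earth, age in Gyr."""
    return (2.06 * mp ** -0.21 * (fenv / 5.0) ** 0.59
            * flux ** 0.044 * (age / 5.0) ** -0.18)

def planet_radius(mp, fenv, flux, age):
    """Total radius in Earth radii, neglecting the
    ~0.1 R_Earth radiative atmosphere term R_atm."""
    return radius_core(mp) + radius_envelope(mp, fenv, flux, age)

# Example: 5 M_Earth, 5% H/He, 100 F_Earth, 5 Gyr
# planet_radius(5.0, 5.0, 100.0, 5.0) -> ~3.3 R_Earth
\end{verbatim}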
It is important to note, however, that these fits are only meant to be a rough approximation of the full models summarized in Figure \ref{studyfig} and Tables 2-6. These fits are done purely to help understand the qualitative behavior of our thermal evolution models, not to be used in place of the full models. Note also that equation (\ref{powerlaweq}) gives the fit for the enhanced opacity models; at late times the solar metallicity models have a slightly shallower dependence on age, due to their more rapid cooling at early ages. Nonetheless, equations (\ref{rockeq}) and (\ref{powerlaweq}) do make several things quite clear. First of all, we can now quantify the importance of the H/He envelope fraction; doubling $f_{\mathrm{env}}$ has an order of magnitude larger effect on $R_{\mathrm{p}}$ than doubling $F_{\mathrm{p}}$, and more than twice as large an effect as doubling the age. We can also now see how flat the mass-radius curves are. Although $R_{\mathrm{env}}$ decreases slightly with mass, this is almost exactly balanced by the increase in $R_{\mathrm{core}}$ with increasing mass. So long as $R_{\mathrm{env}} \gtrsim R_{\mathrm{core}}$, these terms will roughly balance and the mass-radius curves will be quite flat. This typically happens for planets that are $\gtrsim$1\% H/He or $\gtrsim$2.5 $R_{\mathrm{\oplus}}$. Thus for most of {\it Kepler}'s Neptune and sub-Neptune sized planets, radius is quite independent of planet mass and is instead a direct measure of the bulk H/He envelope fraction. \begin{figure}[h!] \begin{center} \includegraphics[width=3.5in,height=2.5in]{Plot_evolution_5me_1.eps} \end{center} \caption{Here we show the planet luminosity budget vs. time for a representative example thermal evolution model with 1\% H/He on a 5 $M_{\mathrm{\oplus}}$ planet, receiving 100 $F_{\mathrm{\oplus}}$ from a Sun-like star. The black solid line shows the overall cooling rate, while the dotted and dashed lines show the cooling rate of the rock/iron core and the heating from radioactive decay, respectively. The solid gray line shows the cooling rate if we ignore radioactivity and the need to cool the core. This clearly demonstrates the need to include these terms when calculating the thermal evolution of sub-Neptune like planets.\label{evolfig}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=3.5in,height=2.5in]{entropy_cooling_bymass.eps} \end{center} \caption{Shown is an example calculation in which all models start at the same young age and initial specific entropy. Internal specific entropy in the H/He envelope vs. time is shown for various planet masses. Solid lines show enhanced opacity, while dotted lines show solar metallicity. Planets start with large initial entropy, then rapidly cool. By 10-100 Myr, the models are insensitive to the choice of initial entropy. Low-mass planets experience more rapid cooling, leading to the flat mass-radius curves seen in Figure \ref{studyfig}. Solar metallicity models cool rapidly at young ages and then experience more gradual cooling, while enhanced opacity models cool more steadily at all ages.\label{entfig}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=3.5in,height=2.5in]{TintVsMass.eps} \end{center} \caption{Intrinsic temperature $T_{\mathrm{int}}$, i.e., the equivalent blackbody temperature of a planet's net outgoing flux, vs. planet mass for 5 Gyr old planets receiving 100 $F_{\mathrm{\oplus}}$ with enhanced opacity atmospheres. Colors show different H/He envelope fractions. Clearly, by several Gyr lower-mass planets are significantly colder than higher mass planets. This demonstrates the need to perform full thermal evolution calculations.
Simply assuming a fixed luminosity per unit mass will greatly overestimate the size of planets below $\sim5$ $M_{\mathrm{\oplus}}$. \label{tintfig}} \end{figure} \subsection{Why is the Mass-Radius Relation Flat?} \label{flatsec} One of the key features of our thermal evolution and structure models is the relative flatness of the mass-radius curves at fixed H/He envelope fraction. In Sections \ref{studysec} and \ref{powersec}, we showed that for planets with $\gtrsim$1\% H/He, planet size is more or less independent of mass. Thus far, however, we have not explained the origin of this flatness. In fact, searching through the literature will turn up a wide range of mass-radius curves with very different behavior at low masses. Although all the models tend to agree above $\sim$10-20 $M_{\mathrm{\oplus}}$, there can be large disagreements below $\sim$5 $M_{\mathrm{\oplus}}$. In some cases, radius decreases with decreasing mass in much the same way as the Earth-like mass-radius curves in Figure \ref{studyfig}. In other cases, the radius increases to implausibly large sizes due to the planets' lower gravity \cp{Rogers2011}. Generally, these models face one of two limitations. Either they ignore the contributions of the rock/iron core to the thermal evolution, i.e., the need to cool the core and the heating from radioactive decay, or they do not perform an evolution calculation at all and instead use static structure models in which the internal energy of the planet is treated as a free parameter. For the Neptune and sub-Neptune sized planets that we are focusing on here, $\sim$90-99\% of a planet's mass is contained in the rock/iron core. As a result, ignoring the effects of that core on the thermal evolution will significantly underestimate a planet's cooling timescale, and therefore its radius. This is a common simplification in thermal evolution models, like our own, that were originally developed to model massive gas giants, where the core has a negligible impact on the overall thermal evolution. The importance of these effects, however, is clearly demonstrated in Figure \ref{evolfig}, which shows the various contributions to the overall thermal evolution for a typical 5 $M_{\mathrm{\oplus}}$, 1\% H/He sub-Neptune sized planet. At every age, the cooling luminosity of the planet is dominated by these core cooling and heating terms. At early times, the thermal evolution is largely regulated by the need to cool the rock/iron core, with its relatively large heat capacity \cp{Alfe2002,Guillot1995}. At ages $\gtrsim$1 Gyr, radioactive heating also becomes comparable to the core cooling rate, mostly due to the decay of $^{40}$K \cp{Anders1989}. On the other hand, ignoring these terms leads to a planet that is $\sim$30-100$\times$ less luminous at late times, and underestimates the final radius by $\sim$0.5 $R_{\mathrm{\oplus}}$. Some models \cp[e.g.,][]{Mordasini2012c} make the compromise of including radiogenic heating but not the effect of the core's heat capacity. This is much better than ignoring the core altogether, but as shown in Figure \ref{evolfig} both terms are important, and this will lead to underestimating the radii of sub-Neptune planets, especially at ages $\lesssim$1 Gyr.
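Schematically, the luminosity budget tracked by such an evolution calculation can be written as (a summary of the terms discussed above, rather than the exact expressions implemented in our models) \begin{equation} L_{\mathrm{int}} = -\frac{dE_{\mathrm{env}}}{dt} - c_{v}\,M_{\mathrm{core}}\,\frac{dT_{\mathrm{core}}}{dt} + L_{\mathrm{radio}}, \end{equation} where the three terms on the right-hand side are the cooling of the H/He envelope, the release of the core's heat content, and the radiogenic heating (dominated at late times by the decay of $^{40}$K). Dropping either of the last two terms removes most of the luminosity at late ages, which is precisely the failure mode described above.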
On the other hand, it is also quite common to use static internal structure models which do not track a planet's thermal evolution, but instead assume a fixed specific luminosity (i.e., power per unit mass), which is then treated as a free variable \cp{Rogers2011}. This is a common simplification made when a small H/He envelope is added to detailed models of terrestrial planets, for which the cooling history is harder to determine and has little impact on overall planet size \cp{Valencia2007}. When calculating possible compositions for a single planet \cp[e.g.,][]{Rogers2010b}, this is fine, so long as the resulting uncertainty in the internal energy is accounted for. However, when plotting iso-composition mass-radius curves, this leads to an unphysical upturn at low masses. Low-mass planets of course have lower surface gravities, so assigning them the same specific luminosities as more massive planets will significantly inflate their radii. In reality, though, low mass planets are able to cool much more quickly. Partly this is due to their low gravities, which slightly increase the rate of radiative transfer through their atmospheres \cp{Fortney2007}. Mostly, however, it is simply due to the fact that lower mass planets have similar radiating surface areas to slightly higher mass planets, but significantly smaller total internal energies. As a result, even if different mass planets start with similar specific internal energies, low mass planets will more quickly deplete their thermal energy reserves, leading to much shorter cooling times. This fact is summarized in Figures \ref{entfig} and \ref{tintfig}. Figure \ref{entfig} shows various cooling curves for the internal entropy in the H/He envelope. Planets start with large initial entropy, and therefore large radii. Models rapidly cool for the first few Myr, until the cooling timescale is comparable to the age. As described above, all things being equal, less massive planets will tend to have shorter cooling timescales due to their smaller energy reservoirs. As a result, lower mass models tend to be colder at all ages. This counterbalances the fact that lower mass planets have lower gravities, and produces the flat mass-radius curves seen in Figure \ref{studyfig}. This result is insensitive to our choice of initial entropy for ages $\gtrsim$10 Myr. As in Figure \ref{powerfig}, solar metallicity models cool rapidly for their first $\sim$10 Myr and then contract more slowly. The enhanced opacity models, on the other hand, cool more steadily throughout their history. Eventually, however, the enhanced opacity models must also cool and contract, and by several Gyr they have largely erased any differences with the solar models. At the same time, there is a slight change in the cooling rates due to the decay of $^{40}$K. Figure \ref{tintfig} shows the end result of this evolution. Here we show planetary intrinsic temperature $T_{\mathrm{int}}$ versus planet mass for various H/He envelope fractions for 5 Gyr old planets receiving 100 $F_{\mathrm{\oplus}}$. $T_{\mathrm{int}}$ is the equivalent blackbody temperature of the net radiation leaving a planet; effectively, it is the temperature the planet would have if the parent star were removed. As we can see, by 5 Gyr low mass planets are always significantly cooler than higher mass planets with the same compositions, regardless of H/He fraction or atmospheric metallicity. The result is that the lower gravities of lower mass planets are balanced out by their shorter cooling timescales, and we arrive at the flat mass-radius curves shown in Figure \ref{studyfig}. \section{The Mass-Composition Relation} \label{compsec} \begin{figure}[h] \begin{center} \includegraphics[width=3.5in,height=2.5in]{bigmassradius.eps} \end{center} \caption{Planetary radius vs.
mass for all $\sim$200 transiting planets with measured masses. Each planet is colored according to the fraction of its mass in a H/He envelope, assuming a water-free interior. Rust-colored open circles indicate potentially rocky planets. Points are sized according to the incident flux they receive from their parent stars, relative to $F_{\mathrm{\oplus}}$, the flux that the Earth receives from the Sun. For comparison, we include theoretical mass-radius relations for pure silicate rock, pure water, and pure H/He at 500 $F_{\mathrm{\oplus}}$. There is a very strong correlation between planetary radius and H/He envelope fraction, both of which are more weakly correlated with mass up to $\sim$100 $M_{\mathrm{\oplus}}$. \label{mrfig}} \end{figure} Using our thermal evolution and structure models, we calculated H/He envelope fractions for all $\sim$200 confirmed planets with well-determined masses, assuming a water-free interior. We excluded any planets which only have upper limits on mass or purely theoretical mass constraints. We used masses and radii from exoplanets.org \cp{Wright2011}, except where there are more recent values in the literature. For CoRoT-7b, the five inner Kepler-11 planets, and 55 Cancri e we used masses and radii from \ct{Hatzes2011}, \ct{Lissauer2013}, and \ct{Dragomir2013b}, respectively. We exclude confirmed planets with analytical TTV mass estimates from \ct{Xie2012} due to the degeneracy between planet mass and free eccentricity. For inflated hot Jupiters with radii larger than that of pure H/He, we simply assigned 100\% H/He, since such planets are beyond the scope of this work. Meanwhile, for potentially rocky planets like CoRoT-7b \cp{Leger2009,Queloz2009} and Kepler-10b \cp{Batalha2011}, we set strict upper limits on the size of any potential H/He envelope. Table 1 summarizes the results for 33 planets with measured masses $<100 \, M_{\mathrm{\oplus}}$ and radii $<12 \, R_{\mathrm{\oplus}}$. In order to calculate the uncertainty on these compositions we included the effects of 1$\sigma$ variations in the observed planet masses, radii, ages, and levels of irradiation. In addition, we included theoretical uncertainties on the core iron fraction, core heat capacity, atmospheric albedo, etc., as described in \ct{Lopez2012}. In general, uncertainties in the stellar radius, and therefore the planetary radius, are the dominant source of uncertainty. This is typically followed by the unknown iron fraction in the core, which is roughly equivalent to a $0.1$ $R_{\mathrm{\oplus}}$ uncertainty in radius for low-mass planets. Figure \ref{mrfig} plots the current measured mass-radius relation with 1$\sigma$ uncertainties for all confirmed transiting planets with measured masses up to $1000 \, M_{\mathrm{\oplus}}$ and radii up to $20 \, R_{\mathrm{\oplus}}$. The color of each point shows the H/He envelope fraction calculated by our models. Rust-colored open circles show potentially volatile-free rocky planets. Meanwhile, the size of each point corresponds to the incident flux that the planet receives from its parent star, relative to $F_{\mathrm{\oplus}}$, the incident flux that the Earth receives from the Sun. Finally, we include three theoretical iso-composition curves. The rust-colored curve shows pure silicate rock (specifically olivine). The dark blue curve corresponds to pure water worlds on a 10 day orbit around a 5 Gyr old Sun-like star; varying these details does not significantly change the curve.
Finally, the black curve corresponds to pure H/He hot Jupiters receiving 500 $F_{\mathrm{\oplus}}$ (i.e., 500 times the current incident flux that the Earth receives from the Sun) from a 5 Gyr old Sun-like star. Roughly speaking, this last curve forms the dividing line between the inflated and non-inflated hot Jupiters. Several features of the mass-radius relation are immediately apparent. As noted in \ct{Weiss2013}, there is a roughly power-law increase in radius from $\sim$1-100 $M_{\mathrm{\oplus}}$, above which radius saturates at approximately a Jupiter radius. Below $\sim$10 $M_{\mathrm{\oplus}}$ there is a particularly large scatter in radius, with planets ranging from the potentially rocky to sub-Neptune sized planets with $\sim$3\% H/He. For low-mass planets there is also an inverse correlation between radius and incident flux, which may be due to photo-evaporative loss of H/He \cp{Lopez2012, Owen2013}. Above $\sim 100 M_{\mathrm{\oplus}}$ we find the true gas giants, including the highly inflated hot Jupiters. Here the correlation with incident flux is the reverse of that at low mass, with the most irradiated planets being extremely inflated. It is unclear why there do not appear to be any super-inflated hot Jupiters below $\sim 100 M_{\mathrm{\oplus}}$; it is possible that such planets would be unstable to photo-evaporation or Roche-lobe overflow \cp{Jackson2010}, or have a high mass fraction of heavy elements \cp{Miller2011}. Turning to the compositions of these planets, it is immediately clear that H/He envelope fraction is strongly correlated with both planet mass and radius. Below $\sim 10$ $M_{\mathrm{\oplus}}$, planets range from potentially rocky super-Earth sized planets to sub-Neptunes with a few percent H/He envelopes. From $\sim 10-50$ $M_{\mathrm{\oplus}}$, we have the Neptunes and super-Neptunes with $\sim10-30$\% of their mass in the envelope. Finally, above $\sim 50$ $M_{\mathrm{\oplus}}$, planets transition to true gas giants, where both the mass and radius are completely dominated by gas accreted during formation. However, on closer inspection, where there is scatter in the mass-radius relationship it is the planet radius that correlates with composition. We argue here that planet radius is first and foremost a proxy for a planet's H/He inventory. The fact that both composition and radius correlate with mass is due to the fact that more massive planets are able to accrete more gas during formation. The radius saturates at $\sim$100 $M_{\mathrm{\oplus}}$ because planet size does not simply increase with increasing H/He mass but rather with increasing H/He mass {\it fraction}. As shown in Section \ref{studysec}, there is an approximately power-law relationship between the size of a planet's H/He envelope and the planet's H/He mass fraction. A 100 $M_{\mathrm{\oplus}}$ planet with a 10 $M_{\mathrm{\oplus}}$ core is already 90\% H/He; as a result, doubling the mass will not significantly increase the H/He envelope fraction or the radius. Although incredibly valuable, planetary radius is in some sense not a fundamental parameter of a planet. It changes as a planet evolves, and only through the aid of thermal evolution and structure models like those used here does it tell us about a planet's structure and composition. Fortunately, such models allow us to translate radius into an estimate of planet composition. \begin{figure}[h!] \begin{center} \includegraphics[width=3.5in,height=2.5in]{radius_comp_relation.eps} \end{center} \caption{H/He envelope fraction vs.
planet radius, for the $\sim$200 transiting planets shown in figure \ref{mrfig}. Here each planet is color-coded according to its mass. The grey shaded region shows the effect of varying the water abundance of the interior. Clearly there is a very tight correlation between size and envelope fraction, lending credence to our claim that radius can be used as a proxy for planetary composition.\label{rcfig}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=3.5in,height=2.5in]{mass_comp_relation.eps} \end{center} \caption{Similar to figure \ref{rcfig} but with H/He envelope fraction plotted against planetary mass, and color-coded by radius. Below $\sim$10 $M_{\mathrm{\oplus}}$ there is a mix of rocky planets, possible water worlds, and sub-Neptunes with a few percent H/He. From $\sim$10-100 $M_{\mathrm{\oplus}}$ there is a strong increase in both radii and H/He envelope fraction, transitioning from Neptune sized planets with $\sim$10\% H/He up to true gas giants that are almost entirely H/He. Above $\sim$100 $M_{\mathrm{\oplus}}$ we find the familiar hot Jupiters, many of which have large inflated radii. The dashed black line shows a toy model in which all planets have a 10 $M_{\mathrm{\oplus}}$ core. \label{mcfig}} \end{figure} Figure \ref{rcfig} shows the observed sample of transiting planets, except that here we have plotted H/He envelope fraction against radius. This clearly demonstrates the close relationship between the observed radius and the fundamental bulk composition. At a given radius, planet mass, shown by the color bar, can span up to a factor of $\sim$3. Nonetheless, the scatter in envelope fraction is typically only $\sim$0.3 dex. This is what we mean when we state that radius is primarily a proxy for composition. Thus far, however, we have only considered dry interiors with H/He envelopes atop rock/iron cores. The gray shaded region in Figure \ref{rcfig} shows the effect of varying the water abundance of planets in our model. Using our three layer models we varied the water abundance of the interior from completely dry up to 90\% of the core mass, where by ``core'' we mean the combined mass of the rock and water layers. For clarity, we then fit power-laws to the best-fit radii and compositions under both scenarios; the gray shaded region shows the area in between these fits. Clearly, allowing this degeneracy does slightly increase the scatter in the radius-composition relationship. Nonetheless, above $\sim$3 $R_{\mathrm{\oplus}}$ this does not alter the conclusion that radius and H/He envelope fraction are intimately related. As a result, this means that we can recast the mass-radius relationship in Figure \ref{mrfig} as a mass-{\it composition} relationship. This is shown in Figure \ref{mcfig}. By doing this we have transformed the observable mass-radius relationship into one that is directly relatable to models of planet formation. Here we can clearly see that there is a fundamental change in the relationship around $\sim$10 $M_{\mathrm{\oplus}}$. Below this, planets typically have less than $\sim$5\% of their mass in H/He with no clear relationship between envelope fraction and mass. Above this, however, we see a steady rise in envelope fraction from sub-Neptunes up to gas giants. These trends are all understandable in the light of the traditional core accretion model of planet formation \cp[e.g.,][]{Hayashi1985,Bodenheimer1986}.
If a planet's rocky core becomes sufficiently massive, typically $\sim$5-10 $M_{\mathrm{\oplus}}$, then its gravity becomes sufficiently strong to trigger runaway accretion from the disk. For comparison, the dashed black line in Figure \ref{mcfig} shows a simple toy model in which all planets have a 10 $M_{\mathrm{\oplus}}$ core with a solar metallicity H/He envelope; this toy model is quantified below. In reality most planets lie to the right of this curve, possibly indicating that they accreted additional planetesimals embedded in the nebula \cp{Mordasini2013}. The results of Figures \ref{mrfig} and \ref{mcfig} are consistent with this traditional picture of core accretion. Below $\sim$10 $M_{\mathrm{\oplus}}$, according to our models, planets are almost entirely composed of heavy elements by mass. Above this, most planets are roughly consistent with $\gtrsim 10$ $M_{\mathrm{\oplus}}$ of heavy elements along with accreted H/He envelopes. This is of course a simplified view of planet formation. In reality there is considerable variation in disk mass, lifetime, metallicity, planet history, etc., all of which introduces considerable scatter into the mass-composition relation. Nonetheless, Figure \ref{mcfig} offers clear evidence for the core-accretion model of planet formation, at least for the close-in planets found by {\it Kepler}.
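To make the dashed toy-model curve in Figure \ref{mcfig} concrete, note that under its single assumption of a fixed 10 $M_{\mathrm{\oplus}}$ core, the H/He mass fraction follows directly from the total planet mass $M_p$:
\begin{equation}
f_{\mathrm{env}}(M_p) = 1 - \frac{10 \, M_{\mathrm{\oplus}}}{M_p},
\end{equation}
so that $f_{\mathrm{env}} \approx 0.5$ at 20 $M_{\mathrm{\oplus}}$, 0.9 at 100 $M_{\mathrm{\oplus}}$, and 0.95 at 200 $M_{\mathrm{\oplus}}$. This also restates why the radius saturates at high mass: once a planet is mostly H/He, further doubling its mass barely changes its envelope {\it fraction}.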
\section{Super-Earth vs. Sub-Neptune} Throughout this paper, we have repeatedly used the terms super-Earth and sub-Neptune to refer to low-mass {\it Kepler} planets. What exactly is the difference between these classes of planets? For our purpose a sub-Neptune is any planet whose radius cannot be explained by a bare rock/iron model, i.e., it must have some sort of large optically-thick H/He or water envelope. Super-Earth, on the other hand, implies a more terrestrial planet, one that may have a solid or liquid surface and where the atmosphere, if any, contributes a negligible fraction to the planet's size. Although this may seem like semantics, one of the long-term goals of exoplanet science is to search for biomarkers in the transmission spectra of potentially habitable super-Earths. Whether or not a planet has a large H/He envelope tens of kbar deep has very important implications for habitability. The current definition used by the {\it Kepler} mission is that planets 1.5-2.0 $R_{\mathrm{\oplus}}$ are super-Earths, while planets 2.0-4.0 $R_{\mathrm{\oplus}}$ are sub-Neptunes. These round numbers, however, do not quite correspond to our more physically motivated definition of whether or not a planet has a thick envelope. Figure \ref{minfig} plots the minimum H/He envelope fractions required by our models vs. planet mass for several different radii in the 1.5-2.0 $R_{\mathrm{\oplus}}$ super-Earth/sub-Neptune transition region. \begin{figure}[h!] \begin{center} \includegraphics[width=3.5in,height=2.5in]{MinimumHHE2.eps} \end{center} \caption{H/He envelope fraction vs. planet mass for super-Earth and sub-Neptune sized planets. Curves are color-coded according to planet radius ranging from 1.5-2.5 $R_{\mathrm{\oplus}}$. Here we assume water-free sub-Neptunes with H/He envelopes atop Earth-like rocky cores.\label{minfig}} \end{figure} It is quite difficult to construct a 2.0 $R_{\mathrm{\oplus}}$ planet that does not have some sort of thick envelope. Assuming an Earth-like interior, such planets would have to be 16.5 $M_{\mathrm{\oplus}}$ to explain their size without any type of envelope. For a completely iron-free interior, it is possible to construct a 2.0 $R_{\mathrm{\oplus}}$ planet that is only 11 $M_{\mathrm{\oplus}}$. However, completely iron-free is probably not a realistic composition for planets of several Earth masses. Indeed, both Kepler-10b and CoRoT-7b may be slightly enhanced in iron compared to the Earth \cp{Batalha2011,Hatzes2011}. This stands in contrast to the observed sample of likely rocky planets, all of which are $<$10 $M_{\mathrm{\oplus}}$. It is possible that more massive rocky planets are yet to be found; however, the {\it Kepler} survey is essentially complete for 2.0 $R_{\mathrm{\oplus}}$ planets within 100 days \cp{Petigura2013}. For follow-up RV and TTV mass measurements to have missed a population of $>$10 $M_{\mathrm{\oplus}}$ rocky planets, they would need to somehow be biased against more massive and therefore easier to detect planets. Moreover, there are basic arguments in core-accretion theory that lead us to expect that there should not be $\sim$20 $M_{\mathrm{\oplus}}$ rocky planets. By the time a planet is $\sim$10 $M_{\mathrm{\oplus}}$, its gravity should be sufficiently strong that it should be able to accrete a substantial H/He envelope from the disk \cp{Ikoma2012}, and for periods $\gtrsim$10 days be able to retain it against photo-evaporation \cp{Lopez2013}. On the other hand, if we assume a more typical low-mass planet with a 5 $M_{\mathrm{\oplus}}$ Earth-like core, then to be 2.0 $R_{\mathrm{\oplus}}$ it would need 0.5\% of its mass in a H/He envelope. This may not sound like much, but it corresponds to $\sim$20 kbars of hydrogen and helium, $\sim 20 \times$ higher than the pressure at the bottom of the Mariana Trench (see the rough estimate below). Moreover, the temperature at the bottom of such an envelope would be $\gtrsim$3000 K, even for ages of several Gyr. We believe that such a planet is more properly classified as a sub-Neptune.
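The $\sim$20 kbar figure can be checked with a back-of-the-envelope hydrostatic estimate. The following sketch is our illustration, not our structure models; it assumes constant gravity across the thin envelope and a rocky-core radius of roughly 1.5 $R_{\mathrm{\oplus}}$ for a 5 $M_{\mathrm{\oplus}}$ Earth-like core, both numbers being assumptions chosen for illustration only.
\begin{verbatim}
import math

# Rough hydrostatic estimate of the pressure at the base of a thin H/He
# envelope: P ~ g * M_env / (4 * pi * R_core^2). The 1.5 R_Earth core
# radius for a 5 M_Earth Earth-like core is an illustrative assumption.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_EARTH = 5.972e24   # Earth mass [kg]
R_EARTH = 6.371e6    # Earth radius [m]

M_planet = 5.0 * M_EARTH
M_env = 0.005 * M_planet          # 0.5% H/He envelope by mass
R_core = 1.5 * R_EARTH            # assumed rocky-core radius

g = G * M_planet / R_core**2      # gravity at the core surface
P = g * M_env / (4.0 * math.pi * R_core**2)

print(P / 1e8)  # pressure in kbar (1 kbar = 1e8 Pa); ~30 kbar
\end{verbatim}
This crude estimate lands at a few tens of kbar, the same order of magnitude as the value quoted above from the full structure models.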
As a result, 2.0 $R_{\mathrm{\oplus}}$ is a hard upper limit for the size of an envelope-free super-Earth, and most of the planets between $\sim$1.75 and 2.0 $R_{\mathrm{\oplus}}$ are likely to be H/He rich sub-Neptunes. If 2.0 $R_{\mathrm{\oplus}}$ is really the hard upper limit for the super-Earth/sub-Neptune transition, then what is the lower limit? As shown in Figure \ref{minfig}, for planets $\lesssim$1.5 $R_{\mathrm{\oplus}}$ it is entirely possible to explain their radii without any H/He. Moreover, if such planets do have any H/He, then it must be $\lesssim$0.1\% of their mass, even if we assume a maximally iron-rich core. This is a small enough envelope that the rock/iron core dominates the planet's size. Moreover, as shown in \ct{Lopez2013} and \ct{Owen2013}, such tenuous envelopes are quite vulnerable to being completely photo-evaporated, at least at periods $\lesssim$100 days. This does not exclude the possibility that 1.5 $R_{\mathrm{\oplus}}$ planets have large water envelopes, but it does suggest that they are unlikely to have large H/He envelopes. To summarize, we can say that 2.0 $R_{\mathrm{\oplus}}$ is likely a hard upper limit for the maximum size of envelope-free rocky super-Earths and 1.5 $R_{\mathrm{\oplus}}$ is likely a lower limit for the minimum size of a H/He rich sub-Neptune. As a result, we suggest using 1.75 $R_{\mathrm{\oplus}}$ rather than 2.0 $R_{\mathrm{\oplus}}$ for the dividing line between these classes of planets. \section{Discussion} In Sections \ref{studysec} and \ref{compsec}, we showed that planetary radius is to first order a proxy for a planet's composition above $\sim$2 $R_{\mathrm{\oplus}}$. This means that the observed radius occurrence distribution for {\it Kepler} candidates found by \ct{Fressin2013} and \ct{Petigura2013} is in reality a {\it composition} occurrence distribution for close-in planets at several Gyr. In particular, \ct{Fressin2013} and \ct{Petigura2013} found that there is a sharp, roughly power-law like drop-off in the frequency of planet occurrence above $\sim$3 $R_{\mathrm{\oplus}}$, while below this there is a plateau in the planet occurrence rate down to at least 1 $R_{\mathrm{\oplus}}$. This distribution makes sense in the light of traditional core accretion theory. The timescale for planetesimal collisions to form rocky planets is short compared to the typical lifetime of a disk, and such planetesimals are preferentially concentrated deep in the star's potential well, so nature easily makes large populations of irradiated rocky planets \cp{Chiang2013,Hansen2013}. At larger sizes, planets are limited by their ability to accrete a H/He envelope from the disk before the disk dissipates \cp{Bodenheimer2000,Ikoma2012,Mordasini2012c}. In these models the accretion of the envelope is limited by the ability of the proto-planetary envelope to cool and contract. This makes it difficult to accrete larger initial H/He envelopes, particularly if the {\it Kepler} population formed in situ \cp{Ikoma2012}. It is easier to form large planets further out, particularly beyond the snow-line, where the increase in the local solid mass makes it easier to trigger runaway accretion to make a gas giant. The relative scarcity of hot Jupiters found by \ct{Fressin2013} and \ct{Petigura2013} is an indication that whatever migration mechanism brings gas giants in to orbits $\lesssim$100 days must be fairly rare. One key puzzle, however, is the location of the break in the planet occurrence rate distribution. If it were due to a transition from a large rocky population to a sub-Neptune population, with planet occurrence declining with increasing envelope fraction, then one would expect the break to occur at $\sim$1.5-1.8 $R_{\mathrm{\oplus}}$, the likely maximum size for bare rocky planets. Instead the break occurs at 2.8 $R_{\mathrm{\oplus}}$, indicating that the occurrence plateau must include many volatile rich planets. Although 2.8 $R_{\mathrm{\oplus}}$ is far too large for bare rocky planets, it is achievable for H/He free water-worlds. A 10 $M_{\mathrm{\oplus}}$ planet with 80\% of its mass in a water envelope would be $\sim$2.7 $R_{\mathrm{\oplus}}$. As a result, it is at least possible that the break in the planet occurrence distribution is a transition from an abundant population of rocky {\it and} water rich planets to a population with accreted H/He envelopes. Otherwise, models must explain why the plateau should include a substantial population of planets with $\sim$1-3\% of their mass in H/He envelopes before dropping off at larger envelope fractions. One potential explanation is that perhaps the $\sim$2-3 $R_{\mathrm{\oplus}}$ planets have hydrogen envelopes that were outgassed instead of accreted directly from the nebula. \ct{Elkins-Tanton2008} showed that low-mass planets can outgas up to $\sim$5\% of their mass in H$_2$ after formation. However, this was only the case if the planets' interiors were initially very wet, with $\sim$half the mass of their initial mantles in water. This again requires a large amount of water or other volatile ices to migrate to short period orbits.
It is also important to note that although the observed radius distribution may tell us the composition distribution of {\it Kepler} candidates today, this is not the same as the initial distribution the planets formed with. As shown in \ct{Lopez2012}, \ct{Lopez2013}, and \ct{Owen2013}, the observed {\it Kepler} population has likely been significantly sculpted by photo-evaporation. Close-in low-mass planets have likely lost a significant fraction of their initial H/He inventories, resulting in smaller radii today. This effect is compounded by the fact that less irradiated planets should be able to accrete larger initial H/He envelopes in the first place \cp{Ikoma2012}. As more quarters of data are analyzed and the occurrence distribution pushes out to longer periods, there should be a distinct increase in the abundance of Neptune and sub-Neptune sized planets. Another potential effect of photo-evaporation is the opening up of a slight ``occurrence valley'' in the radius-flux distribution \cp{Lopez2013,Owen2013}. Photo-evaporation makes it less likely that planets will survive with envelopes $\lesssim$1\% of their mass on highly irradiated orbits. Planets will tend to either retain a more substantial envelope or lose it entirely. More work needs to be done to carefully search for such a deficit; however, there are some preliminary indications that it may exist. Both the raw candidate distribution \cp{Owen2013} and a well-studied sample of M-dwarfs \cp{Morton2013} appear to show a slight dip in the frequency of planets at $\sim$2 $R_{\mathrm{\oplus}}$. Such hints are still preliminary, but if real this has important implications for constraining the compositions of the {\it Kepler} population, since any large variation in the water fraction of close-in planets will tend to erase such a feature \cp{Lopez2013}. Using the models presented here, it is possible to instead study the {\it Kepler} envelope fraction distribution, which should aid in detecting any such ``occurrence valley.'' \section{Summary} One of the key strengths of the thermal evolution models used here is that they allow us to predict the radius of a planet as a function of mostly observable parameters; namely, planet mass, incident flux, age, and composition. For Neptune and sub-Neptune size planets, we showed in section \ref{studysec} that the effect of varying planet mass or incident flux on the radius is an order of magnitude smaller than the effect of varying the fraction of a planet's mass in a H/He envelope. In section \ref{flatsec}, we described how this flatness in iso-composition mass-radius curves arises as a natural result of our thermal evolution models. As a result of these features, planetary radius is to first order a proxy for the H/He inventory of sub-Neptune and larger planets, almost independent of their mass. In section \ref{compsec} we showed this close connection between radius and envelope fraction for the observed population of transiting planets with measured masses. We then demonstrated how our models allow us to recast the observed mass-radius distribution as a mass-{\it composition} relationship, allowing a more direct comparison to models of planet formation and evolution. \acknowledgements{EDL would like to thank Angie Wolfgang, Jack Lissauer, Lauren Weiss, and Leslie Rogers for many helpful conversations. This research has made use of the Exoplanet Orbit Database and the Exoplanet Data Explorer at exoplanets.org.
We acknowledge the support of NASA grant NNX09AC22G, NSF grant AST-1010017, and the UCSC Chancellor's Dissertation Year Fellowship.}
\section{Introduction}\label{sec:intro} Nowadays, most advanced signal processing applications, such as wireless communications, graphics, industrial control and medical imaging, strongly rely on linear algebra algorithms. Some of these algorithms require performing QR Decomposition (QRD)~\cite{matrixCompu}\cite{Shoaib2013}\cite{Korat2019}. QRD decomposes an input matrix $A^{m\times n}$ into two new matrices, $Q^{m\times m}$ and $R^{m \times n}$, whose product is equal to $A$. Furthermore, $R^{m \times n}$ is an upper triangular matrix and $Q^{m\times m}$ is an orthogonal matrix. QRD, or QR factorization, is a very compute-intensive operation which requires high throughput in many applications. For that reason, many researchers have investigated the implementation of QRD on hardware for embedded systems. There are several methods to compute QRD, but the Givens Rotation Method (and its variations) is probably the most widely used to implement QRD for embedded systems. This is due to its robust numerical properties and its easy parallelization. The Givens Rotation Method is based on a unitary transformation, called a Givens rotation, which allows inserting a zero element at a selected location of a matrix. Then, following a predefined schedule, the input matrix is transformed into an upper triangular matrix $R$ by successive Givens rotations, whereas the same rotations over the identity matrix produce an orthogonal matrix $Q$. To perform each Givens rotation, first, the rotation angle $\theta$, which allows zeroing an element, has to be computed by using the first non-zero pair of elements of the two target rows. Then, all pairs of elements within said rows have to be rotated by $\theta$. Therefore, at first sight, the implementation of Givens rotations requires complex logic to compute trigonometric functions. However, most implementations get rid of this complex hardware by using the CORDIC algorithm~\cite{Volder}. Using the same datapath, this algorithm allows computing $\theta$ (vectoring mode) and the vector rotations (rotation mode) based only on addition and shift operations. There are many works on hardware implementation of QRD based on CORDIC architectures, such as in~\cite{5742719}\cite{6936368}\cite{6271699}\cite{Luo2012}\cite{6419866}\cite{TCAS15}\cite{Robles2017}\cite{Korat2019}. The majority of these works focus on fixed-point implementation, mostly because Floating-Point (FP) implementations require more resources. However, the increasing complexity of new algorithms makes the use of FP numbers compulsory for many applications due to either stability or precision requirements. Some of these applications include adaptive beam-forming~\cite{1193571}\cite{Lightbody200067}\cite{Surapong2011}, Space-Time Adaptive Processing (STAP) for radar applications~\cite{7015041}\cite{5960667}\cite{Kulikov2020}, adaptive FIR filtering in fetal electrocardiography~\cite{6871780} and sub-Nyquist sampling~\cite{7168793}. In this paper, we study the implementation of a FP Givens rotation unit with high throughput and low area and energy cost based on the CORDIC algorithm. Our pipelined arithmetic unit focuses only on the computation itself and requires an extremely simple control based on only one signal. Therefore, our design could be useful for specific applications on embedded systems, and also for being included as a computation core in FPGA accelerators designed for high-performance computing applications.
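As a software point of reference for the schedule just described (angle from the leading non-zero pair, then rotation of the remaining pairs), the following minimal Python sketch, which is ours and not taken from any cited design, triangularizes a matrix by Givens rotations:
\begin{verbatim}
import numpy as np

def givens_qr(A):
    """Toy QRD by Givens rotations: returns Q, R with A = Q @ R."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):                       # zero the subdiagonal column-wise
        for i in range(m - 1, j, -1):
            if R[i, j] == 0.0:
                continue
            # angle computation from the leading pair (vectoring)
            c, s = R[j, j], R[i, j]
            h = np.hypot(c, s)
            c, s = c / h, s / h
            # rotate the remaining pairs of both rows (rotation)
            G = np.array([[c, s], [-s, c]])
            R[[j, i], j:] = G @ R[[j, i], j:]
            Q[[j, i], :] = G @ Q[[j, i], :]  # same rotations on the identity
    return Q.T, R

A = np.random.randn(4, 4)
Q, R = givens_qr(A)
assert np.allclose(A, Q @ R)
\end{verbatim}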
Our design is based on the fixed-point Givens rotator used in~\cite{Luo2012} and~\cite{TCAS15}, which achieves a very high throughput at a very low cost. That is accomplished by eliminating the $Z$-coordinate datapath and fully overlapping the angle computation and row-element rotation. In this paper, we investigate the adaptation of this fixed-point architecture to support FP numbers with a minimum cost increase. Although there are some proposals for generic FP implementation of the CORDIC algorithm, to the best of our knowledge there is no implementation of a specialized FP CORDIC architecture to perform Givens rotations. Furthermore, in this paper, the proposed Givens rotator for standard FP numbers is enhanced by transforming it to support Half-Unit Biased (HUB) FP numbers~\cite{TC15}. Similarly to~\cite{ISCE15QRD}, this new format allows us to reduce simultaneously the area, the delay and the energy consumption of the proposed FP Givens rotator. The main contributions of this paper are: \begin{itemize} \item the proposal of a FP Givens-rotation unit with high throughput and reduced area for IEEE754-like numbers; \item the enhancement of the proposed FP Givens rotator by using the HUB approach; \item an experimental error analysis to optimize the internal word-length and number of CORDIC iterations; \item an FPGA implementation analysis and a comparison of different parameters and approaches; \item a comparison with a fixed-point implementation and other previous FP proposals related to QRD. \end{itemize} The rest of this paper is structured as follows: Section~\ref{sec:review} provides a review of some previous proposals of FP CORDIC architectures and FP Givens rotators. The proposed new architecture to implement the FP Givens rotators is described in Section~\ref{sec:std-arc}. The improvement of this architecture by using the HUB format is presented in Section~\ref{sec:hub-arc}. The implementation results of the studied architecture along with their error analysis and comparisons are provided in Section~\ref{sec:eval}. Finally, Section~\ref{sec:con} gives the conclusions of this work. \section{Previous works on FP CORDIC and FP Givens rotation}\label{sec:review} The CORDIC (COordinate Rotation DIgital Computer) algorithm is an iterative algorithm based only on shifts and additions~\cite{Volder}. It allows rotating an input vector, which is specified by its coordinates $(X, Y)$, through an angle $\theta$, which is usually called the $Z$ coordinate. The same CORDIC circuit may operate in two modes, vectoring or rotation mode. The former rotates the input vector until its $Y$ coordinate reaches zero, so that the $Z$ coordinate indicates the angle of the input vector and the $X$ coordinate its modulus. The latter rotates the input vector through a specific angle $\theta$. Therefore, a CORDIC unit could be used to compute the angle for a Givens rotation (vectoring mode), and then to perform the rotation of the rest of the row using said angle (rotation mode).
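The following Python sketch, a software model of ours rather than of any of the cited circuits, illustrates the two modes and the fact that the microrotation directions recorded in vectoring mode suffice to reproduce the rotation (the constant CORDIC gain $K\approx1.6468$ is left uncompensated for clarity):
\begin{verbatim}
def cordic_vectoring(x, y, n_iter):
    """Drive y to 0; return the scaled modulus and the directions taken."""
    sigmas = []
    for i in range(n_iter):
        s = -1 if y >= 0 else 1        # direction that reduces |y|
        sigmas.append(s)
        x, y = x - s * y / 2**i, y + s * x / 2**i
    return x, sigmas                    # x ~ K * sqrt(x0^2 + y0^2)

def cordic_rotation(x, y, sigmas):
    """Replay the recorded directions: rotates (x, y) by the same angle."""
    for i, s in enumerate(sigmas):
        x, y = x - s * y / 2**i, y + s * x / 2**i
    return x, y

mod, sigmas = cordic_vectoring(3.0, 4.0, 32)
print(mod / 1.646760258)                # ~ 5.0, the modulus of (3, 4)
print(cordic_rotation(1.0, 0.0, sigmas))  # (1, 0) rotated, scaled by K
\end{verbatim}
Note that the angle ($Z$) itself is never needed: the list of directions plays its role, which is precisely the property that the rotator described in Section~\ref{sec:std-arc} exploits.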
Since the CORDIC algorithm was first proposed by Volder in~\cite{Volder}, many researchers have proposed different improvements to the algorithm or its implementation, whereas others have applied it in a wide variety of applications. One of these improvements is the FP implementation of this algorithm. In~\cite{25655}, a FP CORDIC processor for matrix computation was presented. This word-serial implementation performs each iteration using FP addition and shifting for the $X$ and $Y$ datapath. However, the $Z$ coordinate is represented using fixed-point values with the same bit-width as the significand of the $X$ and $Y$ coordinates. According to the authors, this hybrid processor possesses sufficient accuracy to compute QRD and other matrix computations. A similar approach was used in~\cite{1212843} to implement an SVD processor. Again, an iterative CORDIC architecture is used, but a two-stage pipeline of the datapath allows performing two independent rotations at the same time. On the other hand, a FP $Z$ coordinate (angle) was used in~\cite{378100} and~\cite{Munoz2010} for their word-serial implementations. In the latter, all operations are performed in FP format, which requires a large area, considering it is an iterative architecture. On the contrary, the design in~\cite{378100} only uses a full FP representation for the angle, but the computation of the iterations is performed using block FP (i.e., the exponents remain constant through the iterations). They state that renormalization of significands between consecutive additions is expensive and not required~\cite{378100}. In \cite{15343} and \cite{4637696}, a similar solution is utilized for pipeline implementations, because the implementation of FP addition at each step would be unfeasible due to the amount of hardware required. In this case, the inputs and outputs of a fixed-point pipeline CORDIC core are adapted by a floating-to-fixed-point and a fixed-to-FP converter, respectively. In~\cite{15343}, this approach is utilized to implement a CMOS CORDIC processor for 21-bit FP numbers. In~\cite{4637696}, further optimization of the angle datapath allows implementing a high-throughput double-precision FP CORDIC processor on an FPGA. All these previous CORDIC approaches have to deal with the implementation of the $Z$-coordinate datapath which, under FP format, is much more complex than the $X$ and $Y$ datapath. However, the computation of the angle itself is not required to implement a Givens rotation unit, because knowing only the direction of each microrotation in vectoring mode is enough to perform the rotation mode~\cite{Luo2012}. Thus, the use of generic CORDIC implementations instead of a specific Givens rotation unit is very inefficient, since a significant portion of the circuit is wasted computing unnecessary results. There are other CORDIC designs for the computation of specific functions in FP format, such as \cite{Nguyen2015}, \cite{Zhu2017} and \cite{Surapong2011}. In \cite{Nguyen2015} and \cite{Zhu2017}, architectures to compute only sine and cosine in FP format are proposed. To optimize the design, in the former the angle is introduced in fixed-point format, whereas in the latter CORDIC is combined with Taylor expansion and rectangular multipliers. In~\cite{Surapong2011}, a FP phase and magnitude digital detector is proposed. However, none of these architectures allows computing the QRD by itself. Another approach to compute QRD consists of avoiding the use of the CORDIC algorithm and computing the FP Givens rotation using standard FP arithmetic operations, as in~\cite{Wang20093}. The work in~\cite{Wang20093} proposes a 2D-systolic array to perform FP computation of QRD in parallel. This approach uses table look-up and Taylor series expansion to implement the FP division and square root required by the Givens rotations. As a consequence, the resulting architecture requires a considerable amount of memory and many multipliers, which increases the hardware requirements.
In the next section, we propose a FP CORDIC-based pipeline architecture to perform Givens rotations using very reduced hardware. This reduction is achieved because the angle computation and the rotations themselves share the same hardware. Furthermore, the $Z$ datapath is eliminated and the computation is performed in fixed-point arithmetic, since the FP inputs are converted into fixed-point at the input and back to FP at the output. Since both operations, angle calculation and rotation, are almost completely overlapped, the pipeline approach allows very high throughput. \section{FP Givens rotation unit}\label{sec:std-arc} In this section, we propose a new FP Givens rotation unit based on the pipeline architecture described in~\cite{TCAS15}. All the content of this section is new except for Subsection~\ref{sec-fixcordic}, which summarizes the core of the architecture in~\cite{TCAS15}. \begin{figure} [thb] \centering \includegraphics[width=0.50\textwidth ]{ArquiGen_mio} \caption{General architecture of the proposed FP Givens rotation unit} \label{ArquiGen_rec} \end{figure} As in some previous implementations of the FP CORDIC algorithm, here we adapt the fixed-point CORDIC-based architecture to FP numbers by using format converters at its inputs and outputs. These converters do not consider special FP values like NaN, infinity, or subnormals. Fig.~\ref{ArquiGen_rec} shows the general architecture of the proposed FP Givens rotation unit. An input converter transforms the FP input coordinates (\textit{Xflo} and \textit{Yflo}) into block FP output coordinates, i.e., two aligned signed significands (\textit{Xfix} and \textit{Yfix}) sharing the exponent (\textit{mExp}). The aligned significands are two's complement numbers which have one sign bit, one integer bit, and $n-2$ fractional bits. These significands are processed by a fixed-point Givens rotator, whereas the exponent is transmitted untouched (except for the pipeline delays) through the pipeline to the output converter. The fixed-point Givens rotator performs the desired rotation over the significands, producing the fixed-point results (\textit{X'fix} and \textit{Y'fix}). The output converter transforms the processed block FP results (i.e., the fixed-point output significands along with the common exponent) back to independent FP values (\textit{X'flo} and \textit{Y'flo}). The signal \textit{v/r} indicates whether the required operation within the Givens rotator is a vectoring operation for computing the angle or a rotation operation for rotating the corresponding rows. In the following, these three circuits are described in detail. \subsection{Input converter for FP to fixed-point conversion}\label{sec-ic} The input converter of Fig.~\ref{ArquiGen_rec} transforms the two FP values corresponding to the $X$ and $Y$ coordinates into the internal block-FP representation. To do this, the significand of the coordinate with the lowest exponent is aligned by right shifting it, so that both coordinates share the same exponent. Moreover, both significands are converted from sign-and-magnitude to two's complement representation to facilitate the basic operations required by the CORDIC algorithm. Let us consider that the bit-width of the input significands, $m$, is smaller than the bit-width of the internal significands, $n$. This prevents losing a considerable amount of precision due to the conversion, as we will see in Section~\ref{sec:eval}. Fig.~\ref{FloatToFix_rec} represents the proposed architecture for the input converter with rounding.
The two FP input values, $X$ and $Y$, are split into sign, exponent, and significand. In Fig.~\ref{FloatToFix_rec}, \textit{Sx}, \textit{ExpX}, and \textit{Mx} represent the sign, the exponent, and the significand of $X$, respectively, and similarly for the input $Y$. On the other hand, the outputs \textit{Xfix} and \textit{Yfix} are the two's complement significands sharing the block exponent \textit{mExp}, obtained from the two input values. \begin{figure} [tbh] \centering \includegraphics[width=0.75\textwidth ]{FloatToFix_rec} \caption{ FP to fixed-point converter with rounding.} \label{FloatToFix_rec} \end{figure} First, the two sign-and-magnitude input significands, \textit{Mx} and \textit{My}, are converted into two's complement representation by selecting the two's complement of the value when the corresponding sign bit (\textit{Sx} or \textit{Sy}) is one, and appending the sign bit in the most significant position. Then, the resulting values are expanded to fit the required size of $n$ bits by appending $n-m-1$ zeros to the right. While the significands are processed, one input exponent is subtracted from the other to determine both the absolute difference between them and the greater one. The two possible subtractions ($ExpX-ExpY$ and $ExpY-ExpX$) are performed in parallel to make the computation faster. The positive result of the two subtractions is used to select the number of shifting positions for aligning the significand with the lowest exponent. The sign of the result of the first subtraction is used to control the multiplexers which select the output exponent (\textit{mExp}) and the significand to be right shifted. We could implement only one subtraction and use the sign of the result to select the lowest operand, but then a two's complement operation over the difference may be required if its sign is negative; that would mean similar hardware but much lower speed. As said above, depending on the result of the exponent comparison, the significand with the lowest exponent is right-shifted by as many bit positions as the absolute difference between the exponents. In the implementation of Fig.~\ref{FloatToFix_rec}, the final value is rounded to nearest tie-to-even~\cite{lang04} based on the bits discarded after shifting, to keep the accuracy as high as possible. However, this rounding requires the computation of the sticky-bit and a possible addition for rounding up, which means a very significant amount of hardware. Another option is simply discarding the Least Significant Bits (LSBs) of the shifted significand, which simplifies the hardware at the cost of some loss of precision. Both approaches are evaluated in Section~\ref{sec:eval}. We should note that Fig.~\ref{FloatToFix_rec} does not represent the circuit to include the leading one of the input significands. Furthermore, the shifter includes logic which forces the output to zero if the number of positions to be shifted is greater than $n$.
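A minimal behavioral model of this conversion, written by us for illustration (truncation variant, Python integers standing in for the $n$-bit registers, no special values and no leading-one logic), is:
\begin{verbatim}
def to_block_fp(sx, ex, mx, sy, ey, my, m, n):
    """mx, my: m-bit magnitudes with the leading one already included.
    Returns two aligned two's complement significands (n-bit range) and
    the shared exponent mExp."""
    # sign-and-magnitude -> two's complement, extended with n-m-1 zeros
    x = (-mx if sx else mx) << (n - m - 1)
    y = (-my if sy else my) << (n - m - 1)
    # align the significand with the lowest exponent; in hardware both
    # exponent subtractions run in parallel
    d = ex - ey
    if d >= 0:
        m_exp = ex
        y = y >> d if d < n else 0   # arithmetic shift; flush big shifts
    else:
        m_exp = ey
        x = x >> -d if -d < n else 0
    return x, y, m_exp
\end{verbatim}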
\subsection{Fixed-point Givens rotation unit}\label{sec-fixcordic} To implement the fixed-point Givens rotator, the modified pipeline CORDIC architecture presented in \cite{TCAS15} is used. This pipelined architecture performs both vectoring and rotation modes using the same datapath. Moreover, it gets rid of the $Z$-coordinate computation by directly utilizing the direction of each microrotation obtained in the angle computation (vectoring mode) for rotating the row elements in the next cycles (rotation mode). \begin{figure}[tbh] \centering \includegraphics[width=0.70\textwidth]{rotador} \caption{Pipelined stage for fixed-point Givens rotation unit} \label{fig_CORDIC} \end{figure} Fig.~\ref{fig_CORDIC} shows a pipeline stage of the circuit presented in~\cite{TCAS15}. The right part contains the typical CORDIC X-Y datapath, whereas the left part corresponds to the control logic which substitutes for the Z datapath. In vectoring mode, this circuit selects the microrotation direction indicated by the sign of the $Y$ coordinate, which is used to control the $add/sub$ circuits in the X-Y datapath. Furthermore, this bit is stored in a register to be used in the subsequent vector rotations (rotation mode). In rotation mode, the $\sigma$ registers control the adders of the X-Y datapath to select the direction of the microrotation. A control signal $v/r$, which is propagated through the pipeline, is used to select between vectoring and rotation mode. An active signal indicates a new angle computation (vectoring mode) in the current stage. Then, each active $v/r$ is followed by as many inactive cycles (rotation mode) as elements of the row have to be rotated using the computed angle. Therefore, this circuit can perform one element operation per cycle (either angle computation or row-element rotation) as long as a new pair of row elements is provided at the input each clock cycle.
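The cycle-level schedule can be illustrated with our software model from Section~\ref{sec:review} (reusing the \texttt{cordic\_vectoring} and \texttt{cordic\_rotation} functions defined there): one active \textit{v/r} cycle computes and stores the directions for a row pair, and each following inactive cycle rotates one more element pair of the same two rows.
\begin{verbatim}
def process_row_pair(row_a, row_b, n_iter):
    """Zero row_b[0] against row_a[0], then rotate the remaining pairs."""
    # cycle 0 (v/r active): vectoring on the leading pair, sigmas stored;
    # y is driven to ~0 and recorded here as exactly 0
    lead, sigmas = cordic_vectoring(row_a[0], row_b[0], n_iter)
    out_a, out_b = [lead], [0.0]
    # cycles 1..k (v/r inactive): one element pair rotated per cycle
    for xa, xb in zip(row_a[1:], row_b[1:]):
        ra, rb = cordic_rotation(xa, xb, sigmas)
        out_a.append(ra)
        out_b.append(rb)
    return out_a, out_b
\end{verbatim}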
Finally, the selected $m$-bit value is incremented by one if a rounding up is required. This rounding may produce an overflow and, in this case, the exponent is incremented accordingly. \section{Improvement based on HUB (Half-Unit Biased) approach}\label{sec:hub-arc} The Half-Unit Biased (HUB) representation is a new family of formats which allows optimizing real number computation by simplifying rounding to nearest and two's complement operation~\cite{TC15}. Basically, HUB formats append an Implicit Least Significant Bit (ILSB) to the binary number to get the represented value. This ILSB is constant and equals one. For example, the HUB number $1.0010$ represents the value $1.00101$. When using HUB numbers, rounding to nearest is performed simply by truncation. For example, the nearest 5-bit HUB number to the value $1.101011$ is $1.1010$ (which actually represents $1.10101$), whereas for a conventional representation would be $1.1011$. In this particular example, the amount of rounding error is the same ($0.000001$) for both cases, but this is not true in the general case. In fact, it is fulfilled that the addition of the absolute value of the rounding error corresponding to both approaches equals the rounding error bound (in this particular example, $0.00001$). That means the better a value is represented under HUB format, the worst it is represented under conventional one. However, the bounds of the rounding errors for conventional and HUB formats are the same~\cite{TC15}. Therefore, although HUB and conventional approaches provide different result values, both representations allow the same accuracy. On the other hand, another advantage of HUB numbers is the fact that two's complement is performed simply by bit-wise inversion~\cite{TC15}. For example, let consider the signed HUB number $A=01.0110$, then, $-A=10.1001$ (note that the ILSB absorbs the effect of the required increment). This property allows simplifying the implementation of the CORDIC algorithm and the conversion between FP and fixed-point numbers. Thanks to those properties, the HUB approach has been very useful to improve both fixed-point or FP designs. In fixed-point designs, the improvement of accuracy allows reducing the bit-width of numbers and, consequently, also area and delay~\cite{ISCE15QRD}\cite{asil14}. In FP designs, this simplification improves directly the implementation of arithmetic units~\cite{Hormigo2016MeasuringRound-to-Nearest}. Thus, we propose using this approach to enhance the implementation of our FP Givens rotation unit. Let us consider that the input and output coordinates in our HUB version of the Givens rotation unit are represented under the HUB FP format. That means the significands has an ILSB whereas exponents remain in conventional format~\cite{TC15}. Similarly, all internal fixed-point significands are also HUB numbers. However, exponent and other auxiliary numbers use conventional representation. Following we show how the circuits described in Section~\ref{sec:std-arc} are turned into a HUB architecture. It would be almost straightforward to adapt this architecture to receive standard FP inputs and deliver HUB FP outputs and the other way around. Combining these three approaches would be very easy to design a QRD unit with standard FP inputs and outputs but working internally with HUB FP numbers. \subsection{HUB input converted}\label{sec-hic} Here, the input converter described in Section~\ref{sec-ic} is adapted to support HUB numbers, which simplifies it. 
\subsection{HUB input converter}\label{sec-hic} Here, the input converter described in Section~\ref{sec-ic} is adapted to support HUB numbers, which simplifies it. Fig.~\ref{FloatToFixHub_rec} illustrates the new input converter. First, a simple bit-wise inversion substitutes for the two's complement logic in the design of Fig.~\ref{FloatToFix_rec}, since the final addition is not required for HUB numbers. \begin{figure}[thb] \centering \includegraphics[width=0.70\textwidth]{FloatToFixHub_rec} \caption{FP to fixed-point HUB converter} \label{FloatToFixHub_rec} \end{figure} Second, the extension of the $m$-bit significands to reach the $n$ bits requires appending the ILSB first (which equals one) and then appending the $n-m-1$ zeros. However, the obtained $n$-bit number is also a HUB number with a new ILSB. That implies an implicit rounding-up operation, which may produce some bias in the conversion. To prevent this bias, the extension could be transformed so that the implicit rounding may be either up or down. That could be achieved by extending the number randomly by either '$1000\cdots$' or '$0111\cdots$'~\cite{Hormigo2016MeasuringRound-to-Nearest}. In the architecture of Fig.~\ref{FloatToFixHub_rec} the explicit LSB of the significand has been used as the random variable, so that the significand is extended with this LSB followed by as many bits set to the inverse of the LSB as required to reach the desired bit-width; a sketch of this extension is given below. In Section~\ref{sec:eval}, we compare this approach with the biased one (zero extension) in terms of precision and hardware cost. Third, the main problem of FP HUB formats is the fact that they cannot represent integer numbers exactly. In general, that is not a problem in real number computation, since integers appear with the same probability as any other real number. Nevertheless, in QR decomposition the identity matrix, which contains 0's and 1's, is introduced as an input if the computation of Q is required. The zeros are not a problem, since they are treated as a special number in any case, but when the 1's are managed as HUB numbers some error is introduced due to the ILSB. We have studied the introduction of specific logic to detect the 1's of the identity matrix. If this case is detected, the ILSB of the significand is not appended to the value in the conversion into fixed-point, so that $n-m$ zeros are appended to get an $n$-bit number. The 1's case is detected by checking that the input exponent is zero (its bits equal '$011\cdots1$' in an IEEE-like representation of the exponent) and the input significand is also zero. In any case, the logic for the latter is included to detect the zero value before appending the implicit leading one to the significand. Hence, only the exponent detection has to be added. In Section~\ref{sec:eval}, we show the pros and cons of using this additional logic. Finally, as in the input converter for conventional numbers, after obtaining the two expanded $n$-bit significands, the one corresponding to the lowest exponent may be right-shifted to align both input values. In the HUB approach, the obtained shifted value is effectively rounded to the nearest simply by truncation. In contrast to the conventional approach, no additional logic is required for that rounding.
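The unbiased extension just described admits a compact model (ours, with Python integers standing in for the registers):
\begin{verbatim}
def hub_extend(mant, m, n):
    """Extend an m-bit HUB significand to n bits, using the explicit LSB
    as the 'random' bit: '1000...' if the LSB is 1, '0111...' if it is 0,
    so the implicit rounding is up or down with roughly equal frequency."""
    pad = n - m - 1                       # bits after the first appended bit
    lsb = mant & 1
    ext = (1 << pad) if lsb else (1 << pad) - 1
    return (mant << (n - m)) | ext
\end{verbatim}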
\subsection{HUB Fixed-point Givens rotation unit} Although the HUB fixed-point CORDIC-based architecture for QRD was proposed in~\cite{ISCE15QRD}, here a more detailed description is provided. Since the X and Y coordinates are HUB numbers, on each microrotation the ILSBs of both input coordinates have to be considered before the addition/subtraction operation. The introduction of the ILSBs has two significant implications. First, the two's complement operation utilized for subtraction is simplified, since it does not require the addition of the value one. Hence, the input carry of the adder is available because it is not set to one for subtraction. Second, at first sight, the adder needs one more bit to operate on the ILSB appended to the HUB numbers. However, since only the $n$ MSBs of the addition/subtraction are delivered at the output, the $(n+1)$th sum bit is not required and only the carry bit has to be actually computed. Taking into account that the $(n+1)$th MSB of the non-shifted coordinate is always one (since it is the ILSB), this additional bit of the addition can be handled by connecting the input carry of the $n$-bit adders to the $(n+1)$th MSB of the shifted coordinate~\cite{ISCE15QRD}. In the first stage, this bit is one, since it corresponds to the ILSB. Fig.~\ref{HUBtrans} details the required transformation of each CORDIC stage for the implementation of the addition/subtraction operation. \begin{figure} [thb] \centering \includegraphics[width=0.75\textwidth]{HUBCORDICtrans} \caption{Transformation of the CORDIC add/sub circuit for HUB approach} \label{HUBtrans} \end{figure} Apart from the explicit changes to the circuits, using HUB numbers produces another important effect. Now, the shifted coordinates are not simply truncated but actually rounded before being operated on, which increases the precision. Specifically, it is shown in~\cite{ISCE15QRD} that the error in fixed-point QRD computation is halved by using the HUB approach. Thus, the HUB implementation could reduce the bit-width by one and keep the same precision as the conventional one. \subsection{HUB output converter} Fig.~\ref{FixToFloatHub_rec} shows the circuit of the output converter transformed to support HUB numbers. Similarly to the input converter, a simple bit-wise inversion substitutes for the two's complement logic of the previous design to compute the absolute value of the input coordinates. The normalization module is very similar to the conventional one, but the ILSB has to be explicitly appended to the number before left shifting it. However, this solution produces a slightly biased error due to the new ILSB corresponding to the final result. To prevent that bias, in the shifting process the number could be extended randomly with '$1000\cdots$' or '$0111\cdots$'. As in the HUB input converter, this could be implemented by extending the number with a first bit set to its LSB and the rest using the LSB negated. In the next section, we evaluate both cases, biased and unbiased extension. \begin{figure} [thb] \centering \includegraphics[width=0.75\textwidth]{FixToFloatHub_rec} \caption{Fixed-point to FP HUB converter} \label{FixToFloatHub_rec} \end{figure} The greatest difference comes after normalization, since the HUB implementation discards the $n-m-1$ LSBs instead of utilizing them to compute the rounding direction. Thus, the HUB output converter gets rid of the rounding logic, which mainly includes the sticky-bit computation logic and the adder (see Fig.~\ref{FixToFloat_rec}). Furthermore, the possibility of having a significand overflow is also eliminated and, as a consequence, so is the increment of the exponent. These eliminations, along with that of the two's complement computation, mean a very significant reduction in both area and delay.
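To close this section, the net numerical effect of the HUB microrotation (the shifted operands being rounded rather than truncated, and the result returning to an $n$-bit HUB register by truncation) can be modeled at the value level. This is our sketch of the arithmetic behavior, not of the gate-level carry-in trick of Fig.~\ref{HUBtrans}:
\begin{verbatim}
from fractions import Fraction
import math

F = 30  # explicit fractional bits of the internal significands (example)

def hub_val(bits):
    """Exact value of a HUB register: explicit bits plus the ILSB."""
    return Fraction(2 * bits + 1, 2 ** (F + 1))

def hub_trunc(value):
    """Back to an F-bit HUB register: truncation rounds to nearest."""
    return math.floor(value * 2 ** F)

def hub_microrotation(x, y, i, sigma):
    """One CORDIC stage on HUB coordinates; the operands keep their
    ILSBs, so the shifted values are effectively rounded, not truncated."""
    x_new = hub_trunc(hub_val(x) - sigma * hub_val(y) / 2 ** i)
    y_new = hub_trunc(hub_val(y) + sigma * hub_val(x) / 2 ** i)
    return x_new, y_new
\end{verbatim}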
\section{Results and comparison}\label{sec:eval} To analyze the correctness and effectiveness of the proposed architectures, two parametrized Givens rotation units have been implemented in VHDL, one for each approach (conventional and HUB numbers). These designs allow selecting: the bit-width of the floating-point (exponent and significand) and fixed-point formats, the number of CORDIC microrotations, either rounding or truncation for the conventional input converter, and either unbiased or biased extension and identity matrix detection for the HUB converters. The Xilinx ISE 14.3 design suite for FPGAs has been used to analyze different aspects of these architectures. \subsection{Error analysis}\label{sec:error} To perform the error analysis, we have used the Monte Carlo method. Our FP Givens rotators are utilized as building blocks to implement a QRD computation unit for 4x4 matrices following the pipeline architecture proposed in~\cite{TCAS15}. Although the proposed rotator supports any exponent and significand bit-width, only the IEEE single-precision FP format (32 bits) has been used in the analysis to simplify it. However, the exponent and the significand bit-width could be adjusted to fit the required dynamic range and relative precision, respectively. In each experiment, 10,000 4x4 matrices with FP values randomly generated in a range bounded by $\pm 2^{\pm r}$ ($r$ being a parameter representing the dynamic range of the input values) are used as inputs. The corresponding Q and R matrices obtained as results of the QRD operation are multiplied ($B=Q^t\times R$) using double-precision and compared with the original matrix. As the error measurement, we use the mean of the Signal-to-Noise Ratio, SNR$_{dB}=10\cdot \log_{10}\left( \sum_{i,j} a_{i,j}^2/\sum_{i,j} (a_{i,j}-b_{i,j})^2\right)$, where $A$ is the input matrix and $B$ is the matrix obtained. As a reference, the same experiments have been carried out using the Matlab function ``qr'' for single-precision.
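For reproducibility, the metric is summarized by the following sketch (ours; NumPy's QR stands in for the device under test, and the input generation is our reading of the range bound $\pm 2^{\pm r}$):
\begin{verbatim}
import numpy as np

def snr_db(A, B):
    """SNR_dB = 10*log10( sum a_ij^2 / sum (a_ij - b_ij)^2 )."""
    return 10.0 * np.log10((A**2).sum() / ((A - B)**2).sum())

rng = np.random.default_rng(0)
r = 10                                       # dynamic-range parameter
mag = 2.0 ** rng.uniform(-r, r, (4, 4))      # magnitudes in [2^-r, 2^r]
A = (rng.choice([-1.0, 1.0], (4, 4)) * mag).astype(np.float32)

Q, R = np.linalg.qr(A.astype(np.float64))    # stand-in for the tested unit
B = Q @ R                                    # double-precision reconstruction
print(snr_db(A.astype(np.float64), B))
\end{verbatim}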
\begin{figure} [thb] \centering \includegraphics[width=0.75\textwidth]{HUBvsIEEE} \caption{Precision of different Givens rotation units when varying $r$ (dynamic range of the input)} \label{fig-HUBvIEEE} \end{figure} As an example, Fig.~\ref{fig-HUBvIEEE} shows the results of the experiments with $r$ ranging between 1 and 20 for both approaches and several fixed-point bit-widths ($N=\{25, 27, 29\}$) with $(N-3)$ CORDIC microrotations. The results obtained using Matlab have been included. First, it is observed that the SNR changes only slightly with the dynamic-range parameter $r$. Hence, we will use the mean of the SNR over all tested values of $r$, since this captures most of the information. Moreover, as expected, the HUB approach performs better than the IEEE one in almost all cases. Secondly, to study the ideal number of CORDIC microrotations for each case, we have run all the experiments for $N$ ranging from 25 to 30 bits, using different numbers of CORDIC microrotations and $r$ from 1 to 20. In Fig.~\ref{fig-iter}, we represent the SNR obtained for each architecture combination. For the conventional approach, using $(N-3)$ microrotations achieves the maximum SNR, and using any more microrotations produces a decrease in precision. For $N= \{29, 30\}$, $(N-4)$ microrotations achieve almost the same results. Surprisingly, the HUB approach requires one more microrotation ($(N-2)$) to reach the peak precision and, in this case, using more microrotations improves the SNR only very slightly. It is also clearly observed that the internal fixed-point numbers of the HUB rotators require one bit less than their conventional counterparts to reach the same precision. For the HUB architecture, using as few as two guard bits ($N=26$) is enough to reach the same precision as Matlab. Furthermore, $N=29$ and $N=30$ reach the same precision, which is that of a single-precision FP number; hence it is not possible to go further. \begin{figure}[tb] \centering \subfloat[Conventional approach]{\includegraphics[width=0.75\textwidth]{IterSNR1}% \label{fig-iterIEEE}}\\ \subfloat[HUB approach]{\includegraphics[width=0.75\textwidth]{IterSNR3}% \label{fig-iterHUB}} \caption{Precision achieved when varying the number of CORDIC microrotations for different values of $N$ (internal significand bit-width)} \label{fig-iter} \end{figure} Finally, to analyze the effectiveness of some design proposals, we have run the experiments for different versions of the same architecture; specifically, for the IEEE approach, the input converter with truncation (IEEETrunc) and rounding (IEEERound); and for the HUB approach, the unbiased version of the converters with detection of the identity matrix ($I$) (HUBFull), only unbiased (HUBunbias) or only $I$ detection (HUBDetectI), and the basic architecture with biased converters and no detection (HUBBasic). Fig.~\ref{fig-mejoras} shows the different results obtained when varying $N$ (the SNR is the mean of the values obtained for $r$ between 1 and 20). For the IEEE versions, it is clear that using rounding in the input converter does not improve the results. On the contrary, $I$ detection enhances the precision of the HUB approaches by up to 4 dB, whereas unbiased conversion only has a significant impact when $I$ detection is not implemented. \begin{figure} [tb] \centering \includegraphics[width=0.75\textwidth]{MejorasSNR} \caption{Precision of different approaches when varying $N$ (internal significand bit-width)} \label{fig-mejoras} \end{figure} \subsection{Implementation results} Before presenting the implementation results, we have to note some details of the implemented circuits. First, the internal CORDIC pipeline appends two integer bits to the $N$-bit fixed-point coordinates to accommodate the growth of the value due to the scale factor~\cite{lang04}. Second, the scale factor compensation could be performed in the embedded multipliers, but it is not included in the implementation results of the Givens rotators since it is not always necessary. Finally, the converters have been pipelined to balance their delay with the CORDIC stages. Specifically, the input converter has two stages whereas the output converter has three. Both proposed approaches of the Givens rotation unit (IEEE and HUB versions) have been synthesized using Xilinx ISE 14.3 software for a wide range of configurations targeting a Virtex-6 XV6VLX240T-2 FPGA. Here, we summarize only the most relevant results obtained using this software tool. First, Table~\ref{tab:DIvsH}, Table~\ref{tab:AIvsH}, and Table~\ref{tab:PIvsH} allow us to compare the implementation results of both approaches for the most typical FP sizes. For a fair comparison, the internal fixed-point bit-width ($N$) has been selected so that both approaches achieve similar precision. Therefore, according to Section~\ref{sec:error}, the HUB version uses one bit less than the IEEE one and both have the same number of CORDIC stages (see Fig.~\ref{fig-iter}).
Furthermore, the IEEE version uses truncation in the input converters, and the HUB version uses identity matrix detection and unbiased extension. Although we show only some concrete values of $N$ (to cover a wider range), the relative values are similar for other sizes. The power and energy consumption per operation have been estimated using Xilinx XPS, assuming the units work at maximum speed. Along with the obtained results, the ratio between both approaches (HUB/IEEE) has been included to facilitate the comparison. Clearly, the HUB approach outperforms the IEEE one in delay, area, and energy consumption. Using practically the same number of registers, the HUB format reduces the number of LUTs used by between 7\% and 18\% and the critical path delay by between 24\% and 33\%. The IEEE versions require much less power due to the lower frequency, but they consume slightly more energy per operation than the HUB ones (between 3\% and 7\%). \begin{table}[thb] \caption{Critical path for Givens rotation units in Virtex-6} \label{tab:DIvsH} \centering \begin{tabular}{llllll} \hline\noalign{\smallskip} & \multicolumn{2}{l}{$N$} &\multicolumn{3}{l}{Delay (ns)}\\ FP & IEEE& HUB&IEEE& HUB&ratio\\ \noalign{\smallskip}\hline\noalign{\smallskip} Half &14& 13& 2.863& 2.18 & 0.76 \\ &16& 15& 3.134& 2.315& 0.74 \\ \noalign{\smallskip}\hline\noalign{\smallskip} Single &26& 25& 3.306& 2.337& 0.71 \\ &28& 27& 3.373& 2.458& 0.73 \\ &30& 29& 3.463& 2.678& 0.77 \\ \noalign{\smallskip}\hline\noalign{\smallskip} Double &55& 54& 4.355& 2.932& 0.67 \\ &57& 56& 4.65 & 2.865& 0.62 \\ &59& 58& 4.506& 2.999& 0.67 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{table}[thb] \caption{Area results for Givens rotation units in Virtex-6} \label{tab:AIvsH} \centering \begin{tabular}{lllllllll} \hline\noalign{\smallskip} & \multicolumn{2}{l}{$N$} &\multicolumn{3}{l}{Area(LUTs)}&\multicolumn{3}{l}{Area (Registers)}\\ FP & IEEE& HUB&IEEE& HUB&ratio&IEEE& HUB& ratio\\ \noalign{\smallskip}\hline\noalign{\smallskip} Half &14& 13& 839 & 689 & 0.82& 536 & 513 & 0.96\\ &16& 15& 1030& 825 & 0.80& 680 & 645 & 0.95\\ \noalign{\smallskip}\hline\noalign{\smallskip} Single &26& 25& 2365& 2057& 0.87& 1632& 1587& 0.97\\ &28& 27& 2631& 2300& 0.87& 1856& 1845& 0.99\\ &30& 29& 2957& 2550& 0.86& 2134& 2060& 0.97\\ \noalign{\smallskip}\hline\noalign{\smallskip} Double &55& 54& 8052& 7400& 0.92& 6484& 6461& 1.00\\ &57& 56& 8508& 7766& 0.91& 6960& 6853& 0.98\\ &59& 58& 9012& 8226& 0.91& 7426& 7313& 0.98\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{table}[thb] \caption{Power consumption for Givens rotation units in Virtex-6} \label{tab:PIvsH} \centering \begin{tabular}{lllllllll} \hline\noalign{\smallskip} & \multicolumn{2}{l}{$N$}&\multicolumn{3}{l}{Power (W)}&\multicolumn{3}{l}{Energy (pJ)}\\ \hline FP & IEEE& HUB&IEEE& HUB&ratio&IEEE& HUB& ratio\\ \noalign{\smallskip}\hline\noalign{\smallskip} Half &14& 13& 0.068& 0.085& 1.24& 195.1 & 184.5 & 0.95\\ &16& 15& 0.072& 0.091& 1.26& 225.1 & 209.7 & 0.93\\ \noalign{\smallskip}\hline\noalign{\smallskip} Single &26& 25& 0.131& 0.178& 1.36& 434.0 & 415.8 & 0.96\\ &28& 27& 0.142& 0.189& 1.33& 478.9 & 464.1 & 0.97\\ &30& 29& 0.154& 0.190& 1.23& 534.4 & 508.1 & 0.95\\ \noalign{\smallskip}\hline\noalign{\smallskip} Double &55& 54& 0.331& 0.481& 1.45& 1440.8& 1409.1& 0.98\\ &57& 56& 0.330& 0.518& 1.57& 1530.4& 1483.4& 0.97\\ &59& 58& 0.360& 0.525& 1.46& 1622.7& 1573.0& 0.97\\ \noalign{\smallskip}\hline \end{tabular} \end{table} Since there is a certain regularity in the implementation results,
Table~\ref{tab:vari} condenses the mean area cost of different variations of the architectures presented in Table~\ref{tab:AIvsH}. This table shows only the area increment, since delay variations are much lower and less predictable. The first and second columns present this relative increment when increasing by one the number of CORDIC microrotations and the fixed-point bit-width ($N$), respectively. Note that increasing $N$ also means increasing the number of microrotations to take full advantage of the higher precision. The third and fourth columns show the area increment in the HUB version when using the unbiased extension approach and identity matrix detection, respectively. Taking into account the precision improvement versus the area cost, identity matrix detection is worth implementing when the computation of $Q$ is required. However, the implementation of the unbiased extension seems less likely to be worthwhile.
\begin{table}[thb] \caption{Relative area cost when modifying the design parameters} \label{tab:vari} \centering \begin{tabular}{lllllll} \hline\noalign{\smallskip} &\multicolumn{2}{c}{microrotation}&\multicolumn{2}{c}{$N$}&Unbiased& I Detection\\ FP & IEEE& HUB&IEEE& HUB& HUB& HUB\\ \noalign{\smallskip}\hline\noalign{\smallskip} Half & 4.4\%& 5.3\%& 10.0\%& 12.8\% & 0.3\%& 1.0\% \\ Single & 3.1\%& 2.8\%& 5.3\%& 6.0\%& 2.0\%&0.3\% \\ Double & 1.4\%& 1.6\%& 3.1\%& 3.1\%& 0.2\%& 0.1\%\\ \noalign{\smallskip}\hline \end{tabular} \end{table}
\subsection{Comparison with fixed-point rotators}
The main reason to use a FP implementation instead of a fixed-point one is that the FP approach increases the dynamic range of the input values supported while keeping reasonable accuracy. To show that, we have performed an experiment similar to the one in Subsection~\ref{sec:error} for the fixed-point architecture described in~\cite{TCAS15} and the FP ones proposed in this paper, varying the dynamic-range parameter $r$ from 1 to 40 (i.e., the magnitude of the values in the input matrices ranges from $2^{-r}$ to $2^{r}$). For each value of $r$, 10,000 input 4x4 matrices are generated randomly using double-precision FP. These input matrices are scaled and/or rounded to fit the corresponding input format in each tested architecture. Fig.~\ref{fig-fixvsfp} shows the mean SNR of the results for both the fixed- and floating-point approaches. The fixed-point approach (FixP) has a 32-bit width, and the FP ones use single precision (32 bits for inputs and outputs) and $N=26$ bits (the internal precision of the significand) for both the IEEE and HUB approaches. The results using MatLab for single precision are included as a reference. Fig.~\ref{fig-fixvsfp40} presents the results for all experiments whereas Fig.~\ref{fig-fixvsfp10} shows a zoom-in for $r$ up to 10. It is observed that the fixed-point approach produces better results than the FP ones for low values of the dynamic-range parameter $r$ (see Fig.~\ref{fig-fixvsfp10}). This is because the number of effective bits used for computation in FixP is larger than in the FP ones. However, this advantage decreases rapidly when $r$ increases, the SNR of the FixP implementation falling below that of the FP-HUB one from $r=8$ onward. As shown in Fig.~\ref{fig-fixvsfp40}, the SNR of the FP approaches remains at reasonable values when $r$ increases, while the SNR of FixP decreases steadily until $r$ reaches 14, at which point it slumps.
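To make the evaluation procedure concrete, the following Python sketch reproduces the shape of this experiment. It is an illustrative stand-in rather than the bit-accurate CORDIC model used for the figures: the reduced-precision datapath is approximated by quantizing the operands, and the matrix size, random seed, and bit count are arbitrary choices.
\begin{verbatim}
import numpy as np

def quantize_fixed(x, frac_bits):
    # Crude stand-in for an N-bit fixed-point datapath.
    s = 2.0 ** frac_bits
    return np.round(x * s) / s

def snr_db(ref, approx):
    err = ref - approx
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))

rng = np.random.default_rng(0)
r, frac_bits, snrs = 8, 24, []
for _ in range(10000):
    # Random 4x4 matrix whose magnitudes span 2^-r .. 2^r.
    a = rng.choice([-1.0, 1.0], (4, 4)) * 2.0 ** rng.uniform(-r, r, (4, 4))
    c, s = a[0, 0], a[1, 0]            # Givens rotation zeroing a[1,0]
    g = np.array([[c, s], [-s, c]]) / np.hypot(c, s)
    ref = g @ a[:2]                    # double-precision reference
    approx = quantize_fixed(g, frac_bits) @ quantize_fixed(a[:2], frac_bits)
    snrs.append(snr_db(ref, approx))
print(f"mean SNR for r={r}: {np.mean(snrs):.1f} dB")
\end{verbatim}
Sweeping $r$ in this loop, and swapping the quantizer for one that rounds the significand instead, produces curves with the same qualitative shape as those in Fig.~\ref{fig-fixvsfp}.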
As a consequence of these precision results, although a FP implementation is typically more expensive than a fixed-point one, for some specific applications the use of FP may be compulsory due to the dynamic range of the input values.
\begin{figure}[thb] \centering \subfloat[]{\includegraphics[width=0.75\textwidth]{FixvsFP40}%
\label{fig-fixvsfp40}}\\ \subfloat[]{\includegraphics[width=0.75\textwidth]{FixvsFP10}%
\label{fig-fixvsfp10}} \caption{Precision of fixed- and floating-point approaches when varying $r$ (dynamic range of the input).} \label{fig-fixvsfp} \end{figure}
To compare the implementation cost, we use the same fixed-point rotator that we used in the error analysis shown in Fig.~\ref{fig-fixvsfp} and the best of the FP ones (the HUB version). Note that the 32-bit fixed-point rotator utilizes 27 CORDIC iterations, since that number of iterations offers the maximum precision for that bit-width, and that the FP-HUB rotator with $N=26$ utilizes 24 CORDIC iterations. The fixed-point implementation results do not include the circuit for scaling the input and output values that this implementation may require. Table~\ref{tab:fixvsFP} summarizes the obtained implementation results. As expected, the FP implementation requires more area than the fixed-point one, but this increase is slight, and the number of registers utilized even decreases. In contrast, the critical-path delay is lower for the FP rotator and, consequently, the throughput is increased by the same amount (since the number of cycles per rotation is the same for both approaches). However, we should point out that the latency of the FP rotator is slightly higher due to the input and output converters. Similarly, power consumption is significantly increased by the FP implementation, but that is mainly due to the increase in speed. If the frequency were set to the maximum frequency of the fixed-point approach, the power consumption of the FP approach would be 0.138 Watts, which is a very small increase compared to the fixed-point approach. On the other hand, even considering the 18\% speed increase, the increase in energy consumption is small.
\begin{table}[th] \caption{Fixed-point vs FP implementation results in Virtex-6} \label{tab:fixvsFP} \centering \begin{tabular}{llllll} \hline\noalign{\smallskip} Format & Delay& LUTs&Registers&Power&Energy\\ \noalign{\smallskip}\hline\noalign{\smallskip} FixP(32) & 3.26 ns & 1947 & 1914 & 0.132 W &430 pJ \\ FPHUB 32(26)& 2.66 ns& 2182 & 1785&0.168 W& 448 pJ\\ \noalign{\smallskip}\hline\noalign{\smallskip} FP/FixP (\%) &-18.4& 12.1& -6.7& 27.3& 4.2 \\ \noalign{\smallskip}\hline \end{tabular} \end{table}
Summarizing, if the target application has a low dynamic range, the fixed-point approach may provide the best precision and a lower hardware cost. However, for applications with a higher dynamic range, the FP architectures will provide much better precision with a slight increase of area and energy, but with a higher throughput.
\subsection{Comparison with previous FP implementations}
As we said before, to the best of our knowledge there is no previous hardware implementation of a specialized FP Givens rotation unit based on CORDIC. Therefore, we provide a comparison with the circuits most similar to our proposal that we have found. Taking this into account, we should note that we can only provide a rough comparison and evaluation.
Note also that, to provide comparable results, our designs have been re-synthesized using the same FPGA family as the ones in \cite{Munoz2010}, \cite{4637696}, and \cite{Wang20093}, specifically Virtex-5 (XC5VLX330T-2). For the other designs, we use the data provided by the authors in those papers. The CORDIC co-processors in~\cite{4637696} and~\cite{Munoz2010} allow performing the Givens rotation carried out by our rotator, although they are not optimized for this purpose. Table~\ref{tab:Pcomparison} and Table~\ref{tab:Acomparison} summarize the comparison between these FP double-precision CORDICs and our HUB rotator using the same precision and technology. The circuits to control the rotation operation or to store temporary values for the CORDIC processor approaches are not considered in these results. The initiation interval (i.e., the minimum number of cycles between two consecutive rotations) is expressed as a function of $e$, the number of elements in each row. The throughput at the maximum supported frequency is calculated, in turn, considering an example with 8 elements per row (the same size used in the error analysis) to facilitate comparison. This throughput is expressed in millions of Givens rotations per second.
\begin{table}[thb] \caption{Performance comparison among similar designs on Virtex-5} \label{tab:Pcomparison} \centering \begin{tabular}{lllll} \hline\noalign{\smallskip} Design&Max Freq & Latency& Initiation Interval & Throughput \\ & (MHz) & (cycles)& (cycles) & (MOp/s) \\ \noalign{\smallskip}\hline\noalign{\smallskip} FP CORDIC (\cite{Munoz2010})& 67.1&224 &$212+e\times224$& 0.033 (e=8) \\ FP CORDIC (\cite{4637696})&173.3&69x2&$69+e\times1$& 2.25 (e=8) \\ HUB FP rotator &255.8& 60&$e\times1$ & 31.97 (e=8) \\ \noalign{\smallskip}\hline\noalign{\smallskip} 7x7 FP QRD (\cite{Wang20093}) &132.0& 954& 364&0.36\\ Our 7x7 HUB FP QRD& 287.8 &296 &7 &41.11 \\ \noalign{\smallskip}\hline \end{tabular} \end{table}
\begin{table}[thb] \caption{Area comparison among similar designs on Virtex-5} \label{tab:Acomparison} \centering \begin{tabular}{lllllll} \hline\noalign{\smallskip} Design& Precision & LUTs&Registers&Slices&DSPs&BRAM\\ \noalign{\smallskip}\hline\noalign{\smallskip} FP CORDIC (\cite{Munoz2010})&Double& 11,718&600&- &0&0\\ FP CORDIC (\cite{4637696})&Double& 22,189&20,443&-&0&0\\ HUB FP rotator&Double &8,463&7,598&-&0&0\\ \noalign{\smallskip}\hline\noalign{\smallskip} 7x7 FP QRD (\cite{Wang20093})& Single &-&-&126,585&102&56\\ Our 7x7 HUB FP QRD& Single &- &- &50,547&52 &0\\ \noalign{\smallskip}\hline \end{tabular} \end{table}
Being a fully pipelined design, as expected, the main advantage of our rotator is its throughput. Considering 4x4 matrices ($e=8$), the throughput of our HUB rotator is 15 times higher than for~\cite{4637696} and three orders of magnitude higher than for~\cite{Munoz2010} (see Table~\ref{tab:Pcomparison}). This is because both generic CORDICs need to complete the angle calculation before starting to rotate the row elements. Hence, \cite{4637696} cannot take full advantage of its pipelined implementation and can produce at most one Givens rotation every $(69+e)$ cycles. In contrast, our design can perform a rotation every $e$ cycles. Consequently, even considering the same frequency, the difference is very significant for small matrices. For larger matrices, the relative difference would be reduced, but our design will always have a higher throughput than~\cite{4637696}.
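The throughput figures in Table~\ref{tab:Pcomparison} follow directly from the maximum frequency and the initiation interval. As a quick check, the following sketch merely reproduces the table's arithmetic (the cycle counts are those listed in the table):
\begin{verbatim}
# Throughput (MOp/s) = max frequency (MHz) / initiation interval (cycles)
def mops(freq_mhz, init_cycles):
    return freq_mhz / init_cycles

e = 8  # elements per row in the 4x4 example used above
print(mops(255.8, e))       # HUB FP rotator:      ~31.97 MOp/s
print(mops(173.3, 69 + e))  # FP CORDIC [4637696]: ~ 2.25 MOp/s
\end{verbatim}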
Furthermore, our rotator requires almost a third of the FPGA resources used by~\cite{4637696} (see Table~\ref{tab:Acomparison}) and its latency is less than half. If we consider~\cite{Munoz2010}, the difference is even more pronounced. Since \cite{Munoz2010} is a word-serial implementation, it uses more than 10 times fewer registers than our pipelined architecture, but, surprisingly, about 40\% more LUTs. Similarly, this word-serial nature means that the throughput of \cite{Munoz2010} is several orders of magnitude smaller than ours for small matrices, and this difference would increase dramatically if the size of the matrices increased. However, we must remember that the design in~\cite{4637696} can compute many other elementary functions, whereas our design only computes Givens rotations. On the other hand, Table~\ref{tab:Pcomparison} and Table~\ref{tab:Acomparison} also show a comparison with the QRD calculator presented in~\cite{Wang20093} (see Section~\ref{sec:review}). In~\cite{Wang20093}, the authors provide implementation results for a FP single-precision QRD calculator for 7x7 matrices. We have calculated the cost of implementing an equivalent QRD calculator using the HUB version of our Givens rotator for the same technology (Virtex-5), configured with the architecture proposed in~\cite{TCAS15}. Table~\ref{tab:Acomparison} shows that our design utilizes less than half of the resources utilized by the architecture proposed in~\cite{Wang20093}, even without counting the BRAMs. More importantly, Table~\ref{tab:Pcomparison} shows that, using the maximum frequency supported by each circuit, our design has six times less latency and computes 100 times more matrices per second. This fact reinforces the idea that the CORDIC approach is the best way of implementing Givens rotations in hardware.
\section{Conclusion}\label{sec:con}
In this paper, we propose a very effective hardware design of a Givens rotation unit for floating-point computation based on the CORDIC algorithm. As in previous FP CORDIC architecture proposals, the FP Givens rotator is based on a fixed-point one with input and output converters. We provide a detailed description of two different approaches, one for conventional FP formats and another for the new HUB formats. The error analysis and the FPGA implementation results reveal that the proposed FP units expand the dynamic range of the input values and require only a moderate increase in hardware utilization compared to the previous fixed-point one. Furthermore, the HUB approach significantly improves the area, delay, and energy consumption of the conventional one. Comparison with other FP units to compute the QRD shows that using the proposed design to compute the QRD improves the throughput by more than one order of magnitude and simultaneously reduces the area by more than half. The proposed units could be used to design both highly parallel QRD units and low-cost iterative ones.
\begin{acknowledgements} This work was supported in part by the following Spanish projects: TIN2016-80920-R, and JA2012 P12-TIC-1692. \end{acknowledgements}
\section{Introduction\label{Sec1}}
The consistent consideration of quantum processes in vacuum-violating backgrounds has to be carried out in the framework of nonperturbative calculations in quantum field theory, in particular, QED. Different analytical and numerical methods have been applied to study the effect of electron-positron pair creation from the vacuum; see the recent reviews \cite{Ruffini,Gelis}. Among these methods, there are ones based on the existence of exact solutions of the Dirac (or Klein-Gordon) equation in the corresponding background fields; e.g., see Refs. \cite{FGS,GavGit16}. They give us exactly solvable models for QED that are useful to consider the characteristic features of the theory and can be used to check approximations and numerical calculations. Recently, we presented a review of particle creation effects in time-dependent uniform external electric fields that contains the three most important exactly solvable cases: the Sauter-like electric field, the $T$-constant electric field, and exponentially growing and decaying electric fields \cite{AdoGavGit17}. These electric fields are switched on and off at the initial and the final time instants, respectively. We refer to such kinds of external fields as $t$-electric potential steps. Choosing parameters for the exponentially varying electric fields, one can consider both fields in the slowly varying regime and fields that exist only for a short time in the vicinity of the switching-on and -off times. The case of the $T$-constant electric field is distinct. In this case the electric field is constant within the time interval $T$ and is zero outside of it; that is, it is switched on and off ``abruptly'' at definite instants. The model with the $T$-constant electric field is important for studying particle creation effects; see Ref. \cite{AdoGavGit17} for a review. The details of switching on and off for the $T$-constant electric field are then of interest. To estimate the role of the switching-on and -off effects for pair creation due to the $T$-constant electric field, we consider a composite electric field that grows exponentially in the first interval $t\in \mathrm{I}=\left( -\infty ,t_{1}\right) $, remains constant in the second interval $t\in \mathrm{II}=\left[ t_{1},t_{2}\right] $, and decreases exponentially in the last interval $t\in \mathrm{III}=\left( t_{2},+\infty \right) $. We make essential use of the notation and final formulas from Ref. \cite{AdoGavGit17}. The article is organized as follows: In Sec. \ref{Sec2}, we introduce the composite field and summarize details concerning the exact solutions of the Dirac equation with such a field. We find exact formulas for the differential mean number of particles created from the vacuum, the total number of particles created from the vacuum, and the vacuum-to-vacuum transition probability. In Sec. \ref{Sec3} we consider general properties of the differential mean numbers of pairs created. We visualize how these mean numbers are distributed over the quantum numbers, especially in cases where the asymptotic approximations involved are not applicable. In Sec. \ref{Sec4} we compute differential and total quantities in some special field configurations of interest. We show that the results for slowly varying fields are completely predictable using a recently developed version of the locally constant field approximation. We study configurations that simulate finite switching-on and -off processes within and beyond the slowly varying regime. Final comments are placed in Sec. \ref{conclusions}.
\section{IN and OUT solutions in a composite electric field\label{Sec2}}
In this section we summarize general aspects of the exact solutions of the Dirac equation with the field under consideration and briefly discuss the calculation of the differential and total numbers of pairs created. The composite electric field in a $d=D+1$ dimensional Minkowski space-time is homogeneous, positively oriented along a single direction, $\mathbf{E}\left( t\right) =\left( E^{i}\left( t\right) =\delta _{1}^{i}E\left( t\right) \,,\ \ i=1,...,D\right) $, and is described by a vector potential along the same direction, $A^{\mu }=\left( A^{0}=0,\mathbf{A}\left( t\right) \right) $, $\mathbf{A}\left( t\right) =\left( A^{i}\left( t\right) =\delta _{1}^{i}A_{x}\left( t\right) \right) $, whose explicit forms are%
\begin{eqnarray}
&&E\left( t\right) =E\left\{
\begin{array}{ll}
e^{k_{1}\left( t-t_{1}\right) }\,, & t\in \mathrm{I}\,, \\
1\,, & t\in \mathrm{II}\,, \\
e^{-k_{2}\left( t-t_{2}\right) }\,, & t\in \mathrm{III}\,,%
\end{array}%
\right. \ \ \left( E,k_{1},k_{2}\right) >0\,,  \label{s2.0} \\
&&A_{x}\left( t\right) =E\left\{
\begin{array}{ll}
k_{1}^{-1}\left( -e^{k_{1}\left( t-t_{1}\right) }+1-k_{1}t_{1}\right) \,, & t\in \mathrm{I}\,, \\
-t\,, & t\in \mathrm{II}\,, \\
k_{2}^{-1}\left( e^{-k_{2}\left( t-t_{2}\right) }-1-k_{2}t_{2}\right) \,, & t\in \mathrm{III}\,,%
\end{array}%
\right.  \label{s2.1}
\end{eqnarray}%
where $t_{1}<0$ and $t_{2}>0$ are fixed time instants. Throughout the text, we refer to \textrm{I} as the switching-on interval, \textrm{III} as the switching-off interval, and \textrm{II} as the constant-field interval. This field configuration encompasses the $T$-constant field \cite{GavGit96}, characterized by the absence of exponential parts, and the peak field \cite{AdoGavGit16}. The Dirac equation\footnote{The subscript \textquotedblleft $\perp $\textquotedblright\ denotes spatial components perpendicular to the electric field (e.g., $\mathbf{x}_{\perp }=\left\{ x^{2},...,x^{D}\right\} $) and $\psi (x)$ is a $2^{[d/2]}$-component spinor ($[d/2]$ stands for the integer part of the ratio $d/2$). As usual, $m$ denotes the electron mass, $\gamma ^{\mu }$ are the $\gamma $-matrices in $d$ dimensions, and $U\left( t\right) $ denotes the potential energy of a particle with algebraic charge $q$. We select the electron as the main particle, $q=-e$, with $e$ representing the absolute value of the electron charge. Hereafter we use the relativistic system of units ($\hslash =c=1$), except when indicated otherwise.}%
\begin{eqnarray}
&&i\partial _{t}\psi \left( x\right) =H\left( t\right) \psi \left( x\right) \,,\ \ H\left( t\right) =\gamma ^{0}\left( \boldsymbol{\gamma }\mathbf{P}+m\right) \,,  \notag \\
&&\,P_{x}=-i\partial _{x}-U\left( t\right) ,\ \ \mathbf{P}_{\bot }=-i\boldsymbol{\nabla }_{\perp },\ \ U\left( t\right) =qA_{x}\left( t\right) \,,  \label{s3}
\end{eqnarray}%
can be solved exactly in each one of the intervals above. Since the corresponding exact solutions are known (see, e.g., the review \cite{AdoGavGit17}), we present only a few details on how to obtain such solutions.
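As a quick numerical consistency check (an illustrative Python sketch, not part of the derivation; the parameter values are arbitrary), the potential (\ref{s2.1}) can be evaluated to verify that it is continuous at $t_{1}$, $t_{2}$ and reproduces the field (\ref{s2.0}) through $E\left( t\right) =-\dot{A}_{x}\left( t\right) $:
\begin{verbatim}
import numpy as np

def E_field(t, E0, k1, k2, t1, t2):
    # Composite field of Eq. (s2.0).
    if t < t1:  return E0 * np.exp(k1 * (t - t1))
    if t <= t2: return E0
    return E0 * np.exp(-k2 * (t - t2))

def A_x(t, E0, k1, k2, t1, t2):
    # Potential of Eq. (s2.1); continuous at t1 and t2 by construction.
    if t < t1:  return (E0 / k1) * (-np.exp(k1 * (t - t1)) + 1 - k1 * t1)
    if t <= t2: return -E0 * t
    return (E0 / k2) * (np.exp(-k2 * (t - t2)) - 1 - k2 * t2)

p = dict(E0=1.0, k1=0.5, k2=0.3, t1=-2.5, t2=2.5)
for t in (p["t1"], p["t2"]):    # continuity at the matching instants
    print(A_x(t - 1e-9, **p), A_x(t + 1e-9, **p))
for t in (-4.0, 0.0, 4.0):      # E = -dA_x/dt on each interval
    dA = (A_x(t + 1e-6, **p) - A_x(t - 1e-6, **p)) / 2e-6
    print(E_field(t, **p), -dA)
\end{verbatim}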
Firstly, we represent the Dirac spinors $\psi _{n}\left( x\right) $ in terms of new time-dependent spinors $\phi _{n}(t)$ as%
\begin{eqnarray}
&&\psi _{n}\left( x\right) =\exp \left( i\mathbf{pr}\right) \psi _{n}\left( t\right) \,,\ \ n=(\mathbf{p},\sigma )\,,  \notag \\
&&\psi _{n}\left( t\right) =\left\{ \gamma ^{0}i\partial _{t}-\gamma ^{1}\left[ p_{x}-U\left( t\right) \right] -\boldsymbol{\gamma }_{\perp }\mathbf{p}_{\perp }+m\right\} \phi _{n}(t)\,,  \label{2.10}
\end{eqnarray}%
and separate the spinning degrees of freedom by the substitution $\phi _{n}(t)=\varphi _{n}\left( t\right) v_{\chi ,\sigma }$, in which $v_{\chi ,\sigma }$ and $\varphi _{n}\left( t\right) $ denote a set of constant orthonormalized spinors and scalar functions, respectively. The constant spinors satisfy
\begin{equation}
\gamma ^{0}\gamma ^{1}v_{\chi ,\sigma }=\chi v_{\chi ,\sigma }\,,\ \ v_{\chi ,\sigma }^{\dag }v_{\chi ^{\prime },\sigma ^{\prime }}=\delta _{\chi ,\chi ^{\prime }}\delta _{\sigma ,\sigma ^{\prime }}\,,  \label{e2a}
\end{equation}%
where $\chi =\pm 1$ are the eigenvalues of $\gamma ^{0}\gamma ^{1}$ and $\sigma =(\sigma _{1},\sigma _{2},\dots ,\sigma _{\lbrack d/2]-1})$ represents a set of additional eigenvalues, corresponding to spin operators compatible with $\gamma ^{0}\gamma ^{1}$. The constant spinors are subject to additional conditions depending on the space-time dimension, whose details can be found in Ref. \cite{AdoGavGit17}. After these substitutions, the Dirac spinor can be obtained through the solutions of the second-order ordinary differential equation\footnote{For scalar particles, the exact solutions of the Klein-Gordon equation $\phi _{n}\left( x\right) $ are connected with the scalar functions as $\phi _{n}\left( x\right) =\exp \left( i\mathbf{pr}\right) \varphi _{n}\left( t\right) $. Since spinning degrees of freedom are absent in this case, $n=\mathbf{p}$ and $\chi =0$ in Eq. (\ref{s2}) as well as in all subsequent formulas.}%
\begin{equation}
\left\{ \frac{d^{2}}{dt^{2}}+\left[ p_{x}-U\left( t\right) \right] ^{2}+\pi _{\perp }^{2}-i\chi \dot{U}\left( t\right) \right\} \varphi _{n}\left( t\right) =0\,,\ \ \pi _{\perp }=\sqrt{\mathbf{p}_{\perp }^{2}+m^{2}}\,.  \label{s2}
\end{equation}%
In the switching-on \textrm{I} and switching-off \textrm{III} intervals, the solutions are expressed in terms of Confluent Hypergeometric Functions (CHFs.),%
\begin{align}
& \varphi _{n}^{j}\left( t\right) =b_{2}^{j}y_{1}^{j}\left( \eta _{j}\right) +b_{1}^{j}y_{2}^{j}\left( \eta _{j}\right) \,,  \notag \\
& y_{1}^{j}\left( \eta _{j}\right) =e^{-\eta _{j}/2}\eta _{j}^{\nu _{j}}\Phi \left( a_{j},c_{j};\eta _{j}\right) \,,  \notag \\
& y_{2}^{j}\left( \eta _{j}\right) =e^{\eta _{j}/2}\eta _{j}^{-\nu _{j}}\Phi \left( 1-a_{j},2-c_{j};-\eta _{j}\right) \,,  \label{i.3.3}
\end{align}%
while in the constant interval \textrm{II}, the solutions are expressed in terms of Weber Parabolic Cylinder Functions (WPCFs.),%
\begin{eqnarray}
\varphi _{n}\left( z\right) &=&b^{+}u_{+}\left( z\right) +b^{-}u_{-}\left( z\right) \,,  \notag \\
u_{+}\left( z\right) &=&D_{\beta +\left( \chi -1\right) /2}\left( z\right) \,,\ \ u_{-}\left( z\right) =D_{-\beta -\left( \chi +1\right) /2}\left( iz\right) \,.
\label{ii.5}
\end{eqnarray}%
In these equations, $a_{j}$, $c_{j}$, $\nu _{j}$ and $\beta $ are the parameters%
\begin{eqnarray}
&&a_{1}=\frac{1}{2}\left( 1+\chi \right) +i\Xi _{1}^{-}\,,\ \ a_{2}=\frac{1}{2}\left( 1+\chi \right) +i\Xi _{2}^{+}\,,  \notag \\
&&\Xi _{j}^{\pm }=\frac{\omega _{j}\pm \Pi _{j}}{k_{j}}\,,\ \ c_{j}=1+2\nu _{j}\,,\ \ \nu _{j}=\frac{i\omega _{j}}{k_{j}}\,,\ \ \beta =\frac{i\lambda }{2}\,,  \notag \\
&&\omega _{j}=\sqrt{\Pi _{j}^{2}+\pi _{\perp }^{2}}\,,\ \ \Pi _{j}=p_{x}-\frac{eE}{k_{j}}\left[ \left( -1\right) ^{j}+k_{j}t_{j}\right] \,,\ \ \lambda =\frac{\pi _{\perp }^{2}}{eE}\,,  \label{i.3}
\end{eqnarray}%
$z$ and $\eta _{j}$ are the time-dependent functions%
\begin{eqnarray}
&&\eta _{1}\left( t\right) =ih_{1}e^{k_{1}\left( t-t_{1}\right) }\,,\ \ \eta _{2}\left( t\right) =ih_{2}e^{-k_{2}\left( t-t_{2}\right) }\,,\ \ h_{j}=\frac{2eE}{k_{j}^{2}}\,,  \label{i.0} \\
&&z\left( t\right) =\left( 1-i\right) \xi \left( t\right) \,,\ \ \xi \left( t\right) =\frac{eEt-p_{x}}{\sqrt{eE}}\,,  \label{ii.3}
\end{eqnarray}%
and $b_{1,2}^{j}$, $b^{\pm }$ are constants fixed by initial conditions. In addition, the index $j$ in Eqs. (\ref{i.3.3}), (\ref{i.3}) and (\ref{i.0}) distinguishes quantities associated with the switching-on $\left( j=1\right) $ interval from those associated with the switching-off $\left( j=2\right) $ one. By virtue of the asymptotic properties of the CHFs. at $t\rightarrow \pm \infty $, the solutions given by Eq. (\ref{i.3.3}) can be classified as particle/antiparticle states,%
\begin{eqnarray}
\ _{+}\varphi _{n}\left( t\right) &=&\ _{+}\mathcal{N}\exp \left( i\pi \nu _{1}/2\right) y_{2}^{1}\left( \eta _{1}\right) \,,\,\ _{-}\varphi _{n}\left( t\right) =\ _{-}\mathcal{N}\exp \left( -i\pi \nu _{1}/2\right) y_{1}^{1}\left( \eta _{1}\right) \,,\ \ t\in \mathrm{I}\,,  \notag \\
\ ^{+}\varphi _{n}\left( t\right) &=&\ ^{+}\mathcal{N}\exp \left( -i\pi \nu _{2}/2\right) y_{1}^{2}\left( \eta _{2}\right) \,,\,\ ^{-}\varphi _{n}\left( t\right) =\ ^{-}\mathcal{N}\exp \left( i\pi \nu _{2}/2\right) y_{2}^{2}\left( \eta _{2}\right) \,,\ \ t\in \mathrm{III}\,,  \label{i.4.1}
\end{eqnarray}%
since, at the infinitely remote past $t\rightarrow -\infty $ and future $t\rightarrow +\infty $, the set above behaves as plane waves,%
\begin{equation}
\ _{\zeta }\varphi _{n}\left( t\right) =\ _{\zeta }\mathcal{N}e^{-i\zeta \omega _{1}t}\,,\ \ t\rightarrow -\infty \,,\ \ ^{\zeta }\varphi _{n}\left( t\right) =\ ^{\zeta }\mathcal{N}e^{-i\zeta \omega _{2}t}\,,\ \ t\rightarrow +\infty \,,  \label{i.4.0}
\end{equation}%
where $\omega _{1}$ denotes the energy of initial particles at $t\rightarrow -\infty $, $\omega _{2}$ denotes the energy of final particles at $t\rightarrow +\infty $, and $\zeta $ labels electron $\left( \zeta =+\right) $ and positron $\left( \zeta =-\right) $ states. With the help of such solutions, one may construct IN $\left\{ \ _{\zeta }\psi \left( x\right) \right\} $ and OUT $\left\{ \ ^{\zeta }\psi \left( x\right) \right\} $ sets of Dirac spinors. The normalization constants $\;_{\zeta }\mathcal{N}=\ _{\zeta }CV_{\left( d-1\right) }^{-1/2}$ and $\;^{\zeta }\mathcal{N}=\ ^{\zeta }CV_{\left( d-1\right) }^{-1/2}$ are calculated with respect to the usual inner product for Fermions and Bosons, where $\ _{\zeta }C$ and $\ ^{\zeta }C$ are given by%
\begin{equation}
\ _{\zeta }C=\left\{
\begin{array}{ll}
\left( 2\omega _{1}q_{1}^{\zeta }\right) ^{-1/2}\,, & \mathrm{Fermi\,,} \\
\left( 2\omega _{1}\right) ^{-1/2}\,, & \mathrm{Bose\,,}%
\end{array}%
\right.
\,,\ ^{\zeta }C=\left\{
\begin{array}{ll}
\left( 2\omega _{2}q_{2}^{\zeta }\right) ^{-1/2}\,, & \mathrm{Fermi\,,} \\
\left( 2\omega _{2}\right) ^{-1/2}\,, & \mathrm{Bose\,,}%
\end{array}%
\right. \,,\ q_{j}^{\zeta }=\omega _{j}-\chi \zeta \Pi _{j}\,.  \label{i.4.2}
\end{equation}%
For further details see, e.g., Ref. \cite{AdoGavGit17}. With the exact solutions discussed above, one can write complete sets of solutions for the whole time interval $t\in \left( -\infty ,+\infty \right) $. Using the classification (\ref{i.4.1}) and the solutions given by Eq. (\ref{ii.5}), Dirac spinors (\ref{2.10}) (or Klein-Gordon solutions) for the whole time duration can be calculated from the following set of solutions,%
\begin{eqnarray}
\ ^{+}\varphi _{n}\left( t\right) &=&\left\{
\begin{array}{ll}
\kappa g\left( _{-}|^{+}\right) \ _{-}\varphi _{n}\left( t\right) +g\left( _{+}|^{+}\right) \ _{+}\varphi _{n}\left( t\right) \,, & t\in \mathrm{I}\,, \\
b_{1}^{+}u_{+}\left( t\right) +b_{1}^{-}u_{-}\left( t\right) \,, & t\in \mathrm{II}\,, \\
\ ^{+}\mathcal{N}\exp \left( -i\pi \nu _{2}/2\right) y_{1}^{2}\left( \eta _{2}\right) \,, & t\in \mathrm{III}\,,%
\end{array}%
\right.  \label{v1} \\
\ _{-}\varphi _{n}\left( t\right) &=&\left\{
\begin{array}{ll}
\;_{-}\mathcal{N}\exp \left( -i\pi \nu _{1}/2\right) y_{1}^{1}\left( \eta _{1}\right) \,, & t\in \mathrm{I}\,, \\
b_{2}^{+}u_{+}\left( t\right) +b_{2}^{-}u_{-}\left( t\right) \,, & t\in \mathrm{II}\,, \\
g\left( ^{+}|_{-}\right) \ ^{+}\varphi _{n}\left( t\right) +\kappa g\left( ^{-}|_{-}\right) \ ^{-}\varphi _{n}\left( t\right) \,, & t\in \mathrm{III}\,,%
\end{array}%
\right.  \label{v4}
\end{eqnarray}%
where $b_{1,2}^{\pm }$, $g\left( _{\pm }|^{+}\right) $, and $g\left( ^{\pm }|_{-}\right) $ are some coefficients, $g\left( ^{\zeta ^{\prime }}|_{\zeta }\right) =g\left( _{\zeta ^{\prime }}|^{\zeta }\right) ^{\ast }$. Here $\kappa $ is an auxiliary constant that allows us to present solutions of the Klein-Gordon $\left( \kappa =-1\right) $ or Dirac $\left( \kappa =+1\right) $ equations. For the solutions of the Dirac equation, the $g$-coefficients satisfy the unitarity relations%
\begin{equation}
\sum_{\varkappa }g\left( ^{\zeta }|_{\varkappa }\right) g\left( _{\varkappa }|^{\zeta ^{\prime }}\right) =\sum_{\varkappa }g\left( _{\zeta }|^{\varkappa }\right) g\left( ^{\varkappa }|_{\zeta ^{\prime }}\right) =\delta _{\zeta ,\zeta ^{\prime }}\,,  \label{v4.2}
\end{equation}%
while for the solutions of the Klein-Gordon equation, the $g$-coefficients satisfy the unitarity relations%
\begin{equation}
\sum_{\varkappa }\varkappa g\left( ^{\zeta }|_{\varkappa }\right) g\left( _{\varkappa }|^{\zeta ^{\prime }}\right) =\sum_{\varkappa }\varkappa g\left( _{\zeta }|^{\varkappa }\right) g\left( ^{\varkappa }|_{\zeta ^{\prime }}\right) =\zeta \delta _{\zeta ,\zeta ^{\prime }}\,.  \label{v4.4}
\end{equation}%
To obtain the $g$-coefficients, we conveniently consider continuity conditions at the instants $t_{1}$, $t_{2}$,%
\begin{equation*}
\ _{-}^{+}\varphi _{n}\left( t_{1,2}-0\right) =\ _{-}^{+}\varphi _{n}\left( t_{1,2}+0\right) \,,\ \ \partial _{t}\ _{-}^{+}\varphi _{n}\left( t_{1,2}-0\right) =\partial _{t}\ _{-}^{+}\varphi _{n}\left( t_{1,2}+0\right) \,,
\end{equation*}%
substitute the appropriate normalization constants for each case, given by Eqs. (\ref{i.4.2}), and use Wronskian determinants for the CHFs. and WPCFs.
After these manipulations, one can readily verify that $g\left( _{-}|^{+}\right) $ and $g\left( ^{+}|_{-}\right) $ for the Dirac case read%
\begin{eqnarray}
g\left( _{-}|^{+}\right) &=&\sqrt{\frac{q_{1}^{-}}{8eE\omega _{1}q_{2}^{+}\omega _{2}}}\exp \left[ \frac{i\pi }{2}\left( \nu _{1}-\nu _{2}+\beta +\frac{\chi }{2}\right) \right] \left[ f_{1}^{-}\left( t_{2}\right) f_{2}^{+}\left( t_{1}\right) -f_{1}^{+}\left( t_{2}\right) f_{2}^{-}\left( t_{1}\right) \right] \,,  \notag \\
g\left( ^{+}|_{-}\right) &=&\sqrt{\frac{q_{2}^{+}}{8eE\omega _{2}q_{1}^{-}\omega _{1}}}\exp \left[ \frac{i\pi }{2}\left( \nu _{2}-\nu _{1}+\beta +\frac{\chi }{2}\right) \right] \left[ f_{1}^{+}\left( t_{1}\right) f_{2}^{-}\left( t_{2}\right) -f_{1}^{-}\left( t_{1}\right) f_{2}^{+}\left( t_{2}\right) \right] \,,  \notag \\
f_{k}^{\pm }\left( t_{j}\right) &=&\left. \left[ (-1)^{j}k_{j}\eta _{j}\frac{dy_{k}^{j}\left( \eta _{j}\right) }{d\eta _{j}}+y_{k}^{j}\left( \eta _{j}\right) \partial _{t}\right] u_{\pm }\left( z\right) \right\vert _{t=t_{j}}\,,  \label{r4}
\end{eqnarray}%
while for the Klein-Gordon case they have the form%
\begin{eqnarray}
g\left( _{-}|^{+}\right) &=&-\frac{1}{\sqrt{8eE\omega _{1}\omega _{2}}}\exp \left[ \frac{i\pi }{2}\left( \nu _{1}-\nu _{2}+\beta \right) \right] \left. \left[ f_{1}^{-}\left( t_{2}\right) f_{2}^{+}\left( t_{1}\right) -f_{1}^{+}\left( t_{2}\right) f_{2}^{-}\left( t_{1}\right) \right] \right\vert _{\chi =0}\,,  \notag \\
g\left( ^{+}|_{-}\right) &=&\frac{1}{\sqrt{8eE\omega _{1}\omega _{2}}}\exp \left[ \frac{i\pi }{2}\left( \nu _{2}-\nu _{1}+\beta \right) \right] \left. \left[ f_{1}^{+}\left( t_{1}\right) f_{2}^{-}\left( t_{2}\right) -f_{1}^{-}\left( t_{1}\right) f_{2}^{+}\left( t_{2}\right) \right] \right\vert _{\chi =0}\,.  \label{r7}
\end{eqnarray}%
Taking into account that the $g$-coefficients establish the Bogoliubov transformations, one may compute fundamental quantities concerning the vacuum instability for Fermions (the Dirac case) and Bosons (the Klein-Gordon case), for example, the differential mean number of pairs created from the vacuum $N_{n}^{\mathrm{cr}}$, the total number $N^{\mathrm{cr}}$, and the vacuum-to-vacuum transition probability $P_{v}$,%
\begin{eqnarray}
&&N_{n}^{\mathrm{cr}}=\left\vert g\left( _{-}|^{+}\right) \right\vert ^{2}\,,\ \ N^{\mathrm{cr}}=\sum_{n}N_{n}^{\mathrm{cr}}\,,  \notag \\
&&P_{v}=\exp \left[ \kappa \sum_{n}\ln \left( 1-\kappa N_{n}^{\mathrm{cr}}\right) \right] \,.  \label{NP}
\end{eqnarray}
\section{General properties of the differential mean numbers of pairs created \label{Sec3}}
The $g$-coefficients (\ref{r4}) and (\ref{r7}) enjoy certain properties under time/momentum reversal that result in symmetries of the differential quantities. More precisely, the simultaneous change%
\begin{equation}
k_{1}\leftrightarrows k_{2}\,,\ \ t_{1}\leftrightarrows -t_{2}\,,\ \ p_{x}\leftrightarrows -p_{x}\,,  \label{sym1}
\end{equation}%
leads to a number of identities, for instance, $\Pi _{1}\leftrightarrows -\Pi _{2}$, $\omega _{1}\leftrightarrows \omega _{2}$, $a_{1}\leftrightarrows a_{2}$, $c_{1}\leftrightarrows c_{2}$, so that $g\left( _{-}|^{+}\right) $ and $g\left( ^{+}|_{-}\right) $ are related by%
\begin{equation}
g\left( _{-}|^{+}\right) \leftrightarrows \kappa g\left( ^{+}|_{-}\right) \,,  \label{sym2}
\end{equation}%
implying, in particular, that $N_{n}^{\mathrm{cr}}$ (and therefore the total quantities) is even with respect to the exchanges (\ref{sym1}).
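As an aside, for the scalar case the mean numbers (\ref{NP}) can also be obtained without special functions, by integrating Eq. (\ref{s2}) numerically from an in-state plane wave (\ref{i.4.0}) and reading off the Bogoliubov coefficients from the out-asymptotics. The following Python sketch does this for the Klein-Gordon case with $\chi =0$; the parameter values and the integration window are illustrative choices, not those used for the figures below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

e, E0, m = 1.0, 1.0, 1.0             # hbar = c = m = 1, E = E_c
k1 = k2 = 1.0; t1, t2 = -2.5, 2.5    # symmetric plateau, mT = 5
px, pperp = 0.0, 0.0

def A_x(t):                          # potential of Eq. (s2.1)
    if t < t1:  return (E0/k1)*(-np.exp(k1*(t - t1)) + 1 - k1*t1)
    if t <= t2: return -E0*t
    return (E0/k2)*(np.exp(-k2*(t - t2)) - 1 - k2*t2)

def omega(t):                        # sqrt([p_x - U(t)]^2 + pi_perp^2)
    return np.sqrt((px + e*A_x(t))**2 + pperp**2 + m**2)

def rhs(t, y):                       # Eq. (s2) with chi = 0, re/im split
    phi, dphi = y[0] + 1j*y[1], y[2] + 1j*y[3]
    dd = -omega(t)**2 * phi
    return [dphi.real, dphi.imag, dd.real, dd.imag]

t0, tf = t1 - 30.0, t2 + 30.0
w1 = omega(t0)
phi0 = np.exp(-1j*w1*t0)/np.sqrt(2*w1)   # in-particle state, Eq. (i.4.0)
y0 = [phi0.real, phi0.imag, (-1j*w1*phi0).real, (-1j*w1*phi0).imag]
s = solve_ivp(rhs, (t0, tf), y0, rtol=1e-10, atol=1e-12)

phi  = s.y[0, -1] + 1j*s.y[1, -1]
dphi = s.y[2, -1] + 1j*s.y[3, -1]
w2 = omega(tf)                           # project onto exp(+i w2 t)
beta = np.sqrt(2*w2)*(phi - 1j*dphi/w2)*np.exp(-1j*w2*tf)/2
print("N_n^cr =", abs(beta)**2)          # |g(_-|^+)|^2 for Bosons
\end{verbatim}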
The coefficients (\ref{r4}) and (\ref{r7}) are also even with respect to $\mathbf{p}_{\perp }$, so that all quantities in Eq. (\ref{NP}) are symmetric with respect to the momenta $\mathbf{p}$ (for Fermions, these quantities do not depend on the spin polarization either). Such properties are helpful in computing asymptotic estimates in several regimes, some of which are discussed in the next section. Aside from these properties, it is useful to visualize how the differential mean numbers $N_{n}^{\mathrm{cr}}$ are distributed over the quantum numbers (for instance $p_{x}$), in order to outline some preliminary remarks concerning pair creation, especially in cases where asymptotic approximations of the WPCFs. and CHFs. involved in the $g$-coefficients are not applicable\footnote{For example, when the arguments $z_{j}$ of the WPCFs. or $\eta _{j}$ of the CHFs. are finite quantities, or when the parameters $a_{j}$, $c_{j}$ are finite.}. To this end, we present below some plots of the mean number of particles created from the vacuum $N_{n}^{\mathrm{cr}}$ (\ref{NP}) as a function of $p_{x}$ for different values of $k_{1}$, $k_{2}$ and $T$ (Figs. \ref{Fig1a}, \ref{Fig1b} for Fermions and Figs. \ref{Fig2a}, \ref{Fig2b} for Bosons) for a fixed amplitude $E$ of the composite field. For the sake of simplicity, we set $\mathbf{p}_{\perp }=0$ and select a convenient system of units in which, besides $\hslash =c=1$, the electron mass is also set equal to unity, $m=1$. In this system, the Compton wavelength corresponds to one unit of length, ${\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda} _{e}=\hslash /mc=1$ ($\approx 3.8616\times 10^{-13}\,\mathrm{m}$), the Compton time corresponds to one unit of time, ${\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda} _{e}/c=1$ ($\approx 1.3\times 10^{-21}\,\mathrm{s}$), and the electron rest energy corresponds to one unit of energy, $mc^{2}=1$ ($\approx 0.511\ $\textrm{MeV}). In all plots below, the longitudinal momentum $p_{x}$, the time duration $T$, and the parameters $k_{j}$ are given relative to the electron mass $m$, corresponding to the dimensionless quantities $p_{x}/m$, $mT$ and $k_{j}/m$, respectively.
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig1F.pdf}
\end{center}
\caption{(color online) Differential mean number of Fermions created from the vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by a symmetrical composite field, with $k_{1}=k_{2}=k$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$ fixed. Graph (A) shows distributions with $k/m=1$ fixed, while Graph (B) shows distributions with $mT=5$ fixed. In (A), the solid lines labeled with $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $mT=5$, $mT=10$ and $mT=50$, respectively. In (B), $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k/m=0.1$, $k/m=0.05$ and $k/m=0.01$, respectively. The horizontal dashed line corresponds to the uniform distribution $e^{-\protect\pi \protect\lambda }$ which, in this system of units and for $\mathbf{p}_{\perp }=0$, is $e^{-\protect\pi }$.}
\label{Fig1a}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig2F.pdf}
\end{center}
\caption{(color online) Differential mean number of Fermions created from the vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by asymmetrical composite fields, with $k_{1}\neq k_{2}$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$ fixed. In both graphs $mT$ is fixed: Graph (C) shows $mT=10$ and $k_{1}/m=0.5$, while Graph (D) shows $mT=5$.
In (C), the solid lines labeled with $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k_2/m=1$, $k_2/m=5$ and $k_2/m=10$, respectively. In (D), $(\mathrm{i})$ denotes $k_1/m=0.5,\,k_2/m=0.3$, $(\mathrm{ii})$ denotes $k_1/m=0.1,\,k_2/m=0.07$ and $(\mathrm{iii})$ denotes $k_1/m=0.01,\,k_2/m=0.008$. The horizontal dashed line corresponds to the uniform distribution $e^{-\protect\pi \protect\lambda }$ which, in this system of units and for $\mathbf{p}_{\perp }=0$, is $e^{-\protect\pi }$.}
\label{Fig1b}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig3F.pdf}
\end{center}
\caption{(color online) Differential mean number of Bosons created from the vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by a symmetrical composite field, with $k_{1}=k_{2}=k$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$ fixed. Graph (A) shows distributions with $k/m=1$ fixed, while Graph (B) shows distributions with $mT=5$ fixed. In (A), the solid lines labeled with $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $mT=5$, $mT=10$ and $mT=50$, respectively. In (B), $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k/m=0.1$, $k/m=0.05$ and $k/m=0.01$, respectively. The horizontal dashed line corresponds to the uniform distribution $e^{-\protect\pi \protect\lambda }$ which, in this system of units and for $\mathbf{p}_{\perp }=0$, is $e^{-\protect\pi }$.}
\label{Fig2a}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig4F.pdf}
\end{center}
\caption{(color online) Differential mean number of Bosons created from the vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by asymmetrical composite fields, with $k_{1}\neq k_{2}$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$ fixed. In both graphs $mT$ is fixed: Graph (C) shows $mT=10$ and $k_{1}/m=0.5$, while Graph (D) shows $mT=5$. In (C), the solid lines labeled with $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k_2/m=1$, $k_2/m=5$ and $k_2/m=10$, respectively. In (D), $(\mathrm{i})$ denotes $k_1/m=0.5,\,k_2/m=0.3$, $(\mathrm{ii})$ denotes $k_1/m=0.1,\,k_2/m=0.07$ and $(\mathrm{iii})$ denotes $k_1/m=0.01,\,k_2/m=0.008$. The horizontal dashed line corresponds to the uniform distribution $e^{-\protect\pi \protect\lambda }$ which, in this system of units and for $\mathbf{p}_{\perp }=0$, is $e^{-\protect\pi }$.}
\label{Fig2b}
\end{figure}
The results displayed in all the pictures above reveal wider distributions for composite electric fields with larger $T$ (red/dark blue lines for Fermions/Bosons in graphs (A) of Figs. \ref{Fig1a}, \ref{Fig2a}) or smaller $k_{j}$ (red/dark blue lines for Fermions/Bosons in graphs (B) of Figs. \ref{Fig1a}, \ref{Fig2a}), and thinner distributions for the opposite configurations, associated with smaller $T$ (orange/purple lines for Fermions/Bosons in graphs (A) of Figs. \ref{Fig1a}, \ref{Fig2a}) or larger $k_{j}$ (orange/purple lines for Fermions/Bosons in graphs (B) of Figs. \ref{Fig1a}, \ref{Fig2a}). Since the time duration is characterized by $T$ and $k_{j}^{-1}$ (the $k_{j}^{-1}$ represent the time scales of the increasing and decreasing phases), these results are consistent with the fact that the longer the duration of an electric field, the longer it can accelerate pairs. Therefore, larger values of $p_{x}/m$ are expected to occur in cases corresponding to electric fields with larger time duration.
Moreover, it should be noted that the distributions above tend to the uniform distribution $N_{n}^{\mathrm{cr}}=e^{-\pi \lambda }$ (horizontal dashed lines) for $T$ and $k_{j}^{-1}$ sufficiently large. This is not unexpected, since the composite field tends to a constant field as $T$ and $k_{j}^{-1}$ increase, becoming a constant field acting for an infinite time in the limit $T\rightarrow \infty $ and $k_{j}^{-1}\rightarrow \infty $. Last, but not least, observing Figs. \ref{Fig1b}, \ref{Fig2b} we find that asymmetrical configurations $\left( k_{1}\neq k_{2}\right) $ yield asymmetrical distributions. This is associated with the fact that different parameters $k_{1},k_{2}$ imply, in general, different times to accelerate pairs during the switching-on and -off processes. An interpretation of these results follows from a semiclassical analysis: Electrons created from the vacuum have quantum numbers $p_{x}$ within the range $-eE\left( T/2+k_{1}^{-1}\right) \leq p_{x}\leq eE\left( T/2+k_{2}^{-1}\right) $, corresponding to longitudinal kinetic momenta $\Pi _{x}\left( t\right) =p_{x}+eA_{x}\left( t\right) $ which, at $t\rightarrow +\infty $, vary according to $-eE\left( T+k_{1}^{-1}+k_{2}^{-1}\right) \leq \Pi _{x}\left( +\infty \right) \leq 0$. Assuming that pairs materialize from the vacuum with zero longitudinal kinetic momentum, $\Pi _{x}\left( t\right) =0$, it follows from the classical equations of motion that the kinetic longitudinal momentum at $t\rightarrow +\infty $ has the form $\Pi _{x}\left( +\infty \right) =-e\int_{t}^{+\infty }dt^{\prime }E\left( t^{\prime }\right) $, where $t$ is the time of creation. Thus, if an electron is created at $t\rightarrow -\infty $, its longitudinal kinetic momentum at $t\rightarrow +\infty $ is maximal (in absolute value), $\Pi _{x}\left( +\infty \right) =-eE\left( T+k_{1}^{-1}+k_{2}^{-1}\right) $. At the same time, its longitudinal kinetic momentum is expressed in terms of $p_{x}$ as $\Pi _{x}\left( +\infty \right) =p_{x}-eE\left( T/2+k_{2}^{-1}\right) $, which means that such an electron is found to have the minimal value of $p_{x}$, namely $p_{x}\rightarrow p_{x}^{\min }=-eE\left( T/2+k_{1}^{-1}\right) $. On the other hand, if the electron is created at $t\rightarrow +\infty $, then its longitudinal kinetic momentum tends to zero, $\Pi _{x}\left( +\infty \right) \rightarrow 0$, which means that the corresponding quantum number $p_{x}$ tends to its maximum, $p_{x}\rightarrow p_{x}^{\max }=eE\left( T/2+k_{2}^{-1}\right) $. According to this interpretation, asymmetric configurations result in asymmetric distributions, which explains, for instance, the asymmetric distributions in graphs (C) and (D) of Figs. \ref{Fig1b}, \ref{Fig2b}.
\section{Differential and total quantities in some special configurations \label{Sec4}}
Irrespective of the $t$-electric potential step under consideration, it is known that the most favorable conditions for pair creation from the vacuum are associated with strong fields acting over a sufficiently large period of time, in which differential and total quantities are significant. For the composite electric field (\ref{s2.0}), the time duration is encoded in two sets of parameters, namely $\left( k_{1}^{-1},k_{2}^{-1}\right) $ and $\left( t_{1},t_{2}\right) $. The former represent the time scales of the increasing and decreasing phases of the electric field, defined on the intervals $\mathrm{I}$ and $\mathrm{III}$, while the latter correspond to the time duration over which the field is constant, defined on the interval $\mathrm{II}$.
If the period $T$ is relatively short (see, e.g., the cases with $mT=5$ and $k/m<0.5$ on the right side of Figs. \ref{Fig1a}--\ref{Fig2b}), the effects of pair creation tend to the ones obtained for the peak field \cite{AdoGavGit17,AdoGavGit16}. The latter field corresponds to the limit of the composite field in which the intermediate interval $T$ is absent. From the results above, we observe that the existence of a finite interval $T$ between \textquotedblleft slow\textquotedblright\ switching-on and -off processes has no significant influence on the distribution of the differential mean numbers $N_{n}^{\mathrm{cr}}$ over the quantum numbers (see the appropriate asymptotic formulas in Ref. \cite{AdoGavGit17}). The influence of the $T$-constant interval appears only at the next-to-leading order. A composite electric field of large duration corresponds to small values of the switching-on parameter $k_{1}$ and the switching-off parameter $k_{2}$, and to a large $T=t_{2}-t_{1}$,\footnote{Without loss of generality, we select from now on a symmetrical interval \textrm{II}, in which $t_{1}=-T/2=-t_{2}$.} satisfying the following condition%
\begin{equation}
\min \left( \sqrt{eE}T,eEk_{1}^{-2},eEk_{2}^{-2}\right) \gg \max \left( 1,\frac{m^{2}}{eE}\right) \,.  \label{s3.1}
\end{equation}%
The condition (\ref{s3.1}) defines a configuration in which the field takes a sufficiently long time to reach the constant regime (slow switching-on process, $k_{1}^{-1}$ large), remains constant over a sufficiently large interval $T$, and finally takes another sufficiently long time to switch off completely (slow switching-off process, $k_{2}^{-1}$ large). The most important objects in vacuum instability by external fields are the total number of particles created from the vacuum $N^{\mathrm{cr}}$ and the vacuum-to-vacuum transition probability $P_{v}$, both given by Eq. (\ref{NP}). The first quantity corresponds to the summation of the differential mean numbers $N_{n}^{\mathrm{cr}}$ over the momenta $\mathbf{p}$ and the spin degrees of freedom,%
\begin{equation}
N^{\mathrm{cr}}=V_{\left( d-1\right) }n^{\mathrm{cr}}\,,\ \ n^{\mathrm{cr}}=\frac{J_{\left( d\right) }}{\left( 2\pi \right) ^{d-1}}\int d\mathbf{p}N_{n}^{\mathrm{cr}}\,,  \label{st1}
\end{equation}%
which, in fact, reduces to the calculation of the density of pairs created from the vacuum, $n^{\mathrm{cr}}$. Here the summation over $\mathbf{p}$ was transformed into an integral, and $J_{\left( d\right) }=2^{\left[ d/2\right] -1}$ denotes the total number of spin projections in a $d$-dimensional space-time. These are factored out since the numbers $N_{n}^{\mathrm{cr}}$ are independent of the spin polarization. The dominant contribution to the density $n^{\mathrm{cr}}$ in the slowly varying regime is proportional to the total increment of the longitudinal kinetic momentum, $\Delta U=\left\vert \Pi _{2}-\Pi _{1}\right\vert =e\left\vert A_{x}\left( +\infty \right) -A_{x}\left( -\infty \right) \right\vert $, which is the largest parameter in the problem \cite{GavGit17}. Hence it is meaningful to approximate the total density $n^{\mathrm{cr}}$ by its dominant contribution $\tilde{n}^{\mathrm{cr}}$, corresponding to an integral over a specific domain $\Omega $,%
\begin{equation}
n^{\mathrm{cr}}\approx \tilde{n}^{\mathrm{cr}}=\frac{J_{\left( d\right) }}{\left( 2\pi \right) ^{d-1}}\int_{\mathbf{p}\in \Omega }d\mathbf{p}N_{n}^{\mathrm{cr}}\,,  \label{st1b}
\end{equation}%
whose result is proportional to $\Delta U$.
As is general for $t$-electric potential steps, such a domain $\Omega $ is defined by a specific range of values of the longitudinal momentum $p_{x}$ and restricted values of the perpendicular momentum $\mathbf{p}_{\perp }$ which, under the condition (\ref{s3.1}), is%
\begin{equation}
\Omega :\left\{ \frac{|p_{x}|}{\sqrt{eE}}\leq \sqrt{eE}\frac{T}{2}+\frac{3}{2}\sqrt{\frac{h_{1}}{2}}\,,\,\,\sqrt{\lambda }<K_{\perp }\,,\,\,K_{\perp }^{2}\gg \max \left( 1,\frac{m^{2}}{eE}\right) \right\} \,.  \label{st1c}
\end{equation}%
In this case, using the asymptotic formulas given in Ref. \cite{AdoGavGit17}, one can see that the differential mean numbers are practically uniform over a wide range of values of the kinetic momenta in the domain $\Omega $, while they decrease exponentially beyond these ranges. In the leading-order approximation, the mean numbers are%
\begin{equation}
N_{n}^{\mathrm{cr}}\sim \left\{
\begin{array}{ll}
\exp \left( -2\pi \Xi _{1}^{-}\right) \,, & \mathrm{for}\ \ p_{x}/\sqrt{eE}<-\sqrt{eE}T/2\,, \\
e^{-\pi \lambda }\,, & \mathrm{for}\ \ \left\vert p_{x}\right\vert /\sqrt{eE}\leq \sqrt{eE}T/2\,, \\
\exp \left( -2\pi \Xi _{2}^{+}\right) \,, & \mathrm{for}\ \ p_{x}/\sqrt{eE}>+\sqrt{eE}T/2\,.%
\end{array}%
\right.  \label{fas17}
\end{equation}%
It is clear that the asymptotic forms (\ref{fas17}) specified in each range above coincide with the asymptotic forms for the $T$-constant and exponential electric fields; see, e.g., Ref. \cite{AdoGavGit17}. Thus, we see that in each domain of $\Omega $ associated with a particular type of field, the principal terms in the distribution $N_{n}^{\mathrm{cr}}$ do not depend on the type of field in the neighboring regions; only terms of the following orders acquire such a dependence. It follows that the dominant contribution to the density of pairs created by the composite field is expressed as a sum of the dominant contributions for the $T$-constant and exponential electric fields,%
\begin{equation}
\tilde{n}^{\mathrm{cr}}\approx \sum_{j}\tilde{n}_{j}^{\mathrm{cr}}\,,\ \ \tilde{n}_{j}^{\mathrm{cr}}=\frac{J_{\left( d\right) }}{\left( 2\pi \right) ^{d-1}}\int_{t\in D_{j}}dt\left[ eE_{j}\left( t\right) \right] ^{d/2}\exp \left[ -\pi \frac{m^{2}}{eE_{j}\left( t\right) }\right] \,,  \label{st13c}
\end{equation}%
where the index $j=1,2,3$ denotes each interval of the composite field, $D_{1,2,3}=\mathrm{I,II,III}$. It is known \cite{AdoGavGit17} that%
\begin{eqnarray}
\tilde{n}^{\mathrm{cr}}_{1,3} &=&\frac{J_{\left( d\right) }}{\left( 2\pi \right) ^{d-1}}\frac{\left( eE\right) ^{d/2}}{k_{1,2}}e^{-\pi m^{2}/eE}G\left( \frac{d}{2},\frac{\pi m^{2}}{eE}\right) \,,  \notag \\
\tilde{n}^{\mathrm{cr}}_{2} &=&\frac{J_{\left( d\right) }\left( eE\right) ^{d/2}T}{\left( 2\pi \right) ^{d-1}}\exp \left[ -\frac{\pi m^{2}}{eE}\right] \,,  \label{st13d}
\end{eqnarray}%
where $G\left( \alpha ,z\right) $ is expressed in terms of the incomplete gamma function $\Gamma \left( \alpha ,z\right) $ \cite{DLMF} as%
\begin{equation}
G\left( \alpha ,z\right) =\int_{1}^{\infty }\frac{ds}{s^{\alpha +1}}e^{-z\left( s-1\right) }=e^{z}z^{\alpha }\Gamma \left( -\alpha ,z\right) \,.  \label{st7}
\end{equation}%
Calculating the vacuum-to-vacuum transition probability for the composite field, we obtain that it is the product of the partial probabilities $P_{v}^{j}$ for the $T$-constant and exponential electric fields, respectively, $\ln P_{v}=\sum_{j}\ln P_{v}^{j}$; see Ref. \cite{AdoGavGit17}.
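As a numerical illustration of Eqs. (\ref{st13d}) and (\ref{st7}) (a sketch with arbitrarily chosen slowly varying parameters; $G\left( \alpha ,z\right) $ is evaluated by direct quadrature of its integral representation):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def G(alpha, z):
    # G(alpha, z) of Eq. (st7), by direct quadrature.
    val, _ = quad(lambda s: s**(-alpha - 1)*np.exp(-z*(s - 1)), 1, np.inf)
    return val

d, e, E, m = 4, 1.0, 1.0, 1.0   # d = 3 + 1, E = E_c, units with m = 1
k1 = k2 = 0.01; T = 50.0        # slowly varying configuration
J = 2**(d//2 - 1)               # J_(d) = 2^([d/2]-1) spin projections
pref = J*(e*E)**(d/2)/(2*np.pi)**(d - 1)
expf = np.exp(-np.pi*m**2/(e*E))

n_onoff = pref*(1/k1 + 1/k2)*expf*G(d/2, np.pi*m**2/(e*E))  # switching
n_const = pref*T*expf                                       # plateau
print(n_const, n_onoff, n_onoff/(n_const + n_onoff))
\end{verbatim}
For these parameters the exponential intervals contribute a fraction of the total density comparable to that of the plateau, showing why the switching terms in Eq. (\ref{st13d}) cannot in general be dropped.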
It is important to point out that the results above may be reproduced from the universal form for the total density of pairs created by $t$-electric potential steps in the slowly varying regime \cite{GavGit17}. Such a form does not demand knowledge of the exact solutions of the Dirac/Klein-Gordon equations. This is a consequence of the fact that, in the approximation by leading terms, the distribution $N_{n}^{\mathrm{cr}}$ in each region of $\Omega $ is formed independently of the neighboring regions. While the results for slowly varying fields are completely predictable, configurations in which the field acts over a relatively short time to reach the constant regime (fast switching-on process, $k_{1}^{-1}$ small), remains constant over a sufficiently large interval $T$, and takes a short interval to switch off completely (fast switching-off process, $k_{2}^{-1}$ small) have to be studied in more detail. These configurations simulate finite switching-on and -off processes, and their consideration is discussed below. To study such configurations, one has to compare parameters involving momenta with ones involving time scales, such as $\sqrt{eE}T$, $eEk_{1}^{-2}$ and $eEk_{2}^{-2}$. Regarding the dependence on the perpendicular momenta $\mathbf{p}_{\perp }$, for instance, it is well known that a $t$-electric potential step of large time duration does not create a significant number of pairs with large $\mathbf{p}_{\perp }$. This is meaningful as long as charged pairs are accelerated along the direction of the electric field, thereby having a wider range of values of $p_{x}$ rather than of $\mathbf{p}_{\perp }$. By virtue of that, one may simplify the calculation of differential quantities and consider restricted values of $\mathbf{p}_{\perp }$, ranging from zero up to a finite number, so that the inequality%
\begin{equation}
\sqrt{\lambda }<K_{\perp }\,,\ \ K_{\perp }^{2}\gg \max \left( 1,\frac{m^{2}}{eE}\right) \,,  \label{s3.2}
\end{equation}%
is fulfilled. Here $K_{\perp }$ is a moderately large number that sets an upper bound on the perpendicular momenta of the pairs created. Thus, taking into account the inequality above, we assume that%
\begin{equation}
\sqrt{eE}T\gg K_{\perp }^{2}\,,\ \ \max \left( eEk_{1}^{-2},eEk_{2}^{-2}\right) \leq \max \left( 1,\frac{m^{2}}{eE}\right) \,.  \label{s3.3}
\end{equation}%
As a consequence, the field satisfies the following inequalities%
\begin{equation}
\sqrt{eE}T/2\gg \max \left( \sqrt{eE}k_{1}^{-1},\sqrt{eE}k_{2}^{-1}\right) \leftrightarrow \max \left( k_{1}T/2,k_{2}T/2\right) \gg 1\,.  \label{s3.3.1}
\end{equation}
To study differential quantities in this case, we select a definite sign of $p_{x}$; for convenience, the negative one is chosen, $-\infty <p_{x}\leq 0$. Next, we use the symmetry properties discussed in Eqs. (\ref{sym1}) and (\ref{sym2}) to generalize the results to positive $p_{x}$. Here $\xi _{1}$ varies from large negative to large positive values while $\xi _{2}$ is always large and positive; $\Pi _{1}/\sqrt{eE}$ changes from large positive to large negative values while $\Pi _{2}/\sqrt{eE}$ is always large and negative.
However, once $h_{1}$, $h_{2}$ are finite, we find that the asymptotic behavior of $N_{n}^{\mathrm{cr}}$ is classified according to three main ranges,%
\begin{eqnarray}
&&\left( \mathrm{a}\right) \ \ -\sqrt{eE}\frac{T}{2}\leq \xi _{1}\leq -\tilde{K}_{1}\leftrightarrow \sqrt{eE}\frac{T}{2}+\sqrt{\frac{h_{1}}{2}}\geq \frac{\Pi _{1}}{\sqrt{eE}}\geq \tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}\,,  \notag \\
&&\left( \mathrm{b}\right) \ \ -\tilde{K}_{1}<\xi _{1}<\tilde{K}_{1}\leftrightarrow \tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}>\frac{\Pi _{1}}{\sqrt{eE}}>-\tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}\,,  \notag \\
&&\left( \mathrm{c}\right) \ \ \xi _{1}\geq \tilde{K}_{1}\leftrightarrow \frac{\Pi _{1}}{\sqrt{eE}}\leq -\tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}\,,  \label{fas19}
\end{eqnarray}%
where $\tilde{K}_{1}$ is a sufficiently large number satisfying $\sqrt{eE}T>\tilde{K}_{1}\gg K_{\perp }^{2}$. Moreover, as long as $\xi _{2}$ is large and positive, $c_{2}$ is also large, so that one can use the asymptotic approximation (9.246.1) in Ref. \cite{Gradshteyn} for the WPCFs. $u_{\pm }\left( z_{2}\right) $ and Eq. (13.8.2) in Ref. \cite{DLMF} for the CHF $y_{1}^{2}\left( \eta _{2}\right) $ throughout all the ranges above. In the range $\left( \mathrm{a}\right) $, $\xi _{1}$ is large and negative and $c_{1}$ is large as well. Then, using the asymptotic expansions (9.246.2), (9.246.3) in Ref. \cite{Gradshteyn} for the WPCFs. $u_{\pm }\left( z_{1}\right) $ and Eq. (13.8.2) in \cite{DLMF} for the CHF $y_{2}^{1}\left( \eta _{1}\right) $, one finds that the mean number of particles created, in the leading-order approximation, admits the following form,%
\begin{eqnarray}
N_{n}^{\mathrm{cr}} &\sim &\frac{\exp \left[ -\pi \left( \lambda +\Xi _{1}^{-}-\Xi _{2}^{+}\right) \right] }{\sinh \left( 2\pi \omega _{2}/k_{2}\right) \sinh \left( 2\pi \omega _{1}/k_{1}\right) }  \notag \\
&\times &\left\{
\begin{array}{ll}
\sinh \left( \pi \Xi _{2}^{-}\right) \sinh \left( \pi \Xi _{1}^{+}\right) \,, & \mathrm{Fermi\,,} \\
\cosh \left( \pi \Xi _{2}^{-}\right) \cosh \left( \pi \Xi _{1}^{+}\right) \,, & \mathrm{Bose\,,}%
\end{array}%
\right.  \label{fas20}
\end{eqnarray}%
as $T\rightarrow \infty $. The combination of hyperbolic functions above tends to unity since, in this range, the frequencies $\omega _{1}$, $\omega _{2}$ and the parameters $\Xi _{1}^{+}$, $\Xi _{2}^{-}$ are large quantities, namely $\omega _{1}\simeq \sqrt{eE}\left\vert \xi _{1}\right\vert $, $\omega _{2}\simeq \sqrt{eE}\xi _{2}$, $\Xi _{1}^{+}\sim \sqrt{2h_{1}}\left\vert \xi _{1}\right\vert $, $\Xi _{2}^{-}\simeq \sqrt{2h_{2}}\xi _{2}$. By virtue of that, the dominant contribution of Eq. (\ref{fas20}) has the form%
\begin{equation}
N_{n}^{\mathrm{cr}}\sim \exp \left[ -\pi \left( \lambda +2\Xi _{1}^{-}\right) \right] \,,  \label{fas21}
\end{equation}%
as $T\rightarrow \infty $, valid both for Fermions and Bosons. In this last result, the parameter $\Xi _{1}^{-}$ is a small quantity, $\Xi _{1}^{-}\sim \sqrt{h_{1}/2}\left( \lambda /2\left\vert \xi _{1}\right\vert \right) $, so that its contribution to $N_{n}^{\mathrm{cr}}$ is negligible in comparison to $\lambda $. As a result, the differential mean numbers are practically uniform over the range $\left( \mathrm{a}\right) $, $N_{n}^{\mathrm{cr}}\sim e^{-\pi \lambda }$. In the range $\left( \mathrm{c}\right) $, $\xi _{1}$ is large and positive and $c_{1}$ is also large. Hence one may use the asymptotic expansions (9.246.1) in Ref. \cite{Gradshteyn} for the WPCFs.
$u_{\pm }\left( z_{1}\right) $ and Kummer transformations for the CHF $y_{2}^{1}\left( \eta _{1}\right) $ to prove that the mean number of particles created is significantly small,
\begin{equation}
N_{n}^{\mathrm{cr}}\sim \mathcal{F}_{1}\left[ O\left( \xi _{1}^{-6}\right) +O\left( \xi _{2}^{-6}\right) +O\left( \xi _{1}^{-3}\xi _{2}^{-3}\right) \right] \,,  \label{fas22}
\end{equation}
as $T\rightarrow \infty $, in which $\mathcal{F}_{1}$ is a combination of hyperbolic functions similar to Eq. (\ref{fas20}),
\begin{eqnarray}
\mathcal{F}_{1} &=&\frac{\exp \left[ \pi \left( \Xi _{2}^{+}+\Xi _{1}^{+}\right) \right] }{\sinh \left( 2\pi \omega _{2}/k_{2}\right) \sinh \left( 2\pi \omega _{1}/k_{1}\right) }  \notag \\
&\times &\left\{
\begin{array}{ll}
\sinh \left( \pi \Xi _{2}^{-}\right) \sinh \left( \pi \Xi _{1}^{-}\right) \,, & \mathrm{Fermi\,,} \\
\cosh \left( \pi \Xi _{2}^{-}\right) \cosh \left( \pi \Xi _{1}^{-}\right) \,, & \mathrm{Bose\,.}
\end{array}
\right.  \label{fas23}
\end{eqnarray}
In this range, the frequencies $\omega _{j}$ and the parameters $\Xi _{j}^{-}$ are large quantities, $\omega _{j}\simeq \sqrt{eE}\xi _{j}$, $\Xi _{j}^{-}\sim \sqrt{2h_{j}}\xi _{j}$, so that, as in the range $\left( \mathrm{a}\right) $, $\mathcal{F}_{1}$ can be approximated by $\mathcal{F}_{1}\sim 1$. Therefore the differential mean numbers are significantly small in this range.
In the range $\left( \mathrm{b}\right) $, $\xi _{1}$ varies from large negative to large positive values while $c_{1}$ varies from large to finite values. For this reason, it is not possible to use any asymptotic approximations for the special functions $u_{\pm }\left( z_{1}\right) $ and $y_{2}^{1}\left( \eta _{1}\right) $, although one can still consider the same approximations (9.246.1) in Ref. \cite{Gradshteyn} and (13.8.2) in Ref. \cite{DLMF} for the WPCfs. $u_{\pm }\left( z_{2}\right) $ and the CHF $y_{1}^{2}\left( \eta _{2}\right) $, respectively. The resulting expression depends explicitly on the exact forms of $u_{\pm }\left( z_{1}\right) $ and $y_{2}^{1}\left( \eta _{1}\right) $.
The most significant contribution to the differential mean numbers for positive $p_{x}$, $0\leq p_{x}<+\infty $, can be obtained by a similar analysis, taking into account the properties of symmetry (\ref{sym1}) and (\ref{sym2}). We finally find the domain of dominant contribution to the mean number of particles created. In this domain, in the leading-order approximation, it has the form
\begin{equation}
N_{n}^{\mathrm{cr}}\sim e^{-\pi \lambda }\times \left\{
\begin{array}{ll}
\exp \left( -2\pi \Xi _{1}^{-}\right) \,, & \mathrm{for}\ \ -\sqrt{eE}T/2+\tilde{K}_{1}<p_{x}/\sqrt{eE}\leq 0\,, \\
\exp \left( -2\pi \Xi _{2}^{+}\right) \,, & \mathrm{for}\ \ 0<p_{x}/\sqrt{eE}\leq \sqrt{eE}T/2-\tilde{K}_{2}\,,
\end{array}
\right.  \label{fas27}
\end{equation}
as $T\rightarrow \infty $, valid for Fermions and Bosons. This approximation is almost uniform over this wide range of values of the longitudinal momentum, since the parameters $\Xi _{1}^{-}$ and $\Xi _{2}^{+}$ are negligible in comparison to $\lambda $. By virtue of that, the switching-on and -off effects on the differential mean numbers, in the present configuration, manifest themselves as next-to-leading corrections to the uniform distribution $e^{-\pi \lambda }$. This means that the influence of the switching-on and -off processes on differential quantities is negligible for $T$ sufficiently large.
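To get a feeling for how small these next-to-leading corrections are, one can evaluate the first line of Eq. (\ref{fas27}) together with the estimate $\Xi _{1}^{-}\sim \sqrt{h_{1}/2}\left( \lambda /2\left\vert \xi _{1}\right\vert \right) $ quoted above. In the Python sketch below, the magnitudes of $\lambda $, $h_{1}$ and $\left\vert \xi _{1}\right\vert $ are representative numbers chosen by us, consistent with the regime (\ref{s3.3}) but not computed from the exact solutions.
\begin{verbatim}
# Rough size of the switching correction exp(-2*pi*Xi_1^-) relative
# to the uniform distribution; parameter magnitudes are assumptions.
import math

lam = 1.0                    # lambda, e.g. E = E_c and p_perp = 0
h1  = 1.0                    # switching parameter, O(1) in regime (s3.3)
for xi1 in (10.0, 30.0, 100.0):       # |xi_1| large inside the domain
    Xi1m = math.sqrt(h1/2.0)*lam/(2.0*xi1)   # estimate of Xi_1^-
    print(xi1, math.exp(-2.0*math.pi*Xi1m))
# -> 0.80, 0.93, 0.98: the correction factor approaches 1 as |xi_1|
#    grows, i.e. the distribution is essentially the uniform e^{-pi*lam}
\end{verbatim}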
From these results, the present configuration can be referred to as a \textquotedblleft fast\textquotedblright\ switching-on and -off configuration, by virtue of Eq. (\ref{s3.3.1}) and of the fact that the mean number of particles created is mainly characterized by the uniform distribution $e^{-\pi \lambda }$. In this case, the leading contribution to the number density $\tilde{n}^{\mathrm{cr}}$, given by Eq.~(\ref{st1b}), is proportional to the total increment of the longitudinal kinetic momentum, $\Delta U=eET$, and hence to the time duration $T$. We see that both the $T$-constant field itself and the composite field under condition (\ref{s3.3.1}) can be considered as regularizations of a constant field. The present discussion encompasses the $T$-constant limit, characterized by the absence of exponential parts and defined by the limit $k\rightarrow \infty $.
We know that the possibility of describing particle creation by the $T$-constant field in the slowly varying approximation depends on the value of the dimensionless parameter $\sqrt{eE}T>1$. According to condition (\ref{s3.3}), the magnitude of the lower boundary $\vartheta =\min \sqrt{eE}T$ is proportional to $m^{2}/eE$ if $m^{2}/eE>1$. Accordingly, the contribution of switching-on and -off processes to the particle creation effect becomes more pronounced for not too strong fields. It is useful to compare switching-on and -off effects for the $T$-constant field and for the composite field in the case when the parameter $\sqrt{eE}T$ approaches the above-mentioned threshold values. From the plots on the left side of Figs. \ref{Fig1a}--\ref{Fig2b} one can see that $\sqrt{eE}T=10$ is near the threshold value. To this end we compute exact plots of the mean differential number of Fermions (\ref{r4}) and Bosons (\ref{r7}) created as a function of $p_x/m$ for two typical cases, a critical field and a very strong field, respectively. In the case of the $T$-constant field, we calculate the $p_{x}/m$ dependence using the exact Eqs. (4.9) and (4.11) given in Ref. \cite{AdoGavGit17}. Results of these computations are presented in Figs. \ref{Fermi} and \ref{Bose}. We see that the differential mean numbers of pairs created by the composite electric field (solid lines) and the $T$-constant field (dashed lines) oscillate around the uniform distribution $e^{-\pi \lambda }$. It can be seen that for fields with a critical magnitude, $E=E_{\mathrm{c}}$ and $\sqrt{eE}T=10$ (panels (A)), the oscillations around the uniform distribution are greater than for fields with overcritical magnitude, $E=10E_{\mathrm{c}}$ and $\sqrt{eE}T=10\sqrt{10}$ (panels (B)), both for the composite field and the $T$-constant field.
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{FermiF.pdf}
\end{center}
\caption{(color online) Differential mean number of electron/positron pairs created from the vacuum by a symmetric composite field (solid red lines, labeled with (i)) with $k_{1}/m=k_{2}/m=1$ and by a $T$-constant field (dashed light red lines, labeled with (ii)). In panel (A), $E=E_{\mathrm{c}}$, while in panel (B), $E=10E_{\mathrm{c}}$. In both cases, $mT=10$ and $\mathbf{p}_{\perp }=0$.
The horizontal dashed black line denotes the uniform distribution, being $e^{-\protect\pi }$ in (A) and $e^{-\protect\pi /10}$ in (B).}
\label{Fermi}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{BoseF.pdf}
\end{center}
\caption{(color online) Differential mean number of scalar particles created from the vacuum by a symmetric composite field (solid blue lines, labeled with (i)) with $k_{1}/m=k_{2}/m=1$ and by a $T$-constant field (dashed light blue lines, labeled with (ii)). In panel (A), $E=E_{\mathrm{c}}$, while in panel (B), $E=10E_{\mathrm{c}}$. In both cases, $mT=10$ and $\mathbf{p}_{\perp }=0$. The horizontal dashed black line denotes the uniform distribution, $e^{-\protect\pi }$ in (A) and $e^{-\protect\pi /10}$ in (B).}
\label{Bose}
\end{figure}
We see that the distributions $N_{n}^{\mathrm{cr}}$ for the $T$-constant field always oscillate more strongly around the uniform distribution than those for the composite field, and in the case of bosons these deviations from the uniform distribution are more significant. On the other hand, the plot of $N_{n}^{\mathrm{cr}}$ for the $T$-constant field is more \textquotedblleft rectangular\textquotedblright\ than for the composite field (for overcritical magnitudes). Such wide distributions arise because of contributions of the exponential tails, $\left\vert p_{x}\right\vert /m<\left( \frac{eE}{m^{2}}\right) \left( \frac{mT}{2}+\frac{m}{k}\right) $. Note also that for $\left\vert p_{x}\right\vert /m>\left( \frac{eE}{m^{2}}\right) \left( \frac{mT}{2}+\frac{m}{k}\right) $, the mean numbers for the composite field are negligible for both magnitudes, whereas for the $T$-constant field this is not always true: in fact, for critical magnitudes, the mean numbers for $\left\vert p_{x}\right\vert /m$ slightly larger than $\left( \frac{eE}{m^{2}}\right) \left( \frac{mT}{2}\right) $ are not negligible, although they are small. The characteristic behavior of the slowly varying regime, when $\tilde{n}^{\mathrm{cr}}\sim T$, is already quite noticeable for fermions at the value $\sqrt{eE}T=10$ of the dimensionless parameter and is pronounced for larger values of this parameter. It can be concluded that for fermions the quantity $\sqrt{eE}T=10$ is close to the threshold value. However, for bosons at $\sqrt{eE}T=10$ the approximation of the slowly varying regime does not work yet; to be applicable, this approximation requires larger values of the parameter $\sqrt{eE}T$. The slowly varying regime works for both fermions and bosons at $\sqrt{eE}T=10\sqrt{10}$. Comparing these two cases, we see that the regularization by exponentially switching the field on and off is less disturbing than that by the $T$-constant field, which entails considerable oscillations in the distributions and can even lead to sharp bursts of $N_{n}^{\mathrm{cr}}$ in narrow regions of $p_{x}$. The latter circumstance, however, is not essential for estimating the dominant contributions to the density of created pairs due to a very strong $T$-constant field. However, the above calculation method, which uses the composite field, is more realistic and preferable for the analysis of next-to-leading terms.
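As a quick arithmetic cross-check of the plateau levels quoted in the captions of Figs. \ref{Fermi} and \ref{Bose}, the uniform distribution $e^{-\pi \lambda }$ can be evaluated at $\mathbf{p}_{\perp }=0$, assuming the standard definition $\lambda =\left( m^{2}+\mathbf{p}_{\perp }^{2}\right) /eE$, which is consistent with the quoted values:
\begin{verbatim}
# Plateau levels of the uniform distribution e^{-pi*lambda} at
# p_perp = 0, assuming lambda = (m^2 + p_perp^2)/(eE).
import math
for E_over_Ec in (1.0, 10.0):     # panels (A) and (B)
    lam = 1.0/E_over_Ec           # lambda = m^2/(eE) at p_perp = 0
    print(E_over_Ec, math.exp(-math.pi*lam))
# -> 0.0432... = e^{-pi}  and  0.7304... = e^{-pi/10}
\end{verbatim}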
\section{Concluding remarks\label{conclusions}}
We find exact formulas for the differential mean numbers of fermions and bosons created from the vacuum due to a composite electric field of special configuration that simulates finite switching-on and -off processes within and beyond the slowly varying regime. We show that the results for slowly varying fields are completely predictable using a recently developed version of the locally constant field approximation. Using exact results beyond the slowly varying regime, we find that the leading contribution to the number density of created pairs is independent of fast switching-on and -off if the time duration $T$ of the slowly varying field is sufficiently large. This means that composite fields of such configurations can be used as regularizations of a slowly varying field, in particular, of a constant field. We have studied the effects of fast switching-on and -off in a number of cases, when the value of the total increment of the longitudinal kinetic momentum, characterized by the dimensionless parameter $\sqrt{eE}T>1$, approaches the threshold that determines the transition from a regime that is sensitive to the parameters of the on-off switching to the slowly varying regime. It is shown that for bosons this threshold value is much higher. We see that the regularization by faster switching on and off is more disturbing, which entails considerable oscillations in the distributions, and can even lead to sharp bursts of $N_{n}^{\mathrm{cr}}$ in narrow regions of $p_{x}$. The latter circumstance, however, is not essential for estimating the dominant contributions to the density of created pairs due to a very strong field. However, the above calculation method, which uses the composite field, is more realistic and preferable for the analysis of next-to-leading terms. Thus, details of switching-on and -off may be important for a more complete description of the vacuum instability in some physical situations, for example, in the physics of low-dimensional systems, such as graphene and similar nanostructures, whose transport properties may be interpreted in terms of pair creation effects within low-energy approximations.
\section*{Acknowledgements}
The reported study was partially funded by RFBR according to the research project No. 18-02-00149. The authors acknowledge support from the Tomsk State University Competitiveness Improvement Program. D.M.G. is also supported by Grant No. 2016/03319-6, Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP), and permanently by Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq), Brazil.
\section{Introduction}
After the observation of the Bose-Einstein condensate (BEC) \cite{expt1,rmp1999} of alkali atoms, there have been many experimental studies to explore different quantum phenomena involving matter waves previously not accessible for investigation in a controlled environment, such as quantum phase transition \cite{qpt}, vortex-lattice formation \cite{vl}, collapse \cite{bosenova}, four-wave mixing \cite{4wm}, interference \cite{imw}, Josephson tunneling \cite{jos}, Anderson localization \cite{ander}, etc. The generation and the dynamics of self-bound quantum waves have drawn much attention lately \cite{rmp}. There have been studies of self-bound matter waves or solitons in one (1D) \cite{rmp} or two (2D) \cite{santos,santos2} space dimensions. A soliton travels at a constant velocity in 1D, due to a cancellation of nonlinear attraction and defocusing forces \cite{sol}. The 1D soliton has been observed in a BEC \cite{rmp}. However, a two- or three-dimensional (3D) soliton cannot be realized for two-body contact attraction alone due to collapse \cite{sol}. There have been a few proposals for creating a self-bound 2D or 3D matter-wave state, which we term a droplet, exploiting extra interactions usually neglected in a dilute BEC of alkali atoms \cite{expt1}. In the presence of an axisymmetric nonlocal dipolar interaction \cite{dbec}, a 2D BEC soliton can be generated in a 1D harmonic \cite{santos} or a 1D optical-lattice \cite{santos2} trap. Maucher {\it et al.} \cite{ryd} suggested that for Rydberg atoms, off-resonant dressing to Rydberg nD states can provide a nonlocal long-range attraction which can form a 3D matter-wave droplet.
In this Letter we demonstrate that a tiny repulsive three-body interaction can avoid collapse and form a stable self-bound dipolar droplet in 3D \cite{bulgac}. There have been experimental \cite{other1} and theoretical \cite{other2} studies of the formation of a trapped dipolar BEC droplet. In fact, for a dipolar interaction stronger than the two-body contact repulsion, a dipolar droplet has a net attraction \cite{pelster,pelster2}; but the two-body contact repulsion is too weak to stop the collapse, whereas a three-body contact repulsion can eliminate the collapse and form a stable stationary droplet. Such a droplet can also be formed in a nondipolar BEC (details to be reported elsewhere) \cite{skapra}. We study the frontal collision with an impact parameter and the angular collision between two dipolar droplets. Only the collision between two integrable 1D solitons is truly elastic \cite{rmp,sol}. As the dimensionality of the soliton is increased, such collisions are expected to become inelastic, with loss of energy, in 2D and 3D. In the present numerical simulations, at large velocities all collisions are found to be quasi elastic: the droplets emerge after the collision with practically no deformation and without any change of velocity. Due to the axisymmetric dipolar interaction, two droplets polarized along the $z$ direction attract each other when placed along the $z$ axis and repel each other when placed along the $x$ axis, and the collision dynamics along the $x$ and $z$ directions has different behaviors at very small velocities. For a collision between two droplets along the $z$ direction, the two droplets form a single bound entity in an excited state, termed a 3D droplet molecule \cite{molecule}.
However, at very small velocities for an encounter along the $x$ direction, the two droplets repel and stay away from each other due to the dipolar repulsion and never meet.
The dipolar interaction potential, not being absolutely integrable, does not possess a well-defined Fourier transform for an infinite system \cite{yukalov}. Therefore, to get meaningful results, it is necessary either to regularize this potential or, equivalently, to deal only with finite systems, where the system size plays the role of an effective regularization. That is, as soon as atomic interactions include dipolar forces, only finite systems are admissible. In other words, the occurrence of dipole forces prescribes the system to be finite, either being limited by an external trapping potential or forming a kind of self-bound droplet. The conditions of stability of such droplets are studied in the present manuscript.
\section{Mean-field Model}
The {\it trapless} mean-field Gross-Pitaevskii (GP) equation for a self-bound dipolar droplet of $N$ atoms of mass $m$ in the presence of a three-body repulsion is \cite{rmp1999,blakie}
\begin{eqnarray}\label{eq1}
i \hbar \frac{\partial \phi({\bf r},t)}{\partial t}&&= {\Big [} -\frac{\hbar^2}{2m}\nabla^2+ \frac{4\pi \hbar^2aN}{m} \vert \phi \vert^2 + \frac{\hbar N^2 K_3}{2} \vert \phi \vert^4\nonumber \\ && +3 a_{\mathrm{dd}}N \int U_{\mathrm{dd}}({\bf R})|\phi({\bf r}',t)|^2 d{\bf r}' {\Big ]} \phi({\bf r},t), \\ a_{\mathrm{dd}}&&\equiv \frac{m\mu_0 \mu_{\mathrm{d}}^2}{12\pi \hbar^2}, \quad U_{\mathrm{dd}}({\bf R})=\frac{1-3\cos^2 \theta}{R^3},
\end{eqnarray}
where $a$ is the scattering length, ${\bf R}=({\bf r - r}')$, $\theta$ is the angle between the vector $\bf R$ and the polarization direction $z$, $\mu_0$ is the permeability of free space, $\mu_{\mathrm d}$ is the magnetic dipole moment of each atom, and $K_3$ is the three-body interaction term. This mean-field equation has recently been used by Blakie \cite{blakie}\footnote{The term droplet formation in reference \cite{blakie} refers to a sudden increase of density of a dipolar BEC in a {\it trap}, whereas the present droplet is self-bound without a trap.} to study a trapped dipolar BEC. We can obtain a dimensionless equation by expressing length in units of a scale $l$ and time in units of $\tau\equiv ml^2/\hbar$. Consequently, (\ref{eq1}) can be rewritten as
\begin{eqnarray}
i \frac{\partial \phi({\bf r},t)}{\partial t} = {\Big [} -\frac{\nabla^2}{2 }+4\pi a N \vert \phi \vert^2 + \frac{K_3N^2}{2}\vert \phi \vert^4 \nonumber \\ +3a_{\mathrm{dd}}N \int U_{\mathrm{dd}}({\bf R}) |\phi({\bf r}',t)|^2 d{\bf r}' {\Big ]} \phi({\bf r},t), \label{eq2}
\end{eqnarray}
where $K_3$ is expressed in units of $\hbar l^4/m$, $|\phi|^2$ in units of $l^{-3}$, and energy in units of $\hbar^2/(ml^2)$. The wave function is normalized as $\int |\phi({\bf r},t)|^2 d{\bf r}=1$.
\begin{figure}
\begin{center}
\includegraphics[trim = 1mm 0mm 2mm 0mm,width=.325\linewidth,clip]{fig1a.png}
\includegraphics[trim = 1mm 0mm 2mm 0mm,width=.325\linewidth,clip]{fig1b.png}
\includegraphics[trim = 1mm 0mm .5mm 0mm,width=.325\linewidth,clip]{fig1c.png}
\caption{ 2D contour plot of the energy (\ref{eq5}) showing the energy minimum and the negative energy region for $^{52}$Cr atoms as a function of the widths $w_\rho$ and $w_z$ for (a) $N=10000, K_3=10^{-38}$ m$^6$/s, (b) $N=3000, K_3=10^{-37}$ m$^6$/s and (c) $N=10000, K_3=10^{-37}$ m$^6$/s.
The variational and numerical widths of the stationary droplet are marked $\times$ and $+$, respectively. Plotted quantities in all figures are dimensionless, and the physical unit for $^{52}$Cr atoms can be restored using the unit of length $l=1$ $\mu$m.}
\label{fig1}
\end{center}
\end{figure}
For an analytic understanding of the formation of a droplet, a variational approximation of (\ref{eq2}) is obtained with the axisymmetric Gaussian ansatz \cite{pg,np,kishor}:
\begin{eqnarray}\label{eq3}
\phi({\bf r})&=&\frac{\pi^{-3/4}}{w_z^{1/2} w_{\rho} } \exp\biggr[ -\frac{\rho^2}{2w_\rho^2} -\frac{z^2}{2w_z^2} \biggr],
\end{eqnarray}
where $\rho^2=x^2+y^2$, and $w_\rho$ and $w_z$ are the radial and axial widths, respectively. This leads to the energy density per atom:
\begin{eqnarray}\label{eq4}
{\cal E}({\bf r})&=& \frac{|\nabla \phi({\bf r}) |^2}{2}+2\pi N a| \phi({\bf r})|^4 +\frac{K_3N^2}{6}| \phi({\bf r})|^6 \nonumber \\ &+&\frac{3a_{\mathrm{dd}}N}{2}| \phi({\bf r})|^2 \int U_{\mathrm{dd}}({\bf R}) | \phi({\bf r}')|^2 d {\bf r}',
\end{eqnarray}
and the total energy per atom $E\equiv \int {\cal E}({\bf r}) d{\bf r}$ \cite{np}:
\begin{eqnarray} \label{eq5}
E &=& \frac{1}{2w_\rho^2} +\frac{1}{4w_z^2} +\frac{K_3N^2\pi^{-3}}{18\sqrt 3 w_\rho^4 w_z^2} +\frac{N[a-a_{\mathrm{dd}}f(\kappa)]}{\sqrt{2\pi}w_\rho^2w_z}, \quad \kappa=w_\rho/w_z, \\ f(\kappa)&=& \frac{1+2\kappa^2-3\kappa^2d(\kappa)}{1-\kappa^2}, \quad d(\kappa)= \frac{\mbox{atanh}\sqrt{1-\kappa^2}}{\sqrt{1-\kappa^2}}.
\end{eqnarray}
In (\ref{eq5}), the first two terms on the right are contributions of the kinetic energy of the atoms, the third term corresponds to the three-body repulsion, and the last term to the net attractive atomic interaction responsible for the formation of the droplet for $|a|>a_{\mathrm{dd}}$. The higher-order (quintic) nonlinearity of the three-body interaction, compared to the cubic nonlinearity of the two-body interaction, leads to a more singular repulsive term at the origin in (\ref{eq5}). This makes the system highly repulsive at the center ($w_\rho,w_z \to 0$), even for a small three-body repulsion, and stops the collapse, stabilizing the droplet.
The stationary widths $w_\rho$ and $w_z$ of a droplet correspond to the global minimum of the energy (\ref{eq5}) \cite{np,kishor}:
\begin{eqnarray}
&& \frac{1}{w_\rho^3} +\frac{ N }{\sqrt{2\pi}} \frac{ \left[2{a} - a_{\mathrm{dd}} {g(\kappa) }\right] }{w_\rho^3w_{z}}+\frac{4K_3N^2}{18\sqrt 3\pi^3 w_\rho^5 w_z^2}=0 , \label{f1} \\ && \frac{1}{w_z^3}+ \frac{ 2N}{\sqrt{2\pi}} \frac{ \left[{a}-a_{\mathrm{dd}} c(\kappa)\right] }{w_\rho^2w_z^2} +\frac{4K_3N^2}{18\sqrt 3\pi^3 w_\rho^4 w_z^3}=0, \label{f2}
\end{eqnarray}
\begin{eqnarray}
&& g(\kappa)=\frac{2-7\kappa^2-4\kappa^4+9\kappa^4 d(\kappa)}{(1-\kappa^2)^2}, \nonumber \\ && c(\kappa) =\frac{1+10\kappa^2 -2\kappa^4 -9\kappa^2 d(\kappa)}{(1-\kappa^2)^2}.\nonumber
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,clip]{fig2.pdf}
\caption{ Variational critical number of atoms $N_{\mathrm{crit}}$ for the formation of dipolar ($a_{\mathrm{dd}}=15.3a_0$) and nondipolar ($a_{\mathrm{dd}}=0$) droplets, obtained from (\ref{f1}) and (\ref{f2}), for different $K_3$. For $N< N_{\mathrm{crit}}$, and for $a>a_{\mathrm{dd}}=15.3a_0$ (dipolar) and $a>0$ (nondipolar), no droplet can be formed.
}\label{fig2}
\end{center}
\end{figure}
\section{Numerical results}
Unlike the 1D case, the 3D GP equation (\ref{eq2}) does not have an analytic solution, and different numerical methods, such as the split-step Crank-Nicolson \cite{CPC} and Fourier spectral \cite{spec} methods, are used for its solution. We solve the 3D GP equation (\ref{eq2}) numerically by the split-step Crank-Nicolson method \cite{CPC} for a dipolar BEC \cite{kishor,CPC1}, using both real- and imaginary-time propagation in Cartesian coordinates, employing a space step of $0.025$ and a time step up to as small as $0.00001$.
In the numerical calculation, we use the parameters of $^{52}$Cr atoms \cite{np}, e.g., $a_{\mathrm{dd}}=15.3 a_0$ and $m= 52$ amu, with $a_0$ the Bohr radius. We take the unit of length $l=1$ $\mu$m, the unit of time $\tau\equiv ml^2/\hbar=0.82$ ms, and the unit of energy $\hbar^2/(ml^2)=1.29\times 10^{-31}$ J. The scattering length $a$ can be controlled experimentally, independent of the three-body term $K_3$, by magnetic \cite{mag} and optical \cite{opt} Feshbach resonances, and we mostly fix $a=-20a_0$ below. In figures \ref{fig1} we show the 2D contour plot of the energy (\ref{eq5}) as a function of the widths $w_\rho$ and $w_z$ for different $N$ and $K_3$. This figure highlights the negative energy region; the white region in this plot corresponds to positive energy. The minimum of energy is clearly marked in figures \ref{fig1}.
\begin{figure}
\begin{center}
\includegraphics[trim = 2mm 0mm 1mm 0mm,width=.505\linewidth,clip]{fig3a.pdf}
\includegraphics[trim = 10mm 0mm 1mm 0mm,width=.487\linewidth,clip]{fig3b.pdf}
\caption{ Variational (line) and numerical (chain of symbols) (a) rms sizes $\rho_{\mathrm{rms}}, z_{\mathrm{rms}}$ and (b) energy $|E|$ versus the number of $^{52}$Cr atoms $N$ in a droplet for two different $K_3$: $10^{-38}$ m$^6$/s and $10^{-37}$ m$^6$/s. The physical unit of energy for $^{52}$Cr atoms can be restored by using the energy scale $1.29\times 10^{-31}$ J. }\label{fig3}
\end{center}
\end{figure}
For a fixed scattering length $a$, (\ref{f1}) and (\ref{f2}) for the variational widths allow a solution for the number of atoms $N$ greater than a critical value $N_{\mathrm{crit}}$. For $N< N_{\mathrm{crit}}$ the system is much too repulsive and escapes to infinity. However, this critical value $N_{\mathrm{crit}}$ is a function of the three-body term $K_3$ and the scattering length $a$. The $N_{\mathrm{crit}}-a$ correlation for different $K_3$ is shown in figure \ref{fig2}. The critical number of atoms for the formation of a nondipolar droplet for $K_3=10^{-37}$ m$^6$/s is also shown in this figure. Although a trapped dipolar BEC with a negligible $K_3$ collapses for a sufficiently large $N$ \cite{bohn}, there is no collapse of the droplets for a large $N$ due to a very strong three-body repulsion at the center. We compare in figure \ref{fig3}(a) the numerical and variational root-mean-square (rms) sizes $\rho_{\mathrm{rms}}$ and $z_{\mathrm{rms}}$ of a droplet versus $N$ for two different $K_3$: $10^{-38}$ m$^6$/s and $10^{-37}$ m$^6$/s. These values of $K_3$ are reasonable and similar to the values of $K_3$ used elsewhere \cite{blakie,blakie1}. In figure \ref{fig3}(b) we show the numerical and variational energies $|E|$ of a droplet versus $N$ for different $K_3$. The energy of a bound droplet is negative in all cases and its absolute value is plotted.
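The variational side of these results is easy to reproduce. The following Python sketch restores the physical units quoted above and then minimizes the variational energy per atom (\ref{eq5}) for the $^{52}$Cr parameters; the initial guess, the bounds, and the restriction to prolate shapes ($\kappa <1$) are our own choices, so the output should be read as an illustration of the procedure rather than as the paper's numerics.
\begin{verbatim}
# Minimal sketch: units for Cr-52 (l = 1 micron) and minimization of
# the variational energy per atom, Eq. (5); illustrative, not the
# paper's own code.
import numpy as np
from scipy.optimize import minimize

hbar, amu, a0 = 1.0545718e-34, 1.6605390e-27, 0.52917721e-10
m, l = 52.0*amu, 1.0e-6
print("time unit   :", m*l**2/hbar, "s")       # ~ 8.2e-4 s = 0.82 ms
print("energy unit :", hbar**2/(m*l**2), "J")  # ~ 1.29e-31 J
K3unit = hbar*l**4/m                           # unit of K_3, in m^6/s

N   = 3000                     # droplet of figure 4(b)
a   = -20.0*a0/l               # scattering length, units of l
add = 15.3*a0/l                # dipolar length, units of l
K3  = 1e-37/K3unit             # dimensionless three-body strength

def f(kappa):
    # dipolar function f(kappa); this branch assumes kappa < 1
    d = np.arctanh(np.sqrt(1.0 - kappa**2))/np.sqrt(1.0 - kappa**2)
    return (1.0 + 2.0*kappa**2 - 3.0*kappa**2*d)/(1.0 - kappa**2)

def energy(w):
    # variational energy per atom, Eq. (5)
    wr, wz = w
    if wr >= wz:               # keep the prolate branch kappa < 1
        return 1e6
    return (1.0/(2.0*wr**2) + 1.0/(4.0*wz**2)
            + K3*N**2/(18.0*np.sqrt(3.0)*np.pi**3*wr**4*wz**2)
            + N*(a - add*f(wr/wz))/(np.sqrt(2.0*np.pi)*wr**2*wz))

res = minimize(energy, x0=[1.0, 1.5], bounds=[(0.05, 5.0), (0.05, 5.0)])
print("w_rho, w_z  :", res.x, "  E =", res.fun)   # E < 0: bound droplet
\end{verbatim}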
\begin{figure}
\begin{center}
\includegraphics[trim = 5mm 0mm 5mm 0mm,width=.4\linewidth,clip]{fig4a.pdf}
\includegraphics[trim = 5mm 0mm 5mm 0mm,width=.4\linewidth,clip]{fig4b.pdf}
\includegraphics[trim = 5mm 0mm 5mm 0mm,width=.4\linewidth,clip]{fig4c.pdf}
\includegraphics[trim = 5mm 0mm 5mm 0mm,width=.4\linewidth,clip]{fig4d.pdf}
\caption{ Variational (v, line) and numerical (n, chain of symbols) reduced 1D densities $\rho_{1D}(x)$ and $\rho_{1D}(z)$ along the $x$ and $z$ directions, respectively, and corresponding energies of a $^{52}$Cr droplet with $a=-20a_0$ for different $N$ and $K_3$: (a) $N=10000, K_3=10^{-37}$ m$^6$/s, (b) $N=3000,$ $K_3=10^{-37}$ m$^6$/s, (c) $N=10000,$ $K_3=10^{-38}$ m$^6$/s, and (d) $N=3000,$ $K_3=10^{-38}$ m$^6$/s. }
\label{fig4}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.4\linewidth,clip]{fig5a.png}
\includegraphics[width=.4\linewidth,clip]{fig5b.png}
\includegraphics[width=.4\linewidth,clip]{fig5c.png}
\includegraphics[width=.4\linewidth,clip]{fig5d.png}
\caption{ The 3D isodensity ($|\phi{(\bf r)}|^2$) of the droplets of (a) figure \ref{fig4}(a), (b) figure \ref{fig4}(b), (c) figure \ref{fig4}(c), (d) figure \ref{fig4}(d). The dimensionless density on the contour in figures \ref{fig5} and \ref{fig6}-\ref{fig8} is 0.001, which transformed to physical units is 10$^9$ atoms/cc. }\label{fig5}
\end{center}
\end{figure}
To study the density distribution of a $^{52}$Cr droplet we calculate the reduced 1D densities $\rho_{\mathrm{1D}}(x) \equiv \int dz\, dy\, |\phi({\bf r})|^2$ and $\rho_{\mathrm{1D}}(z) \equiv \int dx\, dy\, |\phi({\bf r})|^2$. In figures \ref{fig4} we plot these densities as obtained from the variational and numerical calculations for different $N$ and $K_3$. From figures \ref{fig3}(a) and \ref{fig4}(a)-(d) we find that for a small $N$ and fixed $K_3$ the droplets are well localized with a small size, and the agreement between numerical and variational results is better. For a fixed $N$, the droplet is more compact for a small $K_3$, corresponding to a small three-body repulsion. In figures \ref{fig5}(a)-(d) we show the 3D isodensity contours of the droplets of figures \ref{fig4}(a)-(d), respectively. In all cases the droplets are elongated in the $z$ direction due to the dipolar interaction. In figures \ref{fig5}(a)-(b) and \ref{fig4}(a)-(b) $K_3$ is much larger than in figures \ref{fig5}(c)-(d) and \ref{fig4}(c)-(d). Hence, the three-body repulsion is stronger in figures \ref{fig5}(a)-(b), thus leading to droplets of larger sizes. In contrast to the local energy minimum of 1D \cite{rmp} and 2D \cite{santos} solitons, the 3D droplets correspond to a global energy minimum with $E<0$, viz. figures \ref{fig1}, and are expected to be stable. The stability of the droplets is confirmed (details to be reported elsewhere) by real-time simulation over a long time interval upon a small perturbation; an extremely inelastic collision with the formation of a droplet molecule is possible for $v<1$, as shown below. To test the solitonic nature of the droplets, we study the frontal head-on collision and the collision with an impact parameter $d$ of two droplets at large velocity along the $x$ and $z$ axes. To set the droplets in motion, the respective imaginary-time wave functions are multiplied by $\exp(\pm i v x)$ and real-time simulation is then performed with these wave functions. Due to the axisymmetric dipolar interaction the dynamics along the $x$ and $z$ axes could be different at small velocities.
At large velocities the kinetic energy $E_k$ of the droplets is much larger than the internal energies of the droplets, and the latter play an insignificant role in the collision dynamics. Consequently, there is no qualitative difference between the collision dynamics along the $x$ and $z$ axes, or between the collision dynamics for different impact parameters, at large velocities. As the velocity is reduced, the collision becomes inelastic, resulting in a deformation and eventual destruction of the individual droplets after collision. At very small velocities, the dipolar energy plays a decisive role in collisions along the $x$ and $z$ axes, and the dynamics along these two axes has completely different characteristics, viz. figure \ref{fig9}.
\begin{figure}
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig6a.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig6b.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig6c.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig6d.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig6e.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig6f.png}
\caption{ Collision dynamics of two droplets of figure \ref{fig4}(b) placed at $x=\pm 4, z=\mp 1$ at $t=0$, moving in opposite directions along the $x$ axis with velocity $v\approx 38$, illustrated by 3D isodensity contours at times (a) $t=0$, (b) = 0.042, (c) = 0.084, (d) = 0.126, (e) = 0.168, (f) = 0.210. The velocities of the droplets are shown by arrows. }
\label{fig6}
\end{center}
\end{figure}
The collision dynamics of two droplets of figure \ref{fig4}(b) ($N=3000, K_3=10^{-37}$ m$^6$/s) moving along the $x$ axis in opposite directions with a velocity $v\approx 38$ each and with an impact parameter $d=2$ is shown in figures \ref{fig6}(a)-(f) by successive snapshots of the 3D isodensity contour of the moving droplets. A similar collision dynamics of the same droplets moving along the $z$ axis with a velocity $v\approx 37$ each and an impact parameter $d=2$ is illustrated in figures \ref{fig7}(a)-(f). The droplets come close to each other in figures \ref{fig6}(b) and \ref{fig7}(b), coalesce to form a single entity in figures \ref{fig6}(c)-(d) and \ref{fig7}(c)-(d), and form two separate droplets in figures \ref{fig6}(e) and \ref{fig7}(e). The droplets are well separated in figures \ref{fig6}(f) and \ref{fig7}(f), without visible deformation/distortion in shape, moving along the $x$ and $z$ axes with the same initial velocity, showing the quasi-elastic nature of the collision. The frontal head-on collision is also quasi elastic.
\begin{figure}
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig7a.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig7b.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig7c.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig7d.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig7e.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig7f.png}
\caption{ Collision dynamics of two droplets of figure \ref{fig4}(b) placed at $x=\pm 1, z=\mp 4.8$ at $t=0$, moving in opposite directions along the $z$ axis with velocity $v\approx 37$, by 3D isodensity contours at times (a) $t=0$, (b) = 0.052, (c) = 0.104, (d) = 0.156, (e) = 0.208, (f) = 0.260.
}
\label{fig7}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig8a.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig8b.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig8c.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig8d.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig8e.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.155\linewidth]{fig8f.png}
\caption{ Collision dynamics of two droplets of figure \ref{fig4}(b) placed at $x=\pm 4, z= 1 $ at $t=0$, moving towards the origin with velocity $v\approx 40$, by 3D isodensity plots at times (a) $t=0$, (b) = 0.042, (c) = 0.084, (d) = 0.126, (e) = 0.168, (f) = 0.210. }
\label{fig8}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 1mm, clip,width=.49\linewidth]{fig9a.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=.49\linewidth]{fig9b.png}
\caption{ (a) 2D contour plot of the evolution of the 1D density $\rho_{1D}(z,t)$ versus $z$ and $t$ during the collision of two droplets of figure \ref{fig4}(d) initially placed at $z =\pm 3.2$ at $t=0$ and moving towards each other with velocity $v\approx 0.5$. (b) 2D contour plot of the evolution of the 1D density $\rho_{1D}(x,t)$ versus $x$ and $t$ during the encounter of the same droplets initially placed at $x =\pm 1.6$ at $t=0$ and moving towards each other with velocity $v\approx 0.5$. }\label{fig9}
\end{center}
\end{figure}
To study the angular collision of two droplets of figure \ref{fig4}(b), at $t=0$ two droplets are placed at $x=\pm 3, z=1$, respectively, and set into motion towards the origin with a velocity $v\approx 40$ each by multiplying the respective imaginary-time wave functions by $\exp(\pm i50 x+9.5 iz)$ and performing real-time simulation. Again the isodensity profiles of the droplets before, during, and after collision are shown in figures \ref{fig8}(a)-(b), (c)-(d), and (e)-(f), respectively. The droplets again come out after collision undeformed, conserving their velocities.
Two dipolar droplets placed along the $x$ axis with the dipole moments along the $z$ direction repel each other through the long-range dipolar interaction, whereas two placed along the $z$ axis attract each other by the dipolar interaction. This creates a dipolar barrier between the two colliding droplets along the $x$ direction. At large incident kinetic energies, the droplets can penetrate this barrier and collide along the $x$ direction. However, at very small kinetic energies ($v<1$), for an encounter along the $x$ direction the droplets cannot overcome the dipolar barrier and the collision does not take place. There is no such barrier for an encounter along the $z$ direction at very small velocities, and the encounter takes place with the formation of an oscillating droplet molecule. To illustrate the different nature of the collision dynamics along the $x$ and $z$ directions at very small velocities, we consider two droplets of figure \ref{fig4}(d) ($N = 3000; K_3 = 10^{-38}$ m$^6$/s). For an encounter along the $z$ direction, at $t=0$ two droplets are placed at $z=\pm 3.2$ and set in motion in opposite directions along the $z$ axis with a small velocity $v\approx 0.5$. The dynamics is illustrated by a 2D contour plot of the time evolution of the 1D density $\rho_{1D}(z,t)$ in figure \ref{fig9}(a).
The two droplets come close to each other at $z=0$ and coalesce to form a droplet molecule, never to separate again. The droplet molecule is formed in an excited state due to the liberation of binding energy and hence oscillates. For an encounter along the $x$ direction, at $t=0$ two droplets are placed at $x=\pm 1.6$ and set in motion in opposite directions along the $x$ axis with the same velocity $v\approx 0.5$. The dynamics is illustrated by a 2D contour plot of the time evolution of the 1D density $\rho_{1D}(x,t)$ in figure \ref{fig9}(b). The droplets come a little closer to each other due to the initial momentum, but due to the long-range dipolar repulsion they eventually move away from each other and the actual encounter never takes place. In the collision dynamics of nondipolar BECs, and in collisions of dipolar BECs along the $z$ direction, the BECs never exhibit this peculiar behavior.
\begin{figure}[!t]
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 1mm, clip,width=.8\linewidth]{fig11.pdf}
\caption{ Energy well of (\ref{Eq9}), $E(w_\rho)$ vs. $w_\rho$, for different $w_z$ with the parameters of the droplet of figure \ref{fig4}(d). }\label{fig11}
\end{center}
\end{figure}
A semi-quantitative estimate of the dipolar repulsion in the collision of two droplets along the $x$ axis at small velocities can be obtained from the variational expression for the energy per atom (\ref{eq5}) at fixed $w_z$, e.g.,
\begin{equation}\label{Eq9}
E(w_\rho)= \frac{1}{2w_\rho^2} +\frac{K_3N^2\pi^{-3}}{18\sqrt 3 w_\rho^4 w_z^2} +\frac{N[a-a_{\mathrm{dd}}f(\kappa)]}{\sqrt{2\pi}w_\rho^2w_z},
\end{equation}
where we have removed the $w_z$-dependent constant term. Equation (\ref{Eq9}) gives the energy well felt by an individual atom approaching the droplet along the $x$ axis. The single approaching atom will interact with all atoms of the droplet distributed along the extension of the droplet in the $z$ direction ($\sim 0.8$, viz. figure \ref{fig4}(d)). The most probable $z$ value of an atom in the droplet to interact with the approaching atom is $z_{\mathrm{rms}}\sim w_{z}/\sqrt{2}\approx 0.5$. In figure \ref{fig11} we plot $E(w_\rho)$ versus $w_\rho$ with the parameters of the droplet of figure \ref{fig4}(d) employed in the dynamics shown in figure \ref{fig9}. We find in this figure that for small $w_z$ the energy well is entirely repulsive. For medium values of $w_z$ an attractive well with a repulsive dipolar barrier appears, and for large $w_z$ a fully attractive well appears without the dipolar barrier, which is also the case for an approaching atom along the $z$ axis. For the probable $w_z$ values there is a dipolar energy barrier of height $\sim 0.2$ near $w_\rho \sim 2$ to $3$. For the dynamics in figure \ref{fig9}, the approaching atom has an energy of $v^2/2= 0.5^2/2 = 0.125$, which is smaller than the height of the dipolar barrier at $w_\rho \sim 2$ to $3$. Hence the approaching dipolar droplet in figure \ref{fig9}(b) turns back when the distance between the two droplets is $\sim 2$. In the collision along the $z$ direction there is no dipolar barrier and the encounter takes place at all velocities.
\section{Summary}
We demonstrated the creation of a stable, stationary self-bound dipolar BEC droplet for a tiny repulsive three-body contact interaction for $a_{\mathrm{dd}}<|a|$ and studied its statics and dynamics employing a variational approximation and numerical solution of the 3D GP equation (\ref{eq1}). The droplet can move with a constant velocity.
At large velocities, the frontal collision with an impact parameter and the angular collision of two droplets are found to be quasi elastic. At medium velocities, the collision is inelastic and leads to a deformation or a destruction of the droplets after collision. At very small velocities, the collision dynamics is sensitive to the anisotropic dipolar interaction and hence to the direction of motion of the droplets. The collision between two droplets along the $z$ direction leads to the formation of a droplet molecule after collision. In an encounter along the $x$ direction at very small velocities, the two droplets repel and stay away from each other, avoiding a collision.
It seems appropriate to present a classification of the droplet formation in different parameter domains, e.g., scattering length $a$, dipolar length $a_{\mathrm{dd}}$, the strength of three-body interactions $K_3$, and the number of atoms $N$. In the absence of dipolar interaction ($a_{\mathrm{dd}}=0$), a droplet can be formed for attractive atomic interaction ($a<0$). In all cases there is a minimum number of atoms $N_{\mathrm{crit}}$ for the droplet formation, which increases as the three-body interaction $K_3$ increases or as the scattering length $a$ increases (corresponding to less attraction), viz. figure \ref{fig2}. There is no upper limit for the number of atoms to form a droplet. A similar panorama exists for the formation of a dipolar droplet, with the exception that the dipolar droplet can be formed for $a<a_{\mathrm{dd}}$.
The subject matter of this study is within present experimental possibilities, as is clear from the stability plot of figure \ref{fig2}. The size of a trapped dipolar BEC is determined by the harmonic oscillator lengths of the trap, whereas the size of the present droplet is determined by the internal atomic interactions. One should start with a trapped dipolar BEC for $N<N_{\mathrm{crit}}$, where no droplet can be formed, viz. figure \ref{fig2}. Then, using the Feshbach resonance technique, one should make the scattering length $a$ more attractive to enter the droplet formation domain. If the harmonic trap is weak, the initial droplet size could be relatively large, and by varying the scattering length the size of the droplet could be made much smaller; such droplets have been detected in experiment \cite{other1}. The repulsive three-body force could be responsible for the formation of such droplets. A preliminary study has shown that such droplets can also be formed in nondipolar BECs in the presence of a repulsive three-body interaction \cite{skapra}.
\section*{Acknowledgments}
I thank V. I. Yukalov and A. Pelster for encouragement and helpful remarks and the Funda\c c\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (Brazil) (Project: 2012/00451-0) and the Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (Brazil) (Project: 303280/2014-0) for support.
\newpage
\section*{References}
\section{Introduction}
Recent years have seen significant advances in quantum and quantum-inspired Ising solvers, such as quantum annealers~\cite{johnson2011quantum}, quantum approximate optimization circuits~\cite{farhi2014quantum}, or coherent Ising machines~\cite{honjo2021100}. These are devices/methodologies designed to solve optimization problems of the form: $\min_{z \in \{-1, 1\}^n} \sum_{i, j} J_{i, j} z_i z_j + \sum_{i} h_{i} z_i$, where $J_{i, j}, \, h_i$ are real coefficients and $z_i \in \{-1, 1\}$ are discrete variables to be optimized over. The promise of leveraging such technologies to speed up the solution of complex optimization problems has spurred many researchers to explore how Ising solvers can be applied to problems in various domains. However, challenges in both hardware engineering and analysis of existing devices stand in the way of a crisp theoretical characterization of such solvers (e.g., optimality guarantees and speed-up in runtime) and they generally must be treated as heuristics.
A standard approach has emerged where an optimization problem is directly transcribed into an Ising problem, and the returned solution is taken at face value. Consequently, this method inherits the heuristic nature of the underlying Ising solver. While this may not be a limitation for many settings, it is insufficient for those requiring global optimality guarantees or provable bounds on solution quality. As an alternative, a few authors have proposed decomposition methods based on the Alternating Direction Method of Multipliers (ADMM), \cite{gambella2020multiblock}, or Benders Decomposition (BD), \cite{chang2020hybrid, zhao2021hybrid, paterakis2021hybrid}. Critically, when ADMM is applied to non-convex problems, it is not guaranteed to converge. When it does, it is often to a local optimum without guarantees on gaps to global optimality. On the other hand, while it may be possible to derive optimality guarantees using BD, proving convergence typically relies on an exhaustive search through the ``complicating variables''. This makes it unclear whether such an algorithmic scaffold is primed to take advantage of speed-ups that the Ising solver may offer.
\paragraph{Contributions} Our work is motivated by the desire to extend existing work to applications that require global optimality guarantees. We set the standard for designing a hybrid quantum-classical optimization algorithm that offers resilience to the heuristic nature of Ising solvers while taking advantage of any speed-up they may offer. Specifically, we envision convergent hybrid quantum-classical algorithms that (1) use Ising solvers as a primitive with limited requirements on/knowledge of their optimality guarantees and (2) have polynomial complexity in the classical portions of the algorithm. To this end, the contribution of this paper is an algorithmic framework that satisfies these key desiderata. Concretely,
\begin{enumerate}
\item We revisit and extend a result of \cite{Burer2009} and show that there is an exact convex formulation of many mixed-binary quadratic optimization problems as a copositive program. Neglecting the challenges of working with copositive matrices, convex programs are a well-understood class of optimization problems with a wide variety of efficient solution algorithms. By reformulating mixed-binary quadratic programs as copositive programs, we open the door for hybrid quantum-classical algorithms that are based on existing convex optimization algorithms.
\item To solve the copositive programs, we propose a novel hybrid quantum-classical optimization algorithm based on cutting-plane algorithms, a well-established class of convex optimization algorithms. We show that the complexity of the portion of the algorithm handled by the classical computer has polynomial scaling. This analysis suggests that when applied to NP-hard problems, the complexity of the solution is shifted onto the subroutine handled by the Ising solver.
\item We conducted benchmarking based on the maximum clique problem to validate our theoretical claims and evaluate potential speed-ups from using a stochastic Ising solver in lieu of a state-of-the-art deterministic solver or an Ising heuristic. Results indicate that the Ising formulation of the subproblems of the hybrid algorithm is efficient versus a MILP formulation in \texttt{Gurobi}, and the hybrid algorithm is competitive even against a non-hybridized Ising formulation of the full problem solved by simulated annealing.
\end{enumerate}
While preparing this manuscript, a hybrid classical-quantum method relying upon a Frank-Wolfe method was published \cite{yurtsever2022q}. This work also leverages a similar copositive reformulation of quadratic binary optimization problems. We highlight the differences between that work and the one in our manuscript below. This manuscript considers the optimization problem class of mixed-binary quadratic programs, while in \cite{yurtsever2022q}, the authors propose their method for quadratic binary optimization problems, a subcase of the problems considered herein. Moreover, we provide a proof of exactness and strong duality for the copositive/completely positive optimization stemming from the mixed-integer quadratic reformulation, addressing an open question in the field. In their manuscript, the authors of \cite{yurtsever2022q} conjecture the results proved in this manuscript to be true. Finally, our solution method, which is based on cutting-plane algorithms, has a potential exponential speed-up in run-time compared to Frank-Wolfe algorithms.
\subsection{Related Work}
One dominant method for mapping optimization problems into Ising problems is through direct transcription. This process typically involves discretizing continuous variables and passing constraints into the objective through a penalty function; the returned solution is often taken at face value or with minimal post-processing to enforce feasibility. Owing to its simplicity, this process has found applications in a variety of problems including job-shop scheduling \cite{venturelli2016job}, routing problems \cite{harwood2021formulating}, community detection \cite{negre2020detecting}, and all of Karp's 21 NP-complete problems \cite{lucas2014ising}, among others. Critically, this approach is unforgiving of the heuristic and imperfect characteristics of existing and near-term devices. This approach is often justified by the statement that a good-quality solution is sufficient for many problem settings; however, there are also many practical problems where guarantees or bounds on optimality are paramount, such as neural network verification \cite{brown2022unified}, refinery pooling, phase and chemical equilibrium \cite{floudas2009review}, among others. Even with experimental efforts attesting to the quality of returned solutions, it is unclear to what degree such results can be extrapolated to problem classes or instances that have not been evaluated.
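As a concrete miniature of this workflow, the sketch below transcribes a maximum-clique instance (the benchmark problem used later in this paper) into an unconstrained binary quadratic objective by penalizing non-edges, following the standard recipe of \cite{lucas2014ising}, and then minimizes it by brute force, which is the role an Ising solver would play; the toy graph and the penalty weight are illustrative choices of ours. Substituting $x_i=(1+z_i)/2$ puts the objective in the Ising form stated in the introduction.
\begin{verbatim}
# Direct transcription of maximum clique into an unconstrained binary
# quadratic problem (QUBO); illustrative toy instance.
from itertools import combinations, product

n = 5
# toy graph; its largest cliques {0,1,2} and {1,2,3} have size 3
edges = {(0,1), (0,2), (1,2), (1,3), (2,3), (3,4)}
non_edges = [p for p in combinations(range(n), 2) if p not in edges]
P = 2.0      # penalty weight; any P > 1 suffices for this recipe

def cost(x):
    # maximize sum(x) subject to x_i*x_j = 0 on non-edges (penalized)
    return -sum(x) + P*sum(x[i]*x[j] for i, j in non_edges)

best = min(product((0, 1), repeat=n), key=cost)
print("clique indicator:", best, " clique size:", -cost(best))
\end{verbatim}
The brute-force enumeration is, of course, exactly the exponential step that direct transcription hands to the Ising solver, with all of its heuristic caveats.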
As an alternative to direct transcription, there is a burgeoning body of literature exploring the potential of decomposition methods for designing hybrid quantum-classical algorithms. These generally refer to algorithms that divide effort between a classical and quantum computer, with each computer informing the computation carried out by the other. Among these, algorithms based on Benders Decomposition (BD) are gaining traction. BD is particularly effective for problems characterized by ``complicating variables'', for which the problem becomes easy once these variables are fixed. For example, a mixed-integer linear program (MILP) becomes a linear program (LP) once the integer variables are fixed--the integers are the complicating variables. BD iterates between solving a master problem over the complicating variables and sub-problems where the complicating variables are fixed, whose solution is used to generate cuts for the master problem. Both \cite{chang2020hybrid} and \cite{zhao2021hybrid} consider mixed-integer programming (MIP) problems where the integer variables are linked to the continuous variables through a polyhedral constraint and leverage a reformulation where dependence on the continuous variables is expressed as constraints over the extreme rays and points of the original feasible region. Because the number of extreme rays and points may be exponentially large, the constraints are not written down in full but iteratively generated from the solutions of the sub-problems. The master problem is an integer program consisting of these constraints and is solved using the quantum computer. Notably, the generated constraint set may be large, with the worst case being the generation of the entire constraint set, resulting in a large number of iterations. The approach in \cite{paterakis2021hybrid} attempts to mitigate this by generating multiple cuts per iteration and selecting the most informative subset of these cuts. Instead of using the quantum computer to solve the master problem, the quantum computer is used to heuristically select cuts based on a minimum set cover or maximum coverage metric. While this may effectively reduce the number of iterations and size of the constraint set, the master problem is often an integer program that may be computationally intractable. For each of the proposed approaches, it is unclear how the complexity of the problem is distributed through the solution process--for example, for \cite{chang2020hybrid, zhao2021hybrid} the complexity might show up in the number of iterations and for \cite{paterakis2021hybrid} it might show up when solving the master problem. Consequently, it is ambiguous whether BD-based approaches can take advantage of a speed-up in the Ising solver, even if one were to exist.
Another decomposition that has been explored is based on the Alternating Direction Method of Multipliers (ADMM) \cite{gambella2020multiblock}. This is an algorithm to decompose large-scale optimization problems into smaller, more manageable sub-problems \cite{boyd2011distributed}. While originally designed for convex optimization, ADMM has shown great success as a heuristic for non-convex optimization as well, \cite{diamond2018general}, and significant progress has been made towards explaining its success in such settings \cite{wang2019global}.
In \cite{gambella2020multiblock}, the authors propose an ADMM-based decomposition with three sub-problems: the first being over just the binary variables, the second being the full problem with a relaxed copy of the binary variables, and the third being a term that ties the binary variables and their relaxed copies together. For quadratic pure-binary problems, the authors show that the algorithm converges to a stationary point of the augmented Lagrangian, which may not be a global optimizer--convergence to a global optimum is only guaranteed under the more stringent Kurdyka-\L ojasiewicz conditions on the objective function \cite{attouch2013convergence}. Unfortunately, the assumptions guaranteeing convergence to a stationary point fail in the presence of continuous variables.
A third class of decomposition, proposed and implemented in D-Wave's \texttt{qbsolv} solver, is based on tabu search \cite{BoothReinhardtEtAl2017}. \texttt{qbsolv} can be seen as iterating between a large-neighborhood local search (using an Ising solver) and tabu improvements to locally refine the solution (using a classical computer), where previously found solutions are removed from the search space in each iteration. During the local search phase, subsets of the variables are jointly optimized while the remaining variables are fixed to their current values. The solution found in this phase is then used to initialize the tabu search algorithm, and the process is repeated for a fixed number of iterations. Critically, it is unclear whether the algorithm is guaranteed to converge and, if so, what its optimality guarantees are. While finite convergence of tabu search is investigated in \cite{glover2002tabu}, it relies on either recency or frequency memory that ensures an exhaustive search of all potential solutions.
Another approach for purely integer programming problems is based on the computation of a Graver basis via the integer null-space of the constraint set, as proposed in \cite{alghassi2019graver}. This null-space computation is posed as a quadratic unconstrained binary optimization (QUBO) and then post-processed to obtain the Graver basis of the constraint set, a test-set of the problem. The test-set provides search directions for an augmentation-based algorithm. For a convex objective, it provides a polynomial oracle complexity in converging to the optimal solution. The authors initialize the problem by solving a feasibility-based QUBO and extend this method to non-convex objectives by allowing multiple starting points for the augmentation. The multistart procedure also alleviates the requirement for computing the complete Graver basis of the problem, which grows exponentially with the problem's size. Considering an incomplete basis or non-convex objectives makes the Graver Augmentation Multistart Algorithm (GAMA) a heuristic for general integer programming problems, and it cannot address problems with continuous variables.
In this paper, we seek to fill a gap in the literature on rigorous solution methods for mixed-integer non-convex optimization problems that use Ising solvers as a subroutine. By elucidating the hidden convex structure of non-convex problems, we pave the way for hybrid quantum-classical algorithms based on efficient convex optimization algorithms. We show that a hybrid algorithm based on cutting-plane algorithms inherits their convergence guarantees without sacrificing efficiency.
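The flavor of such a hybrid cutting-plane scheme can be conveyed by a deliberately simplified sketch. The toy problem below maximizes $y$ subject to $M(y)=M_0-yB$ being copositive; the outer approximation collects cuts $x^{\top}M(y)x\ge 0$, and the separation step (searching for a violated direction) is restricted to binary vectors, which is exactly a QUBO and hence the piece an Ising solver would handle. This is our own illustration, not the algorithm of this paper: checking only binary directions is a coarse surrogate for full copositivity (it happens to be exact for this toy instance), and with a single scalar variable the LP master degenerates to taking the tightest cut.
\begin{verbatim}
# Schematic cutting-plane loop with a QUBO separation oracle
# (brute-forced here, standing in for an Ising solver).
import numpy as np
from itertools import product

n  = 3
M0 = 2.0*np.eye(n)
B  = np.ones((n, n))
y  = 10.0                               # optimistic start
for it in range(20):
    M  = M0 - y*B
    # separation oracle: most violated binary direction (a QUBO)
    xs = [np.array(x) for x in product((0, 1), repeat=n) if any(x)]
    x  = min(xs, key=lambda v: v @ M @ v)
    if x @ M @ x >= -1e-12:
        break                           # no violated cut: y is feasible
    # the cut x^T (M0 - y B) x >= 0 tightens the bound on y
    y = (x @ M0 @ x) / (x @ B @ x)
print("approximately optimal y:", y)    # -> 2/3 for this toy instance
\end{verbatim}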
\subsection{Quantum/Quantum-inspired Ising Solvers} Adiabatic quantum computing (AQC) is a quantum computation paradigm that operates by initializing a system in the ground state of an initial Hamiltonian and slowly sweeping the system to an objective Hamiltonian. This Hamiltonian, referred to as the cost Hamiltonian, maps the objective function of the classical Ising model onto a system with as many quantum bits, or qubits, as there are variables in the Ising model. The adiabatic theorem of quantum mechanics states that if the system evolution is ``sufficiently slow", the system ends up in the ground state of the desired Hamiltonian. Here, ``sufficiently slow" depends on the minimum energy gap between the ground and the first excited state throughout the system evolution \cite{albash2018adiabatic}. Since the evaluation of the minimal gap is mostly intractable, one is forced to phenomenologically ``guess'' the evolution's speed, and if it is too fast, undesired non-adiabatic transitions can occur. Additionally, real devices are plagued with various incarnations of physical noise, such as thermal fluctuations or decoherence effects, that can hamper computation. The situation is further exacerbated by the challenge of achieving dense connectivity between qubits--densely connected problems are embedded in devices by chaining together multiple physical qubits to represent one logical qubit. The heuristic computational paradigm that encompasses these additional noise and non-quantum effects is known as Quantum Annealing (QA); \cite{hauke2020perspectives} provides a review of QA with a focus on possible routes towards resolving the open questions in the field.

An alternative paradigm to AQC is the gate-based model of quantum computing. Within the gate-based model, Variational Quantum Algorithms (VQAs) are a class of hybrid quantum-classical algorithms that can be applied to optimization \cite{cerezo2021variational}. VQAs share a common operational principle where the ``loss function" of a parameterized quantum circuit is measured on a quantum device and evaluated on a classical processor, and a classical optimizer is used to update (or ``train") the circuit's parameters to minimize the loss. VQAs are often interpreted as a quantum analog to machine learning, leaving many similar questions open regarding their trainability, accuracy, and efficiency. The quantum approximate optimization algorithm (QAOA) is a specific instance of a VQA where the structure of the quantum circuit is the digital analog of adiabatic quantum computing \cite{farhi2014quantum}. QAOA operates by alternating the application of the cost Hamiltonian and a mixing Hamiltonian; the number of alternating blocks is referred to as the circuit depth. For each of the alternating steps, whether mixing or cost application, a classical optimizer needs to determine how long the step should be applied, encoded as a rotation angle. Optimizing the expected cost function with respect to the rotation angles is a continuous, low-dimensional, non-convex problem. QAOA is designed to optimize cost Hamiltonians, such as the ones derived from classical Ising problems. Performance guarantees can be derived for QAOA on well-structured problems, provided that the optimal angles are found in the classical optimization step.
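To make the QAOA template above concrete, the following is a minimal sketch that simulates a depth-one QAOA circuit for a two-spin Ising problem with dense linear algebra; the coefficients are illustrative assumptions, and a coarse grid search stands in for the classical angle optimizer.

\begin{verbatim}
import numpy as np

# Ising coefficients for a toy two-spin problem (illustrative values).
J12, h1, h2 = 1.0, 0.5, -0.5

# Diagonal of the cost Hamiltonian over the basis |z1 z2>, z in {+1, -1}.
zvals = np.array([1.0, -1.0])
cost = np.array([J12 * z1 * z2 + h1 * z1 + h2 * z2
                 for z1 in zvals for z2 in zvals])

X = np.array([[0.0, 1.0], [1.0, 0.0]])
mixer = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)  # X_1 + X_2

def qaoa_energy(gamma, beta):
    psi = np.full(4, 0.5, dtype=complex)       # uniform superposition
    psi = np.exp(-1j * gamma * cost) * psi     # cost layer (diagonal unitary)
    w, V = np.linalg.eigh(mixer)               # mixer layer via its eigenbasis
    psi = V @ (np.exp(-1j * beta * w) * (V.conj().T @ psi))
    return float(np.real(np.conj(psi) @ (cost * psi)))

# Classical outer loop: grid search over the two rotation angles.
grid = np.linspace(0.0, np.pi, 50)
gamma, beta = min(((g, b) for g in grid for b in grid),
                  key=lambda ang: qaoa_energy(*ang))
print(f"energy at best angles: {qaoa_energy(gamma, beta):.3f}, "
      f"ground-state energy: {cost.min():.3f}")
\end{verbatim}

For this toy instance, the optimized expected energy improves on the unoptimized mean of the cost spectrum while generally remaining above the ground-state energy, illustrating the finite-depth gap discussed next.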
Although approximation guarantees have not been derived for arbitrary cost Hamiltonians, even depth-one QAOA circuits have non-trivial performance guarantees for specific problems and cannot be efficiently simulated on classical computers \cite{farhi2016quantum}, thus bolstering the hope for a speed-up in near-term quantum machines. Moreover, the algorithm's characteristics, such as relatively shallow circuits, make it amenable to implementation on currently available noisy intermediate-scale quantum (NISQ) computers, in contrast to other algorithms requiring fault-tolerant quantum devices~\cite{preskill2018quantum}, which, to the best of the authors' knowledge, have not yet been developed. While QAOA's convergence to optimal solutions is known to improve with increased circuit depth and to succeed in the infinite-depth limit following its equivalence to AQC, its finite-depth behavior has remained elusive due to the challenges in analyzing quantum many-body dynamics and other practical complications such as decoherence when implementing long quantum circuits, compilation issues, and the hardness of the classical optimal-angle problem~\cite{uvarov2021barren}. Even considering these complications, QAOA has been extensively studied and implemented on current devices~\cite{willsch2020benchmarking,harrigan2021quantum}, becoming one of the most popular alternatives for addressing combinatorial optimization problems modeled as Ising problems using gate-based quantum computers. Several other quantum heuristics for Ising problems have been proposed, usually requiring fault-tolerant quantum computers. We direct the interested reader to a recent review on the topic~\cite{sanders2020compilation}.

An alternative physical system for solving Ising problems that has emerged is coherent Ising machines (CIMs), which are optically pumped networks of coupled degenerate optical parametric oscillators. As the pump strength increases, the equilibrium states of an ideal CIM correspond to the ground states of the Ising Hamiltonian encoded by the coupling coefficients. Large-scale prototypes of CIMs have achieved impressive performance in the lab, thus driving theoretical study of their fundamental operating principles. While significant advances have been made on this front, we still lack a clear theoretical understanding of the CIMs' computational performance. Since a thorough understanding of the CIM is limited by our capacity to prove theorems about complex dynamical systems, near-term usage of CIMs must treat them as a heuristic rather than a device with performance guarantees \cite{yamamoto2017coherent}. Even so, there are empirical observations that in many cases, the median complexity of solving Ising problems using CIMs scales as $\exp(\sqrt{N})$, where $N$ is the size of the problem~\cite{mohseni2022ising}, making them a potential approach for solving these problems efficiently in practice. We note that there are other types of Ising machines, including classical thermal annealers (based on spintronics, optics, memristors, and digital hardware accelerators), dynamical-systems solvers (based on optics and electronics), and superconducting-circuit quantum annealers. We direct the interested reader to \cite{mohseni2022ising}, which provides a recent review and comparison of various methods for constructing Ising machines and their operating principles. While there is optimism regarding improvements to and our understanding of quantum technology in the coming decades, it is still unclear what their impact will be in the near term.
We believe that the method presented in this paper is a complementary approach to algorithm design that meets these technological advancements ``halfway". In particular, we envision a class of hybrid quantum-classical optimization algorithms that, with limited knowledge of or requirements on the guarantees of any particular Ising solver, can transform black-box solutions into ones with rigorous optimality guarantees while simultaneously benefiting from any advantage that does exist. \paragraph{Organization} In Section \ref{sec:prelim}, we present notation, terminology, and the problem setting covered by our approach. In Section \ref{sec:approach}, we introduce the proposed framework, including convex reformulation via copositive programming, a high-level overview of cutting-plane algorithms, and a specific discussion of their application to copositive programming. Section \ref{sec:experiments} provides numerical experiments supporting our assertions about the proposed approach. Finally, we conclude and highlight future directions in Section \ref{sec:conclusion}. \section{Preliminaries}\label{sec:prelim} \subsection{Notation and Terminology} In this paper, we solely work with vectors and matrices defined over the real numbers and reserve lowercase letters for vectors and uppercase letters for matrices. We will also follow the convention that a vector $x \in \mathbb{R}^n$ is to be treated as a column vector, i.e., equivalent to a matrix of dimension $n \times 1$. For a matrix $M$, we use $M_{i, j}$ to denote the entry in the $i$th row and $j$th column, $M_{i, *}$ denotes the entire $i$th row, and $M_{*, j}$ denotes the entire $j$th column. We use $\mathds{1}$ to denote the all-ones vector and $\basis{j}$ to denote the $j$th standard basis vector (i.e., a vector where all entries are zero except for a 1 in the $j$th entry). The $p$-norm of a vector $v \in \mathbb{R}^n$ is defined as $\norm{v}_p := \left(\sum_{i= 1}^n |v_i|^p \right)^{1/p}$. We reserve the letter $I$ to denote the identity matrix. For two matrices, $M$ and $N$, we use $\langle M, \, N \rangle = \text{Tr}(M^\top N)$ to denote the matrix inner product. Note that for two vectors, $\text{Tr}(x^\top y) = x^\top y$ because $x^\top y$ is a scalar, so the matrix inner product is consistent with the standard inner product on vectors. For sets, $S_M + S_N : = \{ M + N \mid M \in S_M, N \in S_N\}$ is their Minkowski sum, $S_M \cup S_N$ their union, and $S_M \cap S_N$ their intersection. For a cone, $\mathcal{K}$, its dual cone is defined as $\mathcal{K}^* = \{X \mid \langle X, \, K \rangle \geq 0, \forall K \in \mathcal{K}\}$. While we work with matrix cones in this paper, this definition of dual cones is consistent with vector cones as well. In this paper, the two cones we will work with are the cone of completely positive matrices and the cone of copositive matrices. The cone of completely positive (CP) matrices, $\mathcal{C}^*$, is the set of matrices that admit a factorization with entry-wise non-negative factors: \begin{equation} \mathcal{C}^*_{n} := \{X \in \mathbb{R}^{n \times n} \mid X = \sum_k x^{(k)} (x^{(k)})^\top, \quad x^{(k)} \in \mathbb{R}^n_{\scriptscriptstyle \geq 0} \} \end{equation} The cone of copositive matrices, $\mathcal{C}$, is the set of matrices defined by: \begin{equation} \mathcal{C}_{n} := \{X \in \mathbb{R}^{n \times n} \mid v^\top X v \geq 0, \quad \forall v \in \mathbb{R}^n_{\scriptscriptstyle \geq 0} \} \end{equation} As suggested by the notation, the cones of completely positive and copositive matrices are duals of each other.
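These definitions are easy to probe numerically. The following minimal sketch (with illustrative dimensions; random non-negative probing is only a one-sided check, not a copositivity test) builds a completely positive matrix from non-negative factors and exhibits a copositive matrix that is not positive semidefinite:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 6

# A completely positive matrix: a sum of rank-one terms x x^T with x >= 0.
factors = rng.random((k, n))
X = sum(np.outer(x, x) for x in factors)

# Complete positivity implies copositivity: v^T X v >= 0 for all v >= 0.
probes = rng.random((10_000, n))
print("min probe value for X:", min(v @ X @ v for v in probes))

# A copositive matrix that is not positive semidefinite: non-negative
# entries guarantee copositivity, yet one eigenvalue is negative.
M = np.array([[1.0, 2.0], [2.0, 1.0]])
print("eigenvalues of M:", np.linalg.eigvalsh(M))
print("min probe value for M:", min(v @ M @ v for v in rng.random((10_000, 2))))
\end{verbatim}

The example makes the containment explicit: every completely positive matrix is copositive, but copositive matrices need not be positive semidefinite.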
In this paper, we will use the terms Ising problem and quadratic unconstrained binary optimization (QUBO) interchangeably. An Ising problem is an optimization problem of the form: $\min_{z \in \{-1, 1\}^n} \sum_{i, j} J_{i, j} z_i z_j + \sum_{i} h_{i} z_i$, where $J_{i, j}, \, h_i$ are real coefficients and $z_i \in \{-1, 1\}$ are discrete variables to be optimized over. A QUBO, which is an optimization problem of the form $ \min_{x \in \{0, \, 1\}^n} \sum_{i, j} Q_{i, j} x_i x_j$, can be reformulated as an Ising problem using the change of variables $z = 2x - \mathds{1}$. This translates to coefficients in the Ising problem $J_{i, j} = \frac{1}{4} Q_{i, j}$, $h_i = \frac{1}{2} \sum_{j} Q_{i, j}$, and a constant offset of $\frac{1}{4}\sum_{i, j} Q_{i, j}$. \subsection{Problem Setting} In this paper, we consider mixed-binary quadratic programs (MBQP) of the form: \begin{mini} {x \in \mathbb{R}^{n}}{x^\top Q x + 2 c^\top x} {\label{eq:MBQP}\tag{MBQP}}{} \addConstraint{Ax = b, \quad A \in \mathbb{R}^{{m} \times {n}}, \, b \in \mathbb{R}^{{m}}} \addConstraint{x \geq 0} \addConstraint{x_j \in \{0, 1\}, \quad j \in B} \end{mini} where the set $B \subseteq \{1, \ldots, {n}\}$ indexes which of the $n$ variables are binary. This is a general class that encompasses QUBOs, standard quadratic programming, the maximum stable set problem, and the quadratic assignment problem. Because an Ising problem can be equivalently expressed as a QUBO, many problems tackled with Ising solvers thus far pass through a formulation similar to Problem \eqref{eq:MBQP}. Using the result in \cite[Sec. 3.2]{Burer2009}, the formulation considered in this paper can be extended to include constraints of the form $x_ix_j = 0$ that force at least one of $x_i$ or $x_j$ to be zero, i.e., \emph{complementarity constraints}. For ease of notation, this extension is left out of the present discussion. \section{Proposed Methodology}\label{sec:approach} In this section, we will discuss our proposed methodology for solving Problem \eqref{eq:MBQP} with optimality guarantees given access to heuristic/black-box Ising solvers. Our result relies on a convex reformulation of Problem \eqref{eq:MBQP} as a copositive program. Leveraging convexity, we propose to solve the problem using cutting-plane algorithms. These belong to a broad class of convex optimization algorithms whose standard components give rise to a natural separation between the role of the Ising solver versus a classical computer. We first state Burer's exact reformulation of Problem \eqref{eq:MBQP} as a completely positive program and its dual copositive program. We then show that under mild conditions on the original MBQP (i.e., feasibility and boundedness), the copositive and completely positive programs exhibit strong duality. We will then introduce the class of cutting-plane algorithms and summarize the complexity guarantees of several well-known variants. Finally, we explicitly show how cutting-plane algorithms can be used to solve copositive optimization problems given a copositivity oracle and discuss how to implement a copositivity oracle using an Ising solver.
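Before proceeding, the QUBO-to-Ising change of variables stated in the preliminaries above is easy to verify numerically. The following minimal sketch (assuming a symmetric $Q$, for which the stated coefficient formulas hold) checks that the two objectives agree, including the constant offset, on every binary assignment:

\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2  # symmetrize: the coefficient formulas assume Q = Q^T

# Ising coefficients from the change of variables z = 2x - 1.
J = Q / 4.0
h = Q.sum(axis=1) / 2.0
offset = Q.sum() / 4.0

for bits in itertools.product((0, 1), repeat=n):
    x = np.array(bits, dtype=float)
    z = 2.0 * x - 1.0
    assert np.isclose(x @ Q @ x, z @ J @ z + h @ z + offset)
print("QUBO and Ising objectives agree on all 2^n assignments.")
\end{verbatim}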
\subsection{Convex formulation as a copositive program} In his seminal work, Burer showed that MBQPs can be represented exactly as completely positive programs of the form: \begin{mini} {X \in \mathbb{R}^{{n} \times {n}}, \, x \in \mathbb{R}^{n}}{\innerMat{\quadmat{Q}{c}{\cdot}}{\quadmat{X}{x}{1}}} {\label{eq:CPP}\tag{CPP}}{} \addConstraint{\innerMat{\quadmatrow{\cdot}{\frac{1}{2} A_{i, *}}{\cdot}}{\quadmat{X}{x}{1}} = b_i, \, i = 1, \ldots, {m}} \addConstraint{\innerMat{\Quadmat{A^\top_{i, *} A_{i, *}}{\cdot}{\cdot}}{\quadmat{X}{x}{1}} = b^2_i, \, i = 1, \ldots, {m}} \addConstraint{\innerMat{\quadmat{-\basis{j} \basis{j} ^\top}{\frac{1}{2} \basis{j} }{\cdot}}{\quadmat{X}{x}{1}} = 0, \, j \in B} \addConstraint{\quadmat{X}{x}{1} \in \mathcal{C}^*_{{n} + 1},} \end{mini} where exactness means that Problems \eqref{eq:MBQP} and \eqref{eq:CPP} have the same optimal objective and, for an optimal solution, $(x^*, X^*)$, of \eqref{eq:CPP}, $x^*$ lies within the convex hull of optimal solutions for \eqref{eq:MBQP} \cite[Theorem 2.6]{Burer2009}. Similar to semi-definite programming (SDP) relaxations, the completely positive formulation involves lifting to a matrix variable representing first and second-degree monomials of the variables in \eqref{eq:MBQP}, making the objective function and constraints linear. Unlike SDP relaxations, however, the complete positivity constraint is sufficient for ensuring that the feasible region of \eqref{eq:CPP} is exactly the convex hull of the feasible region of \eqref{eq:MBQP}. This distinction is what ensures that the optimal value of \eqref{eq:CPP} is exactly that of \eqref{eq:MBQP}, whereas for an SDP relaxation, the optimal solution may lie outside of the convex hull, resulting in a lower objective value (i.e., a \emph{relaxation gap}). Taking the dual of \eqref{eq:CPP} yields a copositive optimization problem of the form \cite[Section 5.9]{boyd2004convex}: \begin{maxi} {\mu, \lambda, \gamma}{\gamma + \sum_{i = 1}^{{m}} \left( \mu^{\text{(lin)}}_i b_i + \mu^{\text{(quad)}}_{i} b_i^2 \right)} {\label{eq:COP}\tag{COP}}{} \addConstraint{M(\mu, \lambda, \gamma) \in \mathcal{C}_{{n} + 1},} \end{maxi} where \begin{equation}\label{def:M} \begin{aligned} M(\mu, \lambda, \gamma) := &\quadmat{Q}{c}{\cdot} - \sum_{i = 1}^{{m}} \mu^{\text{(lin)}}_i \quadmatrow{\cdot}{\frac{1}{2} A_{i, *}}{\cdot} - \sum_{i = 1}^{{m}} \mu^{\text{(quad)}}_i \Quadmat{A^\top_{i, *} A_{i, *}}{\cdot}{\cdot}\\ & - \sum_{j \in B} \lambda_j \quadmat{-\basis{j} \basis{j} ^\top}{\frac{1}{2} \basis{j} }{\cdot} - \gamma\Quadmat{\cdot}{\cdot}{1} \end{aligned} \end{equation} is a parametrized linear combination of the constraint matrices. The dual copositive program has a linear objective and a single copositivity constraint---this is a convex optimization problem. While weak duality always holds between an optimization problem and its dual, strong duality is not generally guaranteed. Showing that strong duality holds is critical for ensuring convergence of specific optimization algorithms and exactness when solving the dual problem as an alternative to solving the primal. \begin{theorem}[Strong Duality] If Problem \eqref{eq:MBQP} is feasible with a bounded feasible region, then strong duality holds between Problems \eqref{eq:CPP} and \eqref{eq:COP} (i.e., $ \min \eqref{eq:CPP} = \max \eqref{eq:COP}$). \begin{proof}[Proof Sketch] Our proof proceeds by first showing strong duality between the alternative representation of \eqref{eq:CPP} (using a homogenized formulation of the equality constraints) and its dual.
By showing that the optimal value of \eqref{eq:COP} is lower-bounded by the optimal value of this homogenized dual problem, we can sandwich the optimal values of Problems \eqref{eq:CPP} and \eqref{eq:COP} by those of a primal-dual pair that has been shown to exhibit strong duality. The complete proof of this result is provided in Appendix \ref{subsec:strong_dual}. \end{proof} \end{theorem} In prior work, characterization of the duality gap between Problems \eqref{eq:CPP} and \eqref{eq:COP} has remained elusive because the feasible region of Problem \eqref{eq:CPP} never has an interior, thus prohibiting straightforward application of Slater's constraint qualification. This result is significant because it shows that under mild conditions, the copositive formulation is exact. This means that the optimal values of Problems \eqref{eq:MBQP} and \eqref{eq:COP} are equivalent, so solving Problem \eqref{eq:COP} is a valid alternative to solving Problem \eqref{eq:MBQP}. Moreover, the optimal solution of Problem \eqref{eq:CPP} can be recovered from the optimal solution of Problem \eqref{eq:COP} by optimizing the Lagrangian function with respect to the optimal dual variables \cite[Prop 5.3.3]{bertsekas2009convex}. While Problems \eqref{eq:CPP} and \eqref{eq:COP} are both convex, neither resolves the difficulty of Problem \eqref{eq:MBQP}, as even checking complete positivity (resp. copositivity) of a matrix is NP-hard (resp. co-NP-complete) \cite{murty1987some}. Instead, they should be viewed as ``packaging" the complexity of the problem entirely in the copositivity/complete positivity constraint. There are a number of classical approaches for (approximately) solving copositive/completely positive programs directly, such as the sum-of-squares hierarchy \cite{Parrilo2000, lasserre2001global}, feasible descent methods in the completely positive cone, and approximations of the copositive cone by sequences of polyhedral inner and outer approximations \cite{dur2021conic,burer2012copositive,dur2010copositive}. In this paper, we will exploit the innate synergy between checking copositivity, which is most naturally posed as a quadratic minimization problem, and solving Ising problems. This perspective is suggestive of a hybrid quantum-classical approach where the quantum computer is responsible for checking feasibility (i.e., the ``hard part") of the copositive program while the classical computer directs the search towards efficiently reducing the search space. \subsection{Cutting-Plane/Localization Algorithms}\label{subsec:cp_alg} Cutting-plane/localization algorithms are convex optimization algorithms that divide labor between checking feasibility--abstracted as a \emph{separation oracle}--and optimization of the objective. In this section, we provide a high-level overview of each algorithmic step and summarize both the run-time and oracle complexities of several well-known variants; these complexity measures will ultimately correspond to the complexity of the sub-routine handled by the classical computer and the number of calls to the Ising solver, respectively. While cutting-plane algorithms are often used to solve both constrained and unconstrained optimization problems, they are generally evaluated in terms of their complexity when solving the \emph{feasibility problem}.
\begin{definition}[Feasibility Problem] For a set of interest $S \subset \mathbb{R}^{{m}}$, which can only be accessed through a \emph{separation oracle}, the feasibility problem is concerned with either finding a point in the set, $x \in S$, or proving that $S$ does not contain a ball of volume $r$. \end{definition} \begin{definition}[Separation Oracle] A separation oracle for a set $S$, $\texttt{Oracle}_S(\cdot)$, takes as input a point $x \in \mathbb{R}^{{m}}$ and either returns \texttt{True} if $x \in S$ or a separating hyperplane if $x \not \in S$. A separating hyperplane is defined by a vector ${a} \in \mathbb{R}^{{m}}$ and a scalar ${b} \in \mathbb{R}$ such that ${a}^\top s \leq {b}$ for all $s \in S$ but ${a}^\top x \geq {b}$. \end{definition} The feasibility problem formulation is non-restrictive because these methods can be readily adapted to solving quasi-convex optimization problems with only a simple modification to the separation oracle. In particular, if, whenever the test point is feasible, the oracle instead returns a vector $g$ such that $f(y) < f(x)$ implies $g^\top y \leq g^\top x$, then $g$ defines a separating hyperplane for the subset of the feasible region that has a better objective than the test point. If $f$ is subdifferentiable, any subgradient $g \in \partial f(x)$ satisfies this condition, and for Problem \eqref{eq:COP}, which is a maximization problem, choosing $g$ as the negative of the objective's coefficient vector is sufficient. Although there are many variations of cutting-plane algorithms, at a high level, they follow a standard template that consists of alternating between checking feasibility of a test point, updating an outer approximation of the feasible region, and judiciously selecting the next test point. This standard template is summarized in Algorithm \ref{alg:cp_meta}. By choosing subsequent test points to be the center of the outer approximation, the algorithm is guaranteed to make consistent progress in reducing the search space (where the metric of progress may vary across cutting-plane algorithms). Intuitively, cutting-plane algorithms can be considered a high-dimensional analog of binary search. \begin{algorithm} \caption{Cutting-plane meta-algorithm (feasibility problem)}\label{alg:cp_meta} \KwIn{$S_0 \subseteq \mathbb{R}^{{m}}$ (Initial Set) with $\texttt{Vol}(S_0) \leq R$} \KwOut{$x \in S$ or \texttt{False} if $S$ does not contain a ball of volume $r$} $x \gets \texttt{Center}(S_0)$\; $k \gets 0$\; \While{\texttt{Oracle}(x) is not \texttt{True} and $\texttt{Vol}(S_k) \geq r$}{ $S_{k + 1} \gets \texttt{Add\_Cut}(S_k, \texttt{Oracle}(x))$\; $x \gets \texttt{Center}(S_{k + 1})$\; $k \gets k + 1$\; } \uIf{\texttt{Oracle}(x) is \texttt{True}}{ \Return{$x$}\; } \Else{ \Return{\texttt{False}}\; } \end{algorithm} A number of well-known variants of cutting-plane algorithms are summarized in Table \ref{tab:cp}. Instantiations of cutting-plane algorithms differ in how subsequent test points are chosen, how the outer approximation is updated, and how progress in decreasing the outer approximation's size is measured. Each of the surveyed variants strikes a different balance between the computational effort needed to compute a good center and the resolution used to represent the outer approximation.
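To make the template concrete, the following is a minimal sketch of the central-cut ellipsoid instantiation of Algorithm \ref{alg:cp_meta}; the stopping rule based on the largest semi-axis and the toy oracle are simplifying assumptions for illustration, not the textbook termination analysis.

\begin{verbatim}
import numpy as np

def ellipsoid_feasibility(oracle, m, R, r_min, max_iter=10_000):
    # oracle(x) returns None if x is feasible, else a cut (a, b) with
    # a^T s <= b on the feasible set and a^T x >= b at the test point.
    x = np.zeros(m)               # Center(S_0); assumes m >= 2
    P = (R ** 2) * np.eye(m)      # ellipsoid {y: (y-x)^T P^{-1} (y-x) <= 1}
    for _ in range(max_iter):
        cut = oracle(x)
        if cut is None:
            return x              # feasible point found
        a, _ = cut                # central cut through the current center
        g = a / np.sqrt(a @ P @ a)
        Pg = P @ g
        x = x - Pg / (m + 1)      # Center(S_{k+1})
        P = (m**2 / (m**2 - 1.0)) * (P - (2.0 / (m + 1)) * np.outer(Pg, Pg))
        if np.sqrt(np.linalg.eigvalsh(P)[-1]) < r_min:
            return None           # largest semi-axis too small: give up
    return None

# Toy usage: find a point whose coordinates all exceed 0.5.
def oracle(x):
    i = int(np.argmin(x))
    return None if x[i] >= 0.5 else (-np.eye(len(x))[i], -0.5)

print(ellipsoid_feasibility(oracle, m=3, R=10.0, r_min=1e-6))
\end{verbatim}

The division of labor in the sketch mirrors the proposed framework: all of the update logic is cheap classical linear algebra, and the only problem-specific work happens inside \texttt{oracle}.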
Critically, except for the Center of Gravity method, all cutting-plane algorithms summarized in Table \ref{tab:cp} have a polynomial complexity in the dimension of the optimization variables in terms of both oracle queries and total run-time excluding the oracle calls (i.e., the total complexity of adding the cuts and generating test points). This suggests that if a cutting-plane algorithm were applied to Problem \eqref{eq:COP}, the complexity of the problem would be offloaded onto the separation oracle--this is the subroutine we propose to handle using an Ising solver. \begin{table}[h!] { \centering \begin{tabular}{|c|c|c|c|} \hline Name & Oracle Queries & \makecell{Total Run-time\\ (excluding oracle queries)} & References\\ \hline Center of Gravity & $\mathcal{O}({{m}} \log(\frac{R}{r}))$ & \#P-hard \cite{rademacher2007approximating} & \cite{levin1965algorithm}\\ Ellipsoid & $\mathcal{O}({{m}}^2 \log({{m}}\frac{R}{r}))$ & $\mathcal{O}({{m}}^4 \log({{m}}\frac{R}{r}))$ & \cite{shor1977cut, yudin1976evaluation, khachiyan1980polynomial}\\ Inscribed Ellipsoid & $\mathcal{O}({{m}} \log ({{m}} \frac{R}{r}))$ & $\mathcal{O}(({{m}} \log ({{m}} \frac{R}{r}))^{4.5})$ & \cite{khachiyan1988method, nesterov1989self} \\ Volumetric Center & $\mathcal{O}({{m}} \log ({{m}} \frac{R}{r}))$ & $\mathcal{O}({{m}}^{1 + \omega} \log ({{m}} \frac{R}{r}))$ & \cite{vaidya1989new} \\ Analytic Center & $\mathcal{O}({{m}} \log^2 ({{m}} \frac{R}{r}))$ & \makecell{$\mathcal{O}({{m}}^{1 + \omega} \log^2 ({{m}} \frac{R}{r})$\\ \qquad + $({{m}} \log ({{m}} \frac{R}{r}))^{2 + \frac{\omega}{2}})$} & \cite{atkinson1995cutting}\\ Random Walk & $\mathcal{O}({{m}} \log({{m}}\frac{R}{r}))$ & $\mathcal{O}({{m}}^7 \log({{m}}\frac{R}{r}))$ & \cite{bertsimas2004solving} \\ Lee, Sidford, Wong & $\mathcal{O}({{m}} \log ({{m}} \frac{R}{r}))$ & $\mathcal{O}({{m}}^3 \log ^{\mathcal{O}(1)}({{m}} \frac{R}{r}))$ & \cite{lee2015faster}\\ \hline \end{tabular} } \caption{This table summarizes the number of oracle queries and total run-time guarantees of a number of well-known cutting-plane variants. The stated run-times are in terms of the problem dimension, $m$, the volume of the initial set, $R$, and the minimum volume of the set of interest, $r$. The constant $\omega$ denotes the exponent of fast matrix multiplication.} \label{tab:cp} \end{table} \subsection{Application to copositive optimization} Now that we have introduced cutting-plane algorithms, we are in a position to discuss their application to the copositive program \eqref{eq:COP}. First, we will show how a \emph{copositivity oracle} can be readily transformed into a separation oracle for the feasible region of Problem \eqref{eq:COP}. We will conclude with a discussion of how a copositivity oracle can be implemented using an Ising solver. Formally, we define a copositivity oracle as follows: \begin{definition}[Copositivity Oracle] A copositivity oracle takes as input a matrix, $M$, and either returns \texttt{True} if $M$ is copositive or returns a vector $z \in \mathbb{R}^n_{\scriptscriptstyle \geq 0}$ such that $z^\top M z < 0$ (a ``certificate of non-copositivity"). \end{definition} A copositivity oracle can be turned into a separation oracle for the feasible region of Problem \eqref{eq:COP} by expanding the terms in $z^\top M(\hat{\mu}, \hat{\lambda}, \hat{\gamma}) z$. Explicitly, a test point, $(\hat{\mu}, \hat{\lambda}, \hat{\gamma})$, is infeasible if and only if $M(\hat{\mu}, \hat{\lambda}, \hat{\gamma})$ is not copositive.
Given $M(\hat{\mu}, \hat{\lambda}, \hat{\gamma})$ as input, the copositivity oracle returns a certificate of non-copositivity $z \in \mathbb{R}^{{n} + 1}_{\scriptscriptstyle \geq 0}$ such that $z^\top M(\hat{\mu}, \hat{\lambda}, \hat{\gamma}) z < 0$. In contrast, feasibility of a point $(\mu, \lambda, \gamma)$ means that $z^\top M(\mu, \lambda, \gamma) z \geq 0$ for all $z \in \mathbb{R}^{{n} + 1}_{\scriptscriptstyle \geq 0}$. Equivalently, the halfspace defined by \begin{align} {b} &= z^\top \quadmat{Q}{c}{\cdot} z, \\ {a}[\mu^{\text{(lin)}}_i] &= z^\top \quadmatrow{\cdot}{\frac{1}{2} A_{i, *}}{\cdot}z,\\ {a}[\mu^{\text{(quad)}}_i] &= z^\top \Quadmat{A^\top_{i, *} A_{i, *}}{\cdot}{\cdot} z,\\ {a}[\lambda_j] &= z^\top \quadmat{-\basis{j} \basis{j} ^\top}{\frac{1}{2} \basis{j} }{\cdot} z,\\ {a}[\gamma] &= z^\top \Quadmat{\cdot}{\cdot}{1} z, \end{align} is a separating hyperplane for $(\hat{\mu}, \hat{\lambda}, \hat{\gamma})$, where we use symbolic indexing to explicitly denote which variable each coefficient corresponds to. Explicitly, the inner product between $a$ and $(\mu, \lambda, \gamma)$ is given by \begin{equation} {a}^\top (\mu, \lambda, \gamma) = \sum_i {a}[\mu^{\text{(lin)}}_i]\mu^{\text{(lin)}}_i + \sum_i {a}[\mu^{\text{(quad)}}_i]\mu^{\text{(quad)}}_i + \sum_j {a}[\lambda_j]\lambda_j + {a}[\gamma] \gamma. \end{equation} This shows that, given a copositivity oracle, constructing a separation oracle for Problem \eqref{eq:COP} entails evaluating $\mathcal{O}({m})$ vector-matrix-vector products, each of dimension $\mathcal{O}({n})$. The cutting-plane algorithms presented in Section \ref{subsec:cp_alg} can then be applied without further modification. Checking copositivity of $M(\mu, \lambda, \gamma)$ is naturally posed as solving the following (possibly non-convex) quadratic minimization problem \begin{mini} {z \in \mathbb{R}^{{n} + 1}_{\scriptscriptstyle \geq 0}}{z^\top M(\mu, \lambda, \gamma) z} {\label{eq:norm_cop}}{} \addConstraint{||z||_p \leq 1,} \end{mini} where a matrix is copositive if and only if $\min \eqref{eq:norm_cop}$ is non-negative\footnote{While copositivity is defined as a condition over all of $\mathbb{R}^{{n} + 1}_{\scriptscriptstyle \geq 0}$, quadratic scaling of the objective ensures that optimizing over a norm ball is sufficient for detecting copositivity.}. There are several alternative approaches for checking copositivity \cite{anstreicher2021testing, dur2013testing, hiriart2010variational, bras2016copositivity, xia2020globally}; however, they are typically derived with Problem \eqref{eq:norm_cop} as the starting point and designed to exploit particular properties of Problem \eqref{eq:norm_cop}. By choosing $p = \infty$, Problem \eqref{eq:norm_cop} can be approximated by a QUBO in which the optimization variables $\hat{z}$ represent a $k$-bit binary expansion of $z$ and the matrix $M$ is replaced by a correspondingly transformed matrix $\hat{M}$: \begin{mini} {\hat{z}}{\hat{z}^\top \hat{M}(\mu, \lambda, \gamma) \hat{z}} {\label{eq:qubo}\tag{QUBO}}{} \addConstraint{\hat{z} \in \{0,1\}^{k(n+1)}}.
\end{mini} Explicitly, $\hat{M}(\mu, \lambda, \gamma)$ and $M(\mu, \lambda, \gamma)$ are related as follows: \begin{align} \hat{M}(\mu, \lambda, \gamma) &= {\mathcal{D}}^\top M(\mu, \lambda, \gamma) {\mathcal{D}} , \end{align} where \begin{align} {\mathcal{D}} &:= \begin{pmatrix} \frac{1}{2^0} & \cdots & \frac{1}{2^{{k} - 1}} & 0 & \cdots & 0 & \cdots & 0 & \cdots & 0\\ 0 & \cdots & 0 & \frac{1}{2^0} & \cdots & \frac{1}{2^{{k} - 1}} & \cdots & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & \cdots & 0 & 0 & \cdots & 0 & \cdots & \frac{1}{2^0} & \cdots & \frac{1}{2^{{k} - 1}} \end{pmatrix}. \end{align} The construction of \eqref{eq:qubo} is detailed in Appendix \ref{subsec:disc_cop}. The explicit implementation of $\texttt{Oracle}(\cdot)$ is summarized in Algorithm \ref{alg:sep_or}. Critically, the constraints of \eqref{eq:norm_cop} are implied by the natural domain of the Ising solver, obviating the need to carefully tune the coefficients of a penalty method. \begin{algorithm} \caption{Separation oracle, $\texttt{Oracle}(\cdot)$}\label{alg:sep_or} \KwIn{$(\hat{\mu}, \hat{\lambda}, \hat{\gamma})$ (Test point)} \KwOut{ \begin{equation*} \begin{cases} \texttt{True} & \text{ if } (\hat{\mu}, \hat{\lambda}, \hat{\gamma}) \text{ is feasible}\\ \text{Separating hyperplane for } (\hat{\mu}, \hat{\lambda}, \hat{\gamma}) & \text{ otherwise} \end{cases} \end{equation*}} \tcp{Solve \eqref{eq:qubo} using an Ising solver} \begin{argmini*} {\hat{z}}{\eqref{eq:qubo}} {}{z^*\gets} \end{argmini*} \uIf{$\min \eqref{eq:qubo} \geq 0$ }{ \Return{\texttt{True}}\; } \Else{ \begin{align} z &= {\mathcal{D}} z^*\\ {b} &= z^\top \quadmat{Q}{c}{\cdot} z \\ {a}[\mu^{\text{(lin)}}_i] &= z^\top \quadmatrow{\cdot}{\frac{1}{2} A_{i, *}}{\cdot} z\\ {a}[\mu^{\text{(quad)}}_i] &= z^\top \Quadmat{A^\top_{i, *} A_{i, *}}{\cdot}{\cdot} z\\ {a}[\lambda_j] &= z^\top \quadmat{-\basis{j} \basis{j} ^\top}{\frac{1}{2} \basis{j} }{\cdot} z\\ {a}[\gamma] &= z^\top \Quadmat{\cdot}{\cdot}{1} z \end{align} \Return{${a}, \, {b}$}\; } \end{algorithm} \subsection{Discussion} \begin{figure} \centering \includegraphics[width=\textwidth]{figs/solution_diagram.pdf} \caption{This figure depicts the entire solution process for solving a MBQP of the form \eqref{eq:MBQP}.} \label{fig:sol} \end{figure} In summary, we propose to solve Problem \eqref{eq:MBQP} by constructing the equivalent copositive formulation in \eqref{eq:COP} and applying any variant of Algorithm \ref{alg:cp_meta}. Within Algorithm \ref{alg:cp_meta}, the implementation of $\texttt{Oracle}(\cdot)$ is specified by Algorithm \ref{alg:sep_or}. This process is depicted in Figure \ref{fig:sol}. Now that we have presented our method in full, several comments are in order. \paragraph{Computational complexity} While the stated complexity of the cutting-plane algorithms is applicable to any problem, it is suggestively stated in terms of the variable ${m}$. This notational overload is a deliberate choice because the dimension of the dual copositive program is equal to the total number of constraints in Problem \eqref{eq:CPP}, which is $2{m} + |B| + 1 = \mathcal{O}({m})$. The number of constraints can be reduced to $m + |B| + 1$ using the homogenized completely positive reformulation presented in Appendix \ref{subsec:strong_dual}--while this will have no impact on the asymptotic complexity of the method, it can result in a practical reduction in run-time.
If $\mathcal{T}_Q$ represents the oracle-query complexity of a particular method, the additional overhead of converting the copositivity oracle into a separation oracle is $\mathcal{O}(m n^2 \mathcal{T}_Q)$. \paragraph{Discretization size} Discretization inevitably introduces an approximation into the copositivity checks. The approximation fidelity improves as the number of discretization points is increased, although it is limited by the hardware. Not only does representing a finer discretization require more qubits, but it also results in a greater skew in the coefficients of the Ising Hamiltonian--this becomes challenging since many existing hardware platforms have limited precision in their implementable couplings. In contrast, too coarse a discretization runs the risk of missing the certificate of non-copositivity entirely. This suggests that the discretization scheme should be well-tailored to the problem at hand; Appendix \ref{subsec:disc_cop} provides guidance for choosing a discretization size based on the coefficients of the Ising Hamiltonian. \paragraph{Multiple cuts} Following standard convention, this work assumes that the copositivity oracle returns a single solution. In practice, however, many of the aforementioned Ising solvers return multiple readouts. Each of these solutions can be used to construct a cut, where negative, zero, and positive Ising objective values correspond to deep, neutral, and shallow cuts, respectively. Adding multiple cuts during each iteration is an effective heuristic for improving the convergence rate of the cutting-plane algorithm. While the true ground state corresponds to the deepest cut, the convergence rate guarantees stated in Table~\ref{tab:cp} hold so long as a neutral or deep cut is added at each iteration. Consequently, the proposed approach is not overly reliant on the Ising solver's ability to identify the ground state and is resilient to heuristics. Critically, this raises the question of how to proceed if the Ising solver fails to return a certificate of non-copositivity; the answer will likely depend on problem specifics, such as the current outer approximation, the objective values of the samples, and the solver itself. For example, if the Ising solver returns solutions with small positive objective values, then, depending on the current outer approximation, the addition of shallow cuts can still reduce the search space. On the other hand, if all non-zero solutions result in a large objective value, one could increase confidence that the test point is feasible by increasing the number of discretization points and readouts. \section{Experiments}\label{sec:experiments} We conducted an investigation of the proposed method on the maximum clique problem, which seeks the largest complete subgraph of a graph. Given a graph, the maximum clique problem can be formulated as a completely positive program \begin{maxi} {X \in \mathbb{R}^{n \times n}}{\innerMat{\mathds{1} \mathds{1}^\top}{X}} {}{\label{eq:mc_cpp}} \addConstraint{\innerMat{\overline{A} + I}{X} = 1} \addConstraint{X \in \mathcal{C}^*_n,} \end{maxi} where $\overline{A}$ is the adjacency matrix of the graph's complement \cite{de2002approximation}.
The dual of \eqref{eq:mc_cpp} is the following copositive program: \begin{mini} {\lambda \in \mathbb{R}}{\lambda} {}{\label{eq:mc_cop}} \addConstraint{\lambda(I + \overline{A}) - \mathds{1} \mathds{1}^\top \in \mathcal{C}_n.} \end{mini} This copositive program has only one variable regardless of the graph's number of vertices or edges. However, the copositivity check's size is determined by the number of vertices, $n$, which impacts the complexity of computing the cuts from the certificates of non-copositivity. The number of edges can be used to upper-bound the size of the maximum clique, thus determining the size of the initial feasible region; however, its effect on the complexity of checking copositivity is unclear. \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[clip, trim=6cm 8cm 6cm 3cm,width=\textwidth]{figs/max_clique_diag.pdf} \caption{} \label{fig:mc_ex} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figs/max_clique_ex_matrix.png} \caption{} \label{fig:mc_ex_mat} \end{subfigure} \bigskip \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{figs/max_clique_sol.png} \caption{} \label{fig:mc_ex_sol} \end{subfigure} \caption{Figure \protect\ref{fig:mc_ex} depicts a small maximum clique example where there are edges between all vertices except $x_4$ and $x_5$. Figure \protect\ref{fig:mc_ex_mat} depicts the adjacency matrix of graph \protect\ref{fig:mc_ex}'s complement, which has a single edge between vertices $x_4$ and $x_5$. Figure \ref{fig:mc_ex_sol} depicts the solution process for the copositive cutting-plane algorithm.} \end{figure} To study the scaling of the proposed approach, we considered random max-clique problems with $10, 30, \ldots, 130$ vertices. For each graph size, we generated 25 random Erd\H{o}s-R\'enyi instances with edge densities $p \in \{0.25, \, 0.5, \, 0.75 \}$ and solved them to global optimality using the proposed copositive cutting-plane algorithm. The copositivity checks were conducted by solving Anstreicher's MILP characterization of copositivity~\cite{anstreicher2021testing}, using \texttt{Gurobi} version 9.0.3 \cite{GurobiOptimization2020}. All experiments were run on an AMD Ryzen 7 1800X Eight-Core Processor (3.6 GHz) with 64 GB of RAM, restricted to a single thread. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/profile_breakdown.pdf} \caption{This figure plots the time spent on the copositivity checks versus all other operations in the proposed method. The copositivity checks grow exponentially with the number of vertices, while the other operations grow modestly.} \label{fig:profile} \end{figure} We first evaluated whether the copositive cutting-plane algorithm shifts the complexity of the solution process onto the copositivity checks by profiling each component of the algorithm separately. Figure~\ref{fig:profile} plots the time the copositive cutting-plane algorithm spent on the copositivity checks versus other operations (updating the outer approximation and computing test points). The time spent on the copositivity checks scales exponentially with the number of vertices in the graph, while the time spent on other operations grows modestly. This is because Problem \eqref{eq:mc_cpp} only has one constraint regardless of the graph's number of vertices or edges. In contrast, the size of the copositivity check is exactly equal to the number of vertices in the graph.
Both the theoretical analysis and empirical results confirm that the proposed approach shifts the complexity of the copositive program onto the copositivity checks. This experiment shows that the proposed methodology is particularly effective for problems whose constraints remain constant or grow modestly with problem size. \begin{figure}[t!] \centerline{\includegraphics[width=0.95\textwidth]{figs/gurobi_comparison.pdf}} \caption{This figure plots the time to target with 99\% confidence of the Simulated Annealing (SA) implementation in \texttt{dwave-neal} against the solution time of \texttt{Gurobi} for two discretization points in the copositivity checks. We solved each copositivity check with 100 sweeps and 1000 reads. For all densities, both methods scale exponentially with the number of vertices in the graph; however, SA is several orders of magnitude faster than \texttt{Gurobi}.} \label{fig:DG_comp} \end{figure} To investigate potential speedups from using a stochastic Ising solver, we re-solved each copositivity check that yielded a certificate of non-copositivity using Simulated Annealing (SA) through the software \texttt{dwave-neal} version 0.5.9, an SA sampler \cite{DWaveNeal2021}, i.e., a solver that returns samples from the solution distribution generated by SA. Because SA is not guaranteed to find the global optimum in a single annealing cycle, we define a probabilistic notion of time to target. In particular, we follow \cite{ronnow2014defining} and define the time to target with $s$ confidence to be the number of repetitions needed to find the ground state at least once with probability $s$ multiplied by the time for each annealing cycle, $\texttt{T}_{\text{anneal}}$, i.e., \begin{align} \texttt{TTT}_s = \texttt{T}_{\text{anneal}}\frac{\log(1 - s)}{\log(1 - \hat{p}_{\text{succ}})}, \end{align} where $\hat{p}_{\text{succ}}$ is the expected value of the returned solutions divided by the ground-state (minimum) value. This results in a probability of success that interpolates between counting only ground state solutions and counting all certificates of non-copositivity as successes by considering the relative quality of each sample. We will also consider analogous scenarios where only ground state solutions are counted as successes; we reserve the terminology ``time to solution", $\texttt{TTS}_s = \texttt{T}_{\text{anneal}}\frac{\log(1 - s)}{\log(1 - p_{\text{succ}})}$, for such cases to distinguish from the previously defined time to target. The values of $\hat{p}_{\text{succ}}$ and $p_{\text{succ}}$ are evaluated empirically over 1000 samples/reads. The time per annealing cycle, $\texttt{T}_{\text{anneal}}$, was evaluated as the total wall-clock time (for all reads) divided by the number of reads. All other \texttt{dwave-neal} parameters were left as their default values. For each copositivity check solved, we considered the coarsest discretization (two points per variable), corresponding to solving $\min_{\hat{z} \in \{0, \, 1\}^n} \hat{z}^\top M \hat{z}$. We solved each copositivity check with 100 sweeps and 1000 reads\footnote{While the performance of \texttt{dwave-neal} depends on the number of sweeps, we found that optimizing the number of sweeps does not result in significant reductions in the time to target. We provide further discussion in Appendix \ref{subsec:hyperopt}.}.
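To tie the pieces together, the following is a minimal end-to-end sketch of the max-clique copositive program \eqref{eq:mc_cop}: bisection over the single dual variable $\lambda$, with the discretized copositivity check \eqref{eq:qubo} brute-forced in place of an Ising solver. The toy graph, the number of bits $k$, and the tolerance are illustrative assumptions.

\begin{verbatim}
import itertools
import numpy as np

def min_over_grid(M, k):
    # Discretized copositivity check: minimize z^T (D^T M D) z over
    # z in {0,1}^{kn}; an Ising solver would replace this loop at scale.
    n = M.shape[0]
    D = np.zeros((n, k * n))
    for i in range(n):
        D[i, i * k:(i + 1) * k] = 1.0 / 2 ** np.arange(k)
    M_hat = D.T @ M @ D
    vals = []
    for bits in itertools.product((0, 1), repeat=k * n):
        z = np.array(bits, dtype=float)
        vals.append(z @ M_hat @ z)
    return min(vals)

def clique_number_bisect(A_bar, k=2, tol=1e-3):
    # The feasible set {lambda : lambda (I + A_bar) - 11^T copositive} is
    # [omega(G), inf), so bisection recovers the clique number omega(G).
    n = A_bar.shape[0]
    lo, hi = 1.0, float(n)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        M = mid * (np.eye(n) + A_bar) - np.ones((n, n))
        if min_over_grid(M, k) >= 0:
            hi = mid              # (approximately) copositive: feasible
        else:
            lo = mid              # certificate found: infeasible
    return hi

# Toy graph on 3 vertices with edges (1,2), (1,3); complement edge (2,3).
A_bar = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
print(round(clique_number_bisect(A_bar)))  # clique number: 2
\end{verbatim}

On this example, any $\lambda$ below the clique number yields a grid-point certificate such as $z = (1, 1, 0)$ (the indicator of a maximum clique), so even the coarse discretization suffices; in general, a too-coarse grid can miss certificates, which is exactly the approximation discussed above.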
Figure \ref{fig:DG_comp} plots the time to target with 99\% confidence from SA against the solution time from \texttt{Gurobi}\footnote{Note that \texttt{Gurobi}'s solution time in Figure \ref{fig:DG_comp} is different from the copositivity-check profiling in Figure \ref{fig:profile}. This is because only non-copositive instances were considered for this comparison, while all instances, including copositive ones, were included in the profiling comparison.}. We see that for all discretization sizes, \texttt{dwave-neal} can consistently find certificates of non-copositivity in orders of magnitude less time than \texttt{Gurobi}. Notably, SA and \texttt{Gurobi} demonstrate similar scaling with respect to the number of vertices. Unlike SA, which operates without reference to rigorous optimality bounds, \texttt{Gurobi}'s solution process tracks both upper and lower bounds on the objective value and terminates only when they reach user-specified stopping conditions. To evaluate whether the optimal objective is found early in the solution process, with the remaining time spent closing the bound gap, we plotted the progress of \texttt{Gurobi}'s lower and upper bounds over time, together with $\texttt{TTT}_{0.99}$ and $\texttt{TTT}_{0.999}$, in Figure \ref{fig:min_max_gurobi} for instances with density $p = 0.25$. Analogous plots for other densities are included in the Appendix. For each graph size, we plotted the instances where the ratio between \texttt{Gurobi}'s solution time and $\texttt{TTT}_{0.99}$ is the greatest (top row) and least (bottom row)--all instances were run with 100 sweeps. For each instance, we plot \texttt{Gurobi}'s upper bound (blue) and best objective found (orange), together with \texttt{dwave-neal}'s $\texttt{TTT}_{0.99}$ (green) and $\texttt{TTT}_{0.999}$ (purple). We found that in most instances, \texttt{dwave-neal} reaches the time to target with 99.9\% confidence before \texttt{Gurobi} even returns a callback (i.e., when the purple line does not intersect either the blue or orange lines); this is likely due to an initial pre-processing step. We only found one instance where \texttt{Gurobi} was able to confirm optimality at the first callback (Figure \ref{fig:first_guess})--even so, $\texttt{TTT}_{0.999}$ is nearly two orders of magnitude faster than \texttt{Gurobi}'s solution time in this instance. \begin{sidewaysfigure} \centering \includegraphics[width=\textheight]{figs/gurobi_bds_p=0.25.pdf} \caption{This figure depicts sample trajectories of \texttt{Gurobi}'s upper and lower bounds against $\texttt{TTT}_{0.99}$ and $\texttt{TTT}_{0.999}$ for edge density $p=0.25$. For each graph size, the top row represents the instance where the ratio between \texttt{Gurobi}'s solution time and $\texttt{TTT}_{0.99}$ is the greatest, and the bottom row represents the instance where the ratio is the smallest--all instances were run with 100 sweeps. In most instances, \texttt{dwave-neal} reaches $\texttt{TTT}_{0.999}$ before \texttt{Gurobi} even returns a callback (i.e., when the purple line does not intersect either the blue or orange lines).} \label{fig:min_max_gurobi} \end{sidewaysfigure} Next, we compared the copositive cutting-plane algorithm with the SA implementation in \texttt{dwave-neal} as the Ising solver against solving a MIP formulation of maximum clique directly with \texttt{Gurobi}. For the copositive cutting-plane algorithm, each copositivity check was conducted with 250 reads and 100 sweeps.
Because \texttt{dwave-neal} may fail to find a certificate for some non-copositive matrices, this method may incorrectly reduce the upper bound in the outer approximation; however, it cannot incorrectly update the lower bound. Consequently, the solution returned was determined by rounding the lower bound up to the nearest integer. We found that out of all trials, only four instances returned a solution less than the true optimum (two each for graphs of 110 and 130 nodes and edge density $p=0.75$). All of these instances were solved to global optimality by increasing the number of reads to 400. \texttt{Gurobi}'s solution time was evaluated on the following MIP formulation of maximum clique: \begin{mini} {x \in \{0, 1\}^n}{\mathds{1}^\top x} {}{} \addConstraint{x_ix_j = 0, \quad}{\forall (i, j) \in \overline{E},} \end{mini} where $\overline{E}$ is the edge set of the complement graph\footnote{We also considered a formulation with additive constraints, i.e., $x_i + x_j \leq 1,\, \forall (i, j) \in \overline{E}$, but did not see significant differences in performance compared to the multiplicative constraints.}. This is a MIP with $n$ binary variables, where $n$ is the number of vertices in the graph, and $|\overline{E}|$ constraints (i.e., the number of edges in the complement graph). Figure \ref{fig:ccpNeal_gurobi_mc} plots the solution time for the \texttt{Gurobi} MIP formulation against that of the copositive cutting-plane algorithm. We find that for smaller graph sizes, \texttt{Gurobi}'s solution time is orders of magnitude faster than the copositive cutting-plane algorithm. However, the copositive cutting-plane algorithm exhibits better scaling with respect to the number of nodes than \texttt{Gurobi}, and outperforms it for larger graph sizes. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/ccpNeal_gurobi_mc.pdf} \caption{This figure plots the solution time for the copositive cutting-plane algorithm with the Simulated Annealing implementation in \texttt{dwave-neal} as the Ising solver, the solution time when solving a mixed-integer programming (MIP) formulation of maximum clique directly with \texttt{Gurobi}, and the corresponding $\texttt{TTT}_{0.999}$ from D-Wave Ocean \texttt{maximum\_clique} for penalty weight 1. While \texttt{Gurobi}'s solution time is orders of magnitude faster than the copositive cutting-plane algorithm for the smallest graph sizes, the copositive cutting-plane algorithm exhibits better scaling than \texttt{Gurobi} and outperforms it for larger graph sizes. \texttt{maximum\_clique} exhibits a similar scaling to the copositive cutting-plane algorithm but is half an order of magnitude faster. } \label{fig:ccpNeal_gurobi_mc} \end{figure} Finally, we investigated the effectiveness of directly converting the maximum clique problem to an Ising problem using a standard penalty formulation. To do so, we solved each of the maximum clique problem instances using the D-Wave Ocean \texttt{maximum\_clique} solver\footnote{\url{https://docs.ocean.dwavesys.com/projects/dwave-networkx/en/latest/reference/algorithms/generated/dwave_networkx.maximum_clique.html}} with \texttt{dwave-neal} as the sampler and a range of penalty weights in $\{2^{-1}, 2^{0}, \ldots, 2^4\}$; the number of sweeps was left at its default value of 1000. This results in a QUBO with $n$ variables and $|\overline{E}|$ quadratic terms.
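For reference, the following is a minimal sketch of such a penalty QUBO (a standard construction; not necessarily the exact QUBO built internally by \texttt{maximum\_clique}), which makes the sensitivity to the penalty weight $w$ easy to see on a toy instance.

\begin{verbatim}
import itertools
import numpy as np

def penalty_qubo(A_bar, w):
    # Maximize 1^T x  <=>  minimize -1^T x, plus a penalty w x_i x_j for
    # each complement edge; A_bar counts every edge twice, hence the 0.5.
    return -np.eye(A_bar.shape[0]) + 0.5 * w * A_bar

# Toy graph on 3 vertices whose complement has the single edge (2, 3).
A_bar = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
for w in (0.5, 1.0, 2.0):
    Q = penalty_qubo(A_bar, w)
    val, x = min((np.array(b) @ Q @ np.array(b), b)
                 for b in itertools.product((0, 1), repeat=3))
    valid = not any(x[i] and x[j]
                    for i in range(3) for j in range(i + 1, 3)
                    if A_bar[i, j])
    print(f"w={w}: x={x}, objective={val:.1f}, valid clique: {valid}")
\end{verbatim}

With $w = 0.5$ the minimizer selects both endpoints of the complement edge (an invalid clique), while $w \geq 1$ recovers a valid maximum clique, foreshadowing the penalty-weight trade-off examined next.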
For each instance, we conducted 1000 reads and evaluated the average normalized sample size (the size of the returned solution divided by the ground truth maximum clique size) and the fraction of reads that resulted in a valid clique; a ground state solution is one that is both a valid clique and has a normalized sample size of 1. We computed the probability of success, $p_{\text{succ}}$, as the fraction of reads that resulted in a ground state solution, which was subsequently used to derive the time to solution with 99.9\% confidence. Figure \ref{fig:neal_mc_25} plots each of these metrics as a function of the penalty weights and graph size for edge density $p = 0.25$. Analogous plots for other densities are included in the Appendix. For penalty weights $0.5$ and $1$, the normalized sample size is often greater than $1$, resulting in samples that do not represent a valid clique. For penalty weights $2, 4, 8$, and $16$, most samples were valid cliques; however, the normalized sample sizes were typically less than 1--these represent non-maximum cliques. Generally, as the penalty weight is increased, the normalized sample size decreases, and the fraction of valid cliques increases. This aligns with the interpretation that the penalty weight represents a trade-off between satisfying the constraints versus optimizing the objective. These empirical results also corroborate the analytical results of \cite{quintero2022characterization}, which state that the minimum valid penalty weight for the maximum stable set problem is 1. Given that \texttt{maximum\_clique} represents the maximum clique problem as finding the maximum stable set of the complement graph, the bound on the penalty weight applies. This experiment demonstrates that while the penalty formulation may be an effective heuristic, it typically requires carefully tuning the penalty weights to optimize the trade-off between satisfying the constraints and optimizing the objective. \begin{sidewaysfigure} \centering \includegraphics[width=\textheight]{figs/neal_max_clique_p=0.25.pdf} \caption{This figure plots the normalized sample size (the size of the returned solution divided by the ground truth maximum clique size) and the fraction of reads that resulted in a valid clique for graph density $p = 0.25$. These figures were used to compute the fraction of reads resulting in a ground state solution and the corresponding $\texttt{TTT}_{0.999}$ (also plotted). As the penalty weight is increased, the normalized sample size decreases, and the fraction of valid cliques increases. This highlights the delicate trade-off between constraints and the objective in penalty formulations.} \label{fig:neal_mc_25} \end{sidewaysfigure} Figure \ref{fig:ccpNeal_gurobi_mc} plots the corresponding $\texttt{TTT}_{0.999}$ from D-Wave Ocean \texttt{maximum\_clique} for penalty weight 1 against the \texttt{Gurobi} solution time and the copositive cutting-plane solution time. We observe that \texttt{maximum\_clique} exhibits a similar scaling to the copositive cutting-plane algorithm--likely because they both use \texttt{dwave-neal} as a sampler--but is half an order of magnitude faster. This is an unsurprising observation given that \texttt{maximum\_clique} is a heuristic that operates without reference to optimality bounds. In contrast, both \texttt{Gurobi} and the copositive cutting-plane algorithm are complete methods.
Indeed, the choice of 0.999 for the confidence of TTS is arbitrary, as heuristic solvers are not easily compared to complete solvers. These results suggest that the copositive cutting-plane method inherits its favorable scaling from the heuristic Ising solver while maintaining the optimality guarantees of a convergent algorithm. \section{Conclusions}\label{sec:conclusion} In this paper, we have proposed a framework for solving mixed-binary quadratic programming problems with optimality guarantees using a heuristic, black-box Ising solver. Our framework relies on Burer's convex reformulation of such problems using completely positive programming--our first contribution is to extend this result and show that under mild conditions, the dual copositive program exhibits strong duality. We then propose a hybrid quantum-classical solution algorithm based on cutting-plane algorithms, where an Ising solver is used to construct the separation oracle. Critically, the run-time of the components handled by the classical computer scales polynomially with the number of constraints in the original mixed-binary quadratic program. This suggests that if our approach is applied to a problem with exponential scaling, the complexity is shifted onto the subroutine carried out by the hardware accelerator, e.g., the quantum computer. Our approach is particularly appealing because it could take advantage of any speedup that exists, even without an explicit characterization of what that speedup is. While the proposed framework seems like a promising way forward for utilizing quantum/quantum-inspired Ising solvers, several crucial questions remain open. The first question is how the algorithm should proceed if the Ising solver fails to find a certificate of non-copositivity. Could one use a classical computer to locally refine approximate primal-dual optimal solutions? The second question concerns the discretization of the copositivity checks. As an alternative to a uniform discretization, are there ways to locally refine or systematically construct heterogeneous discretizations that better pinpoint a certificate of non-copositivity? Alternatively, could one reparametrize the copositivity check so that the certificates of non-copositivity are better aligned with the discretization points in a coarser grid? We hope that this paper is the first step in answering these questions and serves as an inspiration to the community to embrace Ising solvers with convergent mathematical programming algorithms. \subsection*{Acknowledgements} This work was supported by NSF CCF (grant \#1918549), NASA Academic Mission Services (contract NNA16BD14C – funded under SAA2-403506). R.B. acknowledges support from the NASA/USRA Feynman Quantum Academy Internship program. The authors wish to thank Aaron Lott, Luis Zuluaga, and Juan Vera for helpful input and discussions during the development of these ideas. \bibliographystyle{IEEEtran}
\section{Introduction} Formamide (NH$_2$CHO{}) is an interstellar complex organic molecule (iCOM, referring to C-bearing species with six atoms or more) \citep{Herbst2009,Ceccarelli2017} and a key precursor of more complex organic molecules relevant to the origin of life, because of its potential to form peptide bonds \citep{Saladino2012,Kahane2013,Lopez2019}. It has been detected in the gas phase in hot corinos \citep{Kahane2013,Coutens2016,Imai2016,Lopez2017,Bianchi2019,Hsu2022}, which are the hot ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 100$ K) and compact ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 100$ au) regions immediately around low-mass (sun-like) protostars \citep{Ceccarelli2007}. The origin of formamide is still under debate. In principle, formamide could be synthesized on grain surfaces or in the gas phase. Two routes have been proposed in the first case: the hydrogenation of HNCO \citep{Charnley2008} and the combination of the HCO and NH$_2$ radicals when they become mobile upon the warming of the dust by the protostar. However, the first route has been challenged by both experiments \citep{Noble2015} and quantum chemical (QM) calculations \citep{Song2016}. Later, the hydrogenation of HNCO was found to be feasible when followed by H abstraction from NH$_2$CHO{} in a dual cycle consisting of H addition and H abstraction \citep{Haupa2019}. The second route (i.e., the combination of HCO and NH$_2$) has also been challenged by QM calculations \citep{Rimola2018}, but was later found to be possible, even though it can also form NH$_3$ $+$ CO in competition with formamide \citep{Enrique-Romero2022}. In the gas-phase scenario, it has been proposed that formamide is formed by the reaction of H$_2$CO with NH$_2$ \citep{Kahane2013}. This hypothesis was later challenged by \citet{Song2016}. Nonetheless, QM computations \citep{Vazart2016,Skouteris2017} coupled with astronomical observations of shocked regions \citep{Codella2017} support this hypothesis. In the same vein, the observed deuterated isomers of formamide (including NH$_2$CDO, cis- and trans-NHDCHO) \citep{Coutens2016} fit well with the theoretical predictions of a gas-phase formation route \citep{Skouteris2017}. On the other hand, the observed high deuterium fractionation of $\sim$ 2\% for the three different forms of formamide (NH$_2$CDO, cis- and trans-NHDCHO) could also be consistent with formation in the ice mantles on dust grains. The hot corino in the HH 212 protostellar system \citep{Codella2016} in Orion, at a distance of $\sim$ 400 pc, is particularly interesting because recent observations have spatially resolved it and found it to be the atmosphere of a Solar-System-scale protostellar disk around the central protostar \citep{Lee2017COM}. This disk atmosphere is rich in iCOMs \citep{Lee2017COM,Codella2018,Lee2019COM}, including formamide. More importantly, these iCOMs have relative abundances similar to those of other hot corinos \citep{Cazaux2003,Imai2016,Lopez2017,Bianchi2019,Manigand2020}, and even comets \citep{Biver2015}. Therefore, the study of formamide in protostellar disks is key to investigating the emergence of prebiotic chemistry in nascent planetary bodies. In this paper we study the origin and formation pathways of formamide in this protostellar disk. Previously, the HH 212 disk was mapped at a wavelength $\lambda \sim$ 0.85 mm \citep{Lee2017COM}, covering one NH$_2$CHO{} line.
Here we map it at a longer wavelength, $\lambda \sim$ 1.33 mm, with spectral windows set up to cover more NH$_2$CHO{} lines in order to derive the physical properties of NH$_2$CHO{}. This setup also covers lines of HNCO and H$_2$CO{}, which have not been reported before, in order to investigate the formation pathways of NH$_2$CHO{}. At this longer wavelength the continuum emission of the disk is optically thinner, so we can also map the molecular line emission in the disk atmosphere closer to the midplane and the central source. Moreover, the deuterated species and the $^{13}$C isotopologue of H$_2$CO{} are also detected, allowing us to constrain the origin of H$_2$CO{} and to correct for its optical depth, respectively. In addition, CH$_3$OH{} and CH$_3$CHO{} are detected, allowing us to further constrain the formation mechanism of NH$_2$CHO{}. More importantly, with the recently updated binding energies of these molecules, we can investigate their formation mechanisms and the chemical relationships among them.
\section{Observations} The HH 212 protostellar disk was observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 6, centered at a frequency of $\sim$ 226 GHz (or $\lambda \sim$ 1.33 mm), in Cycle 5. The project ID was 2017.1.00712.S. Two observations were executed: one on 2017 October 04 in the C43-9 configuration with 46 antennas for $\sim$ 18 mins on source, with baseline lengths of 41.4 m to 15 km, to achieve an angular resolution of $\sim$ \arcsa{0}{02}; the other on 2017 December 31 in the C43-6 configuration with 46 antennas for 9 mins on source, with baseline lengths of 15.1 m to 2.5 km, to recover size scales up to $\sim$ \arcsa{1}{8}, which is 4 times the disk size. The correlator was set up to have 4 spectral windows (centered at 232.005, 234.005, 217.765, and 219.705 GHz), each with a bandwidth of 1.875 GHz and 1920 channels, and thus a channel width of 0.976 MHz, corresponding to $\sim$ 1.3 km s$^{-1}${} per channel. The primary beam was $\sim$ \arcs{25}, much larger than the disk size. The data were calibrated with the CASA package version 5.1.1, with quasar J0423-0120 (a flux of $\sim$ 0.93 Jy) as the bandpass and flux calibrator, and quasar J0541-0211 (a flux of $\sim$ 0.096 Jy) as the gain calibrator. Line-free channels were combined to generate the visibility data for the continuum centered at 226 GHz. We used a robust factor of $-$1.0 for the visibility weighting to generate the continuum map, with a synthesized beam of \arcsa{0}{021}$\times$\arcsa{0}{016} at a position angle of $\sim$ 79$^\circ${}. The noise level is $\sim$ 20 $\mu$Jy beam$^{-1}${} or 1.4 K. The channel maps of the molecular lines were generated after continuum subtraction. Using a robust factor of 0.5 for the visibility weighting, the synthesized beam has a size of \arcsa{0}{055}$\times$\arcsa{0}{042} at a position angle of $\sim$ 49$^\circ${}. The noise levels are $\sim$ 0.9 mJy beam$^{-1}${} (or $\sim$ 10 K) in the channel maps. The velocities in the channel maps are LSR velocities.
\section{Results} The detected lines of NH$_2$CHO{} (16 lines), HNCO (7 lines), H$_2$CO{} (2 lines) as well as its doubly deuterated species D$_2$CO{} (2 lines) and $^{13}$C isotopologue H$_2$$^{13}$CO{} (1 line), CH$_3$OH{} (12 lines), and CH$_3$CHO{} (7 lines) are listed in Table \ref{tab:lines}.
They have upper level energies $E_u \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 500$ K, but with $E_u < 120$ K for CH$_3$CHO{} and for H$_2$CO{} as well as its deuterated species and isotopologue. In order to increase the sensitivity for better detections, we divided the lines into two ranges of upper level energy, $E_u < 120$ K and $E_u > 120$ K, and stacked them to produce mean channel maps, from which we derived the total line intensity maps and the position-velocity (PV) diagrams.
\subsection{Stratified Distribution of Molecules} Figure \ref{fig:contCOMs} shows the total line intensity maps (red contours) of these molecules on top of the continuum map of the disk at $\lambda \sim$ 1.33 mm, in order to pinpoint the location of these molecules in the disk and the chemical relationships among them. As shown in Figure \ref{fig:contCOMs}c, the disk is nearly edge-on, with an equatorial dark lane tracing the cooler midplane sandwiched by two brighter features (outlined by the 4th and 5th contour levels) on the top and bottom tracing the warmer surfaces, as seen before in continuum at a shorter wavelength of $\sim$ 0.85 mm \citep{Lee2017Disk}. As can be seen, the emission structure of a given molecule is similar in the different $E_u$ ranges, suggesting that it is dominated by the spatial distribution of the molecule rather than by the upper level energy. After stacking the lines, we achieved a better sensitivity in NH$_2$CHO{} than in the previous observations obtained at higher resolution \citep{Lee2017COM}, and detected NH$_2$CHO{} not only in the lower disk atmosphere but also in the upper disk atmosphere. More importantly, we can better pinpoint its emission, finding it to be in the inner disk where the disk is warmer. It is brighter in the lower disk atmosphere, with two emission peaks clearly seen in the map with $E_u > 120$ K (see Figure \ref{fig:contCOMs}b). HNCO is detected with a spatial distribution and radial extent consistent with those of NH$_2$CHO{}. Looking back at the previous results for other iCOMs detected at a higher frequency of $\sim$ 346 GHz \citep{Lee2019COM}, we find that t-HCOOH was also detected with a spatial distribution and radial extent consistent with those of NH$_2$CHO{} \cite[see Figure \ref{fig:contCOMs}f adopted from][]{Lee2019COM}. Notice that the radial distribution of molecular gas detected at higher frequency can be compared with that obtained here at lower frequency, because the optical depth of the underlying continuum of the dusty disk mainly affects the vertical distribution (i.e., height) of molecular gas in the atmosphere (see Section \ref{sec:midplane}). On the other hand, H$_2$CO{} is only detected with $E_u < 70$ K, and its emission extends further out in the radial direction, beyond the centrifugal barrier (CB) (Figure \ref{fig:contCOMs}g). The emission is also detected at a larger distance from the disk midplane and extends away from the disk atmosphere, overlapping with the base of the SO disk wind \citep{Lee2021DW} and thus tracing the wind from the disk. Its deuterated species D$_2$CO{} is detected mainly in the disk atmosphere, also at a larger distance from the midplane and a larger radius from the central protostar than NH$_2$CHO{}. On the other hand, the emission of the $^{13}$C isotopologue H$_2$$^{13}$CO{} is very faint and mainly detected in the disk atmospheres. CH$_3$OH{} is detected in the atmosphere extending out to the CB, as found before \citep{Lee2017COM,Lee2019COM}.
The emission also extends away from the disk midplane, suggesting that part of it also traces the wind from the disk. As for CH$_3$CHO{}, the emission is mainly detected in the disk atmosphere and extends radially toward the CB. In summary, NH$_2$CHO{}, HNCO, D$_2$CO{}, H$_2$$^{13}$CO{}, and CH$_3$CHO{} trace mainly the disk atmosphere, while H$_2$CO{} and CH$_3$OH{} trace not only the disk atmosphere but also the disk wind. We can also measure the vertical height of these molecules (using the lines with $E_u < 120$ K) along the jet axis in the lower atmosphere where the emission is brighter, and find it to be $\sim$ 15, 19, 20, 24, and 26 au for NH$_2$CHO{}, HNCO, CH$_3$CHO{}, CH$_3$OH{}, and H$_2$CO{}, respectively. We will discuss the vertical height later together with the outer radii of these molecules measured from the PV diagrams.
\subsection{Kinematics} The spatio-kinematic relationship among these molecules can be studied with the PV diagrams cut across the upper and lower disk atmospheres, as shown in Figure \ref{fig:pv_atms}. Here we use the emission with $E_u < 120$ K, where all the molecules are detected. In addition, this emission is expected to trace the lowest temperature and thus the outermost radius at which the molecules start to appear. Previously, the disk was found to be rotating roughly with a Keplerian rotation due to a central mass of $\sim$ 0.25 $M_\odot${} (including the masses of the central protostar and the disk) \citep{Codella2014,Lee2017COM}. Therefore, the associated Keplerian rotation curves (blue curves) are plotted here for comparison. The emission of these molecules traces the disk atmosphere within the CB and is thus enclosed by the Keplerian rotation curves. In the upper disk atmosphere, the emission forms roughly linear PV structures (as marked by the magenta lines), indicating that it arises from rings rotating at certain velocities. For edge-on rotating rings, the radial velocity observed along the line of sight is proportional to the position offset from the center, forming the linear PV structures. Interestingly, the PV structures of HNCO and t-HCOOH are aligned with those of NH$_2$CHO{}, and the PV structures of D$_2$CO{} are roughly aligned with those of CH$_3$OH{}. Apart from these similarities, different molecules have different velocity gradients connecting to different locations on the Keplerian curves, indicating that they arise from rings at different disk radii. From the locations of their PV structures on the Keplerian curve, we find that the disk radius of these molecules increases from $\sim$ 24 au for NH$_2$CHO{}/HNCO/t-HCOOH, to $\sim$ 36 au for CH$_3$CHO{}, to $\sim$ 40 au for CH$_3$OH{}/D$_2$CO{}, and then to $\sim$ 48 au for H$_2$CO{}. This trend is the same as the increasing order of the vertical heights measured earlier for these molecules, indicating that the height increases with increasing radius, as expected for a flared disk in hydrostatic equilibrium. Projecting the velocity gradients measured in the upper disk atmosphere onto the lower disk atmosphere, we find that the emission detected in the upper disk atmosphere comes only from the outer radius where the emission starts to appear, whereas in the lower disk atmosphere the emission also extends radially inward to where NH$_2$CHO{} is detected. Since the near side of the disk is tilted slightly downward to the south, the emission from the upper disk atmosphere further in is lost to absorption against the bright and optically thick continuum emission of the disk surface (see Figure 9b in Lee et al. 2019).
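As an illustrative sketch of this geometry (not the fitting procedure actually used here; the function names are hypothetical), the snippet below computes the Keplerian curve for the adopted central mass of 0.25 $M_\odot${} and the linear PV track of an edge-on ring, which meets the Keplerian curve at the ring radius:
\begin{verbatim}
import numpy as np

G = 6.674e-11           # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30        # solar mass [kg]
AU = 1.496e11           # astronomical unit [m]

def v_kepler(r_au, m_star=0.25):
    """Keplerian speed (km/s) at radius r_au (au) for a central mass
    m_star (solar masses); 0.25 M_sun is the value adopted in the text."""
    return np.sqrt(G * m_star * M_SUN / (r_au * AU)) / 1e3

def pv_ring(r_ring_au, m_star=0.25, n=101):
    """Line-of-sight velocity versus projected offset for an edge-on
    rotating ring: v_los = v_rot * (x / R), a straight line in the PV
    diagram that meets the Keplerian curve at x = +/- R."""
    x = np.linspace(-r_ring_au, r_ring_au, n)
    return x, v_kepler(r_ring_au, m_star) * x / r_ring_au

x, v = pv_ring(24.0)        # e.g., the NH2CHO/HNCO/t-HCOOH ring radius
print(v_kepler(24.0))       # ~3 km/s rotation speed at 24 au
\end{verbatim}
The slope of each linear PV structure thus directly encodes the ring radius, which is how the radii quoted above follow from the diagrams.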
Note that for NH$_2$CHO{}/HNCO/t-HCOOH, there seems to be a small velocity shift of $\sim$ 0.5 km s$^{-1}${} between the upper and lower disk atmospheres. This velocity shift could suggest an infall (or accretion) velocity of $\sim$ 0.25 km s$^{-1}${}, which is $\sim$ 8\% of the rotation velocity at $\sim$ 24 au. However, observations at higher spectral and spatial resolution are needed to verify this possibility.
\subsection{Physical Properties in the Disk Atmosphere} In order to understand the nature and spatial origin of the detected methanol, we analyzed the observed methanol lines (Table \ref{tab:lines}) via a non-LTE Large Velocity Gradient (LVG) approach, using the code \textsc{grelvg}, initially developed by \citet{Ceccarelli2003}. We used the collisional coefficients of methanol with para-H$_2$, computed by \citet{Rabli2010} between 10 and 200 K for the J$\leq$15 levels and provided by the BASECOL database \citep{Dubernet2012,Dubernet2013}. We assumed an A-/E- CH$_3$OH ratio equal to 1. To compute the line escape probability as a function of the line optical depth, we adopted the semi-infinite slab geometry \citep{Scoville1974} and a linewidth equal to 4 km~s$^{-1}$, following the observations. We ran several grids of models to sample the $\chi^2$ surface in the parameter space. Specifically, we varied the methanol column densities N(A-CH$_3$OH) and N(E-CH$_3$OH) simultaneously from $2\times 10^{15}$ to $1\times 10^{19}$ cm$^{-2}$ (in steps of a factor of 2), the H$_2$ density $n_{\textrm{\scriptsize H}_2}$ from $10^{6}$ to $10^{9}$ cm$^{-3}$ (in steps of a factor of 2), and the gas temperature T from 50 to 120 K (in steps of 5 K). We then fit the measured velocity-integrated line intensities ($W=\int T_B dv$, with $T_B$ being the brightness temperature) by comparing them with those predicted by the model, leaving N(A-CH$_3$OH) and N(E-CH$_3$OH), $n_{\textrm{\scriptsize H}_2}$, and $T$ as free parameters. Given the limitation on the J level ($\leq$15), we used only seven of the twelve detected methanol lines, those with $E_u < 200$ K, for the LVG fitting. We considered the line intensities at the emission peak (marked by a blue circle in Figure \ref{fig:contCOMs}j) in the lower disk atmosphere, as listed in Table \ref{tab:lines}. The results of the fit are shown in Figure \ref{fig:LVG}. The best fit gives the following values, where the errors are estimated considering the 1$\sigma$ confidence level and the uncertainties of $\sim$ 40\% in our measurements: N(CH$_3$OH)$=$N(A-CH$_3$OH)$+$N(E-CH$_3$OH)$\sim 1.6^{+4.4}_{-0.8} \times 10^{18}$ cm$^{-2}$; $n_{\textrm{\scriptsize H}_2} \sim 10^{9}$ cm$^{-3}$, which should be considered a lower limit because the lines are in the LTE regime at this density; and $T \sim 75\pm20$ K. The lines are all predicted to be optically thick, with the lowest line opacity $\tau \sim 1$ for the line at 234.699 GHz, the highest $\tau\sim 19$ for the line at 218.440 GHz, and $\tau$=3--10 for the other lines. We also derived the excitation temperature and column density from a rotation diagram using the remaining five transitions with $E_u > 200$ K (see Figure \ref{fig:LVG}c), assuming optically thin emission and LTE \citep{Goldsmith1999}. In particular, we fit the data with a linear equation, and then derived the temperature from the negative reciprocal of the slope and the column density from the y-intercept. We found that $T\sim 109\pm31$ K and N(CH$_3$OH)$=(1.4\pm0.7)\times 10^{18}$ cm$^{-2}$.
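A minimal sketch of this standard rotation-diagram machinery, assuming optically thin LTE emission \citep{Goldsmith1999}, is given below; the helper names and the partition-function interface are illustrative, not the actual code used for our fits (the partition function $Q(T)$ would be supplied externally, e.g., interpolated from the CDMS/JPL catalogs).
\begin{verbatim}
import numpy as np

K_B, H, C = 1.381e-23, 6.626e-34, 2.998e8      # SI constants

def upper_level_column(W_K_kms, freq_ghz, a_ul):
    """Optically thin upper-level column density N_u (cm^-2) from the
    velocity-integrated intensity W (K km/s), following Goldsmith &
    Langer (1999): N_u = 8 pi k nu^2 W / (h c^3 A_ul)."""
    nu = freq_ghz * 1e9
    w = W_K_kms * 1e3                           # K km/s -> K m/s
    n_u = 8.0 * np.pi * K_B * nu**2 * w / (H * C**3 * a_ul)
    return n_u * 1e-4                           # m^-2 -> cm^-2

def rotation_diagram(E_u_K, N_u, g_u, Q_of_T):
    """Fit ln(N_u/g_u) = ln(N_tot/Q) - E_u/T_rot and return (T_rot, N_tot):
    the temperature is the negative reciprocal of the slope and the total
    column density follows from the y-intercept and the partition
    function Q evaluated at T_rot."""
    y = np.log(np.asarray(N_u) / np.asarray(g_u))
    slope, intercept = np.polyfit(np.asarray(E_u_K), y, 1)
    T_rot = -1.0 / slope
    N_tot = np.exp(intercept) * Q_of_T(T_rot)
    return T_rot, N_tot
\end{verbatim}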
Taking the mean values from the two methods, we have $T\sim 92^{+48}_{-37}$ K and N(CH$_3$OH)$=1.5^{+4.5}_{-0.8}\times 10^{18}$ cm$^{-2}$. Notice that the previous LTE estimate of the excitation temperature of CH$_3$OH{} and CH$_2$DOH{} together, derived from a rotation diagram, was rather uncertain, with a value of 165$\pm$85 K \citep{Lee2017COM}, due to a large scatter of the data points. More importantly, that excitation temperature was also overestimated because almost all the lines had E$_u$ $<$ 200 K and were thus likely optically thick. For less abundant molecules detected over a broad range of $E_u$, such as NH$_2$CHO{} and HNCO, the mean excitation temperature and column density of the molecular lines in the disk atmosphere can be roughly estimated from a rotation diagram assuming optically thin emission and LTE \citep{Goldsmith1999}. We used the brighter emission in the lower disk atmosphere. Table \ref{tab:lines} lists the integrated line intensities averaged over a rectangular region (with a size of 68 au $\times$ 20 au) that covers most of the emission in the lower atmosphere, measured with a cutoff of 2$\sigma$. Figure \ref{fig:popdia} shows the resulting rotation diagrams for NH$_2$CHO{} and HNCO. The blended lines of NH$_2$CHO{} are excluded from the diagrams. The HNCO line at the lowest $E_u$ (marked with an open square) seems to be optically thick, with an intensity much lower than that of the line at the next higher $E_u$, and is thus excluded from the fitting. For NH$_2$CHO{} and HNCO, we fit the data points to obtain the temperature and column density. It is interesting to note that NH$_2$CHO{} and HNCO have roughly the same excitation temperature of $\sim$ $226\pm130$ K, although with a large uncertainty. On the other hand, since H$_2$CO{}, D$_2$CO{}, and CH$_3$CHO{} are only detected over a narrow range of $E_u < 120$ K and their emission can be optically thick there, we cannot derive their excitation temperatures from rotation diagrams. In addition, H$_2$CO{} and D$_2$CO{} are each detected in only two lines, and H$_2$$^{13}$CO{} in only one line. Since D$_2$CO{} has roughly the same radial extent as CH$_3$OH{}, it is assumed to have an excitation temperature of 92 K, the same as that found for CH$_3$OH{}. Since H$_2$CO{} has a slightly larger radius than CH$_3$OH{}, it and its $^{13}$C isotopologue H$_2$$^{13}$CO{} are assumed to have an excitation temperature of 60 K. CH$_3$CHO{} has a smaller radial extent than CH$_3$OH{} and is thus assumed to have an excitation temperature of 100 K. The resulting excitation temperatures and column densities are listed in Table \ref{tab:colabun}. In addition, the abundances of these molecules are estimated by dividing their column densities by the mean H$_2$ column density derived from a dusty disk model \citep{Lee2021Pdisk} in the same region, which is found to be $\sim$ \scnum{1.08}{25} cm$^{-2}${}. This disk model was constructed previously to reproduce the continuum emission of the disk at $\lambda \sim$ 850 $\mu$m{} \citep{Lee2021Pdisk}, and it can also roughly reproduce the continuum emission of the disk observed here at $\lambda \sim$ 1.33 mm \citep{Lin2021}. Since the two detected lines each of H$_2$CO{} and D$_2$CO{} are likely optically thick, their lines at higher E$_u$ are used to derive lower limits on their column densities. Indeed, the H$_2$CO{} column density can be better derived from the H$_2$$^{13}$CO{} line assuming a [$^{12}$C]/[$^{13}$C] ratio of $\sim$ 50, as estimated in the Orion Complex \citep{Kahane2018}.
As can be seen from Table \ref{tab:colabun}, the H$_2$CO{} column density derived this way is $\sim$ 3 times that derived from the H$_2$CO{} lines. Thus, the deuteration of H$_2$CO{}, i.e., the abundance ratio [D$_2$CO{}]/[H$_2$CO{}], is $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 0.053. As for CH$_3$CHO{}, we fixed its excitation temperature to 100 K by fixing the slope of the linear equation and then derived its column density from the y-intercept of the linear fit to the rotation diagram, as shown in Figure \ref{fig:popdia}c.
\section{Discussion} \subsection{Lack of Molecular Emission in Disk Midplane} \label{sec:midplane} As discussed in \citet{Lee2017COM,Lee2019COM}, the lack of molecular emission in the disk midplane can be due to the exponential attenuation by the high optical depth of the dust continuum. Figure \ref{fig:conttau}a shows the optical depth of the dust continuum at 1.33 mm derived from the dusty disk model that reproduced the thermal emission of the disk \citep{Lee2021Pdisk}. As can be seen, the optical depth is $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 3 toward the midplane within the CB, where no molecular emission is detected, supporting this possibility. The faint NH$_2$CHO{} and CH$_3$OH{} emission detected in the midplane likely comes from the upper and lower disk atmospheres due to the beam convolution. However, the H$_2$CO{} emission in the midplane near the CB (see Figure \ref{fig:contCOMs}g) should be a real detection because the optical depth of the dust continuum drops below 3 at the disk edge.
\subsection{Distribution of Molecules and Binding Energy} As discussed earlier, a stratification is seen in the distribution of molecules in the disk atmosphere, with the outer disk radius decreasing from H$_2$CO{}, to CH$_3$OH{}, to CH$_3$CHO{}, and then to NH$_2$CHO{}/HNCO/HCOOH, as shown in Figure \ref{fig:conttau}b together with the temperature structure of the dusty disk model \citep{Lee2021Pdisk}. A similar stratification of H$_2$CO{}, CH$_3$CHO{}, and NH$_2$CHO{} has been seen toward the bow shock region B1 in the young protostellar system L1157 \citep{Codella2017}. That shock region is divided into 3 shock subregions, with shock 1 in the bow wing, shock 3 in the bow tip, and shock 2 in between. The authors interpreted shock 1 as the youngest shock and shock 3 as the oldest. They found that the observed decrease in the abundance ratio [CH$_3$CHO{}]/[NH$_2$CHO{}] from shock 3 to shock 2 and to shock 1 can be modeled if both NH$_2$CHO{} and CH$_3$CHO{} are formed in the gas phase. Here in the HH 212 disk, since the temperature of the disk atmosphere is expected to increase inward toward the center (Figure \ref{fig:conttau}b), the stratification in the distribution of these molecules could be related to their binding energies (BEs) and thus their sublimation temperatures. Table \ref{tab:BE} lists the recently computed BEs for these molecules. For a consistent comparison, we adopt the values obtained with similar methods on amorphous solid water ice \citep{Ferrero2020,Ferrero2022}. Since HNCO was not included in those studies, we adopt its value from \citet{Song2016}. Notice that different methods can result in different BEs; e.g., HCOOH was found to have a BE of less than 5000 K on pure ice \citep{Kruckiewicz2021}, significantly lower than that adopted here.
As can be seen, the increasing order of the observed outer radii of NH$_2$CHO{}/t-HCOOH, CH$_3$OH{}, and H$_2$CO{} is consistent with the decreasing order of their BEs, indicating that these molecules are thermally desorbed from the ice mantles on dust grains. Notice that this does not necessarily mean that these molecules are formed in the ice mantle, because the density in the disk is so high that even if the molecules are formed in the gas phase they freeze out quickly and are, therefore, only detected in regions where the dust temperature is higher than the sublimation temperature. As for HNCO and CH$_3$CHO{}, their outer radii do not fit into the sequence set by the BEs of H$_2$CO{}, CH$_3$OH{}, and NH$_2$CHO{}, and they may instead form in the gas phase from other species. On the other hand, HNCO and HCOOH may come from the decomposition of desorbed organic salts (NH$_4^+$OCN$^-$ and NH$_4^+$HCOO$^-$), which have BEs similar to that of NH$_2$CHO{} \citep{Kruckiewicz2021,Ligterink2018}. Further work is needed to check this possibility. Previously, at $\sim$ \arcsa{0}{15} (60 au) resolution, \citet{Codella2018} detected deuterated water around the disk. Although the deuterated water was found to have an outer radius of $\sim$ 60 au, its kinematics was found to be consistent with that of the centrifugal barrier at $\sim$ 44 au. More importantly, since water has a BE similar to that of H$_2$CO{} (see Table \ref{tab:BE}), it is likely that water, like H$_2$CO{}, is also desorbed from the ice mantles on the dust grains. Thus, the water snowline can be located around or slightly outside the centrifugal barrier.
\subsection{Centrifugal Barrier, H$_2$CO{}, and CH$_3$OH{}} The high deuteration of H$_2$CO{} (with [D$_2$CO{}]/[H$_2$CO{}] $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 0.053) and methanol (with [CH$_2$DOH]/[CH$_3$OH] $\sim$ 0.12) \citep{Lee2019COM} supports the idea that both were originally formed in ice. These ratios of [D$_2$CO{}]/[H$_2$CO{}] and [CH$_2$DOH]/[CH$_3$OH] are consistent with those found from prestellar cores to Class I sources \citep[and references therein]{Mercimek2022}. It is possible that H$_2$CO{} is formed by hydrogenation of CO frozen in the ice mantles on dust grains, and that CH$_3$OH{} is then formed from it with the addition of two H atoms \citep{Charnley2004}. The derived kinetic temperature of CH$_3$OH{} agrees with its sublimation temperature, also supporting that the methanol is thermally desorbed into the gas phase. H$_2$CO{} and CH$_3$OH{} are detected with outer radii near the CB, where an accretion shock is expected as the envelope material flows onto the disk \citep{Lee2017COM}, suggesting that they are desorbed into the gas phase by the heat produced in the shock interaction. It is possible that they were already formed in the ice mantles on dust grains in the collapsing envelope stage and then brought into the disk \citep{Herbst2009,Caselli2012}. H$_2$CO{} has a lower sublimation temperature than CH$_3$OH{}, and thus can be desorbed into the gas phase further out, beyond the CB. Interestingly, both H$_2$CO{} and CH$_3$OH{} also extend vertically away from the disk surface, and thus can also trace the disk wind, as does SO \citep{Tabone2017,Lee2018DW,Lee2021DW}. In addition, since H$_2$CO{} has an outer radius outside the centrifugal barrier, it may also trace the wind from the innermost envelope transitioning to the disk, carrying away angular momentum from there.
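To give a feel for how a BE maps onto a sublimation temperature, the sketch below uses one common rough criterion: thermal desorption within a reference timescale. The pre-exponential frequency, the reference timescale, and the example BEs are placeholder values for illustration only, not the values adopted from Table \ref{tab:BE}:
\begin{verbatim}
import numpy as np

def t_sublimation(E_b_K, nu_s=1e13, t_ref_yr=1e4):
    """Rough sublimation temperature for a binding energy E_b (K): the
    temperature at which the first-order thermal-desorption time,
    (1/nu_s) * exp(E_b / T), equals a reference timescale t_ref (here a
    nominal dynamical time); nu_s ~ 1e12-1e13 s^-1 is a typical
    pre-exponential frequency. All parameter values are indicative."""
    t_ref_s = t_ref_yr * 3.156e7
    return E_b_K / np.log(nu_s * t_ref_s)

# Placeholder binding energies in the H2CO < CH3OH < NH2CHO ordering:
for species, E_b in [("H2CO", 3300.0), ("CH3OH", 5000.0), ("NH2CHO", 7000.0)]:
    print(species, round(t_sublimation(E_b)), "K")   # ~58, ~89, ~124 K
\end{verbatim}
With this criterion the sublimation temperature scales nearly linearly with the BE, which is why the BE ordering translates directly into the radial (temperature) ordering discussed above.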
\subsection{Formamide, HNCO, and H$_2$CO{}} HNCO not only has a spatial distribution and kinematics similar to those of NH$_2$CHO{}, but also a similar excitation temperature, though with a large uncertainty. In addition, the abundance ratio of HNCO to NH$_2$CHO{} agrees well with the nearly linear abundance correlation found before across several orders of magnitude in molecular abundance \citep{Lopez2019}. All of this suggests a chemical link between the two molecules. However, as discussed earlier based on the BE sequence, HNCO itself is likely formed in the gas phase rather than desorbed from the ice mantle, unless its BE is significantly underestimated. In particular, although HNCO has a much lower BE than NH$_2$CHO{}, it is detected only in the inner and warmer disk where NH$_2$CHO{} is detected, and not in the outer part of the disk where the temperature is lower. Thus, our result implies that HNCO, instead of being a parent molecule, is likely a daughter molecule of NH$_2$CHO{} formed in the gas phase. One possible route is H abstraction from NH$_2$CHO{} \citep{Haupa2019}. It is also possible that HNCO is formed by destructive gas-phase ion-molecule reactions with amides (including amides larger than NH$_2$CHO{}) \citep{Garrod2008,Tideswell2010}. It has also been proposed that formamide can be formed from formaldehyde (H$_2$CO) in warm gas through the reaction H$_2$CO{} $+$ NH$_2$ $\rightarrow$ NH$_2$CHO{} $+$ H \citep{Kahane2013,Vazart2016,Codella2017,Skouteris2017}. However, we find that H$_2$CO{} has a more extended distribution with different kinematics from formamide, and it is thus unclear whether it can be the gas-phase parent molecule. Unfortunately, we have no information on the other reactant, NH$_2$. Very likely it is the product of sublimated NH$_3$ \citep{Codella2017}, whose binding energy (see Table \ref{tab:BE}) is larger than that of H$_2$CO{}, which may explain why formamide is not present everywhere H$_2$CO{} is. In conclusion, based on the current observations, it is not possible to constrain the formation route of formamide in the disk atmosphere of HH 212. Nonetheless, our work has added valuable information about the formation route of formamide in disk atmospheres, complementing that obtained in different environments, e.g., the L1157 shock \citep{Codella2017}.
\acknowledgements We thank the anonymous reviewers for their insightful comments. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2017.1.00712.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. C.-F.L. acknowledges grants from the Ministry of Science and Technology of Taiwan (MoST 107-2119-M-001-040-MY3, 110-2112-M-001-021-MY3) and the Academia Sinica (Investigator Award AS-IA-108-M01). CC acknowledges the funding from the European Union’s Horizon 2020 research and innovation programs under projects “Astro-Chemistry Origins” (ACO), Grant No 811312; the PRIN-INAF 2016 The Cradle of Life - GENESIS-SKA (General Conditions in Early Planetary Systems for the rise of life with SKA); the PRIN-MUR 2020 BEYOND-2P (Astrochemistry beyond the second period elements), Prot. 2020AFB3FX.
\section{\fontsize{18}{25}\selectfont Supplementary Information} \section{Multi-photon coincidence amplitudes} For convenience, we will assume the JSA is a matrix of amplitudes corresponding to photon-pair creation at particular signal and idler frequencies. Since the JSA is actually a continuous function, the bandwidth of the frequency channels should be small compared to the scale over which it varies; otherwise, detecting a photon in a particular channel can project its partner onto a mixed spectral state. In the experiment, the photons are split into relatively broad frequency channels and the resulting mixing degrades the interference visibility. On the other hand, if the JSA were to already consist of a discrete set of modes, for instance the modes of a cavity-based photon pair source, the filtering would only be required to separate these distinct modes. Creation operators for the signal channels are labeled by $a^{\dagger}_j$ and for the idler channels $b^{\dagger}_j$. A JSA can generally be decomposed into pairs of Schmidt modes~\cite{McKinstrie}. The Schmidt modes provide an orthonormal basis such that each pair of modes is an independent two-mode squeezed state, and the overall wavefunction is the tensor product of these squeezed states. We use $c^{\dagger}_j$ for the creation operators of the signal Schmidt modes, and $d^{\dagger}_j$ for the idler Schmidt modes. The two-mode squeezed-vacuum state between the $j$th pair of Schmidt modes can be written as: \begin{equation} \sqrt{1-\lambda_j^2}\sum_{n=0}^\infty \lambda_j^n\ket{n,n}_{j,j}=\sqrt{1-\lambda_j^2}~\text{exp}\left(\lambda_j c^{\dagger}_jd^{\dagger}_j\right)\ket{0,0}_{j,j}, \end{equation} where $\ket{n,m}_{j,k}$ indicates a number state with $n$ photons in the $j$th signal Schmidt mode and $m$ photons in the $k$th idler Schmidt mode, and $\lambda_j$ is a parameter describing the strength of the squeezing. Hence the total state is \begin{equation} \prod_j \sqrt{1-\lambda_j^2}~\text{exp}\left(\lambda_j c^{\dagger}_jd^{\dagger}_j\right) \ket{\text{vac}}=\mathcal{C}~\text{exp}\left(\sum_j\lambda_j c^{\dagger}_jd^{\dagger}_j\right)\ket{\text{vac}}=\mathcal{C}~\text{exp}\left(\sum_{j,k}\Lambda_{j,k} a^{\dagger}_jb^{\dagger}_k\right)\ket{\text{vac}} \end{equation} where $\ket{\text{vac}}$ is the multimode vacuum state, $\Lambda$ is a matrix describing the strength of squeezing between different signal and idler modes in the frequency basis, and $\mathcal{C}=\prod_j \sqrt{1-\lambda_j^2}$. We have used the fact that the $c^{\dagger}_j$ ($d^{\dagger}_j$) operators are just linear combinations of the $a^{\dagger}_j$ ($b^{\dagger}_j$) operators. It can be seen that the Schmidt basis puts $\Lambda$ into diagonal form and is helpful in determining the constant $\mathcal{C}$. The exponential can be expressed as a series of terms corresponding to the creation of different total photon numbers: \begin{equation} \text{exp}\left(\sum_{j,k}\Lambda_{j,k} a^{\dagger}_jb^{\dagger}_k\right)=1+\left(\sum_{j,k}\Lambda_{j,k} a^{\dagger}_jb^{\dagger}_k\right)+\frac{1}{2}\left(\sum_{j,k}\Lambda_{j,k} a^{\dagger}_jb^{\dagger}_k\right)^2+\dots+\frac{1}{N!}\left(\sum_{j,k}\Lambda_{j,k} a^{\dagger}_jb^{\dagger}_k\right)^N+\dots \end{equation} For example, the two-photon amplitude associated with one signal photon in mode $j$ and an idler photon in mode $k$ is $\mathcal{C}\,\Lambda_{j,k}$.
The amplitude associated with $N$ signal photons in modes $\vec{j}=j_1,...,j_N$ and $N$ idler photons in modes $\vec{k}=k_1,...,k_N$ depends only on the $N$th term of the expansion: \begin{equation} \phi(\vec{j}, \vec{k})=\frac{\mathcal{C}}{N!}\bra{\text{vac}}\left(\prod_{p=1}^N a_{j_p}\right)\left(\prod_{q=1}^N b_{k_q}\right)\left(\sum_{r,s}\Lambda_{r,s} a^{\dagger}_rb^{\dagger}_s\right)^N\ket{\text{vac}}. \end{equation} First we observe that the $N$ different $b_{k_q}$ must match up with the $N$ different $b_s^\dagger$ to give a non-zero amplitude. There are $N!$ possible orderings which give the same result, so the pre-factor of $1/N!$ is cancelled out: \begin{equation} \phi(\vec{j}, \vec{k})=\mathcal{C}\bra{\text{vac}}\left(\prod_{p=1}^N a_{j_p}\right)\left(\prod_{q=1}^N\sum_r\Lambda_{r,k_q}~a^{\dagger}_r\right)\ket{\text{vac}}. \end{equation} Now the $N$ different $a_{j_p}$ must match the $a_r^\dagger$, but this time the ordering matters, because different orderings give rise to different pairings of indices in the $N$ instances of the $\Lambda$ matrix. Hence there is a summation over all permutations of $j_1...j_N$: \begin{equation} \phi(\vec{j}, \vec{k})=\mathcal{C}\sum_{\sigma\in S_N}\prod_{q=1}^N \Lambda_{j_{\sigma(q)}, k_q}=\mathcal{C}~\text{perm}(\Lambda_{\vec{j},\vec{k}}), \end{equation} where $S_N$ is the set of all permutations of the numbers 1 to $N$. This is equal to the permanent of $\Lambda_{\vec{j},\vec{k}}$, a sub-matrix of $\Lambda$ containing the rows and columns corresponding to the generated signal and idler frequencies, multiplied by the constant $\mathcal{C}$. The summation is essentially over the different pairings of the $N$ signal photons with the $N$ idler photons. In the main text, we refer to the permanent of a submatrix $\psi_{\vec{j}, \vec{k}}$ which contains the two-photon amplitudes. As seen above, the two-photon amplitudes are equal to the elements of $\Lambda$ multiplied by $\mathcal{C}$. Hence the permanents derived from these two-photon amplitudes have an extra factor of $\mathcal{C}^N$, and \begin{equation} \phi(\vec{j}, \vec{k})=\mathcal{C}^{1-N}~\text{perm}\left(\psi_{\vec{j}, \vec{k}}\right) \end{equation} \begin{equation} P(\vec{j}, \vec{k})=\mathcal{C}^{2-2N}~|\text{perm}\left(\psi_{\vec{j}, \vec{k}}\right)|^2. \end{equation} So the probabilities are equal to the absolute square of the permanent, up to a constant factor which is equal to 1 in the limit of small squeezing. \section{JSA matrices for experimental pump configurations} A continuous JSA $\psi(\omega_s, \omega_i)$ determined only by the energy matching condition for four-wave mixing (FWM) can be written: \begin{equation} \psi(\omega_s, \omega_i)\propto\int d\omega~e(\omega)~e(\omega_s+\omega_i-\omega)=f(\omega_s+\omega_i) \end{equation} where $e(\omega)$ is the spectral amplitude of the pump pulse, $\omega_s+\omega_i-\omega$ is the frequency of the second pump photon, fixed by energy matching, and the result is a function $f(\omega_s+\omega_i)$ which depends only on the sum of the signal and idler frequencies. For a discretized JSA, $\psi_{j,k}$, and discrete pump components with amplitudes $e_j$, where the subscripts label equally spaced frequency channels, we can write \begin{equation} \psi_{j,k}\propto\sum_l e_l~e_{j+k-l}=f_{j+k}. \end{equation} For a single pump component in channel $p$, $\psi_{j,k}$ is only non-zero when $j+k=2p$. So, fixing the idler channel, there is only one non-zero component for the signal, and displacing the idler by one channel must also displace the signal by one channel.
Hence we can write \begin{equation} \psi\propto\left(\begin{array}{cccc} {0} & {e_p^2} & {0} & {0} \\ {0} & {0} & {e_p^2} & {0} \end{array}\right), \end{equation} where the two rows correspond to the two idler channels used in the experiment, and the four columns correspond to the four signal channels. For two pump components with amplitudes $e_p$,$e_q$, the pump function is \begin{equation} f_j=\sum_k e_k e_{j-k}~~\Rightarrow~~f_{2p}=e_p^2,~~f_{p+q}=2e_pe_q,~~f_{2q}=e_q^2, \end{equation} so if $p$ and $q$ are adjacent channels we have \begin{equation} \psi\propto\left(\begin{array}{cccc} {e_p^2} & {2e_pe_q} & {e_q^2} & {0} \\ {0} & {e_p^2} & {2e_pe_q} & {e_q^2} \end{array}\right). \end{equation} There are two paths to generating the outcome involving photons in both idler channels and signal photons in channels 2 and 3: \begin{equation} \phi_{i1,i2,2,3}\propto\text{perm}\left(\begin{array}{cc} {2e_pe_q} & {e_q^2} \\ {e_p^2} & {2e_pe_q} \end{array}\right)=4e_p^2e_q^2+e_p^2e_q^2=5e_p^2e_q^2, \end{equation} where it can be seen that the two possibilities always add constructively, regardless of the complex phases of $e_p$ and $e_q$, and that they are unbalanced by a ratio 4:1, which makes this effect harder to observe definitively in an experiment. With three pump components $e_p$,$e_q$,$e_r$, the pump function has 6 non-zero components: \begin{equation} f_{2p}=e_p^2,~~f_{2q}=e_q^2,~~f_{2r}=e_r^2,~~f_{p+q}=2e_pe_q,~~f_{p+r}=2e_pe_r,~~f_{q+r}=2e_qe_r. \end{equation} If $p$,$q$,$r$ are three adjacent channels, $f_{2q}$ coincides with $f_{p+r}$, and becomes $e_q^2+2e_pe_r$. This means an idler in a particular channel heralds a signal in a superposition of five frequencies, one of which is cut off since we only use four signal channels. This configuration provides sufficient degrees of freedom to observe both constructive and destructive interference: \begin{equation} \psi\propto\left(\begin{array}{llll} {2e_pe_q~} & {e_q^2+2e_pe_r~} & {2e_qe_r} & {e_r^2} \\ {e_p^2} & {2e_pe_q} & {e_q^2+2e_pe_r~} & {2e_qe_r} \end{array}\right). \end{equation}
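As a minimal numerical check of the two-pump-component case above (a sketch, not code from the experiment; \texttt{permanent} is a hypothetical helper), direct summation over permutations confirms that the two pairings add to $5e_p^2e_q^2$ for arbitrary pump phases:
\begin{verbatim}
import itertools
import numpy as np

def permanent(M):
    """Permanent of a square matrix by direct summation over permutations
    (adequate for the small submatrices considered here)."""
    n = M.shape[0]
    return sum(np.prod([M[sigma[q], q] for q in range(n)])
               for sigma in itertools.permutations(range(n)))

# Two adjacent pump components with arbitrary phases: the two pairings
# add constructively with the 4:1 weighting noted above.
e_p, e_q = np.exp(1j * 0.3), np.exp(1j * 1.1)
sub = np.array([[2 * e_p * e_q, e_q**2],
                [e_p**2,        2 * e_p * e_q]])
print(permanent(sub), 5 * e_p**2 * e_q**2)    # identical complex values
\end{verbatim}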
\section{Introduction} \label{sec:intro} Disks around young stars provide the material from which planets form. Knowledge of their physical and chemical structure is therefore crucial for understanding planet formation and composition. The physics of protoplanetary disks has been studied in great detail, both through observations of individual objects \citep[e.g.,][]{vanZadelhoff2001,Andrews2010,Andrews2018,Schwarz2016} and through surveys of star-forming regions \citep[e.g.,][]{Ansdell2016,Ansdell2017,Barenfeld2016,Pascucci2016,Cox2017,Ruiz-Rodriguez2018,Cieza2019}. Molecular line observations require more telescope time than continuum observations; hence, studies of the chemical structure generally target individual disks or small samples of bright disks \citep[e.g.,][]{Dutrey1997,Thi2004,Oberg2010,Cleeves2015,Huang2017}. The picture that is emerging for the global composition of Class II disks around solar analogues is that they have a large cold outer region ($T \lesssim$ 20 K) where CO is frozen out in the disk midplanes \citep[e.g.,][]{Aikawa2002,Qi2013,Qi2015,Qi2019,Mathews2013,Dutrey2017}. However, it is now becoming clear that planet formation already starts when the disk is still embedded in its natal envelope. Grain growth has been observed in Class 0 and I sources, and even larger bodies may have formed before the envelope has fully dissipated \citep[e.g.,][]{Kwon2009,Jorgensen2009,Miotello2014,ALMAPartnership2015,Harsono2018}. Furthermore, the dust mass of Class II disks seems insufficient to form the observed exoplanet population, but Class 0 and I disks are massive enough \citep{Manara2018,Tychoniec2020}. Young embedded disks thus provide the initial conditions for planet formation, but unlike their more evolved counterparts, their structure remains poorly characterized. A critical property is the disk temperature structure because this governs disk evolution and composition. For example, temperature determines whether the gas is susceptible to gravitational instabilities (see, e.g., a review by \citealt{Kratter2016}), a potential mechanism to form giant planets, stellar companions and accretion bursts \citep[e.g.,][]{Boss1997,Boley2009,Vorobyov2009,Tobin2016a}. In addition, grain growth is thought to be enhanced in the region where water freezes out from the gas phase onto the dust grains, the water snowline ($T \sim$100--150 K; e.g., \citealt{Stevenson1988,Schoonenberg2017,Drazkowska2017}). Moreover, freeze-out of molecules as the temperature drops below their species-specific freeze-out temperature sets the global chemical composition of the disk. This sequential freeze-out causes radial gradients in molecular abundances and elemental ratios (like the C/O ratio, e.g., \citealt{Oberg2011}). In turn, the composition of a planet depends on its formation location in the disk \citep[e.g.,][]{Madhusudhan2014,Walsh2015,Ali-Dib2017,Cridland2019}. Finally, the formation of high abundances of complex molecules starts from CO ice \citep[e.g.,][]{Tielens1982,Garrod2006,Cuppen2009,Chuang2016} and COM formation will thus be impeded during the disk stage if the temperature is above the CO freeze-out temperature ($T \gtrsim$~20~K). Whether young disks are warm ($T \gtrsim$~20~K, i.e., warmer than the CO freeze-out temperature) or cold (i.e., have a large region where $T \lesssim$~20~K and CO is frozen out) is thus a simple but crucial question. \begin{deluxetable*}{l l c c c c c c c c c c c} \tablecaption{Overview of source properties.
\label{tab:SourceOverview}} \tablewidth{100pt} \tabletypesize{\scriptsize} \tablehead{ \colhead{Source name} \vspace{-0.3cm} & \colhead{Other name} & \colhead{R.A.\tablenotemark{a}} & \colhead{Decl.\tablenotemark{a}} & \colhead{Class} & \colhead{$T_{\rm{bol}}$} & \colhead{$L_{\rm{bol}}$} & \colhead{$M_{\ast}$} & \colhead{$M_{\rm{env}}$} & \colhead{$M_{\rm{disk}}$} & \colhead{$R_{\rm{disk}}$} & \colhead{$i$} & \colhead{Refs\tablenotemark{b}} \\ \colhead{(IRAS)} \vspace{-0.5cm} & \colhead{} & \colhead{(J2000)} & \colhead{(J2000)} & \colhead{} & \colhead{(K)} & \colhead{($L_{\odot}$)} & \colhead{($M_{\odot}$)} & \colhead{($M_{\odot}$)} & \colhead{($M_{\odot}$)} & \colhead{(au)} & \colhead{(deg)} & \colhead{} \\ } \startdata 04016+2610 & L1489 IRS & 04:04:43.1 & +26:18:56.2 & I & 226 & 3.5 & 1.6 & 0.023 & 0.0071 & 600 & 66 & 1--4 \\ 04302+2247 & Butterfly star & 04:33:16.5 & +22:53:20.4 & I/II & 202 & 0.34--0.92 & 0.5\tablenotemark{c} & 0.017 & 0.11 & 244 & $>$76 & 3,5,9 \\ 04365+2535 & TMC1A & 04:39:35.2 & +25:41:44.2 & I & 164 & 2.5 & 0.53--0.68 & 0.12 & 0.003--0.03 & 100 & 50 & 1,6--8 \\ 04368+2557 & L1527 IRS & 04:39:53.9 & +26:03:09.5 & 0/I & 59 & 1.9--2.75 & 0.19--0.45 & 0.9--1.7 & 0.0075 & 75--125 & 85 & 9--14 \\ 04381+2540 & TMC1 & 04:41:12.7 & +25:46:34.8 & I & 171 & 0.66--0.9 & 0.54 & 0.14 & 0.0039 & 100 & 55 & 1,6,10 \\ \enddata \vspace{-0.5cm} \begin{flushleft} \tablecomments{All values presented in this table are from the literature listed in footnote b.} \vspace{-0.2cm} \tablecomments{TMC1 is resolved here for the first time as a binary. The literature values in this table are derived assuming a single source.} \vspace{-0.2cm} \tablenotetext{a}{Peak of the continuum emission, except for TMC1 where the phase center of the observations is listed. The coordinates of the two sources TMC1-E and TMC1-W are R.A. = 04:41:12.73, Decl = +25:46:34.76 and R.A. = 04:41:12.69, Decl = +25:46:34.73, respectively.} \vspace{-0.15cm} \tablenotetext{b}{References. (1) \citet{Green2013}, (2) \citet{Yen2014}, (3) \citet{Sheehan2017}, (4) \citet{Sai2020}, (5) \citet{Wolf2003}, (6) \citet{Harsono2014}, (7) \citet{Aso2015}, (8) Harsono et al., submitted, (9) \citet{Motte2001}, (10) \citet{Kristensen2012}, (11) \citet{Tobin2008}, (12) \citet{Tobin2013}, (13) \citet{Oya2015}, (14) \citet{Aso2017}.} \vspace{-0.2cm} \tablenotetext{c}{Not a dynamical mass.} \end{flushleft} \end{deluxetable*} Keplerian disks are now detected around several Class 0 and I sources \citep[e.g.,][]{Brinch2007,Tobin2012,Murillo2013,Yen2017}, but most research has focused on disk formation, size and kinematics \citep[e.g.,][]{Yen2013,Ohashi2014,Harsono2014}, or the chemical structure at the disk-envelope interface \citep[e.g.,][]{Sakai2014a,Murillo2015,Oya2016}. Only a few studies have examined the disk physical structure, and only for one particular disk, L1527~IRS. \citet{Tobin2013} and \citet{Aso2017} modeled the radial density profile, and \citet{vantHoff2018b} studied the temperature profile based on optically thick $^{13}$CO and C$^{18}$O observations. The latter study showed the importance of disentangling disk and envelope emission and concluded that the entire L1527 disk is likely too warm for CO freeze-out, in agreement with model predictions \citep[e.g.,][]{Harsono2015}, but in contrast to observations of T Tauri disks. Another important question with regard to the composition of planet-forming material is the CO abundance.
The majority of protoplanetary disks have surprisingly weak CO emission, even when freeze-out and isotope-selective photodissociation are taken into account \citep[e.g.,][]{Ansdell2016,Miotello2017,Long2017}. Based on gas masses derived from HD line fluxes \citep{Favre2013,McClure2016,Schwarz2016,Kama2016} and mass accretion rates \citep{Manara2016}, the low CO emission seems to be the result of significant CO depletion (up to two orders of magnitude below the ISM abundance of $\sim$10$^{-4}$ with respect to H$_2$). Several mechanisms have been discussed in the literature, either focusing on the chemical conversion of CO into less volatile species \citep[e.g.,][]{Bergin2014,Eistrup2016,Schwarz2018,Schwarz2019,Bosman2018}, or using dust growth to sequester CO ice in the disk midplane \citep[e.g.,][]{Xu2017,Krijt2018}. Observations of CO abundances in younger disks can constrain the timescale of the CO depletion process. Observations of $^{13}$CO and C$^{18}$O toward the embedded sources TMC1A and L1527 are consistent with an ISM abundance \citep{Harsono2018,vantHoff2018b}. Recent work by \citet{Zhang2020} also found CO abundances consistent with the ISM abundance for three young disks in Taurus with ages up to $\sim$ 1 Myr, using optically thin $^{13}$C$^{18}$O emission. Since the 2--3 Myr old disks in Lupus and Cha I show CO depletion by a factor of 10--100 \citep{Ansdell2016}, these results suggest that the CO abundance decreases by a factor of ten within 1 Myr. On the other hand, \citet{Bergner2020} found C$^{18}$O abundances a factor of ten below the ISM value in two Class I sources in Serpens. In this paper we present ALMA observations of C$^{17}$O toward five young disks in Taurus to address the questions of whether young disks are generally too warm for CO freeze-out and whether there is significant CO processing. The temperature profile is further constrained by H$_2$CO observations, as this molecule freezes out around $\sim$70 K. Although chemical models often assume a binding energy of 2050 K \citep[e.g.,][]{Garrod2006,McElroy2013}, laboratory experiments have found binding energies ranging between 3300--3700 K depending on the ice surface \citep{Noble2012}. These latter values suggest H$_2$CO freeze-out temperatures between $\sim$70--90 K for disk-midplane densities ($\sim$10$^{8}-10^{10}$ cm$^{-3}$) instead of $\sim$50~K. Experiments by \citet{Fedoseev2015} are consistent with the lower end of the binding energies found by \citet{Noble2012}, so we adopt a freeze-out temperature of 70 K for H$_2$CO. An initial analysis of these observations was presented in \citet[PhD thesis]{vantHoff2019}. In addition, HDO and CH$_3$OH observations are used to probe the $\gtrsim100-150$~K region and to determine whether complex molecules can be observed in these young disks, as shown for the disk around the outbursting young star V883 Ori \citep{vantHoff2018c,Lee2019}. In contrast, observing complex molecules has turned out to be very difficult in mature protoplanetary disks. So far, only CH$_3$CN has been detected in a sample of disks, and CH$_3$OH and HCOOH have been detected in TW Hya \citep{Oberg2015,Walsh2016,Favre2018,Bergner2018,Loomis2018,Carney2019}. The observations are described in Sect.~\ref{sec:Observations}, and the resulting C$^{17}$O and H$_2$CO images are presented in Sect.~\ref{sec:Results}. This section also describes the non-detections of HDO and CH$_3$OH.
The temperature structure of the disks is examined in Sect.~\ref{sec:Analysis} based on the C$^{17}$O and H$_2$CO observations and radiative transfer modeling. The result that the young disks in this sample are warm, with no significant CO freeze-out or CO processing, is discussed in Sect.~\ref{sec:Discussion}, and the conclusions are summarized in Sect.~\ref{sec:Conclusions}.
\begin{table*} \caption{Observed fluxes for the 1.3 mm continuum and molecular lines.} \label{tab:Flux} \centering \begin{footnotesize} \begin{tabular}{l c c c c} \hline\hline \\[-.3cm] Source & $F_{\mathrm{peak}}$ (1.3 mm) & $F_{\mathrm{int}}$ (1.3 mm) & $F_{\mathrm{int}}$ (C$^{17}$O)\tablenotemark{a} & $F_{\mathrm{int}}$ (H$_2$CO)\tablenotemark{a} \\ & (mJy beam$^{-1}$) & (mJy) & (Jy km s$^{-1}$) & (Jy km s$^{-1}$) \\ \hline \\[-.3cm] IRAS 04302+2247 & \hspace{.8mm} 24.7 $\pm$ 0.1 & 165.9 $\pm$ 0.8 & 2.2 $\pm$ 0.2 & 3.5 $\pm$ 0.2\\ L1489 IRS & \hspace{2mm} 2.8 $\pm$ 0.1 & \hspace{.8mm} 51.1 $\pm$ 1.1 & 2.9 $\pm$ 0.3 & 8.0 $\pm$ 0.5 \\ L1527 IRS & 102.0 $\pm$ 0.1 & 195.1 $\pm$ 0.4 & 1.9 $\pm$ 0.4 & 3.0 $\pm$ 0.6 \\ TMC1A & 125.8 $\pm$ 0.2 & 210.4 $\pm$ 0.4 & 4.1 $\pm$ 0.4 & 2.3 $\pm$ 0.2 \\ TMC1-E & \hspace{2mm} 9.2 $\pm$ 0.1 & \hspace{.8mm} 10.3 $\pm$ 0.2 & \hspace{1mm} 2.0 $\pm$ 0.3\tablenotemark{b} & \hspace{1mm} 2.6 $\pm$ 0.2\tablenotemark{b} \\ TMC1-W & \hspace{.8mm} 16.2 $\pm$ 0.1 & \hspace{.8mm} 17.6 $\pm$ 0.2 & \hspace{1mm} 2.0 $\pm$ 0.3\tablenotemark{b} & \hspace{1mm} 2.6 $\pm$ 0.2\tablenotemark{b} \\ \hline \end{tabular} \tablecomments{The listed errors are statistical errors and do not include calibration uncertainties.} \tablenotetext{a}{Integrated flux within a circular aperture with 6.0$^{\prime\prime}$ diameter.} \vspace{-0.2cm} \tablenotetext{b}{Flux for both sources together.} \end{footnotesize} \end{table*}
\section{Observations} \label{sec:Observations} In order to study the temperature structure in young disks, a sample of five Class I protostars in Taurus was observed with ALMA: IRAS 04302+2247 (also known as the Butterfly star, hereafter IRAS 04302), L1489 IRS (hereafter L1489), L1527 IRS (hereafter L1527), TMC1 and TMC1A. All sources are known to have a disk, and Keplerian rotation has been established (\citealt{Brinch2007,Tobin2012,Harsono2014}, van 't Hoff et al. in prep.). IRAS 04302 and L1527 are seen edge-on, which allows a direct view of the midplane, whereas L1489, TMC1 and TMC1A are moderately inclined at $\sim$50--60$^\circ$. The source properties are listed in Table~\ref{tab:SourceOverview}. The observations were carried out on 2018 September 10 and 28, for a total on-source time of 15 minutes per source (project code 2017.1.01413.S). The observations used 47 antennas sampling baselines between 15~m and 1.4~km. The correlator setup included a 2 GHz continuum band with 488 kHz (0.6 km s$^{-1}$) resolution centered at 240.0 GHz, and spectral windows targeting C$^{17}$O $2-1$, H$_2$CO $3_{1,2}-2_{1,1}$, HDO $3_{1,2}-2_{2,1}$ and several CH$_3$OH $5_K-4_K$ transitions. The spectral resolution was 122.1 kHz for CH$_3$OH and 61.0 kHz for the other lines, which corresponds to a velocity resolution of 0.15 and 0.08 km~s$^{-1}$, respectively. The properties of the targeted lines can be found in Table~\ref{tab:Lineparameters}. Calibration was done using the ALMA Pipeline and version 5.4.0 of the Common Astronomy Software Applications (CASA; \citealt{McMullin2007}). The phase calibrator was J0438+3004, and the bandpass and flux calibrator was J0510+1800.
In addition, we performed up to three rounds of phase-only self-calibration on the continuum data, with solution intervals that spanned the entire scan length for the first round, as short as 60 s in the second round, and as short as 30 s in the third round. The obtained phase solutions were also applied to the line data. Imaging was done using \textit{tclean} in CASA version 5.6.1. The typical restoring beam size using Briggs weighting with a robust parameter of 0.5 is $0.42^{\prime\prime} \times 0.28^{\prime\prime}$ (59 $\times$ 39 au) for the continuum images and $0.48^{\prime\prime} \times 0.31^{\prime\prime}$ (67 $\times$ 43 au) for the line images. The continuum images have an rms of $\sim$0.07 mJy beam$^{-1}$, whereas the rms in the line images is $\sim$5 mJy beam$^{-1}$ channel$^{-1}$ for 0.08 km~s$^{-1}$ channels. The observed continuum and line flux densities are reported in Table~\ref{tab:Flux}.
\begin{figure*} \centering \includegraphics[width=\textwidth,trim={0cm 9cm 0cm 1.4cm},clip]{Overview_M0_r05.pdf} \caption{Continuum images at 1.3 mm (\textit{top row}) and integrated intensity maps for the C$^{17}$O $2-1$ (\textit{middle row}) and H$_2$CO $3_{1,2}-2_{1,1}$ (\textit{bottom row}) transitions. The color scale is in mJy beam$^{-1}$ for the continuum images and in mJy beam$^{-1}$ km s$^{-1}$ for the line images. The positions of the continuum peaks are marked with black crosses, and the outflow directions are indicated by arrows in the continuum images. The beam is shown in the lower left corner of each panel.} \label{fig:M0_overview} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth,trim={0cm 0cm 2.8cm 0cm},clip]{IRAS04302.pdf} \caption{Integrated intensity maps for the H$_2$CO $3_{1,2}-2_{1,1}$ (\textit{left}) and C$^{17}$O $2-1$ (\textit{right}) emission toward IRAS 04302. These images have slightly higher resolution than shown in Fig.~\ref{fig:M0_overview} ($0.45^{\prime\prime} \times 0.28^{\prime\prime}$) due to uniform weighting of the visibilities. The positions of the continuum peaks are marked with white crosses and the beam is shown in the lower left corner of each panel.} \label{fig:IRAS04302} \end{figure*}
\section{Results} \label{sec:Results} \subsection{C$^{17}$O and H$_2$CO morphology}\label{sec:morphology} Figure~\ref{fig:M0_overview} shows the 1.3 mm continuum images and integrated intensity (zeroth moment) maps for C$^{17}$O $2-1$ and H$_2$CO $3_{1,2}-2_{1,1}$ toward the five sources in our sample. The molecular emission toward IRAS 04302 is highlighted at slightly higher spatial resolution in Fig.~\ref{fig:IRAS04302}. Radial cuts along the major axis are presented in Fig.~\ref{fig:Radprofiles}. The continuum emission is elongated perpendicular to the outflow direction for all sources, consistent with a disk, as observed before. TMC1 is resolved for the first time into a close binary ($\sim$85 au separation). We will refer to the two sources as TMC1-E (east) and TMC1-W (west). Both C$^{17}$O and H$_2$CO are clearly detected toward all sources, with a velocity gradient along the continuum structures (see Fig.~\ref{fig:M1_overview}). The velocity gradient suggests that the material in TMC1 is located in a circumbinary disk, but a detailed analysis is beyond the scope of this paper. For both molecules, the integrated fluxes are similar (within a factor of 2--3) in all sources (Table~\ref{tab:Flux}), and both lines have a comparable (factor of 2--4) strength toward each source, with H$_2$CO brighter than C$^{17}$O, except for TMC1A.
The H$_2$CO emission is generally more extended than the C$^{17}$O emission, both radially and vertically, except toward TMC1 and TMC1A where both molecules have the same spatial extent. This is not a signal-to-noise issue, as can be seen from the radial cuts along the major axis (Fig.~\ref{fig:Radprofiles}). The most striking feature in the integrated intensity maps is the V-shaped emission pattern of the H$_2$CO in the edge-on disk IRAS~04302 (see Fig.~\ref{fig:IRAS04302}), suggesting that the emission arises from the disk surface layers and not the midplane, in contrast to the C$^{17}$O emission. The H$_2$CO emission displays a ring-like structure toward L1527. Given that this disk is also viewed edge-on, this can be explained by emission originating in the disk surface layers, with the outer component along the midplane arising from the envelope. As we will show later in this section, the emission toward IRAS~04302 shows very little envelope contribution, which can explain the difference in morphology between these two sources. The C$^{17}$O emission peaks slightly offset ($\sim$60~au) from the L1527 continuum peak, probably due to the dust becoming optically thick in the inner $\sim$10~au, as seen before for $^{13}$CO and C$^{18}$O \citep{vantHoff2018b}. The current resolution does not resolve the inner 10~au, hence the reduction in CO emission appears more extended. In IRAS~04302, a similar offset of $\sim$60~au is found for both C$^{17}$O and H$_2$CO, suggesting there may be an unresolved optically thick dust component as well. Toward L1489, C$^{17}$O has a bright inner component ($\sim$200 au) and a weaker outer component that extends roughly as far as the H$_2$CO emission ($\sim$600 au). A similar structure was observed in C$^{18}$O by \citet{Sai2020}. The slight rise seen in C$^{18}$O emission around $\sim$300 au to the southwest of the continuum peak is also visible in the C$^{17}$O radial cut. Imaging the C$^{17}$O data at lower resolution makes this feature clearer in the integrated intensity map. In contrast, the H$_2$CO emission decreases in the inner $\sim$75 au, but beyond that it extends smoothly out to $\sim$600 au. The off-axis protrusions at the outer edge of the disk pointing to the northeast and to the southwest were also observed in C$^{18}$O and explained as streams of infalling material \citep{Yen2014}. The C$^{17}$O emission peaks slightly ($\sim$40--50 au) off-source toward TMC1A. \citet{Harsono2018} showed that $^{13}$CO and C$^{18}$O emission is absent in the inner $\sim$15~au due to the dust being optically thick. The resolution of the C$^{17}$O observations is not high enough to resolve this region, resulting only in a central decrease in emission instead of a gap. A clear gap is visible for H$_2$CO, with the emission peaking $\sim$100--115 au off-source. The central absorption falling below zero is an effect of resolved-out large-scale emission. Finally, toward TMC1, H$_2$CO shows a dip at both continuum peaks, while the C$^{17}$O emission is not affected by the eastern continuum peak. As discussed for the other sources, this may be the result of optically thick dust in the inner disk. The protrusions seen on the west side in both C$^{17}$O and H$_2$CO are part of a larger arc-like structure that extends toward the southwest beyond the scale shown in the image. While it is tempting to ascribe all of the compact emission to the young disk, some of it may also come from the envelope and obscure the disk emission.
To get a first impression of whether the observed emission originates in the disk or in the envelope, position-velocity (\textit{pv}) diagrams are constructed along the disk major axis for the four single sources (Fig.~\ref{fig:PVdiagrams}). In these diagrams, disk emission is located at small angular offsets and high velocities, while envelope emission extends to larger offsets but has lower velocities. In all sources, C$^{17}$O traces predominantly the disk, with some envelope contribution, especially in L1527 and L1489. H$_2$CO emission also originates in the disk, but has a larger envelope component. An exception is IRAS 04302, which shows hardly any envelope contribution. These results for L1527 are in agreement with previous observations \citep{Sakai2014b}. In L1489, a bright linear feature is present for H$_2$CO, extending from a velocity and angular offset of $-2$ km s$^{-1}$ and $-2^{\prime\prime}$, respectively, to offsets of 2 km s$^{-1}$ and 2$^{\prime\prime}$. This feature matches the shape of the SO \textit{pv}-diagram \citep{Yen2014}, which was interpreted by the authors as a ring between $\sim$250--390 au. While a brightness enhancement was also identified by \citet{Yen2014} in the C$^{18}$O emission (similar to what is seen here for H$_2$CO), the C$^{17}$O emission does not display such a feature. Another way to determine the envelope contribution is from the visibility amplitudes. Although a quantitative limit on the envelope contribution to the line emission requires detailed modeling for the individual sources, which will be done in a subsequent paper, a first assessment can be made with more generic models containing either only a Keplerian disk or a disk embedded in an envelope (see Appendix~\ref{ap:Vismod}). For IRAS 04302, both the C$^{17}$O and H$_2$CO visibility amplitude profiles can be reproduced without an envelope. This suggests that there is very little envelope contribution for this source, consistent with the \textit{pv} diagrams. A disk is also sufficient to reproduce the visibility amplitudes at velocities $|\Delta v| > 1$ km s$^{-1}$ from the systemic velocity toward L1489, L1527 and TMC1A. For the low velocities a small envelope contribution is required. The line emission presented here is thus dominated by the disk. Although both the C$^{17}$O and H$_2$CO emission originates predominantly from the disk, the C$^{17}$O emission extends to higher velocities than the H$_2$CO emission in IRAS~04302, L1527 and TMC1A. This is more easily visualized in the spectra presented in Fig.~\ref{fig:Spectra}. These spectra are extracted in a 6$^{\prime\prime}$ circular aperture and only include pixels with $> 3\sigma$ emission. While H$_2$CO is brighter at intermediate velocities than C$^{17}$O (even when correcting for differences in emitting area), it is not present at the highest velocities. H$_2$CO emission thus seems absent in the inner disk in these sources, which for TMC1A is also visible in the moment zero map (Fig.~\ref{fig:M0_overview}). However, in L1489, both molecules have similar maximum velocities. Toward TMC1 they extend to the same redshifted velocity, while C$^{17}$O emission is strongly decreased at blueshifted velocities as compared to the redshifted velocities. \begin{figure} \centering \includegraphics[trim={0cm 1.6cm 0cm 1.5cm},clip]{Radialprofiles_normalized_r05.pdf} \caption{Normalized radial cuts along the disk major axis for the 1.3 mm continuum flux (black) and the C$^{17}$O (blue) and H$_2$CO (orange) integrated intensities.
The shaded area shows the 3$\sigma$ uncertainty.} \label{fig:Radprofiles} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{PVdiagrams_r05_annotated-final.pdf} \caption{Position-velocity diagrams for C$^{17}$O (\textit{top panels}) and H$_2$CO (\textit{bottom panels}) along the major axis of the disks in the single systems (listed above the rows). C$^{17}$O traces predominantly the disk, that is, high velocities at small angular offsets, whereas H$_2$CO generally has a larger envelope component, that is, low velocities at large angular offsets. The velocity is shifted such that 0 km s$^{-1}$ corresponds to the systemic velocity. The color scale is in mJy beam$^{-1}$. The white arrows in the L1489 H$_2$CO panel highlight the linear feature that is described in the text.} \label{fig:PVdiagrams} \end{figure*} \subsection{C$^{17}$O and H$_2$CO column densities and abundances} To compare the C$^{17}$O and H$_2$CO observations between the different sources more quantitatively, we calculate disk-averaged total column densities, $N_T$, assuming optically thin emission in local thermodynamic equilibrium (LTE), using \begin{equation}\label{eq1} \frac{4\pi F\Delta v}{A_{ul}\Omega hcg_{\mathrm{up}}} = \frac{N_T}{Q(T_{\mathrm{rot}})}e^{-E_{\mathrm{up}}/kT_{\mathrm{rot}}}, \end{equation} where $F\Delta v$ is the integrated flux density, $A_{ul}$ is the Einstein A coefficient, $\Omega$ is the solid angle subtended by the source, $E_{\mathrm{up}}$ and $g_{\mathrm{up}}$ are the upper level energy and degeneracy, respectively, and $T_{\mathrm{rot}}$ is the rotational temperature. The integrated fluxes are measured over the dust emitting area (Table~\ref{tab:Coldens}). We note that this does not necessarily encompass the total line flux, but it will allow for an abundance estimate as described below. A temperature of 30 K is adopted for C$^{17}$O and 100 K for H$_2$CO, as these are slightly above their freeze-out temperatures. The C$^{17}$O column density ranges between $\sim 2-20 \times 10^{15}$ cm$^{-2}$, with the lowest value toward L1489 and the highest value toward TMC1A. The H$_2$CO column density is about an order of magnitude lower, with values between $\sim 4-18 \times 10^{14}$ cm$^{-2}$. The lowest value is found toward TMC1A and the highest value toward L1527. Changing the temperature for H$_2$CO to 30 K decreases the column densities by only a factor $\lesssim$3. The H$_2$CO column density toward L1527 is a factor 3--6 higher than previously derived by \citet{Sakai2014b}, possibly because they integrated over different areas and velocity ranges for the envelope, disk and envelope-disk interface. Integrating the H$_2$CO emission over a circular aperture of 0.5$^{\prime\prime}$ and excluding the central $|\Delta v| \leq 1.0$ km s$^{-1}$ channels to limit the contribution from the envelope and resolved-out emission results in an H$_2$CO column density of $9.7 \times 10^{13}$ cm$^{-2}$, only a factor 2--3 higher than found by \citet{Sakai2014b}. \citet{Pegues2020} found H$_2$CO column densities spanning three orders of magnitude ($\sim 5 \times 10^{11} - 5 \times 10^{14}$ cm$^{-2}$) for a sample of 13 Class II disks. The values derived here for Class I disks are thus similar to the high end ($\lesssim4$ times higher) of the values for Class II disks. An assessment of the molecular abundances can be made by estimating the H$_2$ column density from the continuum flux.
First, we calculate the disk dust masses, $M_{\rm{dust}}$, from the integrated continuum fluxes, $F_\nu$, using \begin{equation} \label{eq2} M_{\rm{dust}} = \frac{D^2 F_\nu}{\kappa_\nu B_\nu(T_{\rm{dust}})}, \end{equation} where $D$ is the distance to the source, $\kappa_\nu$ is the dust opacity under the assumption of optically thin emission, and $B_\nu$ is the Planck function for a temperature $T_{\rm{dust}}$ \citep{Hildebrand1983}. Adopting a dust opacity of $\kappa_{\rm 1.3\,mm} = 2.25$ cm$^2$ g$^{-1}$, as used for Class II disks by e.g., \citet{Ansdell2016}, and a dust temperature of 30~K, similar to e.g., \citet{Tobin2015a} for embedded disks, results in disk dust masses between 3.7 $M_E$ for \mbox{TMC1-E} and 75 $M_E$ for TMC1A. Using the same dust opacity as for Class II disks is probably reasonable if grain growth starts early on in the disk-formation process. However, adopting $\kappa_{\rm 1.3\,mm} = 0.899$ cm$^2$ g$^{-1}$, as often done for protostellar disks and envelopes \citep[e.g.,][]{Jorgensen2007,Andersen2019,Tobin2020}, only affects the molecular abundances by a factor $\sim$2. Assuming a gas-to-dust ratio of 100 and using the size of the emitting region, these dust masses result in H$_2$ column densities of $2-90 \times 10^{23}$~cm$^{-2}$. The resulting C$^{17}$O and H$_2$CO abundances are listed in Table~\ref{tab:Coldens}. For C$^{17}$O, the abundances range between $1.2 \times 10^{-8}$ and $1.2 \times 10^{-7}$. Assuming a C$^{16}$O/C$^{17}$O ratio of 1792 (as in the interstellar medium; \citealt{Wilson1994}), a CO ISM abundance of 10$^{-4}$ with respect to H$_2$ corresponds to a C$^{17}$O abundance of $5.6 \times 10^{-8}$. The derived C$^{17}$O abundances are thus within a factor 5 of the ISM abundance, suggesting that no substantial CO processing has occurred, unlike in Class II disks where the CO abundance can be two orders of magnitude below the ISM value \citep[e.g.,][]{Favre2013}. These results are consistent with the results from \citet{Zhang2020} for three Class I disks in Taurus (including TMC1A), but not with the order of magnitude depletion found by \citet{Bergner2020} for two Class I disks in Serpens. For H$_2$CO, the abundance ranges between $\sim 3 \times 10^{-10}$--$\sim 8 \times 10^{-9}$ in the different sources, except for TMC1A where the abundance is $\sim 5 \times 10^{-11}$, probably due to the absence of emission in the inner region. Abundances around $10^{-10}$--$10^{-9}$ are consistent with chemical models for protoplanetary disks \citep[e.g.,][]{Willacy2009,Walsh2014}. However, H$_2$CO abundances derived for TW Hya and HD 163296 are 2--3 orders of magnitude lower, $8.9 \times 10^{-13}$ and $6.3 \times 10^{-12}$, respectively \citep{Carney2019}.
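To make these two steps concrete, the minimal sketch below evaluates Eq.~\ref{eq1} and Eq.~\ref{eq2} for the C$^{17}$O $2-1$ line and 1.3 mm continuum flux of IRAS 04302 (Tables~\ref{tab:Flux} and \ref{tab:Coldens}). It is an illustration only: the spectroscopic constants are approximate values of the kind tabulated in the CDMS and LAMDA catalogs, and the 140 pc distance is an assumed, representative value for Taurus.
\begin{verbatim}
# Minimal sketch of Eqs. (1) and (2); illustration only.
# Spectroscopic constants are approximate (cf. CDMS/LAMDA);
# d = 140 pc is an assumed, representative Taurus distance.
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16     # cgs units
ARCSEC = 4.848e-6                            # rad per arcsec

def column_density(F_int, area_as2, A_ul, g_up, E_up_K, Q, T_rot):
    """Disk-averaged N_T [cm^-2] from Eq. (1), optically thin LTE.
    F_int in Jy km/s; emitting area in arcsec^2."""
    Fdv = F_int * 1e-23 * 1e5                # Jy km/s -> cgs
    omega = area_as2 * ARCSEC**2             # solid angle [sr]
    return (4.0 * np.pi * Fdv * Q * np.exp(E_up_K / T_rot)
            / (A_ul * omega * h * c * g_up))

def dust_mass(F_mJy, d_pc, kappa=2.25, T_dust=30.0, nu=225e9):
    """Dust mass [g] from Eq. (2) for optically thin emission."""
    B_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T_dust))
    D = d_pc * 3.086e18                      # distance [cm]
    return D**2 * (F_mJy * 1e-26) / (kappa * B_nu)

# C17O 2-1 and 1.3 mm continuum toward IRAS 04302:
N = column_density(1.4, 3.95 * 1.01, A_ul=6.4e-7, g_up=30,
                   E_up_K=16.2, Q=68.8, T_rot=30.0)
M = dust_mass(165.9, 140.0) / 5.972e27       # in Earth masses
print(f"N(C17O) ~ {N:.1e} cm^-2, M_dust ~ {M:.0f} M_Earth")
\end{verbatim}
With these inputs the sketch returns $N(\mathrm{C^{17}O}) \approx 5.8 \times 10^{15}$ cm$^{-2}$ and $M_{\rm dust} \approx 60$ $M_E$, close to the tabulated $5.5 \times 10^{15}$ cm$^{-2}$ and within the 3.7--75 $M_E$ range quoted above; the residual differences reflect the approximate constants and the adopted distance.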
\begin{table*} \caption{Column densities and column density ratios.} \label{tab:Coldens} \centering \begin{footnotesize} \begin{tabular}{l l c c c c c} \hline\hline \\[-.3cm] Source & Molecule & Area\tablenotemark{a} & $F_{\mathrm{int}}$\tablenotemark{b} & $N$\tablenotemark{c} & $N$/$N$(H$_2$)\tablenotemark{d} & $N$/$N$(H$_2$CO)\tablenotemark{e} \\ & & ($^{\prime\prime}$ $\times$ $^{\prime\prime}$) & (Jy km s$^{-1}$) & (cm$^{-2}$) & & \\ \hline \\[-.3cm] IRAS 04302 & C$^{17}$O & 3.95 $\times$ 1.01 & \hspace{0.7mm} 1.4 $\pm$ 0.05 & $5.5 \pm 0.41 \times 10^{15}$ & \hspace{2.8mm} $3.8 \times 10^{-8}$ & \hspace{4mm} 46 \\ & H$_2$CO & 3.95 $\times$ 1.01 & \hspace{0.7mm} 1.5 $\pm$ 0.05 & $1.2 \pm 0.04 \times 10^{14}$ & \hspace{3.5mm} $8.4 \times 10^{-10}$ & \hspace{4mm} - \\ & HDO & 0.50 $\times$ 0.50 & $< 4.5 \times 10^{-3}$ & \hspace{5mm} $< 7.4 \times 10^{13}$ & \hspace{.6mm} $< 5.3 \times 10^{-10}$ & \hspace{.4mm} $<$ 0.62\\ & CH$_3$OH & 0.50 $\times$ 0.50 & $< 6.9 \times 10^{-3}$ & \hspace{5mm} $< 7.3 \times 10^{14}$ & $< 5.2 \times 10^{-9}$ & $<$ 6.1\\ L1489 & C$^{17}$O & 4.05 $\times$ 2.19 & \hspace{0.7mm} 1.5 $\pm$ 0.11 & $ 2.3 \pm 0.40 \times 10^{15}$ & \hspace{2.8mm} $1.2 \times 10^{-7}$ & \hspace{3mm} 15 \\ & H$_2$CO & 4.05 $\times$ 2.19 & \hspace{0.7mm} 4.2 $\pm$ 0.11 & $1.5 \pm 0.04 \times 10^{14}$ & \hspace{3mm} $7.6 \times 10^{-9}$ & \hspace{4mm} - \\ & HDO & 0.50 $\times$ 0.50 & $< 5.0 \times 10^{-3}$ & \hspace{5mm} $< 8.3 \times 10^{13}$ & $< 4.2 \times 10^{-9}$ & \hspace{2mm} $<$ 0.55\\ & CH$_3$OH & 0.50 $\times$ 0.50 & $< 8.4 \times 10^{-3}$ & \hspace{5mm} $< 8.8 \times 10^{14}$ & $< 4.4 \times 10^{-8}$ & \hspace{.4mm} $<$ 5.9\\ L1527 & C$^{17}$O & 1.34 $\times$ 0.77 & 0.54 $\pm$ 0.03 & $ 7.7 \pm 0.96 \times 10^{15}$ & \hspace{2.5mm} $1.2 \times 10^{-8}$ & \hspace{3mm} 43 \\ & H$_2$CO & 1.34 $\times$ 0.77 & 0.55 $\pm$ 0.03 & $1.8 \pm 0.10 \times 10^{14}$& \hspace{4mm}$2.7 \times 10^{-10}$ & \hspace{4mm} - \\ & HDO & 0.50 $\times$ 0.50 & $< 5.6 \times 10^{-3}$ & \hspace{5mm} $< 9.2 \times 10^{13}$ & \hspace{.6mm} $< 1.4 \times 10^{-10}$ & \hspace{2mm} $<$ 0.51\\ & CH$_3$OH & 0.50 $\times$ 0.50 & $< 7.9 \times 10^{-3}$ & \hspace{5mm} $< 8.3 \times 10^{14}$ & $< 1.3 \times 10^{-9}$ & \hspace{.4mm} $<$ 4.6 \\ TMC1A & C$^{17}$O & 0.93 $\times$ 0.88 & \hspace{0.7mm} 1.1 $\pm$ 0.02 & $2.0 \pm 0.08 \times 10^{16}$ & \hspace{3mm} $2.3 \times 10^{-8}$ & \hspace{6mm} 488 \\ & H$_2$CO & 0.93 $\times$ 0.88 & 0.10 $\pm$ 0.02 & $4.1 \pm 0.82 \times 10^{13}$ & \hspace{4mm}$4.6 \times 10^{-11}$ & \hspace{4mm} -\\ & HDO & 0.50 $\times$ 0.50 & $< 5.0 \times 10^{-3}$ & \hspace{5mm} $< 8.3 \times 10^{13}$ & \hspace{.6mm} $< 9.3 \times 10^{-11}$ & \hspace{.4mm} $<$ 2.0 \\ & CH$_3$OH & 0.50 $\times$ 0.50 & $< 7.7 \times 10^{-3}$ & \hspace{5mm} $< 8.1 \times 10^{14}$ & \hspace{.6mm} $< 9.1 \times 10^{-10}$ & \hspace{.4mm} $<$ 18\\ TMC1-E & C$^{17}$O & 0.71 $\times$ 0.54 & 0.10 $\pm$ 0.01 & $3.6 \pm 0.85 \times 10^{15}$ & \hspace{2.8mm} $3.9 \times 10^{-8}$ & \hspace{3mm} 33\\ & H$_2$CO & 0.71 $\times$ 0.54 & 0.12 $\pm$ 0.01 & $1.1 \pm 0.09 \times 10^{14}$ & \hspace{3mm}$1.2 \times 10^{-9}$ & \hspace{4mm} - \\ & HDO & 0.50 $\times$ 0.50 & $< 5.0 \times 10^{-3}$ & \hspace{5mm} $< 8.3 \times 10^{13}$ & \hspace{.6mm} $< 8.9 \times 10^{-10}$ & \hspace{.4mm} $<$ 0.75\\ & CH$_3$OH & 0.50 $\times$ 0.50 & $< 7.7 \times 10^{-3}$ & \hspace{5mm} $< 8.1 \times 10^{14}$ & $< 8.7 \times 10^{-9}$ & \hspace{.4mm} $<$ 7.4\\ TMC1-W & C$^{17}$O & 0.81 $\times$ 0.63 & 0.12 $\pm$ 0.01 & $ 3.3 \pm 0.65 \times 10^{15}$ & 
\hspace{2.8mm} $2.8 \times 10^{-8}$ & \hspace{3mm} 35\\ & H$_2$CO & 0.81 $\times$ 0.63 & 0.15 $\pm$ 0.01 & $9.5 \pm 0.66 \times 10^{13}$ & \hspace{3mm}$8.0 \times 10^{-9}$ & \hspace{4mm} - \\ & HDO & 0.50 $\times$ 0.50 & $< 5.0 \times 10^{-3}$ & \hspace{5mm} $< 8.3 \times 10^{13}$ & \hspace{.6mm} $< 6.9 \times 10^{-10}$ & \hspace{.4mm} $<$ 0.87\\ & CH$_3$OH & 0.50 $\times$ 0.50 & $< 7.7 \times 10^{-3}$ & \hspace{5mm} $< 8.1 \times 10^{14}$ & $< 6.8 \times 10^{-9}$ & \hspace{.4mm} $<$ 8.5\\ \hline \end{tabular} \begin{flushleft} \vspace{-0.3cm} \tablenotetext{a}{Area over which the flux is extracted.} \vspace{-0.2cm} \tablenotetext{b}{Integrated flux. For HDO and CH$_3$OH this is the 3$\sigma$ upper limit to the integrated flux.} \vspace{-0.2cm} \tablenotetext{c}{Column density.} \vspace{-0.2cm} \tablenotetext{d}{Column density with respect to H$_2$, where the H$_2$ column density is estimated from the continuum flux assuming a gas-to-dust ratio of 100.} \vspace{-0.2cm} \tablenotetext{e}{Column density with respect to H$_2$CO.} \end{flushleft} \end{footnotesize} \end{table*} The main caveat in determining these abundances is the assumption that the continuum and line emission are optically thin. As discussed in Sect.~\ref{sec:morphology}, there is likely an optically thick dust component, which would result in underestimates of the dust masses and overestimates of the abundances. On the other hand, optically thick dust hides molecular line emission originating below its $\tau = 1$ surface, which leads to underestimates of the abundances. Based on the results from \citet{Zhang2020}, C$^{17}$O may be optically thick in Class I disks. This would also result in underestimating the abundances. Scaling the dust temperature used in Eq.~\ref{eq2} with luminosity, as done by \citet{Tobin2020} for embedded disks in Orion, results in dust masses lower by a factor $\sim$2, and therefore slightly higher abundances. Moreover, the integrated line flux is assumed to originate solely in the disk, but as shown in Fig.~\ref{fig:PVdiagrams}, there can be envelope emission present. Finally, the H$_2$CO emission originates in the disk surface layers, which means the local abundances are higher than derived here under the assumption that the emission originates throughout the disk. To take all these effects into account, source-specific models are required. \subsection{HDO and CH$_3$OH upper limits} Water and methanol form on ice-covered dust grains and thermally desorb into the gas phase at temperatures $\sim$100--150 K. These molecules are thus expected to trace the hot region inside the water snowline. The observations cover one HDO (deuterated water) transition ($3_{1,2}-2_{2,1}$) with an upper level energy of 168 K, and 16 transitions in the CH$_3$OH $J = 5_K - 4_K$ branch with upper level energies ranging between 34 and 131~K. None of these lines are detected in any of the disks. To compare these non-detections to observations in other systems, a 3$\sigma$ upper limit is calculated for the disk-averaged total column density by substituting \begin{equation} \label{eq3} 3\sigma = 3 \times 1.1 \sqrt{\delta v\Delta V} \times \mathrm{rms}, \end{equation} for the integrated flux density, $F\Delta v$, in Eq.~\ref{eq1}. Here $\delta v$ is the velocity resolution, and $\Delta V$ is the line width expected based on other line detections. The factor 1.1 takes a 10\% calibration uncertainty into account.
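As a numerical illustration of Eq.~\ref{eq3}, the short sketch below evaluates the 3$\sigma$ integrated-flux limits for HDO and CH$_3$OH using the channel widths from Sect.~\ref{sec:Observations} and the spectral rms values quoted in the next paragraph; it is a consistency check against Table~\ref{tab:Coldens}, not part of the analysis itself.
\begin{verbatim}
# Sketch of Eq. (3): 3-sigma upper limits on the integrated flux,
# including the factor 1.1 for a 10% calibration uncertainty.
import numpy as np

def flux_limit_3sigma(rms_Jy, dv_kms, DV_kms):
    """3-sigma limit on F*Delta v in Jy km/s."""
    return 3.0 * 1.1 * np.sqrt(dv_kms * DV_kms) * rms_Jy

print(f"HDO:   {flux_limit_3sigma(2.7e-3, 0.08, 4.0):.1e} Jy km/s")
print(f"CH3OH: {flux_limit_3sigma(3.0e-3, 0.15, 4.0):.1e} Jy km/s")
# -> 5.0e-03 and 7.7e-03, matching the table upper limits
\end{verbatim}
Substituting these limits for $F\Delta v$ in Eq.~\ref{eq1} yields the column density upper limits listed in Table~\ref{tab:Coldens}.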
Assuming the water and methanol emission arises from the innermost part of the disk, the rms is calculated from the baseline of the spectrum integrated over a central 0.5$^{\prime\prime}$ diameter aperture ($\sim$ one beam) and amounts to $\sim$2.7 mJy for HDO and $\sim$3.0 mJy for CH$_3$OH. A line width of 4 km s$^{-1}$ and a rotational temperature of 100 K are adopted. A 3$\sigma$ column density upper limit of $\sim$ $8\times10^{13}$ cm$^{-2}$ is then found for HDO. This is 1--2 orders of magnitude below the column densities derived for the Class 0 sources NGC1333 IRAS2A, NGC1333 IRAS4A-NW and NGC1333 IRAS4B ($\sim 10^{15} - 10^{16}$ cm$^{-2}$; \citealt{Persson2014}) and more than 3 orders of magnitude lower than toward the Class 0 source IRAS 16293A ($\sim 5 \times 10^{17}$ cm$^{-2}$; \citealt{Persson2013}). Taking into account the larger beam size of the earlier observations ($\sim1^{\prime\prime}$) lowers the column density derived here by only a factor of $\sim$4. Furthermore, \citet{Taquet2013b} showed that the HDO observations toward NGC1333 IRAS2A and NGC1333 IRAS4A are consistent with column densities up to $10^{19}$ and $10^{18}$ cm$^{-2}$, respectively, using a grid of non-LTE large velocity gradient (LVG) radiative transfer models. For CH$_3$OH, the $5_{0,5}-4_{0,4}$ (A) transition provides the most stringent upper limit of $\sim$ $8\times10^{14}$ cm$^{-2}$. This upper limit is orders of magnitude lower than the column density toward the Class 0 source IRAS 16293 ($2 \times 10^{19}$ cm$^{-2}$ within a 70 au beam; \citealt{Jorgensen2016}) and the young disk around the outbursting star V883 Ori (disk-averaged column density of $\sim$ $1.0 \times 10^{17}$ cm$^{-2}$; \citealt{vantHoff2018c}). A similarly low upper limit ($5\times10^{14}$ cm$^{-2}$) was found for a sample of 12 Class I disks in Ophiuchus \citep{ArturdelaVillarmois2019}. However, this upper limit is not stringent enough to constrain the column down to the value observed in the TW Hya protoplanetary disk (peak column density of $3-6 \times 10^{12}$ cm$^{-2}$; \citealt{Walsh2016}) or the upper limit in the Herbig Ae disk HD 163296 (disk-averaged upper limit of $5\times 10^{11}$ cm$^{-2}$; \citealt{Carney2019}). \begin{figure*} \centering \includegraphics[width=\textwidth,trim={0cm 11.8cm 0cm 0.6cm},clip]{EdgeonDisks.pdf} \caption{Integrated intensity (moment zero) maps of the low-velocity (\textit{top row}), intermediate-velocity (\textit{middle row}), and high-velocity (\textit{bottom row}) C$^{17}$O emission in the warm (\textit{first and second columns}) and cold edge-on disk models (\textit{fourth and fifth columns}), as well as for the observations toward L1527 (\textit{third column}) and IRAS~04302 (\textit{sixth column}). The models contain either a disk and envelope (\textit{first and fourth columns}) or only a disk (\textit{second and fifth columns}). For the models, low velocities range from $-1.0$ to 1.0 km~s$^{-1}$, for intermediate velocities $|\Delta v|$ = 1.0--2.0 km~s$^{-1}$, and for high velocities $|\Delta v|$ = 2.0--4.0 km~s$^{-1}$ with respect to the source velocity. For IRAS 04302 (L1527), low velocities range from $-1.19$ to 1.09 ($-1.19$ to 1.25) km~s$^{-1}$, intermediate velocities range from $-3.56$ to $-1.19$ ($-2.42$ to $-1.19$) km~s$^{-1}$ and from 1.09 to 2.97 (1.25 to 2.39) km~s$^{-1}$, and high velocities range from $-3.56$ to $-5.28$ ($-2.42$ to $-3.97$) km~s$^{-1}$ and from 2.97 to 4.67 (2.39 to 3.13) km~s$^{-1}$ with respect to the source velocity. Only pixels with $>3\sigma$ emission are included.
The color scale is in mJy beam$^{-1}$ km s$^{-1}$. The source position is marked with a black cross and the beam is shown in the lower left corner of the panels. A 100 au scalebar is present in the \textit{bottom panels}. The V-shaped emission pattern that is visible at intermediate velocities in the cold model and the IRAS 04302 observations is indicated by white arrows.} \label{fig:M0_edgeondisks} \end{figure*} \begin{figure*} \centering \includegraphics[trim={0cm 9cm 0cm 1.5cm},clip]{Verticalcut_normalized_r05.pdf} \caption{Vertical cuts through the edge-on disks IRAS 04302 (\textit{left panels}) and L1527 (\textit{right panels}) at 0.5$^{\prime\prime}$ (\textit{top panels}), 0.8$^{\prime\prime}$ (\textit{middle panels}) and 1.3$^{\prime\prime}$ (\textit{bottom panels}) north of the continuum peak. The 1.3 mm continuum is shown in black and the integrated intensity for C$^{17}$O $J=2-1$ and H$_2$CO $3_{1,2}-2_{1,1}$ in blue and orange, respectively. The shaded area shows the 3$\sigma$ uncertainty. The largest offset is not shown for L1527 because the continuum and C$^{17}$O emission reaches the noise limit. The H$_2$CO emission is single peaked at $\sim$10 au.} \label{fig:Verticalprofiles} \end{figure*} For a better comparison with other sources, column density ratios are calculated with respect to H$_2$ and H$_2$CO, and are reported in Table~\ref{tab:Coldens}. Using the H$_2$ column density derived from the continuum flux, upper limits of $\sim 1-40 \times 10^{-10}$ are found for the HDO abundance. CH$_3$OH upper limits range between $1-40 \times 10^{-9}$. This is orders of magnitude lower than what is expected from ice observations ($10^{-6}-10^{-5}$; \citealt{Boogert2015}), and thus from thermal desorption, as observed in IRAS 16293 ($\lesssim 3 \times 10^{-6}$; \citealt{Jorgensen2016}) and V883 Ori ($\sim 4 \times 10^{-7}$; \citealt{vantHoff2018c}). Abundances for non-thermally desorbed CH$_3$OH in TW Hya are estimated to be $\sim 10^{-12}-10^{-11}$ \citep{Walsh2016}. \citet{Sakai2014b} detected faint CH$_3$OH emission (from different transitions than targeted here) toward L1527, with a CH$_3$OH/H$_2$CO ratio between 0.6 and 5.1. Our upper limit of 4.6 for L1527 is consistent with these values. CH$_3$OH/H$_2$CO ratios of 1.3 and $<$ 0.2 were derived for TW Hya and HD 163296, respectively, but our CH$_3$OH upper limit is not stringent enough to make a meaningful comparison. An assumption here is that the emitting regions of CH$_3$OH and H$_2$CO are co-spatial. As noted in Sect.~\ref{sec:morphology}, H$_2$CO seems absent in the inner disk where CH$_3$OH is expected. \begin{figure*} \centering \includegraphics[trim={0cm 16.5cm 0cm 1cm},clip]{TemperatureOverview.pdf} \caption{\textit{Left panel:} Radial midplane temperature profile for IRAS 04302 inferred from the CO and H$_2$CO snowline estimates (orange circles). The solid orange line is a power law of the shape $T \propto R^{-0.75}$. For comparison, the temperature measurements for L1527 from $^{13}$CO and C$^{18}$O emission (yellow circles) and a power law temperature profile with $T \propto R^{-0.35}$ (yellow line, with 1$\sigma$ uncertainty) are shown \citep{vantHoff2018b}, as well as the temperature profile derived for the disk-like structure in the Class~0 source IRAS 16293A (dashed red line; \citealt{vantHoff2020}), and the temperature profile for the Class II disk TW Hya (dashed blue line; \citealt{Schwarz2016}).
The TW Hya temperature profile traces a warmer layer above the midplane and the midplane CO snowline is located around $\sim$20 au \citep[e.g.,][]{vantHoff2017,Zhang2017}. The blue shaded area denotes the temperatures at which CO is frozen out. \textit{Right panel:} Temperature profiles from the \textit{left panel} overlaid with temperature profiles from embedded disk models from \citet{Harsono2015}. All three models have a stellar luminosity of 1~$L_\odot$, an envelope mass of 1~$M_\odot$, a disk mass of 0.05~$M_\odot$ and a disk radius of 200 au, but different accretion rates of $10^{-4} M_\odot$ yr$^{-1}$ (solid black line), $10^{-5} M_\odot$ yr$^{-1}$ (dashed black line) and $10^{-7} M_\odot$ yr$^{-1}$ (dotted black line), and therefore different total luminosities. } \label{fig:PowerlawT} \end{figure*} \section{Analysis} \label{sec:Analysis} \subsection{Temperature structure in the edge-on disks} For (near) edge-on disks, CO freeze-out should be readily observable, as CO emission will be missing from the outer disk midplane \citep{Dutrey2017,vantHoff2018b}. \citet{vantHoff2018b} studied the effect of CO freeze-out on the optically thick $^{13}$CO and C$^{18}$O emission in L1527. The less-abundant C$^{17}$O is expected to be optically thin and to trace mainly the disk. Here we employ the models from \citet{vantHoff2018b} to predict the C$^{17}$O emission pattern for varying degrees of CO freeze-out (see Fig.~\ref{fig:diskmodels}): a `warm' model (no CO freeze-out), an `intermediate' model (CO freeze-out in the outer disk midplane) and a `cold' model (CO freeze-out in most of the disk, except the inner part and surface layers). Briefly, in these models gaseous CO is present at a constant abundance of 10$^{-4}$ with respect to H$_2$ in the regions of the disk where $T >$ 20 K and in the envelope. For the warm model, the L1527 temperature structure from \citet{Tobin2013} is adopted, and for the intermediate and cold models the temperature is reduced by 40\% and 60\%, respectively. There is no CO freeze-out in the 125 au disk in the warm model, while the intermediate and cold models have the CO snowline at 71 and 23 au, respectively. Synthetic image cubes are generated using the radiative transfer code LIME \citep{Brinch2010}, making use of the C$^{17}$O LAMDA file \citep{Schoier2005} for the LTE calculation, and are convolved with the observed beam size. Figure~\ref{fig:M0_edgeondisks} shows moment zero maps integrated over the low, intermediate and high velocities for the warm and cold edge-on disk models. Models with and without an envelope are presented. The difference between the warm and cold model is most clearly distinguishable at intermediate velocities (Fig.~\ref{fig:M0_edgeondisks}, middle row). In the absence of an envelope, the emission becomes V-shaped in the cold model, tracing the warm surface layers where CO is not frozen out. This V-shape is not visible when there is a significant envelope contribution. Instead, the cold model differs from the warm model in that the envelope emission becomes comparable in strength to the disk emission when CO is frozen out in most of the disk. In the warm case, the disk emission dominates over the envelope emission. At low velocities (Fig.~\ref{fig:M0_edgeondisks}, top row), the difference between a warm and cold disk can be distinguished as well when an envelope is present, although in practice this will be much harder due to resolved-out emission at these central velocities.
Without an envelope, the low-velocity emission originates near the source center due to the rotation, and the models are indistinguishable, except for differences in the flux. Due to the rotation, the emission at these velocities gets projected along the minor axis of the disk (that is, east--west). At the highest velocities (Fig.~\ref{fig:M0_edgeondisks}, bottom row), the emission originates in the inner disk, north and south of the source. If CO is absent in the midplane, very high angular resolution would be required to observe this directly through a V-shaped pattern. C$^{17}$O moment zero maps integrated over different velocity intervals for IRAS 04302 and L1527 are presented in Fig.~\ref{fig:M0_edgeondisks}. The observations show no sign of CO freeze-out in L1527 and resemble the warm model (most clearly seen at intermediate velocities), consistent with previous results for C$^{18}$O and $^{13}$CO \citep{vantHoff2018b}. IRAS 04302 on the other hand displays a distinct V-shaped pattern at intermediate velocities, suggesting that CO is frozen out in the outer part of this much larger disk ($\sim$250 au, compared to 75--125 au for L1527; \citealt{Aso2017,Tobin2013,Sheehan2017}). The vertical distribution of the emission in both disks is highlighted in Fig.~\ref{fig:Verticalprofiles} with vertical cuts at different radii. In L1527, the C$^{17}$O emission peaks at the midplane throughout the disk, while for IRAS~04302 the peaks shift to layers higher up in the disk for radii $\gtrsim$110 au. A first estimate of the CO snowline location can be made based on the location of the V-shape. In the cold model, the CO snowline is located at 23 au, but due to the size of the beam, the base of the V-shape and the first occurrence of a double peak in the vertical cuts is at $\sim$55 au. In IRAS 04302, the V-shape begins at a radius of $\sim$130 au, so the CO snowline location is then estimated to be around $\sim$100 au. \begin{figure} \centering \includegraphics[width=0.5\textwidth,trim={0cm 11cm 7.5cm 1.5cm},clip]{InclinationEffect.pdf} \caption{Integrated intensity (moment zero) maps of the intermediate-velocity C$^{17}$O $J=2-1$ emission in the warm (\textit{top row}), intermediate (\textit{middle row}) and cold disk model (\textit{bottom row}). The \textit{left column} shows a near edge-on disk ($i=85^\circ$) as in Fig.~\ref{fig:M0_edgeondisks}, and the \textit{right column} shows a less-inclined disk ($i=60^\circ$). The velocity range $\Delta v$ is 1.0--1.9 km~s$^{-1}$ for $i=85^\circ$ and 1.3--1.8 km~s$^{-1}$ for $i=60^\circ$. The color scale is in mJy beam$^{-1}$ km s$^{-1}$. The source position is marked with a black cross and the beam is shown in the lower left corner of the panels. A 100 au scalebar is present in the bottom panels.} \label{fig:M0_incl-effect} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth,trim={0cm 11cm 7.5cm 1.5cm},clip]{InclinedDisks_obs.pdf} \caption{Integrated intensity (moment zero) maps of the low-velocity (\textit{top row}), intermediate-velocity (\textit{middle row}), and high-velocity (\textit{bottom row}) C$^{17}$O $J=2-1$ emission toward L1489 (\textit{left column}) and TMC1A (\textit{right column}). Only pixels with $>3\sigma$ emission are included. For TMC1A (L1489), low velocities range from $-1.27$ to 1.26 ($-0.47$ to 0.43) km~s$^{-1}$, the intermediate velocities include $|\Delta v|$ = 1.34--2.49 (0.50--3.00) km~s$^{-1}$, and the high velocities are $|\Delta v|$ = 2.57--4.94 (3.05--4.65) km~s$^{-1}$ with respect to the source velocity.
The color scale is in mJy beam$^{-1}$ km s$^{-1}$. The source position is marked with a black cross and the beam is shown in the lower left corner of the panels. A 100 au scalebar is present in the bottom panels.} \label{fig:M0_incldisks} \end{figure} A clear V-shaped pattern is also visible in the H$_2$CO integrated emission map for IRAS 04302 (Fig.~\ref{fig:M0_overview}). The V-shape starts at around 55 au ($\sim$1 beam offset from the continuum peak). If the reduction of H$_2$CO in the midplane is fully due to freeze-out, the snowline is then located around (or inward of) $\sim$25 au. In L1527, H$_2$CO emission also appears to come from surface layers, except in the outer disk (see Figs.~\ref{fig:M0_overview} and \ref{fig:Verticalprofiles}). The cold models show that CO emission from the envelope becomes comparable in strength to emission from the disk if CO is frozen out in a large part of the disk. Given that the envelope contribution is much larger in L1527 than in IRAS~04302, the emission peaking in the outer disk midplane likely originates in the envelope. Instead of a clear V-shape, the emission in the inner region forms two bright lanes along the continuum position. A similar pattern is seen in the individual channels. This suggests that the H$_2$CO snowline is unresolved at the current resolution and closer in than in IRAS 04302 ($\lesssim$ 25 au). A zeroth-order estimate of the midplane temperature profile for IRAS 04302 can be made from these two snowline estimates using a radial power law, $T \propto R^{-q}$. For disks, a power law exponent $q$ of 0.5 is often assumed, but $q$ can range between 0.33 and 0.75 \citep[see e.g.,][]{Adams1986,Kenyon1993a,Chiang1997}. A power law with $q$ = 0.75 matches the two temperature estimates reasonably well (see Fig.~\ref{fig:PowerlawT}). This temperature profile is quite similar to the profile constructed for L1527 based on $^{13}$CO and C$^{18}$O temperature measurements \citep{vantHoff2018b}. The L1527 temperature profile predicts an H$_2$CO snowline radius of $\lesssim$ 10 au, consistent with the results derived above. IRAS 04302 is thus warm like L1527, with freeze-out occurring only in the outermost part of this large disk. \begin{figure*} \centering \includegraphics[width=\textwidth,trim={0cm 0.2cm 0cm 0cm},clip]{CartoonTemperatureStructure.pdf} \caption{Schematic representation of the temperature structure derived for Class I disks based on C$^{17}$O and H$_2$CO observations. A large part of the disk midplane, or even the entire midplane, is too warm for CO to freeze out, unlike protoplanetary disks that have the CO snowline at a few tens of au. The majority of the midplane has a temperature between $\sim$20--70 K, such that CO is in the gas phase while H$_2$CO is frozen out. The C$^{17}$O emission therefore arises predominantly from the midplane region (yellow area), and the H$_2$CO emission from the surface layers (orange region). } \label{fig:CartoonTemperatureStructure} \end{figure*} \subsection{Temperature structure in less-inclined disks} For less-inclined disks, observing freeze-out directly is much harder; the projected area between the top and bottom layer becomes smaller (that is, the V-shape becomes narrower), therefore requiring higher spatial resolution to observe it.
In addition, because now both the near and the far side of the disk become visible, emission from the far side's surface layers can appear to come from the near side's midplane (see Figure~\ref{fig:incldisk}, and \citealt{Pinte2018}), which makes a V-shape due to emission originating only in the surface layers harder to observe. For the L1527 disk model, the intermediate and warm models become quite similar for an inclination of 60$^\circ$ at this angular resolution, and only a cold disk shows a clear V-shaped pattern (Fig.~\ref{fig:M0_incl-effect}). Figure~\ref{fig:M0_incldisks} shows the C$^{17}$O moment zero maps for the moderately inclined disks TMC1A and L1489. The disk size, stellar mass and stellar luminosity of TMC1A are comparable to those of L1527. At intermediate velocities there is no sign of a V-shaped pattern, so these observations do not suggest substantial freeze-out of CO in TMC1A. In order to constrain the CO snowline a little better, models were run with snowline locations of 31, 42 and 56 au (that is, in between the cold and intermediate models). All three models show a V-shape that is not seen in the observations, suggesting that the CO snowline is at radii $\gtrsim$70 au in TMC1A. This is consistent with the results from \citet{Aso2015}, who found a temperature of 38 K at 100 au from fitting a disk model to ALMA C$^{18}$O observations, and the results from Harsono et al. (submitted), who find a temperature of 20 K at 115 au. There is no sign of a V-shaped pattern in the H$_2$CO emission. For L1489, the intermediate velocities show a more complex pattern, with CO peaking close to the source and at larger offsets ($\gtrsim2^{\prime\prime}$). A similar structure was seen in C$^{18}$O \citep{Sai2020}. This could be the result of non-thermal desorption of CO ice in the outer disk if the dust column is low enough for UV photons to penetrate \citep{Cleeves2016}, or due to a radial temperature inversion resulting from radial drift and dust settling \citep{Facchini2017}. Such a double CO snowline has been observed for the protoplanetary disk IM Lup \citep{Oberg2015,Cleeves2016}. The structure of the continuum emission, with a bright central part and a fainter outer part, makes these ideas plausible. Another possibility is that the extended emission is due to a warm inner envelope component. The UV-irradiated mass of L1489 derived from $^{13}$CO 6--5 emission is similar to that of L1527 and higher than for TMC1A and TMC1 \citep{Yildiz2015}. This may provide a sufficient column along the outflow cavity wall for C$^{17}$O emission to be observed. A high level of UV radiation is supported by O and H$_2$O line fluxes \citep{Karska2018}. If the edge of the compact CO emission is due to freeze-out, the CO snowline is located at roughly 200 au. Models based on the continuum emission have temperatures of $\sim$30 K or $\sim$20--30 K at 200 au (\citealt{Brinch2007,Sai2020}, respectively), so CO could indeed be frozen out in this region. The H$_2$CO emission does not show a gap at 200 au, which could mean that the emission is coming from the surface layers. The C$^{17}$O (and C$^{18}$O) abundance in these warmer surface layers may then be too low to be detected at the sensitivity of these observations. \section{Discussion} \label{sec:Discussion} \subsection{Temperature structure of young disks} We have used observations of C$^{17}$O and H$_2$CO toward five Class I disks in Taurus to address whether embedded disks are warmer than more evolved Class II disks.
While the C$^{17}$O observations can indicate the presence or absence of $\lesssim$20 K gas, the addition of H$_2$CO observations allows us to further constrain the temperature profile. The picture that is emerging suggests that these young disks have midplanes with temperatures between $\sim$20 and $\sim$70 K; cold enough for H$_2$CO to freeze out, but warm enough to retain CO in the gas phase (Fig.~\ref{fig:CartoonTemperatureStructure}). This suggests that, for example, the elemental C/O ratio in both the gas and ice could be different from that in protoplanetary disks. If planet formation starts during the embedded phase, the conditions for the first steps of grain growth are thus different from what is generally assumed. Young disks being warmer than protoplanetary disks can also have consequences for disk masses derived from continuum fluxes. This has been taken into consideration in recent literature by adopting a dust temperature of 30 K for solar-luminosity protostars \citep{Tobin2015a,Tobin2016b,Tychoniec2018,Tychoniec2020}, although not uniformly \citep[e.g.,][]{Williams2019,Andersen2019}, while 20 K is generally assumed for protoplanetary disks \citep[e.g.,][]{Ansdell2016}. In their study of Orion protostars, \citet{Tobin2020} take this one step further by scaling the temperature with luminosity based on a grid of radiative transfer models, resulting in an average temperature of 43 K for a 1 $L_{\odot}$ protostar. Since higher temperatures result in lower masses for a given continuum flux, detailed knowledge of the average disk temperature is crucial to determine the mass reservoir available for planet formation. While the current study shows that embedded disks are warmer than protoplanetary disks, and the radial temperature profiles for L1527 and IRAS 04302 hint that 30 K may be too low for the average disk temperature, source-specific modeling of the continuum and molecular line emission is required to address what would be an appropriate temperature to adopt for the mass derivation. However, an increase in temperature by a factor of two will lower the mass by only a factor of $\sim$2 (see Eq.~\ref{eq2}), and \citet{Tobin2020} still find embedded disks to be more massive than protoplanetary disks by a factor $>4$. Differences in temperature can thus not account for the mass difference observed between embedded and protoplanetary disks. \subsubsection{The textbook example of IRAS 04302} The C$^{17}$O and H$_2$CO emission toward IRAS 04302 present a textbook example of what one would expect to observe for an edge-on disk, that is, a direct view of the vertical structure. The C$^{17}$O emission is confined to the midplane, while H$_2$CO is tracing the surface layers. Assuming the absence of H$_2$CO emission in the midplane is due to freeze-out, we can make a first estimate not only of the radial temperature profile but also of the vertical temperature structure. At the current spatial resolution, the vertical structure is spatially resolved for radii $\gtrsim70$ au, that is, $\sim$3 beams across the disk height. At these radii, the H$_2$CO emission peaks $\sim$30--50 au above the midplane (at radii of 70 and 180 au, respectively), suggesting that the temperature is between $\sim$20--70 K in the $\sim$30 au above the midplane. The temperature structure can be further constrained by observing molecules with a freeze-out temperature between that of CO and H$_2$CO, that is, between $\sim$20--70 K.
Based on the UMIST database for astrochemistry \citep{McElroy2013}, examples of such molecules are CN, CS, HCN, C$_2$H, SO and H$_2$CS (in increasing order of freeze-out temperature). Another option would be to observe several H$_2$CO lines, because their line ratios are a good indicator of the temperature \citep[e.g.,][]{Mangum1993}. These observations thus confirm that edge-on disks are well-suited to study the disk vertical structure through molecular line observations. \subsubsection{Comparison with protostellar envelopes and protoplanetary disks} No sign of CO freeze-out is detected in the C$^{17}$O observations of L1527, and while freeze-out is much more difficult to see in non-edge-on disks, TMC1A does not show hints of freeze-out at radii smaller than $\sim$70 au. A first estimate puts the CO snowline at $\sim$100 au in IRAS~04302, and the CO snowline may be located around $\sim$200 au in L1489. These young disks are thus warmer than T Tauri disks, where the snowline is typically at a few tens of au, as can be seen in Fig.~\ref{fig:COsnowlineLocations}. We only include Class II disks for which a CO snowline location has been reported based on molecular line observations, either $^{13}$C$^{18}$O (for TW Hya; \citealt{Zhang2017}) or N$_2$H$^+$ \citep{Qi2019}. There is no clear trend between CO snowline location and bolometric luminosity for either class, but the Class I disks have CO snowlines at larger radii compared to Class II disks with similar bolometric luminosities. \begin{figure} \centering \includegraphics[width=0.5\textwidth,trim={0cm 16.3cm 7.5cm .8cm},clip]{COsnowlineLocations.pdf} \caption{Overview of CO snowline locations in disks derived from molecular line observations as a function of bolometric luminosity. The locations for Class I disks (orange) are derived in this work using the C$^{17}$O emission. Class II T Tauri disks are shown in blue. For TW Hya, the CO snowline location is determined from $^{13}$C$^{18}$O emission by \citet{Zhang2017}. For the other Class II disks, the CO snowline is derived from N$_2$H$^+$ emission by \citet{Qi2019}. Arrows denote upper and lower limits. } \label{fig:COsnowlineLocations} \end{figure} In protostellar envelopes, snowline radii larger than expected based on the luminosity have been interpreted as a sign of a recent accretion burst \citep{Jorgensen2015,Frimann2017,Hsieh2019}. During such a period of increased accretion, the circumstellar material heats up, shifting the snowlines outward. Once the protostar returns to its quiescent stage, the temperature adjusts almost instantaneously, while the chemistry takes longer to react. During this phase the snowlines are at larger radii than expected from the luminosity. The results in Fig.~\ref{fig:COsnowlineLocations} could thus indicate that small accretion bursts have occurred in the Class I systems and that the CO snowlines have not yet shifted back to their quiescent location. How recently such a burst must have occurred depends on the freeze-out timescale, $\tau_{\rm{fr}}$: \begin{equation} \tau_{\rm{fr}} = 1 \times 10^4 \hspace{1mm} \rm{yr} \sqrt{\frac{10 \hspace{1mm} \rm{K}}{T_{\rm{fr}}}} \frac{10^6 \hspace{1mm} \rm{cm}^{-3}}{n_{\rm{H_2}}}, \end{equation} where $T_{\rm{fr}}$ is the freeze-out temperature and $n_{\rm{H_2}}$ is the gas density \citep{Visser2012}. For densities of $\gtrsim 10^8$ cm$^{-3}$, the CO freeze-out timescale is $\lesssim$ 100 yr. This could suggest that Class I protostars frequently undergo small accretion bursts.
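A quick evaluation of this timescale illustrates the numbers quoted above; $T_{\rm fr} = 20$ K is adopted here as a representative value for CO (an assumption made for this sketch only).
\begin{verbatim}
# Freeze-out timescale of Visser & Bergin (2012), as given above.
# T_fr = 20 K is adopted as a representative CO value (assumption).
import numpy as np

def tau_freeze_yr(T_fr_K, n_H2_cm3):
    """Freeze-out timescale in years."""
    return 1e4 * np.sqrt(10.0 / T_fr_K) * (1e6 / n_H2_cm3)

for n in (1e6, 1e8):
    print(f"n_H2 = {n:.0e} cm^-3: tau_fr ~ {tau_freeze_yr(20.0, n):.0f} yr")
# -> ~7100 yr at 1e6 cm^-3 and ~70 yr (i.e. <~100 yr) at 1e8 cm^-3
\end{verbatim}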
Alternatively, these young disks may have lower densities than more evolved disks. As shown by the model results from Murillo et al. (in preparation), decreasing the density while keeping the luminosity constant shifts the snowlines outward. If this is what is causing the results in Fig.~\ref{fig:COsnowlineLocations}, this means that embedded disks not only have different temperature structures from protoplanetary disks, but also different density structures. However, the larger disk masses derived for embedded disks compared to protoplanetary disks with similar disk radii make this unlikely \citep{Tobin2020}. Another comparison is made in Fig.~\ref{fig:PowerlawT}, where the radial temperature profiles inferred for L1527 and IRAS~04302 are shown together with those for the younger Class 0 disk-like structure around IRAS~16293A \citep{vantHoff2020} and the Class II disk TW Hya \citep{Schwarz2016}. The young disks are warmer than the more evolved Class~II disk, but much colder than the Class~0 system IRAS~16293A. When making this comparison one should keep in mind that IRAS~16293A reflects an envelope, where the temperature will be higher at larger scales because of the spherical rather than disk-like structure. In a disk the temperature will drop more rapidly in the radial direction due to the higher extinction compared to an envelope. Nevertheless, such an evolutionary trend is expected because the accretion rate decreases as the envelope and disk dissipate. As a consequence, heating due to viscous accretion diminishes and hence the temperature drops, as shown by two-dimensional physical and radiative transfer models for embedded protostars \citep{D'Alessio1997,Harsono2015}. In addition, the blanketing effect of the envelope decreases as the envelope dissipates \citep{Whitney2003}. As a first comparison between the observations and model predictions, models from \citet{Harsono2015} are overlaid on the observationally inferred temperature profiles in Fig.~\ref{fig:PowerlawT} (right panel). In these models the dust temperature is determined based on stellar irradiation and viscous accretion. Models are shown for a stellar luminosity of 1 $L_\odot$, an envelope mass of 1 $M_\odot$, a disk mass of 0.05 $M_\odot$, a disk radius of 200 au and different accretion rates. The disk mass has a negligible effect on the temperature profiles (see \citealt{Harsono2015} for details). IRAS 16293A matches reasonably well with the temperature profile for a heavily accreting system ($10^{-4} M_\odot$ yr$^{-1}$), consistent with estimates of the accretion rate (e.g., \mbox{$\sim 5 \times 10^{-5} M_\odot$ yr$^{-1}$;} \citealt{Schoier2002}). However, as in these models the total luminosity is based on the stellar luminosity and the accretion luminosity (and a contribution from the disk), the match for IRAS 16293A with a strongly accreting model may just reflect the system's bolometric luminosity of 20 $L_\odot$. In contrast, the temperature profiles for L1527 and IRAS 04302 are comparable to the colder \mbox{$10^{-7} M_\odot$ yr$^{-1}$} model, consistent with the accretion rate of $\sim 3 \times 10^{-7} M_\odot$ yr$^{-1}$ for L1527 (see \citealt{vantHoff2018b}). Similar accretion rates on the order of $10^{-7} M_\odot$ yr$^{-1}$ have been reported for L1489, TMC1A and TMC1 \citep[e.g.,][]{Mottram2017,Yen2017} based on the bolometric luminosities \citep[see e.g.,][]{Stahler1980,Palla1993}. We are not aware of a measurement toward IRAS 04302, but our very preliminary modeling results (van 't Hoff et al. in prep.)
are consistent with an accretion rate on the order of $10^{-7} M_\odot$ yr$^{-1}$. Measured accretion rates for TW Hya range between $\sim$ $2 \times 10^{-10} - 2 \times 10^{-9} M_\odot$ yr$^{-1}$ \citep[e.g.,][]{Herczeg2008,Curran2011,Ingleby2013}, and accretion rates of $\sim$ $10^{-10} - 10^{-8} M_\odot$ yr$^{-1}$ are typically measured for protoplanetary disks around T Tauri stars (see \citealt{Hartmann2016} for a review). The results presented here thus provide observational evidence for cooling of the circumstellar material during evolution. More sources need to be observed to confirm this trend and to answer more detailed questions such as: when has a disk cooled down sufficiently for large-scale CO freeze-out? Does this already happen before the envelope dissipates? IRAS 04302 is a borderline Class I/Class II object embedded in the last remnants of its envelope, but still has a temperature profile more similar to that of L1527 than that of TW Hya. Although a caveat here may be the old age of TW Hya ($\sim$10 Myr), this hints that disks may stay warm until the envelope has fully dissipated. \subsubsection{TMC1} TMC1 is resolved for the first time into a close ($\sim$85~au) binary. A possible configuration of the system could be that TMC1-E is located in the disk of TMC1-W, as for example observed for L1448 IRS3B \citep{Tobin2016a}. TMC1-E would then increase the temperature in the east side of the disk. This may be an explanation for the asymmetry in the C$^{17}$O emission, with the emission being dimmer east of TMC1-W (see Figs.~\ref{fig:Radprofiles}, \ref{fig:M1_overview} and \ref{fig:Spectra}). Given the upper level energy of 16 K, emission from the C$^{17}$O $J=2-1$ transition will decrease with temperatures increasing above $\sim$25~K. The weaker C$^{17}$O emission may thus signal a higher temperature in the east side of the disk. However, TMC1-E does not seem to cause any disturbances in the disk, such as spiral arms, although the high inclination may make this hard to see. Another possibility could be that TMC1-E is actually in front of the disk. \subsection{Chemical complexity in young disks} One of the major questions regarding the chemical composition of planetary material is whether it contains complex organic molecules (COMs). Due to the low temperatures in protoplanetary disks, observations of COMs are very challenging because these molecules thermally desorb at temperatures $\gtrsim$100--150 K, that is, in the inner few au. In contrast, COMs are readily detected on disk scales in protostellar envelopes (e.g., IRAS 16293, NGC1333 IRAS2A, NGC1333 IRAS4A, and B1-c; \citealt{Jorgensen2016,Taquet2015,vanGelder2020}) and in the young disk V883 Ori, where a luminosity outburst has heated the disk and liberated the COMs from the ice mantles \citep{vantHoff2018c,Lee2019}. Although young disks seem warmer than protoplanetary disks, the CH$_3$OH and HDO non-detections, with upper limits orders of magnitude below the column densities observed toward Class 0 protostellar envelopes, suggest that they are not warm enough to have a hot-core-like region with a large gas reservoir of COMs. This is consistent with recent findings by \citet{ArturdelaVillarmois2019} for a sample of Class I protostars in Ophiuchus. More stringent upper limits are required for comparison with the Class II disks TW Hya and HD 163296. However, the detection of HDO and CH$_3$OH may have been hindered by optically thick dust in the inner region, or by the high inclinations of these sources. Modeling by Murillo et al.
(in preparation) shows that the water snowline is very hard to detect in near edge-on disks. These non-detections thus do not rule out the presence of HDO and CH$_3$OH; in fact, if the region where HDO and CH$_3$OH are present is much smaller than the beam, they may have higher columns than the upper limits derived here. This is corroborated by the weak detection of CH$_3$OH in L1527 \citep{Sakai2014b}. These results thus merely show that Class I disks do not have an extended hot-core-like region, making the detection of COMs just as challenging as in Class II disks. A question related to the chemical composition is whether the disk material is directly inherited from the cloud, processed en route to the disk, or even fully reset upon entering the disk. Young disks like L1527, where no CO freeze-out is observed, suggest that no full inheritance takes place, at least not for the most volatile species like CO. Ice in the outer disk of IRAS 04302 could be inherited. However, the freeze-out timescale for densities $> 10^6$ cm$^{-3}$ is $< 10^4$ yr, so this CO could have sublimated upon entering the disk and frozen out again as the disk cooled (see e.g., \citealt{Visser2009a}). Without CO ice, additional grain-surface formation of COMs will be limited in the young disks. So if COMs are present in more evolved disks, as for example shown for V883 Ori, they must have been inherited from a colder pre-collapse phase. Physicochemical models show that prestellar methanol can indeed be incorporated into the disk \citep{Drozdovskaya2014}. \subsection{Decrease in H$_2$CO in the inner disk} While the H$_2$CO emission is brighter than the C$^{17}$O emission at intermediate velocities, no H$_2$CO emission is detected at the highest velocities in IRAS 04302, L1527 and TMC1A, suggesting a reduction in H$_2$CO flux in the inner $\lesssim$20--30 au in these disks. This is not just a sensitivity issue: for example, C$^{17}$O and H$_2$CO have similar strength and emitting area in channels around +1.9 km s$^{-1}$ with respect to the source velocity in L1527, while 3.05 km s$^{-1}$ is the highest velocity observed for C$^{17}$O and 2.60 km s$^{-1}$ the highest velocity for H$_2$CO. The decrease in H$_2$CO emission is also unlikely to be due to the continuum being optically thick, because this would affect the C$^{17}$O emission as well, unless there is significantly more C$^{17}$O emission than H$_2$CO emission coming from layers above the millimeter dust $\tau$ = 1 surface. Given the observed distributions, with H$_2$CO being vertically more extended than C$^{17}$O, this seems not to be the case. Moreover, the drop in H$_2$CO in TMC1A occurs much further out than where the dust becomes optically thick. Formaldehyde rings have also been observed in the protoplanetary disks around TW Hya \citep{Oberg2017}, HD 163296 \citep{Qi2013b,Carney2017}, DM Tau \citep{Henning2008,Loomis2015} and DG Tau \citep{Podio2019}. Interestingly, a ring is only observed for the $3_{0,3}-2_{0,2}$ and $3_{1,2}-2_{1,1}$ transitions and not for the $5_{1,5}-4_{1,4}$ transition. \citet{Oberg2017} argue that the dust opacity cannot be the major contributor in TW Hya, because the dust opacity should be higher at higher frequencies, thus for the $5_{1,5}-4_{1,4}$ transition. Instead, they suggest a warm inner component that is visible in the $5_{1,5}-4_{1,4}$ transition ($E_{\mathrm{up}}$ = 63 K) and not in the $3_{1,2}-2_{1,1}$ transition ($E_{\mathrm{up}}$ = 33 K).
For L1527, we observe the $3_{12}-2_{11}$ transition ($E_{\mathrm{up}}$ = 33 K), and radiative transfer modeling for the L1527 warm disk model shows that both the C$^{17}$O and H$_2$CO emission go down by a factor of $\sim$2 if the temperature is increased by 80\%. An excitation effect thus seems unlikely, unless the C$^{17}$O emission is optically thick. The latter is not expected given that the C$^{18}$O in L1527 is only marginally optically thick \citep{vantHoff2018b}. The absence of H$_2$CO emission in the inner disk thus points to a reduced H$_2$CO abundance. A lower total (gas + ice) H$_2$CO abundance (by more than an order of magnitude) in the inner 30 au is seen in models by \citet{Visser2011}, who studied the chemical evolution from pre-stellar core into disk, but these authors do not discuss the H$_2$CO chemistry. The H$_2$CO abundance in the inner disk can be low if its formation is inefficient. H$_2$CO can form both in the gas and in the ice \citep[e.g.,][]{Willacy2009,Walsh2014,Loomis2015}. On the grain surfaces, the dominant formation route is through hydrogenation of CO \citep{Watanabe2002,Cuppen2009,Fuchs2009}. Since there seems to be no CO freeze-out in these young disks, or only at radii $\gtrsim$ 100 au, H$_2$CO is expected to form predominantly in the gas. Ring-shaped H$_2$CO emission due to increased ice formation outside the CO snowline, as used to explain the ring observed in HD 163296 \citep{Qi2013b}, is thus not applicable to the disks in this sample. In the gas, the reaction between CH$_3$ and O is the most efficient way to form H$_2$CO \citep[e.g.,][]{Loomis2015}. Therefore, a decrease in gas-phase H$_2$CO formation would require a low abundance of either CH$_3$ or O. CH$_3$ is efficiently produced by photodissociation of CH$_4$ or through ion-molecule reactions. A low CH$_3$ abundance thus requires the majority of carbon to be present in CO, in combination with a low X-ray flux, as carbon can only be liberated from CO by X-ray-generated He$^+$. Atomic oxygen is formed through photodissociation of H$_2$O and CO$_2$, or through dissociation of CO via X-ray-generated He$^+$. A low atomic oxygen abundance would thus require a low UV and X-ray flux. Besides a low formation rate, a high destruction rate would also decrease the amount of H$_2$CO. However, the destruction products have a limited chemistry, and re-creation of H$_2$CO is the most likely outcome. \citet{Willacy2009} showed that a third of the ions formed by H$_2$CO destruction through HCO$^+$ and DCO$^+$ form CO instead of reforming H$_2$CO, leading to a depletion between 7 and 20 au for their disk model. However, this only reduces H$_2$CO in the midplane, not in the surface layers. In addition, \citet{Henning2008} suggested the conversion of CO into CO$_2$-containing molecules and hydrocarbons that freeze out onto dust grains (see also \citealt{Aikawa1999}). However, the C$^{17}$O observations do not suggest heavy CO depletion. Another effect that could contribute is photodesorption of methanol ice that is inherited from earlier phases. Laboratory experiments have shown that methanol does not desorb intact upon VUV (vacuum ultraviolet) irradiation, but rather leads to the release of smaller photofragments including H$_2$CO \citep{Bertin2016,Cruz-Diaz2016}. This could lead to an increase of H$_2$CO outside the region where CH$_3$OH ice thermally desorbs ($\sim100-150$ K). Finally, turbulence may play a role, as models by \citet{Furuya2014} show the formation of H$_2$CO rings when mixing is included. 
However, these rings are due to a decrease of H$_2$CO inside the CO snowline and an increase outside this snowline, and these results may not be applicable to embedded disks without CO freeze-out. Observations of higher-excitation H$_2$CO lines and chemical modeling with source-specific structures may provide further insights. It is worth noting that \citet{Pegues2020} found both centrally-peaked and centrally-depressed H$_2$CO emission profiles for a sample of 15 protoplanetary disks. A reduction of H$_2$CO emission toward three out of the five disks in our sample could mean that the H$_2$CO distribution is set during the embedded stage. \section{Conclusions} \label{sec:Conclusions} Temperature plays a key role in the physical and chemical evolution of circumstellar disks, and therefore in the outcome of planet formation. However, the temperature structure of young embedded disks, in which the first steps of planet formation take place, is poorly constrained. Our previous analysis of $^{13}$CO and C$^{18}$O emission in the young disk L1527 suggests that this disk is warm enough ($T \gtrsim$ 20--25 K) to prevent CO freeze-out \citep{vantHoff2018b}, in contrast to protoplanetary disks, which show large cold outer regions where CO is frozen out. Here we present ALMA observations of C$^{17}$O and H$_2$CO, and non-detections of HDO and CH$_3$OH, for five young disks in Taurus, including L1527. The observations of L1527 and in particular IRAS~04302, with C$^{17}$O emission originating in the midplane and H$_2$CO emission tracing the surface layers, highlight the potential of edge-on disks to study the disk vertical structure. Based on the following results, we conclude that young disks are likely warmer than more evolved protoplanetary disks, but not warm enough to have a large gas reservoir of complex molecules such as seen in the young disk around the outbursting star V883 Ori: \begin{itemize} \item CO freeze-out can be observed directly with C$^{17}$O observations in edge-on disks. L1527 shows no sign of CO freeze-out, but IRAS 04302 has a large enough disk for the temperature to drop below the CO freeze-out temperature in the outermost part (radii $\gtrsim$100 au). \item H$_2$CO emission originates primarily in the surface layers of IRAS~04302 and L1527. The H$_2$CO snowline ($T \sim$70 K) is estimated around (or inward of) $\sim$25 au in IRAS~04302 and at $\lesssim$25 au in L1527. \item CO freeze-out is much more difficult to observe in non-edge-on disks, but the C$^{17}$O emission in TMC1A suggests a snowline at radii $\gtrsim$ 70 au. Two spatial components are seen in the C$^{17}$O emission toward L1489. If the outer edge of the inner component is due to CO freeze-out, the snowline would be around $\sim$200 au. \item The CO snowline locations derived for the Class I disks are farther out than found for Class II disks with similar bolometric luminosities. \item The HDO and CH$_3$OH non-detections, with upper limits more than two orders of magnitude lower than observed for hot cores in protostellar envelopes or for the disk around the outbursting star V883 Ori, suggest that these Class I disks do not have a large gas reservoir of COMs. \item The inferred temperature profiles are consistent with trends found in radiative transfer models of disk-envelope systems with accretion rates decreasing from $10^{-4}$ to $10^{-7} M_\odot$ yr$^{-1}$. 
\end{itemize} \noindent As evidence is piling up that planet formation starts already during the embedded phase, adopting initial conditions based on the physical conditions in more evolved Class II disks does not seem appropriate. Instead, planet formation may start in warmer conditions than generally assumed. Furthermore, without a large CO-ice reservoir, COM formation efficiency is limited in embedded disks. Observations of COMs in more evolved disks therefore suggest that these molecules are inherited from earlier phases. \acknowledgements We would like to thank the referee for a prompt and positive report that helped improve the paper, Patrick Sheehan for his assistance with the visibility plotting, and Gleb Fedoseev for useful discussions about the H$_2$CO freeze-out temperature. M.L.R.H. would like to thank Yuri Aikawa for comments on an earlier version of this manuscript for her PhD thesis. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2017.1.01413.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA). M.L.R.H. acknowledges support from a Huygens fellowship from Leiden University. J.J.T. acknowledges support from grant AST-1814762 from the National Science Foundation and past support from the Homer L. Dodge Endowed Chair at the University of Oklahoma. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. J.K.J. acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme through ERC Consolidator Grant ``S4F'' (grant agreement No~646908). A.M. acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 823823 (RISE DUSTBUSTERS) and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Ref no. FOR 2634/1 ER685/11-1. C.W. acknowledges financial support from the University of Leeds and from the Science and Technology Facilities Council (grant numbers ST/R000549/1 and ST/T000287/1).
\section{Introduction} Hopf orders allow one to introduce a number-theoretic perspective in the study of semisimple Hopf algebras. For cocommutative Hopf algebras (group algebras), a rich theory on Hopf orders emerged from Larson's seminal papers \cite{L1} and \cite{L2}; see, for example, \cite{Ch2} and \cite{U} and the references therein. The theory of quantum groups emphasizes Hopf algebras that are neither commutative nor cocommutative. In contrast, little is known about Hopf orders in this case. \par \vspace{2pt} Our first step was taken in \cite{CM1}, where we found that, unlike group algebras, complex semisimple Hopf algebras may not admit Hopf orders over number rings. This striking phenomenon was further explored in \cite{CM3} and \cite{CCM}. The semisimple Hopf algebras analyzed in these works are twisted group algebras of non-abelian groups; i.e., group algebras in which the coalgebra structure is modified in Movshev's way. It turns out that they are all simple Hopf algebras. We asked in \cite{CM3} whether the simplicity is indeed the reason for the non-existence of integral Hopf orders. \par \vspace{2pt} In this paper, we change the previous viewpoint and focus on when integral Hopf orders in twisted group algebras do exist, how to construct them, and their uniqueness. Our inspiration is the example of a Hopf order that we constructed in \cite[Proposition 4.1]{CM3} for a twisted group algebra on the symmetric group $S_4$. Most of this paper is devoted to fitting this construction into a general group-theoretical framework. \subsection{Results and method of proofs} Let $K$ be a number field with ring of integers $R$. Let $G$ be a finite group and $M$ an abelian\vspace{-0.5pt} subgroup of $G$. Suppose that $K$ is large enough so that $K\hspace{-0.9pt}M$ splits as an algebra. Let $\widehat{M}$ be the character\vspace{1.25pt} group of $M$. Take the set $\{e_{\phi}\}_{\phi \in \widehat{M}}$ of orthogonal primitive idempotents in $K\hspace{-0.9pt}M$ giving the Wedderburn\vspace{-3pt} decomposition.\vspace{-0.6pt} Recall that $e_{\phi} := \frac{1}{\vert M \vert} \sum_{m \in M} \phi(m^{-1}) m.$ If $\omega: \widehat{M} \times \widehat{M} \rightarrow K^{\times}$ is a (normalized) 2-cocycle, then $$J:=\sum_{\phi,\psi \in \widehat{M}} \omega(\phi,\psi) e_{\phi} \otimes e_{\psi}$$ is a twist for the group algebra $K\hspace{-0.9pt}G$ (see Subsections \ref{Sc:drt} and \ref{Sc:mv}). This algebra can be endowed with a new Hopf algebra structure, denoted by $(K\hspace{-0.9pt}G)_J$, as follows. The algebra structure remains unchanged. The coproduct, counit, and antipode are defined from those of $K\hspace{-0.9pt}G$ in the following way: $$\Delta_{J}(g)= J\Delta(g) J^{-1}, \hspace{0.7cm} \varepsilon_J(g)=\varepsilon(g), \hspace{0.7cm} S_J(g)=U_J\hspace{1pt}S(g)\hspace{0.5pt}U_J^{-1}, \hspace{0.7cm} \forall g \in G.$$ Here, $U_J:= \sum_{\phi \in \widehat{M}} \hspace{1pt}\omega(\phi,\phi^{-1}) e_{\phi}.$ \par \vspace{2pt} Our first result stems from the following observation. The cocycle $\omega$ is cohomologous to one with values in roots of unity. We can replace $J$ by a cohomologous twist and extend $K$, if necessary, to ensure that $\omega$ takes values in $R$. Suppose that $M$ is contained in a normal abelian subgroup $N$ of $G$. Extend $K$ again, if necessary, to ensure that $K\hspace{-0.9pt}N$ splits as an algebra. Since $K\hspace{-0.9pt}N$ is commutative, every idempotent of $K\hspace{-0.9pt}M$ is a sum of idempotents of $K\hspace{-0.9pt}N$. 
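This last fact can be checked in a toy case. The following minimal sketch (in Python; purely illustrative and not part of the arguments of this paper) takes $M=\langle g^2\rangle$ inside $N=\langle g\rangle$ cyclic of order $4$ over $\mathbb{C}$, and verifies that each primitive idempotent of $K\hspace{-0.9pt}M$ is the sum of the primitive idempotents of $K\hspace{-0.9pt}N$ whose characters restrict to the given character of $M$:

\begin{verbatim}
import numpy as np

# Elements of C[N], N = <g> cyclic of order 4, as coefficient vectors
# on g^0, ..., g^3. The characters are nu_k(g) = i^k, so that
# e^N_k = (1/4) sum_s i^{-ks} g^s, and e^M_j = (1/2)(1 + (-1)^j g^2).
eN = [sum(1j ** (-k * s) * np.eye(4)[s] for s in range(4)) / 4
      for k in range(4)]
eM = [(np.eye(4)[0] + (-1) ** j * np.eye(4)[2]) / 2 for j in range(2)]

for j in range(2):
    # nu_k restricts to phi_j on M = <g^2> exactly when k = j (mod 2)
    assert np.allclose(eM[j], sum(eN[k] for k in range(4) if k % 2 == j))
print("idempotents of KM are sums of idempotents of KN")
\end{verbatim}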
Take the full set $\{e^N_{\nu}\}_{\nu \in \widehat{N}}$ of orthogonal primitive idempotents in $K\hspace{-0.9pt}N$ (the superscript $N$ is placed to distinguish\vspace{-0.5pt} these idempotents in different group algebras). Since $N$ is normal,\vspace{1.5pt} $G$ acts on $\widehat{N}$ by $(g \triangleright \nu)(n)=\nu(g^{-1}ng)$ for all $n \in N$. The following rule holds\vspace{-1.5pt} in $K\hspace{-0.8pt} G$: $(e_{\nu}^Ng)(e_{\nu'}^Ng')=e_{\nu}^Ne^N_{g \hspace{0.65pt}\triangleright\hspace{0.65pt} \nu'}gg'$. Thus, the $R$-subalgebra $X$ of $K\hspace{-0.8pt} G$ generated by the set $\{e_{\nu}^Ng: \nu \in \widehat{N}, g\in G\}$ is finitely\vspace{1pt} generated as an $R$-module. One can see\vspace{1pt} that $X$ is a Hopf order of $K\hspace{-0.8pt} G$ by using the formulas:\linebreak $\Delta(e^N_{\nu})=\sum_{\eta \in \widehat{N}} e^N_{\eta} \otimes e^N_{\eta^{-1}\nu}, \hspace{3pt} \varepsilon(e^N_{\nu})=\delta_{\nu,1},$ and $S(e^N_{\nu})=e^N_{\nu^{-1}}$. The idempotents of $K\hspace{-0.9pt}M$ belong\vspace{1pt} to $X$. Hence, $J^{ \pm 1} \in X \otimes_R X$. This implies that $\Delta_J(X) \subseteq X \otimes_R X$, $\varepsilon_J(X) \subseteq R,$ and $S_J(X) \subseteq X$. So, $X$ is a Hopf order of $(K\hspace{-0.9pt}G)_J$ over $R$. \par \vspace{2pt} The previous idea\vspace{-0.25pt} can be refined through the concept of a Lagrangian decomposition. Consider the\vspace{-0.25pt} skew-symmetric pairing \hspace{-1pt}$\B_{\omega}:\widehat{M} \times \widehat{M} \rightarrow K^{\times}$ associated to $\omega$. It is defined as $\B_{\omega}(\phi,\psi) = \omega(\phi,\psi)\omega(\psi,\phi)^{-1}$ for all $\phi,\psi \in \widehat{M}.$ Assume\vspace{1pt} that $\B_{\omega}$ is non-degenerate (this requires $M$ to be of central type). We alternatively\vspace{-1.5pt} say that $\omega$ is non-degenerate. A subgroup $L$ of $\widehat{M}$ is called Lagrangian if $L=L^{\perp}$, where $\perp$ is taken with respect to $\B_{\omega}$. A Lagrangian subgroup produces a short exact sequence $1 \rightarrow L \rightarrow \widehat{M} \rightarrow \widehat{L} \rightarrow 1.$ If it splits, then $\widehat{M} \simeq L\times \widehat{L}$. Such\vspace{-0.5pt} a decomposition is called a Lagrangian decomposition of $\widehat{M}$. A Lagrangian decomposition\vspace{-0.25pt} always exists and has the following property: writing\vspace{1pt} every element of $\widehat{M}$ as a pair $(l,\lambda)$, with $l \in L,\lambda \in \widehat{L}$, the cocycle $\omega$ is (up to coboundary) given\vspace{1pt} by $\omega((l,\lambda),(l',\lambda')) = \lambda(l')$. Thanks to this property, we can show in Lemma \ref{lemJ} that $J$ and $J^{-1}$ can be expressed as: $$J^{\pm 1} \hspace{1pt}=\hspace{1pt} \sum_{\lambda \in \widehat{L}} e_{\lambda}^{L} \otimes \lambda^{\pm 1} \hspace{1pt}=\hspace{1pt} \sum_{l \in L} l^{\pm 1} \otimes e_l^{\widehat{L}}.$$ By applying $\widehat{\hspace{5pt}\cdot\hspace{5pt}}$ to $\widehat{M} \simeq L\times \widehat{L}$, we get that $M \simeq \widehat{L} \times L$. This allows us to view $L$ as a subgroup of $M$. \par \vspace{2pt} Our first main result (Theorem \ref{thm:main1}) is stated as follows: \begin{theorem-intro} Let $K$ be a (large enough) number field with ring\vspace{1pt} of integers $R$. Let $G$ be a finite group and $M$ an abelian subgroup of $G$ of central type. Consider the twist \linebreak $J$ in $K\hspace{-1pt}M \otimes K\hspace{-1pt}M$ afforded by a non-degenerate $2$-cocycle $\omega: \widehat{M} \times \widehat{M} \rightarrow K^{\times}$. 
\par \vspace{2pt} Fix a Lagrangian decomposition $\widehat{M} \simeq L \times \widehat{L}$. Suppose that $L$ (viewed as inside of $M$) is contained in a normal abelian subgroup $N$ of $G$. Then, $(K\hspace{-0.8pt}G)_J$ admits a Hopf order $X$ over $R$. \end{theorem-intro} As before, $X$ is the Hopf order of $K\hspace{-0.8pt} G$ generated by $\{e_{\nu}^Ng: \nu \in \widehat{N}, g\in G\}$ and it satisfies that $J^{ \pm 1} \in X \otimes_R X$. Under the extra hypothesis that the action of $G/N$ on $N$ induced by conjugation is faithful, $X$ can be characterized as the unique Hopf order of $(K\hspace{-0.69pt}G)_J$ containing all the primitive idempotents of $K\hspace{-0.9pt}N$ (Proposition \ref{cond:unique}). \par \vspace{2pt} We wonder if, up to cohomologous twist, every Hopf order of a twisted group algebra $(K\hspace{-0.69pt} G)_J$ is a Hopf order $X$ of $K\hspace{-0.69pt} G$ such that $J^{\pm 1} \in X \otimes_R X$. All examples of integral Hopf orders in twisted group algebras known so far arise in this form. \par \vspace{2pt} Our second main result (Theorem \ref{thm:main2}) deals with the uniqueness of the Hopf order constructed above in the case of semidirect products of groups: \begin{theorem-intro} Let $K$ be a (large enough) number field with ring of integers $R$. Consider the semidirect product $G := N \rtimes Q$ of two finite groups $N$ and $Q$, with $N$ abelian. Let $L$ and $P$ be abelian subgroups of $N$ and $Q$ respectively. Set $M=LP$. Let $\tau \in Q$. Suppose that $N,Q,L,P,$ and $\tau$ satisfy the following conditions: \begin{enumerate} \item[{\it (i)}] $L$ and $P$ are isomorphic and commute with one another. \vspace{2pt} \item[{\it (ii)}] $Q$ acts on $N$ faithfully. \vspace{2pt} \item[{\it (iii)}] $N=L \oplus (\tau \cdot L)$, where $N$ is written additively. \vspace{2pt} \item[{\it (iv)}] $N^{\tau} \neq \{1\}$. \vspace{2pt} \item[{\it (v)}] $\widehat{N}^{\sigma \tau}=\big(\widehat{N}^{\tau}\big) \cap \big(\widehat{N}^{\sigma \tau \sigma^{-1}}\big)=\{\varepsilon\}$ for every $\sigma \in P$ with $\sigma \neq 1$. \end{enumerate} Let $J$ be the twist in $K\hspace{-1pt}M \otimes K\hspace{-1pt}M$ arising from an isomorphism $f:P \rightarrow \widehat{L}$ (see Section \ref{ex-uniq}). Then, the Hopf order of $(K\hspace{-0.9pt}G)_J$ over $R$ generated by the primitive idempotents of $K\hspace{-1.1pt}N$ and the elements of $Q$ is unique. \end{theorem-intro} To establish the uniqueness, it is enough to prove that any Hopf order $Y$ of $(K\hspace{-0.9pt}G)_J$ over $R$ contains the idempotent $e^L_{\varepsilon}$ (Propositions \ref{cond:unique} and \ref{squeeze}). This is achieved by obtaining $e^L_{\varepsilon}$ from certain elements of $(K\hspace{-0.9pt}G)_J$ that must belong to $Y$. Set, for brevity, $H=(K\hspace{-0.9pt}G)_J$. The dual Hopf order $Y^{\star}$ consists of those $\varphi \in H^*$ such that $\varphi(Y) \subseteq R$. Any character of $H$ belongs to $Y^{\star}$ and any cocharacter of $H$ belongs to $Y$ (Proposition \ref{character}). We can construct elements in $Y$ by manipulating characters and cocharacters and by using the operations of Hopf order of $Y$ and $Y^{\star}$ and the evaluation map $Y^{\star} \otimes_R Y \rightarrow R$. Another tool that helps to obtain elements in $Y$ is the following (Proposition \ref{sub}): if $A$ is a Hopf subalgebra of $H$, then $Y \cap A$ is a Hopf order of $A$. We will prove that $e^L_{\varepsilon} \in Y$ by exploiting these facts. 
\par \vspace{2pt} The previous strategy is reinforced with the knowledge of the cocharacters of $(K\hspace{-0.9pt}G)_J$. In Proposition \ref{Prop:irr}, we determine the irreducible cocharacters of a general twisted group algebra $(K\hspace{-0.9pt}G)_J$. For any $\tau \in G$, we show in Proposition \ref{vipcharacter} that the element $|M|e^M_{\varepsilon} \tau e^M_{\varepsilon}$ is a cocharacter of $(K\hspace{-0.9pt}G)_J$ when $\omega$ is non-degenerate. \par \vspace{2pt} Our second theorem is illustrated with a family of examples in which $N=\F_q^{2n},$ $Q=\SL_{2n}(q)$, and $Q$ acts on $N$ in the natural way (see Section \ref{example}). Here, $\F_q$ is the finite field with cardinality $q$. The role of $\SL_{2n}(q)$ can also be played by $\GL_{2n}(q)$ or $\Sp_{2n}(q)$. The subgroup $P$ of $Q$ is defined by means of an algebra homomorphism $\Phi:\F_{q^n}\to \M_n(\F_q)$ induced by the product of $\F_{q^n}$. \par \vspace{2pt} The group $\F_q^{2n} \rtimes \SL_{2n}(q)$ embeds in $\PSL_{2n+1}(q)$. The twist $J$ for $K(\F_q^{2n} \rtimes \SL_{2n}(q))$ of the previous theorem is also a twist for $K \PSL_{2n+1}(q)$. As an application of our results, we prove in Theorem \ref{PSL2n1} that $(K \PSL_{2n+1}(q))_J$ does not admit a Hopf order over $R$. The Hopf algebra $(\Co \PSL_{2n+1}(q))_J$ provides a further example of a simple and semisimple complex Hopf algebra that does not admit a Hopf order over any number ring. \par \vspace{2pt} \subsection{Organization of the paper} The paper is organized as follows: \par \vspace{2pt} In Section 2, we recall some background material on Hopf orders, Drinfeld's twist, and Movshev's method of twisting a group algebra. In Section 3, we discuss the coalgebra structure of a twisted group algebra and describe its irreducible cocharacters. \par \vspace{2pt} In Section 4, we establish our first main theorem, after a preliminary discussion on Lagrangian decompositions. We also characterize the Hopf order constructed here in several ways, and underline, through an example, that our method of construction can produce different Hopf orders. The problem of uniqueness is tackled in Section 5, where we establish our second main theorem. The above-mentioned families of examples are presented in Section 6. \par \vspace{2pt} Lastly, Section 7 deals with the non-existence of integral Hopf orders for a twist of the group algebra on $\PSL_{2n+1}(q)$. \subsection{Acknowledgements} The work of Juan Cuadra was partially supported by the Spanish Ministry of Science and Innovation, through the grant PID2020-113552GB-I00 (AEI/FEDER, UE), by the Andalusian Ministry of Economy and Knowledge, through the grant P20\underline{ }00770, and by the research group FQM0211. \section{Preliminaries} The preliminary material necessary for this paper is the same as that of \cite{CM1}, \cite{CM3}, and \cite{CCM}. For convenience, we briefly collect here the indispensable content and refer the reader there for further information. \subsection{Conventions and notation} We will work over a base field $K$ (mostly a number field). Unless otherwise specified, vector spaces, linear maps, and undecorated tensor products $\otimes$ will be over $K$. Throughout, $H$ will stand for a finite-dimensional Hopf algebra over $K$, with identity element $1_H$, coproduct $\Delta$, counit $\varepsilon$, and antipode $S$. The dual Hopf algebra of $H$ will be denoted by $H^*$. For the general aspects of the theory of Hopf algebras, our reference books are \cite{Mo} and \cite{Ra}. 
\subsection{Hopf orders} Let $R$ be a subring of $K$ and $V$ a finite-dimensional vector space over $K$. A \emph{lattice of\hspace{1pt} $V$\hspace{-1.5pt} over $R$} is a finitely generated and projective $R$-submodule $X$ of $V$ such that the natural map $X \otimes_R K \rightarrow V$ is an isomorphism. Under this isomorphism, $X$ corresponds to the image of $X \otimes_R R$. \par \vspace{2pt} A \textit{Hopf order of $H$ over $R$} is a lattice $X$ of $H$ which is closed under the Hopf algebra operations; that is, $1_H \in X$, $XX \subseteq X$, $\Delta(X)\subseteq X\otimes_{R} X$, $\varepsilon(X) \subseteq R,$ and $S(X)\subseteq X$. (For the condition on the coproduct, we use the natural identification of $X\otimes_{R} X$ as an $R$-submodule of $H\otimes H$.) Our reference books for the theory of Hopf orders in group algebras are \cite{Ch2} and \cite{U}. \par \vspace{2pt} In the next three results, $K$ is assumed to be a number field with ring of integers $R$. Hopf orders are understood to be over $R$. \begin{lemma}\cite[Lemma 1.1]{CM1} \label{dual} Let $X$ be a Hopf order of $H$. \begin{enumerate} \item[{\it (i)}] The dual lattice $X^{\star}:=\{\varphi \in H^* : \varphi(X) \subseteq R\}$ is a Hopf order of $H^*$. \vspace{2pt} \item[{\it (ii)}] The natural isomorphism $H \simeq H^{**}$ induces an isomorphism $X \simeq X^{\star \star}$ of Hopf orders. \vspace{2pt} \end{enumerate} \end{lemma} The importance of the dual order for us ultimately lies in the following: \begin{proposition}\cite[Proposition 1.2]{CM1} \label{character} Let $X$ be a Hopf order of $H$. Then: \begin{enumerate} \item[{\it (i)}] Every character of $H$ belongs to $X^{\star}$. \vspace{2pt} \item[{\it (ii)}] Every character of $H^*$ (cocharacter of $H$) belongs to $X$. In particular, $X$ contains every group-like element of $H$. \end{enumerate} \end{proposition} We will often use the following technical tool: \begin{proposition}\cite[Proposition 1.9]{CM1} \label{sub} Let $X$ be a Hopf order of $H$. If $A$ is a Hopf subalgebra of $H$, then $X\cap A$ is a Hopf order of $A$. \end{proposition} \subsection{Drinfeld twist} \label{Sc:drt} An invertible element $J:=\sum J^{(1)} \otimes J^{(2)}$ in $H \otimes H$ is called a {\it twist} for $H$ provided that: $$\begin{array}{c} (1_H \otimes J)(id \otimes \Delta)(J)=(J \otimes 1_H)(\Delta \otimes id)(J), \quad \textrm{and} \vspace{4pt} \\ (\varepsilon \otimes id)(J)=(id \otimes \varepsilon)(J)=1_H. \end{array}$$ The {\it Drinfeld twist} of $H$ is the new Hopf algebra $H_J$ constructed as follows: $H_J=H$ as an algebra, the counit is that of $H$, and the coproduct and antipode are defined by: $$\Delta_J(h)=J \Delta(h)J^{-1} \qquad \textrm{and} \qquad S_J(h)=U_J\hspace{1pt}S(h)\hspace{0.5pt}U_J^{-1} \qquad \forall h \in H.$$ Here, $U_J:=\sum J^{(1)}S(J^{(2)})$. Writing $J^{-1}=\sum J^{-(1)} \otimes J^{-(2)}$, we have that $U_J^{-1}=\sum S(J^{-(1)})J^{-(2)}.$ \par \vspace{2pt} We stress that if $A$ is a Hopf subalgebra of $H$ and $J$ is a twist for $A$, then $J$ is a twist for $H$. \par \vspace{2pt} Our main results will rely on the following fact, which is easy to prove: \begin{proposition}\cite[Proposition 2.4]{CM3} \label{twistorder} Let $H$ be a Hopf algebra over $K$ and $J$ a twist for $H$. Let $R$ be a subring of $K$ and $X$ a Hopf order of $H$ over $R$. Assume that $J$ and $J^{-1}$ belong to $X \otimes_R X$. Then, $X$ is a Hopf order of $H_J$ over $R$. \end{proposition} In a similar fashion, we denote by $X_J$ the Drinfeld twist of $X$. 
\subsection{Construction of twists for group algebras} \label{Sc:mv} Movshev devised in \cite{Mv} the following ingenious method of constructing twists for a group algebra. Let $M$ be a finite abelian group. Assume\vspace{-1pt} that $\car K \nmid \vert M \vert$ and that $K$ is large enough for the group algebra $K\hspace{-0.9pt}M$ to split. Consider\vspace{1pt} the character group $\widehat{M}$ of $M$. For $\phi \in \widehat{M}$, the primitive idempotent of $K\hspace{-0.9pt}M$ corresponding to $\phi$ is $$e_{\phi} := \frac{1}{\vert M \vert } \sum_{m \in M} \phi(m^{-1}) m.$$ If $\omega: \widehat{M} \times \widehat{M} \rightarrow K^{\times}$ is a normalized 2-cocycle, then $$J_{M, \omega}:=\sum_{\phi,\psi \in \widehat{M}} \omega(\phi,\psi) e_{\phi} \otimes e_{\psi}$$ is a twist for $K\hspace{-0.9pt}M$. \par \vspace{2pt} Suppose now that $G$ is a finite group and $M$ is an abelian subgroup of $G$. Then, $K\hspace{-0.9pt}M$ is a Hopf subalgebra of $K\hspace{-0.9pt}G$ and, consequently, $J_{M, \omega}$ is a twist for $K\hspace{-0.9pt}G$. (It is pertinent to mention that not all twists of $K\hspace{-0.9pt}G$ arise in this way, see \cite{EG2}.) In the sequel, we will omit the subscripts in the twist and simply write $J$. \section{A distinguished cocharacter} \enlargethispage{\baselineskip} Let $\tau \in G$. The aim of this section is to prove that the element $|M|e_{\varepsilon} \tau e_{\varepsilon}$ is a cocharacter of $(K\hspace{-0.9pt}G)_J$ and, hence, it is contained in every Hopf order of $(K\hspace{-0.9pt}G)_J$. To achieve this, some preparations are necessary. \subsection{Irreducible representations of twisted group algebras over abelian groups} Let $\hspace{3pt}\overbarK{K}\hspace{0pt}$ denote the algebraic closure of the number field $K$. Consider the following two abelian groups (see \cite[Section 2.1, p. 31]{Ka} and \cite[Section 1.2, p. 18]{Ka2} for the definitions): \begin{enumerate} \item[$\sq$] $H^2(M,\hspace{3pt}\overbarK{K}^{\times})$, the second cohomology group of $M$ with values in $\hspace{3pt}\overbarK{K}^{\times}\hspace{0pt}$. \vspace{2pt} \item[$\sq$] $P_{sk}(M,\hspace{3pt}\overbarK{K}^{\times})$, the group of skew-symmetric pairings of $M$ with values in $\hspace{3pt}\overbarK{K}^{\times}\hspace{0pt}$. \end{enumerate} Every $2$-cocycle $\omega$ gives rise to a skew-symmetric pairing $\B_{\omega}$ defined by $$\B_{\omega}(m,m') = \omega(m,m')\omega(m',m)^{-1} \hspace{8pt} \forall m,m' \in M,$$ and $\B_{\omega}$ depends only on the cohomology class of $\omega$. We know from \cite[Lemma 2.2, p. 19, and Theorem 3.6, p. 31]{Ka2} that the map $$\B:H^2(M,\hspace{3pt}\overbarK{K}^{\times}) \rightarrow P_{sk}(M,\hspace{3pt}\overbarK{K}^{\times}), [\omega] \mapsto \B_{\omega},$$ is an isomorphism of abelian groups. \par \vspace{2pt} The \emph{radical} of $\omega$ is defined to be the radical of the pairing $\B_{\omega}$. That is, $$\Rad(\omega)=\{m \in M : \omega(m,m')=\omega(m',m) \hspace{5pt} \forall m' \in M\}.$$ Clearly, $\Rad(\omega)$ is a subgroup of $M$. \par \vspace{2pt} Recall that $\omega$ is said to be \emph{non-degenerate} if the pairing $\B_{\omega}$ is so; equivalently, if $\Rad(\omega)=\{1\}$. Being non-degenerate is a property preserved under multiplication by coboundaries. Let $\pi:M \rightarrow M/\Rad(\omega)$ denote the canonical projection. The pairing $\mathcal{B}_{\omega}$ on $M$ induces a skew-symmetric pairing $\overbarB{\mathcal{B}}_{\omega}$ on $M/\Rad(\omega)$ such that $\mathcal{B}_{\omega}=\overbarB{\mathcal{B}}_{\omega} \circ (\pi \times \pi)$. 
By construction, $\overbarB{\mathcal{B}}_{\omega}$ is non-degenerate. Then, there is a 2-cocycle $\bar{\omega}$ on $M/\Rad(\omega)$ such that $\B_{\bar{\omega}}=\overbarB{\mathcal{B}}_{\omega}.$ The cohomology class of $\omega$ satisfies $[\omega] = [\bar{\omega} \circ (\pi \times \pi)]$. This shows that, up to coboundary, any $2$-cocycle $\omega$ on $M$ is inflated from a unique non-degenerate $2$-cocycle $\bar{\omega}$ on $M/\Rad(\omega)$. \par \vspace{2pt} On the other hand, by \cite[Proposition 2.1.1, p. 14]{Ka1} any $2$-cocycle on an abelian group with values in $\hspace{3pt}\overbarK{K}^{\times}\hspace{0pt}$ is cohomologous to a cocycle with values in a cyclotomic ring of integers. So, if one starts with a $2$-cocycle $\omega$ on $M$ with values in $K^{\times}$, the process of inflating from $M/\Rad(\omega)$ described before can indeed be achieved in a cyclotomic field extension of $K$. \par \vspace{2pt} Consider now the twisted group algebra $K^{\omega}[M]=\oplus_{m \in M} K\hspace{-1pt} u_m$, where the product is given by $u_m u_{m'}=\omega(m,m')u_{mm'}$. Assume that $K$ is large enough so that $K^{\omega}[M]$ splits as an algebra. The center of $K^{\omega}[M]$ is spanned, as a vector space, by the set $\{u_m: m \in \Rad(\omega)\}$. Write, for short, $\overbarM{M}=M/\Rad(\omega)$. Suppose that $\omega$ is inflated from a non-degenerate $2$-cocycle $\bar{\omega}$ on \hspace{2pt}$\overbarM{M}$ (we extend $K$ to achieve this if necessary). Then, the twisted group algebra $K^{\bar{\omega}}[\hspace{3.5pt}\overbarM{M}\hspace{2pt}]$ is a matrix algebra and there is a surjective algebra map $K^{\omega}[M] \twoheadrightarrow K^{\bar{\omega}}[\hspace{3.5pt}\overbarM{M}\hspace{2pt}]$. Let $V$ be the unique (up to isomorphism) irreducible representation of $K^{\bar{\omega}}[\hspace{3.5pt}\overbarM{M}\hspace{2pt}]$. By inflation, $V$ is also an irreducible representation of $K^{\omega}[M].$ \par \vspace{2pt} The following lemma can be easily proved: \begin{lemma}\label{irrtwgpal} Let $\widehat{M}$ denote the group of characters of $M$ with values in $K^{\times}$. For every $\phi \in \widehat{M}$, let $V_{\phi}$ be the representation of $K^{\omega}[M]$ which is equal to $V$ as a\vspace{1pt} vector space and whose action is defined by $u_m \hspace{1.5pt} \scalebox{0.82}{$\odot$}\hspace{1.5pt} v = \phi(m) (u_m \cdot v).$ Then: \begin{enumerate} \item[{\it (i)}] $V_{\phi}$ is an irreducible representation of $K^{\omega}[M]$. \vspace{2pt} \item[{\it (ii)}] Every irreducible representation of $K^{\omega}[M]$ is isomorphic to $V_{\phi}$ for some \linebreak $\phi \in \widehat{M}$. \vspace{2pt} \item[{\it (iii)}] $V_{\phi} \simeq V_{\psi}$ if and only if $\phi \vert_{\Rad(\omega)}=\psi \vert_{\Rad(\omega)}$. \vspace{2pt} \item[{\it (iv)}] The irreducible representations of $K^{\omega}[M]$ are in one-to-one correspondence with the irreducible representations of $\Rad(\omega)$. \item[{\it (v)}] The dimension of every irreducible representation of $K^{\omega}[M]$ equals $\sqrt{\hspace{-0.5pt}\big\vert\hspace{-0.5pt}\frac{M}{\Rad(\omega)}\hspace{-0.5pt}\big\vert\hspace{-0.5pt}}$. \item[{\it (vi)}] The character $\chi_{\phi}:K^{\omega}[M] \rightarrow K$ afforded by $V_{\phi}$ is given by: $$\chi_{\phi}(u_m) = \begin{cases} \sqrt{\big\vert\frac{M}{\Rad(\omega)}\big\vert}\hspace{2pt} \phi(m) & \text{ if } m \in \Rad(\omega), \\ \hspace{1cm}0 & \text{ otherwise. 
} \end{cases}$$ \end{enumerate} \end{lemma} \subsection{Irreducible cocharacters of $(K\hspace{-0.9pt}G)_J$} The Hopf algebra $(K\hspace{-0.9pt}G)_J$ is cosemisimple by \cite[Corollary 3.6]{AEGN}. The irreducible corepresentations of $(K\hspace{-0.9pt}G)_J$ were determined by Etingof and Gelaki in \cite[Section 3]{EG1}. The following result is \cite[Proposition 2.1]{CM3}. It reinterprets in our setting and summarizes \cite[Propositions 3.1, 4.1, and 4.2]{EG1}. \begin{proposition}\label{decomp} Let $\{\tau_{\ell}\}_{\ell=1}^n$ be a set of representatives of the double cosets of $M$ in $G$. Then: \begin{enumerate} \item[{\it (i)}] As a coalgebra, $(K\hspace{-0.9pt}G)_J$ decomposes as the direct sum of subcoalgebras \begin{equation}\label{decompKGJ} (K\hspace{-0.9pt}G)_J = \bigoplus_{\ell=1}^n K(M\tau_{\ell} M). \vspace{2pt} \end{equation} \item[{\it (ii)}] The subcoalgebra $K(M\tau_{\ell} M)$ has a basis given by $\{e_{\phi}\tau_{\ell}e_{\psi}\}_{(\phi,\psi)\in N_{\tau_{\ell}}}$, where \vspace{1pt} $$\hspace{1cm} N_{\tau_{\ell}}=\big\{(\phi,\psi) \in \widehat{M}\times\widehat{M} \ : \ \psi(m)=\phi(\tau_{\ell}^{-1}m\tau_{\ell}) \hspace{7pt} \forall m \in M\cap (\tau_{\ell} M \tau_{\ell}^{-1}) \big\}.\vspace{1pt}$$ (Notice that if $M \cap (\tau_{\ell} M \tau_{\ell}^{-1})=\{1\}$, then $N_{\tau_{\ell}}=\widehat{M} \times\widehat{M}$.) \vspace{6pt} \item[{\it (iii)}] The dual algebra of $K(M\tau_{\ell} M)$ is isomorphic to the twisted group algebra $K^{(\omega,\omega^{-1})\vert _{N_{\tau_{\ell}}}}[N_{\tau_{\ell}}]$. \end{enumerate} \end{proposition} In view of the decomposition \eqref{decompKGJ}, to describe the irreducible cocharacters of $(K\hspace{-0.9pt}G)_J$, it suffices to restrict our attention to those of $K(M\tau_{\ell} M)$. Set, for short, $\tau=\tau_{\ell}$. Denote by $\Rad_{\tau}$ the radical of the restriction of $(\omega,\omega^{-1})$ to $N_{\tau}$. \begin{proposition}\label{Prop:irr} For every $m,m'\in M,$ consider the element \begin{equation}\label{irrcoc} c_{\tau}(m,m'):=\sqrt{\Big\vert\frac{N_{\tau}}{\Rad_{\tau}}\Big\vert}\sum_{(\phi,\psi)\in\Rad_{\tau}}\phi(m)\psi(m') e_{\phi} \tau e_{\psi}. \end{equation} Then: \begin{enumerate} \item[{\it(i)}] $c_{\tau}(m,m')$ is an irreducible cocharacter of $(K\hspace{-0.9pt}G)_J$. \vspace{2pt} \item[{\it (ii)}] $c_{\tau}(m_1,m'_1) = c_{\tau'}(m_2,m'_2)$ if and only if $\tau=\tau'$ and $(m_1,m'_1)(m_2,m'_2)^{-1} \in (\Rad_{\tau})^{\perp}.$ \end{enumerate} \end{proposition} \begin{proof} Let $\{u_{(\phi,\psi)}\}_{(\phi,\psi)\in N_{\tau}}$ be the dual basis of $\{e_{\phi}\tau e_{\psi}\}_{(\phi,\psi)\in N_{\tau}}$. One can check that: $$u_{(\phi_1,\psi_1)} u_{(\phi_2,\psi_2)}=\omega(\phi_1,\phi_2) \omega^{-1}(\psi_1,\psi_2)u_{(\phi_1\phi_2,\psi_1\psi_2)}.$$ We identify $K(M\tau M)^*$ with the twisted group algebra $K^{(\omega,\omega^{-1})\vert_{N_{\tau}}}[N_{\tau}]$. By duality, the irreducible cocharacters of $K(M\tau M)$ are obtained as the irreducible characters of $K(M\tau M)^*$. \par \smallskip (i) The elements of $M\times M$ can be viewed\vspace{1pt} as characters on $\widehat{N_{\tau}}$ in the natural way. For every pair $(m,m') \in M \times M$, consider\vspace{-1pt} the irreducible representation $V_{(m,m')}$ of $K^{(\omega,\omega^{-1})\vert_{N_{\tau}}}[N_{\tau}]$ and the corresponding character $\chi_{(m,m')}$ as\vspace{1.5pt} in Lemma \ref{irrtwgpal}. 
Every irreducible cocharacter of $K(M\tau M)$ will be of this form.\vspace{1.5pt} We describe $\chi_{(m,m')}$ as an element in $K(M\tau M)$ by using Lemma \ref{irrtwgpal}(vi): $$\begin{array}{rl} \chi_{(m,m')} & \hspace{-4.5pt}=\hspace{1.5pt} {\displaystyle \sum_{(\phi,\psi)\in N_{\tau}} \chi_{(m,m')}(u_{(\phi,\psi)})e_{\phi} \tau e_{\psi}} \vspace{5pt} \\ & \hspace{-4.5pt}=\hspace{1.5pt} {\displaystyle \sqrt{\Big\vert\frac{N_{\tau}}{\Rad_{\tau}}\Big\vert} \sum_{(\phi,\psi)\in \Rad_{\tau}} \phi(m)\psi(m') e_{\phi} \tau e_{\psi}} \vspace{5pt} \\ & \hspace{-4.5pt}=\hspace{1.5pt} c_{\tau}(m,m'). \end{array}$$ \smallskip (ii) According to Lemma \ref{irrtwgpal}(iii), $V_{(m_1,m'_1)} \simeq V_{(m_2,m'_2)}$ if and only if $(m_1,m'_1) \vert_{\Rad_{\tau}}=(m_2,m'_2) \vert_{\Rad_{\tau}}$. This condition\vspace{0.5pt} is equivalent to $(m_1,m'_1)(m_2,m'_2)^{-1} \in (\Rad_{\tau})^{\perp}.$ Finally, notice\vspace{0.66pt} that $c_{\tau}(m_1,m'_1) \neq c_{\tau'}(m_2,m'_2)$ if $\tau \neq \tau'$ because\vspace{0.5pt} the intersection of $K(M\tau M)$ and $K(M\tau' M)$ is trivial in view of the decomposition \eqref{decompKGJ} of $(K\hspace{-0.9pt}G)_J$. \end{proof} Let $\{(m_i,m'_i)\}_{i=1}^r$ be coset representatives of $(\Rad_{\tau})^{\perp}$ in $M \times M$. By using that $\phi(e_{\varepsilon})=\delta_{\phi,\varepsilon}$ and $(\varepsilon,\varepsilon)\in \Rad_{\tau}$, we have the following chain of equalities: $$\begin{array}{rl} e_{\varepsilon} \tau e_{\varepsilon} & \hspace{-4.5pt}=\hspace{1.5pt} {\displaystyle \sum_{(\phi,\psi)\in \Rad_{\tau}} \phi(e_{\varepsilon}) \psi(e_{\varepsilon}) e_{\phi} \tau e_{\psi}} \vspace{5pt} \\ & \hspace{-4.5pt}=\hspace{1.5pt} {\displaystyle \frac{1}{\vert M \vert^2} \sum_{(\phi,\psi)\in \Rad_{\tau}} \hspace{2pt} \sum_{m,m' \in M} \phi(m)\psi(m') e_{\phi} \tau e_{\psi}} \vspace{5pt} \\ & \hspace{-4.5pt}=\hspace{1.5pt} {\displaystyle \frac{1}{\vert M \vert^2} \sum_{(\phi,\psi)\in \Rad_{\tau}} \big\vert (\Rad_{\tau})^{\perp} \big\vert \sum_{i=1}^r \phi(m_i)\psi(m'_i) e_{\phi} \tau e_{\psi}} \vspace{5pt} \\ & \hspace{-4.5pt}=\hspace{1.5pt} {\displaystyle \frac{1}{\vert \Rad_{\tau} \vert} \sum_{i=1}^r \sum_{(\phi,\psi)\in \Rad_{\tau}} \phi(m_i)\psi(m'_i) e_{\phi} \tau e_{\psi}}. \end{array}$$ Hence: \begin{equation}\label{S2Eq1} \sum_{i=1}^r c_{\tau}(m_i,m'_i) = \sqrt{\Big\vert\frac{N_{\tau}}{\Rad_{\tau}}\Big\vert} |\Rad_{\tau}| e_{\varepsilon} \tau e_{\varepsilon} = \sqrt{|N_{\tau}||\Rad_{\tau}|} e_{\varepsilon} \tau e_{\varepsilon}. \end{equation} \begin{proposition} If $\omega$ is non-degenerate, then $\sqrt{|N_{\tau}||\Rad_{\tau}|}$ is a natural number that divides $|M|$. \end{proposition} \begin{proof} Since $\omega$ is non-degenerate, the algebra $K^{(\omega,\omega^{-1})}[\hspace{1pt}\widehat{M} \times \widehat{M}\hspace{1pt}]$ has\vspace{2pt} a unique irreducible representation, say $W$, of\vspace{1pt} dimension $|M|$. Consider $W$ as a representation of $K^{(\omega,\omega^{-1})}[N_{\tau}]$ by restriction of scalars. It decomposes\vspace{1pt} as the direct sum $W \simeq \oplus_{\phi \in \widehat{\Rad_{\tau}}} V_{\phi}^{(s_{\phi})}$. On the other hand, $K^{(\omega,\omega^{-1})}[\hspace{1pt}\widehat{M} \times \widehat{M}\hspace{1pt}]$ is free\vspace{1pt} as a module over $K^{(\omega,\omega^{-1})}[N_{\tau}]$. Bearing in mind\vspace{2pt} that all the $V_{\phi}$'s have the same dimension, the above implies that all the numbers $s_{\phi}$'s are equal. Write\vspace{1pt} simply $s$ for them. 
Counting dimensions, we obtain: \begin{equation}\label{S2Eq2} |M| = s |\widehat{\Rad_{\tau}}| \sqrt{\Big\vert\frac{N_{\tau}}{\Rad_{\tau}}\Big\vert}= s\sqrt{|N_{\tau}||\Rad_{\tau}|}. \end{equation} This establishes the claim. \end{proof} The following result refines \cite[Proposition 2.2]{CM3}, which required the hypothesis $M \cap \big(\tau M \tau^{-1}\big) =\{1\}$: \begin{proposition}\label{vipcharacter} If $\omega$ is non-degenerate, then the cocharacter $|M|e_{\varepsilon} \tau e_{\varepsilon}$ is contained in every Hopf order of $(K\hspace{-0.9pt}G)_J$. \end{proposition} \begin{proof} From \eqref{S2Eq1} and \eqref{S2Eq2} we have: $$|M|e_{\varepsilon} \tau e_{\varepsilon} = s\sqrt{|N_{\tau}||\Rad_{\tau}|} e_{\varepsilon} \tau e_{\varepsilon} = s \sum_{i=1}^r c_{\tau}(m_i,m'_i).$$ Now use that the right-hand side is a cocharacter of $(K\hspace{-0.9pt}G)_J$ and that any cocharacter is contained in every Hopf order by Proposition \ref{character}(ii). \end{proof} \section{Lagrangian decomposition and a sufficient condition for the existence of Hopf orders} In \cite[Subsection 4.1]{CM3} we constructed an example of an integral Hopf order for the twisted group algebra $(K\hspace{-1pt}S_4)_J$, where $J$ was a twist arising from a subgroup of $S_4$ isomorphic to the Klein four-group. The key fact in this construction was the existence of a Hopf order $X$ of $K\hspace{-1pt}S_4$ such that $J^{\pm 1} \in X \otimes_R X$. Then, the twisted Hopf $R$-algebra $X_J$ of $X$ provides a Hopf order of $(K\hspace{-1pt}S_4)_J$. The goal of this section is to fit this example into a general group-theoretical framework. \par \vspace{2pt} We start with a brief discussion on Lagrangian decompositions of an abelian group of central type. We refer the reader to \cite[Section 4]{D} and \cite[Section 1]{BGM} for further details. \par \vspace{2pt} Let $M$ be a finite abelian group of central type and $\beta:M \times M \rightarrow K^{\times}$ a non-degenerate skew-symmetric pairing. For a subgroup $L$ of $M$, we consider the orthogonal complement $$L^{\perp} = \{m \in M : \beta(m,l)=1 \hspace{5pt} \forall l \in L\}.$$ The subgroup $L$ is said to be a \emph{Lagrangian} of $M$ if $L=L^{\perp}$. A Lagrangian subgroup gives rise to a short exact sequence $$1 \rightarrow L \rightarrow M \stackrel{\hspace{-2pt}\pi}{\rightarrow} \widehat{L} \rightarrow 1.$$ Here, $\pi$ is defined\vspace{-1pt} by $\pi(m)(l)=\beta(m,l)$ for all $m \in M, l \in L$. Suppose that $\pi$ splits. Then, $M\simeq L\times \widehat{L}$ and such a decomposition is called\vspace{0.5pt} a \emph{Lagrangian decomposition} of $M$. It is proved in \cite[Lemma 4.2]{D} and \cite[Proposition 1.7]{BGM} that: \begin{enumerate} \item A Lagrangian decomposition of $M$ always exists. \vspace{2pt} \item Writing every element of $M$ as a pair $(l,\lambda)$, with $l \in L,\lambda \in \widehat{L}$, the pairing $\beta$ takes the form: $$\beta\big((l,\lambda),(l',\lambda')\big) = \lambda(l')\lambda'(l)^{-1}.$$ \end{enumerate} Let $\alpha: M \times M \rightarrow K^{\times}$ be now a non-degenerate $2$-cocycle. Applying the preceding discussion to the pairing ${\mathcal B}_{\alpha}$, we obtain that $\alpha$ is (up to coboundary) given by: \begin{equation}\label{eval} \alpha\big((l,\lambda),(l',\lambda')\big) = \lambda(l'). \end{equation} We will next see that this formula, when applied to the twisting procedure, allows us to express the twist in an illuminating form. 
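Formula \eqref{eval} can be sanity-checked numerically before being put to use. The following minimal sketch (in Python; purely illustrative and not part of the proofs) takes $L$ cyclic of order $4$, with characters $\lambda_b(l)=e^{2\pi i b l/4}$, and verifies that $\alpha\big((l,\lambda),(l',\lambda')\big)=\lambda(l')$ is a normalized $2$-cocycle on $L\times\widehat{L}$ whose associated pairing has trivial radical:

\begin{verbatim}
import itertools
import numpy as np

n = 4
G = list(itertools.product(range(n), repeat=2))  # pairs (l, b) ~ (l, lam_b)

def alpha(x, y):    # alpha((l,b), (l',b')) = lam_b(l') = exp(2 pi i b l'/n)
    (l, b), (lp, _) = x, y
    return np.exp(2j * np.pi * b * lp / n)

def mul(x, y):      # group law in L x L^
    return ((x[0] + y[0]) % n, (x[1] + y[1]) % n)

# 2-cocycle identity: alpha(y,z) alpha(x,yz) = alpha(x,y) alpha(xy,z)
for x, y, z in itertools.product(G, repeat=3):
    assert np.isclose(alpha(y, z) * alpha(x, mul(y, z)),
                      alpha(x, y) * alpha(mul(x, y), z))

# Trivial radical: only the identity pairs trivially with every element
rad = [x for x in G if all(np.isclose(alpha(x, y), alpha(y, x)) for y in G)]
assert rad == [(0, 0)]
print("alpha is a non-degenerate 2-cocycle on L x L^ for L cyclic of order 4")
\end{verbatim}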
\par \vspace{2pt} Suppose that $\omega: \widehat{M} \times \widehat{M} \rightarrow K^{\times}$ is a non-degenerate $2$-cocycle. Let $\widehat{M} \simeq L\times \widehat{L}$ be a Lagrangian decomposition of $\widehat{M}$ such that $\omega$ is given as in \eqref{eval}. Call $f:L\times \widehat{L} \rightarrow \widehat{M}$ the isomorphism giving the previous decomposition. Identify the character group of $\widehat{M}$ with $M$ in the natural way, and similarly for $\widehat{L}$. Thus, we have an isomorphism $\widehat{f}:M \rightarrow \widehat{L} \times L$. Under these isomorphisms, we can see the elements of $\widehat{M}$ as pairs $(l,\lambda)$, with $l \in L,\lambda \in \widehat{L}$, and the elements of $M$ as pairs $(\lambda',l')$, with $\lambda' \in \widehat{L},l' \in L$. The evaluation\vspace{1pt} of $\widehat{M}$ at $M$ is then given by $(l,\lambda)(\lambda',l')=\lambda'(l)\lambda(l')$. \par \vspace{2pt} Under these identifications, we also have that: \begin{enumerate} \item[(1)] The primitive idempotents in $K\hspace{-1pt}M$ are of the form $e_{(l,\lambda)}$. \vspace{2pt} \item[(2)] Each $e_{(l,\lambda)}$ can be rewritten as the product $e_{\lambda} e_l$, where now $e_{\lambda}\in K\hspace{-1pt}L$ and $e_l \in K\hspace{-0.5pt}\widehat{L}$. (We again view $l \in L$ as a character of $\widehat{L}$.) \end{enumerate} The next result ties the twist afforded by $\omega$ with the dual bases $\{(\lambda,e_{\lambda})\}_{\lambda \in \widehat{L}}$ and $\{(e_l,l)\}_{l \in L}$ of $K\hspace{-1pt}L$: \begin{lemma}\label{lemJ} The twist $J$ in $K\hspace{-1pt}M\otimes K\hspace{-1pt}M$ afforded by $\omega$ can be expressed as: \begin{equation}\label{Jrwt} J \hspace{1pt}=\hspace{1pt} \sum_{\lambda \in \widehat{L}} e_{\lambda} \otimes \lambda \hspace{1pt}=\hspace{1pt} \sum_{l \in L} l \otimes e_l.\vspace{-2pt} \end{equation} Hence, $J$ lies in $K\hspace{-1pt}L\otimes K\hspace{-0.5pt}\widehat{L}.$ \end{lemma} \begin{proof} We compute: $$\begin{array}{rl} J & \hspace{-2pt}=\hspace{3pt} {\displaystyle \sum_{l,l' \in L} \hspace{2pt} \sum_{\lambda,\lambda' \in \widehat{L}} \omega\big((l,\lambda),(l',\lambda')\big) e_{(l,\lambda)} \otimes e_{(l',\lambda')}} \vspace{4pt} \\ & \hspace{-6pt}\stackrel{\eqref{eval}}{=}\hspace{-1pt} {\displaystyle \sum_{l,l' \in L} \hspace{2pt} \sum_{\lambda,\lambda' \in \widehat{L}} \lambda(l') e_{\lambda} e_l \otimes e_{\lambda'} e_{l'}} \vspace{5pt} \\ & \hspace{-2pt}=\hspace{3pt} {\displaystyle \sum_{\lambda \in \widehat{L}} e_{\lambda} \bigg(\sum_{l \in L} e_l \bigg) \otimes \bigg(\sum_{\lambda' \in \widehat{L}} e_{\lambda'} \bigg) \bigg(\sum_{l' \in L} \lambda(l') e_{l'} \bigg)} \vspace{5pt} \\ & \hspace{-2pt}=\hspace{3pt} {\displaystyle \sum_{\lambda \in \widehat{L}} e_{\lambda} \otimes \lambda.} \end{array}$$ The expression in the right-hand side of \eqref{Jrwt} is obtained in a similar way by using that $\sum_{\lambda \in \widehat{L}} \lambda(l')e_{\lambda}=l'$. \end{proof} Observe that \begin{equation}\label{J-1rwt} J^{-1} \hspace{1pt}=\hspace{1pt} \sum_{\lambda \in \widehat{L}} e_{\lambda} \otimes \lambda^{-1} \hspace{1pt}=\hspace{1pt} \sum_{l \in L} l^{-1} \otimes e_l. \end{equation} We are now in a position to state the main result of this section: \begin{theorem}\label{thm:main1} Let $K$ be a (large enough) number field with ring\vspace{1pt} of integers $R$. Let $G$ be a finite group and $M$ an abelian subgroup of $G$ of central type. 
Consider the twist \linebreak $J$ in $K\hspace{-1pt}M \otimes K\hspace{-1pt}M$ afforded by a non-degenerate $2$-cocycle $\omega: \widehat{M} \times \widehat{M} \rightarrow K^{\times}$. \par \vspace{2pt} Fix a Lagrangian decomposition $\widehat{M} \simeq L \times \widehat{L}$. Suppose that $L$ (viewed as inside of $M$) is contained in a normal abelian subgroup $N$ of $G$. Then, $(K\hspace{-0.8pt}G)_J$ admits a Hopf order over $R$. \end{theorem} \begin{proof} We will construct a Hopf order $X$ of $K\hspace{-0.8pt} G$ such that $J^{\pm 1} \in X \otimes_R X$. Then, $X$ will be closed under the coproduct and the antipode of $(K\hspace{-0.9pt} G)_J$ by Proposition \ref{twistorder}. \par \vspace{2pt} Let $X$ be\vspace{-1pt} the $R$-subalgebra of $K\hspace{-0.8pt} G$ generated by the set $\{e_{\nu}g: \nu \in \widehat{N}, g\in G\}$. Since $N$ is normal, $G$ acts on $\widehat{N}$ by $(g \triangleright \nu)(n)=\nu(g^{-1}ng)$ for all $n \in N$. Thus, $ge_{\nu}g^{-1}=e_{g \hspace{0.65pt}\triangleright\hspace{0.65pt} \nu}$, and we get the rule: $(e_{\nu}g)(e_{\nu'}g')=e_{\nu}e_{g \hspace{0.65pt}\triangleright\hspace{0.65pt} \nu'}gg'$. Choose\vspace{1.5pt} a set $Q$ of coset representatives of $N$ in $G$ with $1$ as the representative of $N$. Then, $X$ is a free $R$-module with basis $\{e_{\nu}q: \nu \in \widehat{N}, q \in Q\}$. One can see\vspace{1pt} that $X$ is a Hopf order of $K\hspace{-0.8pt} G$ by using the formulas: $\Delta(e_{\nu})=\sum_{\eta \in \widehat{N}} e_{\eta} \otimes e_{\eta^{-1}\nu}, \hspace{3pt} \varepsilon(e_{\nu})=\delta_{\nu,1},$ and $S(e_{\nu})=e_{\nu^{-1}}$. \par \vspace{2pt} An idempotent of $K\hspace{-1pt}L$ is a sum of primitive\vspace{0.5pt} idempotents of $K\hspace{-1pt}N$ because $K\hspace{-1pt}L$ is a subalgebra of $K\hspace{-1.2pt}N$ and $K\hspace{-1pt}N=\oplus_{\nu \in \widehat{N}} \hspace{1pt} K\hspace{-1pt} e_{\nu}$. Hence, $X$ contains all the primitive idempotents of $K\hspace{-1pt}L$. The\vspace{1.5pt} expressions \eqref{Jrwt} and \eqref{J-1rwt} yield that $J^{\pm 1} \in X \otimes_R X$. \end{proof} A particular case in which the hypothesis of Theorem \ref{thm:main1} is satisfied is when $M$ itself is contained in a normal abelian subgroup of $G$. This does not always happen, though; see Example \ref{example1} below. \par \vspace{2pt} The next characterization will be useful to prove that, in some situations, $X$ is the unique Hopf order. A warning on notation is first necessary. Since we will simultaneously work with the primitive idempotents of different group algebras, we will specify the group as a superscript to distinguish them. \begin{proposition}\label{cond:unique} Retain the hypotheses of Theorem \ref{thm:main1}. Assume furthermore that the natural action of $G/N$ on $N$ induced by conjugation is faithful (equivalently, that $C_G(N)=N$). Consider the Hopf order $X$ of $(K\hspace{-0.69pt}G)_J$ over $R$ constructed above. Let $Y$ be a Hopf order of $(K\hspace{-0.69pt}G)_J$ over $R$. The following assertions are equivalent: \vspace{2pt} \begin{enumerate} \item[{\it (i)}] $X=Y$. \vspace{2pt} \item[{\it (ii)}] $e_{\nu}^N \in Y$ for all $\nu \in \widehat{N}$. \vspace{4pt} \item[{\it (iii)}] $e_{\varepsilon}^L, e_{\varepsilon}^N \in Y$. \end{enumerate} \end{proposition} \begin{proof} (i) $\Rightarrow$ (ii) This is clear by the construction of $X$. 
\par \vspace{2pt} (ii) $\Rightarrow$ (iii) The\vspace{0.5pt} map from $\widehat{N}$ to $\widehat{L}$ given by restriction is surjective, and for every $\lambda \in \widehat{L}$, we have: $$e^L_{\lambda} \hspace{2pt}=\hspace{1pt} \sum_{\begin{subarray}{c} \nu \in \widehat{N} \vspace{1pt} \\ \nu \vert_L = \lambda \end{subarray}} e^N_{\nu}.$$ In particular, $e_{\varepsilon}^L \in Y$. \par \vspace{2pt} (iii) $\Rightarrow$ (i) We first\vspace{0.5pt} show that $X \subseteq Y$. By Proposition \ref{sub}, $Y \cap (K\hspace{-1pt}M)$ is a Hopf order of $K\hspace{-1pt}M$ over $R$. Proposition \ref{character} yields that $M$ is contained in $Y$. Similarly, $Y \cap (K\hspace{-1pt}L)$ is a Hopf order of $K\hspace{-1pt}L$ over $R$. Notice\vspace{-1pt} that $e_{\lambda}^L=(\lambda^{-1} \otimes id)\Delta(e_{\varepsilon}^L)$ for every $\lambda \in \widehat{L}$. Using Proposition \ref{character} again, we get that $e_{\lambda}^L \in Y$. In\vspace{1pt} view\vspace{-0.5pt} of \eqref{Jrwt} and \eqref{J-1rwt}, $J$ and $J^{-1}$ belong to $Y \otimes_R Y$. So, $Y$ is\vspace{0.5pt} also a Hopf order of $K\hspace{-1pt}G$ over $R$. By applying once more Proposition \ref{character}, we obtain\vspace{1pt} that $G$ is contained in $Y$. Now, $Y \cap (K\hspace{-1pt}N)$ is a Hopf order of $K\hspace{-1pt}N$ over $R$. Arguing\vspace{-0.5pt} as before, $e_{\nu}^N=(\nu^{-1} \otimes id)\Delta(e_{\varepsilon}^N)$ belongs to $Y$ for every $\nu \in \widehat{N}$. Hence, $X \subseteq Y$. \par \vspace{2pt} We next show that $Y \subseteq X$. Pick an arbitrary element $y \in Y$ and express it in the following form: $$y \hspace{3pt}=\hspace{2pt}\sum_{\nu\in\widehat{N}\hspace{-1pt},\hspace{0.5pt}q \in Q} k_{\nu,q} e_{\nu}q, \hspace{2pt}\textrm{ with } k_{\nu,q}\in K.$$ We will prove that $k_{\nu,q} \in R$ for all $\nu \in \widehat{N},q \in Q$. This task admits two reductions. Firstly, by\vspace{1pt} multiplying $y$ with each primitive idempotent in $K\hspace{-1pt}N$ (all of which belong to $Y$), it suffices to show that if $\sum_{q \in Q} k_q e_{\nu} q \in Y$, with $k_q \in K$, then $k_q \in R$. Secondly, by multiplying this new element with the elements in $Q$ (which also belong to $Y$), it suffices to show that $k_1 \in R$. \par \vspace{2pt} Let $\{\mu_i\}_{i=1}^t$ be\vspace{0.75pt} a set of generators of $\widehat{N}$ as an abelian group. 
Since $Y$ is a Hopf order of $K\hspace{-1pt}G$ that contains\vspace{1pt} $\{e_{\nu}^N\}_{\nu \in \widehat{N}}$, the element resulting from the following computation will be in $Y \otimes_R \ldots \otimes_R Y$ ($t+1$ times): $$\begin{array}{l} {\displaystyle \big(e^N_{\mu_1}\otimes \ldots \otimes e^N_{\mu_t} \otimes 1\big)\Delta^{(t)}\Big(\sum_{q\in Q} k_q e^N_{\nu}q\Big)\big(e^N_{\mu_1}\otimes \ldots \otimes e^N_{\mu_t}\otimes 1\big)} \vspace{6pt} \\ \hspace{0.8cm}= \hspace{0.1cm} {\displaystyle \sum_{q\in Q} \big(e^N_{\mu_1}\otimes \ldots \otimes e^N_{\mu_t} \otimes 1\big) \bigg(\sum_{\eta_1,\ldots,\eta_t \in \widehat{N}} k_q e^N_{\eta_1}q \otimes \ldots \otimes e^N_{\eta_t}q \otimes e^N_{\nu-\eta_1-\ldots - \eta_t}q\bigg)} \vspace{4pt} \\ {\displaystyle \hspace{10.1cm} \big(e^N_{\mu_1}\otimes \ldots \otimes e^N_{\mu_t}\otimes 1\big)} \vspace{8pt} \\ \hspace{0.8cm}= \hspace{0.1cm} {\displaystyle \sum_{q\in Q} \hspace{0.6mm} \sum_{\eta_1,\ldots, \eta_t \in \widehat{N}\hspace{-1pt}} k_q e^N_{\mu_1} e^N_{\eta_1} e^N_{q \hspace{1pt}\triangleright \mu_1} q \otimes \ldots \otimes e^N_{\mu_t}e^N_{\eta_t} e^N_{q \hspace{1pt}\triangleright \mu_t} q \otimes e^N_{\nu-\eta_1-\ldots -\eta_t}q.} \end{array}$$ The idempotents $e^N_{\mu_i}$ are orthogonal.\vspace{0.5pt} The only non-zero summands in the above sum will be those in which $\mu_i=\eta_i= q \triangleright \mu_i$ for every $i=1,\ldots, t$. By hypothesis, the action of $G/N$ on $N$ is faithful. So the action of $G/N$ on $\widehat{N}$ is faithful\vspace{-0.5pt} as well. Then, $q \triangleright \mu_i=\mu_i$ for all $i$ if and only if $q=1$. The above sum reduces to: $$k_1 e_{\mu_1}^N \otimes \ldots \otimes e_{\mu_t}^N \otimes e_{\nu-\mu_1-\ldots-\mu_t}^N.$$ This element belongs to $Y \otimes_R \ldots \otimes_R Y$ ($t+1$ times). Hence, it satisfies a monic polynomial with coefficients in $R$. Since the tensor product of idempotents is an idempotent, $k_1$ satisfies a monic polynomial with coefficients in $R$. Therefore, $k_1 \in R$ and we are done. \end{proof} We can squeeze a bit more out of condition (iii) in Proposition \ref{cond:unique}: \begin{proposition}\label{squeeze} Retain the hypotheses of Theorem \ref{thm:main1}. In addition, suppose that $L$ generates $N$ as a $G/N$-module. Let $Y$ be a Hopf order of $(K\hspace{-0.69pt}G)_J$ over $R$. Then, $e_{\varepsilon}^L \in Y$ if and only if $e_{\varepsilon}^N \in Y$. \end{proposition} \begin{proof} Keep\vspace{2pt} the notation of the proof of Theorem \ref{thm:main1}. Bear in mind that $e_{\varepsilon}^N$ is an integral in $K\hspace{-1pt}N$ such that $\varepsilon(e_{\varepsilon}^N)=1$. For $g \in G$, notice that $g \triangleright e_{\varepsilon}^L = ge_{\varepsilon}^Lg^{-1}=e_{\varepsilon}^{gLg^{-1}}$ and $\varepsilon\big(e_{\varepsilon}^{gLg^{-1}}\big)=1$. Consider\vspace{0.5pt} the element $\Lambda:= \prod_{g \in G} e_{\varepsilon}^{gLg^{-1}}$ in $K\hspace{-1pt}N$. That $L$ generates $N$ as a $G/N$-module means\vspace{2pt} that there is a subset $F$ of $Q$ such that $N=\prod_{g \in F}\hspace{1pt} gLg^{-1}$. This implies that $\Lambda$ is an integral in $K\hspace{-1pt}N$. Moreover, $\varepsilon(\Lambda)=1$. By the uniqueness of integrals, we must have $\Lambda=e_{\varepsilon}^N$. \par \vspace{2pt} Suppose now that $e_{\varepsilon}^L \in Y$. We have\vspace{1pt} seen in the proof of (iii) $\Rightarrow$ (i) in Proposition \ref{cond:unique} that if $e_{\varepsilon}^L \in Y$, then $Y$ is a Hopf order of $K\hspace{-0.69pt}G$\vspace{1pt} and $G \subset Y$. 
Therefore, $ge_{\varepsilon}^Lg^{-1} \in Y$ and $e_{\varepsilon}^N=\prod_{g \in G} \hspace{0.5pt} ge_{\varepsilon}^Lg^{-1} \in Y$. \par \vspace{2pt} Conversely, suppose that $e_{\varepsilon}^N \in Y$. Arguing as in the proof of (iii) $\Rightarrow$ (i) in Proposition \ref{cond:unique} with $Y \cap (K\hspace{-1pt}N)$, we get that $e_{\nu}^N=(\nu^{-1} \otimes id)\Delta(e_{\varepsilon}^N)$ belongs to $Y$ for every $\nu \in \widehat{N}$. The same argument of the proof of (ii) $\Rightarrow$ (iii) gives that $e_{\varepsilon}^L \in Y$. \par \vspace{2pt} Notice that the parts of the proof of Proposition \ref{cond:unique} we just invoked do not require that the action of $G/N$ on $N$ is faithful. \end{proof} The following example shows a finite non-abelian group $G$ containing an abelian subgroup $M$ of central type such that: \begin{enumerate} \item[(1)] $M$ is not included in a normal abelian subgroup of $G$. \vspace{2pt} \item[(2)] There is a Lagrangian decomposition $M \simeq L \times \widehat{L}$ such that $L$ is however included in a normal abelian subgroup of $G$. \end{enumerate} \begin{example}\label{example1} Let $p$ be a prime number and $\F_p$ the field with $p$ elements. Consider the subgroup $G$ of $\GL_{2n+2}(\F_p)$ consisting of matrices $(a_{ij})$ defined by the following conditions: $$a_{11}=a_{2n+2\hspace{1.5pt} 2n+2}=1 \text{ and } a_{i\hspace{0.25pt} 1} = a_{2n+2\hspace{1pt} j}=0 \text{ for } i= 2,\ldots, 2n+2; j=1,\ldots ,2n+1.$$ By forgetting the first and last rows and columns of every matrix, we make $G$ fit into the following short exact sequence: $$1\to \Gamma \to G\to \GL_{2n}(\F_p)\to 1.$$ The subgroup $\Gamma$ consists of matrices of the following form: \vspace{2mm} $$\hspace{1.5cm}\left(\begin{array}{c|ccc|c} 1 & a_{12} & \ldots & a_{1\hspace{0.75pt} 2n+1} & a_{1\hspace{0.75pt} 2n+2} \\ \hline 0 & & & & a_{2\hspace{0.75pt} 2n+2} \vspace{-2pt} \\ \vdots & & \hspace{0.3cm}\textrm{{\large {\it Id}}} & & \vdots \\ 0 & & & & a_{2n+1\hspace{1.5pt} 2n+2} \\ \hline 0 & 0 & \ldots & 0 & 1 \end{array}\right).\vspace{2mm}$$ Write $V$ for the abelian group $(\F_p^{2n},+)$ and set $V^*= \Hom_{\F_p}(V,\F_p)$. Take a dual basis $\{(x_i,y_i)\}_{i=1}^{2n} \subset V^* \times V$. We assign to every matrix $(a_{ij})$ in $\Gamma$ the following pair in $V^* \times V:$ $$\big(a_{2\hspace{1pt} 2n+2}\hspace{1pt} x_1+\ldots+a_{2n+1\hspace{1.5pt} 2n+2}\hspace{1pt} x_{2n}\hspace{1pt},\hspace{1pt} a_{12}\hspace{0.5pt}y_1+\ldots +a_{1\hspace{0.75pt} 2n+1}\hspace{0.5pt} y_{2n}\big).$$ This gives us another exact sequence: $$0\to \F_p \to \Gamma \to V^* \times V\to 0.$$ Write $z$ for a generator of $(\F_p,+)$. Then, $\Gamma$ has the following presentation: $$\Gamma=\langle x_i,y_i,z : x_i^p=y_i^p=z^p=[x_i,z]=[y_i,z]=[x_i,x_j]=[y_i,y_j]=1, [y_i,x_j]=z^{\delta_{i,j}}\rangle.$$ Here, we view the generators as inside $\Gamma$ as follows: $x_i$ is the elementary matrix with $1$ in the $(i+1,2n+2)$-entry, $y_i$ is the elementary matrix with $1$ in the $(1,i+1)$-entry, and $z$ is the elementary matrix with $1$ in the $(1,2n+2)$-entry. \par \vspace{2pt} Let $M$ be now the subgroup of $G$ generated by $x_{n+1},\ldots, x_{2n},y_1,\ldots,y_n$, which is clearly of central type. Consider a non-degenerate cocycle $\omega: \widehat{M} \times \widehat{M} \rightarrow K^{\times}$ with Lagrangian decomposition defined by $M=L \times \widehat{L} = \langle x_{n+1},\ldots, x_{2n}\rangle \times \langle y_1,\ldots, y_n\rangle$. 
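The matrix realization of the generators can be verified directly. The following minimal sketch (in Python; purely illustrative, with the toy values $p=3$ and $n=2$) checks the relations of the above presentation of $\Gamma$, in particular $[y_i,x_j]=z^{\delta_{i,j}}$:

\begin{verbatim}
import numpy as np
from itertools import product

p, n = 3, 2          # toy parameters; any prime p and n >= 1 work
N = 2 * n + 2

def E(i, j):         # identity plus elementary matrix, 1-indexed entries
    m = np.eye(N, dtype=int)
    m[i - 1, j - 1] = 1
    return m % p

x = {i: E(i + 1, N) for i in range(1, 2 * n + 1)}  # 1 in entry (i+1, 2n+2)
y = {i: E(1, i + 1) for i in range(1, 2 * n + 1)}  # 1 in entry (1, i+1)
z = E(1, N)                                        # 1 in entry (1, 2n+2)

inv = lambda m: np.round(np.linalg.inv(m)).astype(int) % p
comm = lambda a, b: a @ b @ inv(a) @ inv(b) % p
power = lambda m, k: np.linalg.matrix_power(m, k) % p

Id = np.eye(N, dtype=int)
for i, j in product(range(1, 2 * n + 1), repeat=2):
    assert (comm(y[i], x[j]) == power(z, 1 if i == j else 0)).all()
    assert (comm(x[i], x[j]) == Id).all() and (comm(y[i], y[j]) == Id).all()
    assert (comm(x[i], z) == Id).all() and (comm(y[i], z) == Id).all()
assert all((power(g, p) == Id).all() for g in [z, *x.values(), *y.values()])
print("presentation relations of Gamma verified")
\end{verbatim}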
\par \vspace{2pt} We have that $M$ is abelian and it is not contained in an abelian normal subgroup of $G$. (By multiplying with appropriate matrices, one can see that if $M$ were contained in such a group, then $\Gamma$ would be abelian as well, which yields a contradiction.) However, $L$ is contained in the abelian normal subgroup $N:= \langle x_1,\ldots,x_{2n} \rangle \langle z \rangle$. \par \vspace{2pt} Here, the roles of $L$ and $\widehat{L}$ can be interchanged. We also have that $\widehat{L}$ is contained in the abelian normal subgroup $\langle y_1,\ldots,y_{2n} \rangle \langle z \rangle$. Notice that the two Hopf orders constructed from each one of these normal subgroups are different. This can be seen when trying to express a primitive idempotent of $K\hspace{-1pt}\widehat{L}$ as an $R$-linear combination of the basis $\{e_{\nu}q: \nu \in \widehat{N}, q \in Q\}$. \end{example} All examples of integral Hopf orders in twisted group algebras that we know so far are constructed as in the proof of Theorem \ref{thm:main1}. This suggests the following question: \begin{question} Let $K$ be a number\vspace{1pt} field with ring of integers $R$. Let $G$ be a finite group and $J$ a twist for $K\hspace{-0.69pt} G$ arising from an abelian subgroup $M$ and a non-degenerate $2$-cocycle on $\widehat{M}$ with values in $K^{\times}$. Suppose that $(K\hspace{-0.69pt} G)_J$ admits a Hopf order $X$ over $R$. Is there a twist $\tilde{J}$ cohomologous to $J$ such that $\tilde{J}^{\pm 1} \in X \otimes_R X$? \end{question} The example of the Hopf order $X$ of $K\hspace{-1pt} S_4$ in \cite[Proposition 4.1]{CM3} shows that the initial twist $J$ used there does not satisfy $J^{\pm 1} \in X \otimes_R X$, see \cite[Remark 4.2]{CM3}. However, $J$ could be replaced by a cohomologous twist to achieve that condition. \section{Existence and uniqueness of Hopf orders in twists of certain semidirect products}\label{ex-uniq} This section also grows out of the above-mentioned example of an integral Hopf order for $(K\hspace{-1pt}S_4)_J$. Such a Hopf order had the additional property of being unique. In this section, we examine this property in the framework defined by the hypotheses of Theorem \ref{thm:main1}. For semidirect products of groups, we provide several conditions that guarantee the uniqueness of the Hopf order constructed there. \par \vspace{2pt} Let $M$ be a finite abelian group. Suppose that $M=LP$, where $L$ and $P$ are subgroups of $M$ such that $L\cap P=\{1\}$ and $L \simeq P$. Fix an isomorphism $f:P \rightarrow \widehat{L}$. It induces a non-degenerate\vspace{-0.75pt} skew-symmetric pairing $\beta_f:\widehat{L} \times \widehat{P} \to K^{\times}$ given by $\beta_f(\lambda,\rho)= \rho(f^{-1}(\lambda))$. Identifying $\widehat{M}$ with $\widehat{P}\widehat{L}$, we get\vspace{1.5pt} the following $2$-cocycle on $\widehat{M}$: $$\omega(\rho_1\lambda_1,\rho_2\lambda_2)=\beta_f(\lambda_1,\rho_2), \hspace{10pt} \lambda_i \in \widehat{L}, \rho_i \in \widehat{P} \hspace{1pt}\textrm{ for }\hspace{1pt} i=1,2.\vspace{4pt}$$ The twist $J$ in $K\hspace{-1pt} M \otimes K\hspace{-1pt} M$ afforded by $\omega$ takes the following form: $$J=\sum_{\lambda \in\widehat{L}} \hspace{1pt}\sum_{\rho \in \widehat{P}}\hspace{1pt} \omega(\lambda,\rho)e_{\lambda}\otimes e_{\rho}.$$ We call $J$ the \emph{twist arising from} $f$. The isomorphism $\widehat{f}:L \rightarrow \widehat{P}$ yields a Lagrangian decomposition $\widehat{M} \simeq L \times \widehat{L}$ such that $J$ can be expressed as in \eqref{Jrwt}.
\par \vspace{2pt} The following result, encompassed in Theorem \ref{thm:main1}, supplies more examples of integral Hopf orders in twisted group algebras: \begin{theorem}\label{thm:main2} Let $K$ be a (large enough) number field with ring of integers $R$. Consider the semidirect product $G := N \rtimes Q$ of two finite groups $N$ and $Q$, with $N$ abelian. Let $L$ and $P$ be abelian subgroups of $N$ and $Q$ respectively. Set $M=LP$. Let $\tau \in Q$. Suppose that $N,Q,L,P,$ and $\tau$ satisfy the following conditions: \begin{enumerate} \item[{\it (i)}] $L$ and $P$ are isomorphic and commute with one another. \vspace{2pt} \item[{\it (ii)}] $Q$ acts on $N$ faithfully. \vspace{2pt} \item[{\it (iii)}] $N=L \oplus (\tau \cdot L)$, where $N$ is written additively. \vspace{2pt} \item[{\it (iv)}] $N^{\tau} \neq \{1\}$. \vspace{2pt} \item[{\it (v)}] $\widehat{N}^{\sigma \tau}=\big(\widehat{N}^{\tau}\big) \cap \big(\widehat{N}^{\sigma \tau \sigma^{-1}}\big)=\{\varepsilon\}$ for every $\sigma \in P$ with $\sigma \neq 1$. \end{enumerate} Let $J$ be the twist in $K\hspace{-1pt}M \otimes K\hspace{-1pt}M$ arising from an isomorphism $f:P \rightarrow \widehat{L}$. Then, $(K\hspace{-0.9pt}G)_J$ admits a unique Hopf order over $R$. This Hopf order is generated, as an $R$-subalgebra, by the primitive idempotents of $K\hspace{-1.1pt}N$ and the elements of $Q$. \end{theorem} \begin{proof} We argued above that $L$ and $f$ provide a Lagrangian decomposition\linebreak $\widehat{M} \simeq L \times \widehat{L}$ such that $J$ can be expressed as in \eqref{Jrwt}. By hypothesis, $L$ is contained in $N$, and $N$ is a normal abelian subgroup of $G$. Theorem \ref{thm:main1} gives that $(K\hspace{-0.9pt} G)_J$ admits a Hopf order $X$ over $R$. Existence is thus ensured. Notice that, in this setting, the Hopf order $X$ constructed in the proof of Theorem \ref{thm:main1} is the $R$-submodule generated by the set $\{e_{\nu}^N q: \nu \in \widehat{N},q \in Q\}$. This set is a basis of $K\hspace{-0.9pt} G$ as a $K$-vector space. \par \vspace{2pt} For the uniqueness, let $Y$ be a Hopf order of $(K\hspace{-0.9pt} G)_J$ over $R$. The idea of the proof is to establish that $e_{\varepsilon}^L$ belongs to $Y$ and then get that $X=Y$ by applying Propositions \ref{cond:unique} and \ref{squeeze}. (Hypotheses (ii) and (iii) are needed to apply these propositions.) Showing that $e_{\varepsilon}^L \in Y$ will require much technical work. \par \vspace{2pt} Denote\vspace{-1pt} by $V$ the representation $\text{Ind}_Q^G(K)$ modulo the trivial representation $K$. We identify $\text{Ind}_Q^G(K):=K\hspace{-1pt}G \otimes_{K\hspace{-1pt}Q} K$ with $K\hspace{-1pt} N$ as a vector space. Set,\vspace{-2pt} for short, $\widehat{N}^{\bullet}= \widehat{N} \backslash \{\varepsilon\}$. The set $\big\{e_{\nu}: \nu \in \widehat{N}^{\bullet}\big\}$ is a basis of $V$ and the action of $G$ on $V$ is defined by $(nq) \hspace{1pt}\mbox{$\diamond \hspace{-4.255pt} \cdot$}\hspace{3pt} e_{\nu} = (q \triangleright \nu)(n)e_{q \triangleright \nu}$ for all $n \in N,q \in Q$.\vspace{1pt} Here, $(q \triangleright \nu)(n)=\nu(q^{-1}\cdot n)$. Consider\vspace{1pt} the character $\chi:K\hspace{-1pt} G \rightarrow K$ afforded by $V$.
It is not difficult to check that $\chi$ is given by: $$\chi\big(e^N_{\nu}q\big) = \begin{cases} \hspace{2pt}1\hspace{3pt} \text{ if }\hspace{2pt} \nu \neq \varepsilon \hspace{2pt} \text{ and }\hspace{2pt} q \triangleright \nu =\nu, \vspace{1pt} \\ \hspace{2pt}0\hspace{3pt} \text{ otherwise.} \end{cases}\vspace{2pt}$$ We know from Proposition \ref{vipcharacter} that the cocharacter $c_{\tau}:= |M| e^M_{\varepsilon} \tau e^M_{\varepsilon}$ of $(K\hspace{-1pt} G)_J$ belongs to $Y$. The element $E_{\tau}:=(\chi\otimes id)(\Delta_J(c_{\tau}))$ must belong to $Y$ as well in view of Proposition \ref{character}. A large part of the rest of the proof is devoted to finding an appropriate expression for $E_{\tau}$. We start with the following computation: \begin{align} E_{\tau} \hspace{5pt} & \hspace{-3pt}\overset{\text{\ding{172}}}{=}\hspace{4pt} |M| \hspace{1pt} \displaystyle{\sum_{\begin{subarray}{l} \lambda_1,\lambda_2 \in \widehat{L} \\ \rho_1,\rho_2 \in \widehat{P}\end{subarray}} \big(\chi \otimes id\big)\bigg(J\Big(e^L_{\lambda_1} e^P_{\rho_1}\tau e^L_{\lambda_2} e^P_{\rho_2} \otimes e^L_{\lambda_1^{-1}}e^P_{\rho_1^{-1}}\tau e^L_{\lambda_2^{-1}}e^P_{\rho_2^{-1}}\Big)J^{-1}}\bigg) \vspace{4pt} \nonumber \\ \hspace{5pt} & \hspace{-3pt}\overset{\text{\ding{173}},\text{\ding{174}}}{=}\hspace{4pt} |M| \hspace{1pt} \displaystyle{\sum_{\begin{subarray}{l} \lambda_1,\lambda_2 \in \widehat{L} \\ \rho_1,\rho_2 \in \widehat{P}\end{subarray}} \hspace{2pt} \frac{\omega(\lambda_1,\rho_1^{-1})}{\omega(\lambda_2,\rho_2^{-1})} \hspace{2pt} \chi \big(e^L_{\lambda_1} e^P_{\rho_1}\tau e^L_{\lambda_2} e^P_{\rho_2}\big) e^L_{\lambda_1^{-1}}e^P_{\rho_1^{-1}}\tau e^L_{\lambda_2^{-1}}e^P_{\rho_2^{-1}}} \vspace{5pt} \nonumber \\ \hspace{5pt} & \hspace{-3pt}\overset{\text{\ding{175}},\text{\ding{174}}}{=}\hspace{4pt} |M| \hspace{2pt}\displaystyle{\sum_{\begin{subarray}{l} \lambda \in \widehat{L} \\ \rho \in \widehat{P}\end{subarray}} \chi\big(e^P_{\rho}\tau e^L_{\lambda}\big)e^L_{\lambda^{-1}}e^P_{\rho^{-1}}\tau e^L_{\lambda^{-1}}e^P_{\rho^{-1}}.} \label{sumetau} \end{align} Here, we have used: \begin{enumerate} \item[\ding{172}] That $\Delta(e_{\varepsilon}^M)=\sum_{\lambda \in \widehat{L}} \sum_{\rho \in \widehat{P}} \hspace{2pt} e^L_{\lambda} e^P_{\rho} \otimes e^L_{\lambda^{-1}}e^P_{\rho^{-1}}$. \vspace{1pt} \item[\ding{173}] Definition of $J$. \vspace{1pt} \item[\ding{174}] That $\{e_{\lambda}^L\}_{\lambda \in \widehat{L}}$ and $\{e_{\rho}^P\}_{\rho \in \widehat{P}}$ are complete sets of orthogonal idempotents in $K\hspace{-0.9pt}L$ and $K\hspace{-0.9pt}P$ respectively. \vspace{1pt} \item[\ding{175}] That $\chi$ is a character, so $\chi(gh)=\chi(hg)$ for all $g,h \in G$, and that $L$ and $P$ commute with one another. \vspace{1pt} \end{enumerate} We\vspace{-2pt} next calculate the scalar $\chi\big(e^P_{\rho}\tau e^L_{\lambda}\big)$ occurring in the sum \eqref{sumetau}. We use that $e^L_{\lambda} = \sum_{\begin{subarray}{l} \nu \in \widehat{N} \\ \nu \vert_L = \lambda \end{subarray}} e^N_{\nu}$. We have: $$\chi\big(e^P_{\rho}\tau e^L_{\lambda}\big) \hspace{1pt} = \hspace{2pt} \frac{1}{|P|}\sum_{\sigma \in P} \sum_{\begin{subarray}{c} \nu \in \widehat{N} \vspace{0.75pt} \\ \nu \vert_L = \lambda \end{subarray}} \rho(\sigma^{-1})\chi\big(e^N_{\nu}\sigma \tau\big).$$ By hypothesis (v), $\sigma \tau$ does not fix non-trivial characters in $\widehat{N}$ when $\sigma \neq 1$.
Then, the above equality takes the form: \begin{equation}\label{Eq1} \hspace{-4mm}\chi\big(e^P_{\rho}\tau e^L_{\lambda}\big) \hspace{1pt} = \hspace{2pt} \frac{1}{|P|}\sum_{\begin{subarray}{c} \nu \in \widehat{N} \vspace{0.75pt}\\ \nu \vert_L = \lambda \end{subarray}} \chi(e^N_{\nu}\tau) \hspace{1pt} = \hspace{2pt} \frac{1}{|P|} \hspace{1.5pt} \# \big\{\nu \in \widehat{N}^{\bullet} \hspace{1pt}:\hspace{1pt} \nu\vert_L=\lambda \hspace{2pt}\text{ and }\hspace{2pt} \tau \triangleright \nu =\nu\big\}. \end{equation} We next determine the value of the right-hand side of this equation. Put $_{\tau}N=\{\tau \cdot n-n:n\in N\}$. The condition $\tau \triangleright \nu=\nu$ amounts to $\nu \vert_{_{\tau}\hspace{-1pt}N}=\varepsilon$. Recall that our hypothesis (iii) is that $N=L\oplus (\tau \cdot L)$. This implies that $N=L+{}_{\tau}N$. Observe that the conditions $\tau \triangleright \nu=\nu$ and $\nu\vert_L=\lambda$ define $\nu$ uniquely. And $\nu$ satisfies such conditions if and only if $\lambda\vert_{L\cap\hspace{1pt} {}_{\tau}\hspace{-1pt}N} = \varepsilon$. Equation \ref{Eq1} now reads as: $$\chi\big(e^P_{\rho}\tau e^L_{\lambda}\big) = \begin{cases} \hspace{2pt}\frac{1}{|P|}\hspace{3pt} \text{ if }\hspace{2pt} \lambda\vert_{L\cap\hspace{1pt} {}_{\tau}\hspace{-1pt}N} = \varepsilon \hspace{2pt}\text{ and } \hspace{2pt} \lambda\neq\varepsilon, \vspace{1pt} \\ \hspace{6pt}0\hspace{6pt} \text{ otherwise.} \end{cases}$$ We return to the sum \eqref{sumetau}. We make the changes of variables $\rho \mapsto \rho^{-1}$ and $\lambda \mapsto \lambda^{-1}$ and substitute the value of $\chi\big(e^P_{\rho}\tau e^L_{\lambda}\big)$. As before, we set $\widehat{L}^{\bullet}= \widehat{L} \backslash \{\varepsilon\}$. We get: $$E_{\tau} \hspace{1pt} = \hspace{1pt} \frac{|M|}{|P|}\hspace{1pt} \sum_{\rho\in \widehat{P}} \hspace{1pt}\sum_{\begin{subarray}{c} \lambda\in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda\vert_{L \cap \hspace{1pt} {}_{\tau}\hspace{-1pt}N}=\varepsilon \end{subarray}} e^L_{\lambda}e^P_{\rho} \tau e^L_{\lambda}e^P_{\rho}.$$ We simplify this expression further by using the equality: $$\sum_{\rho\in \widehat{P}} e^P_{\rho}\otimes e^P_{\rho} \hspace{1pt} = \hspace{1pt} \frac{1}{|P|}\sum_{\sigma \in P} \sigma \otimes \sigma^{-1}.$$ We obtain: \begin{equation}\label{Eq2} E_{\tau} \hspace{1pt} = \hspace{1pt} \sum_{\sigma \in P} \sigma \Bigg(\sum_{\begin{subarray}{c} \lambda\in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda\vert_{L \cap \hspace{1pt} {}_{\tau}\hspace{-1pt}N} =\varepsilon \end{subarray}} e^L_{\lambda} \tau e^L_{\lambda}\hspace{2pt}\Bigg)\sigma^{-1}. \end{equation} The next step is to simplify the sum in the brackets. Since $N=L\oplus (\tau \cdot L)$, we have an isomorphism $L\oplus L\simeq N$ given by $(l_1,l_2)\mapsto l_1+\tau\cdot l_2$. Under this identification, we can write the action of $\tau$ on $N\simeq L\oplus L$ in matrix form as: $$\tau= \begin{pmatrix} 0 & \alpha \\ id & \gamma \end{pmatrix}.$$ Here, $\alpha,\gamma:L \rightarrow L$ are the following group homomorphisms: $\alpha=\pi_L \circ \tau^2$ and $\gamma=\tau^{-1} \circ \pi_{\tau\cdot L} \circ \tau^2$, where $\pi_L:N \rightarrow L$ and $\pi_{\tau\cdot L}:N \rightarrow \tau\cdot L$ are the projections attached to the direct sum $N=L\oplus (\tau\cdot L)$. \par \vspace{2pt} We will identify $\widehat{N}$ with $\widehat{L}\oplus \widehat{L}$ as well. Bear in mind that $e^L_{\lambda} = \sum_{\lambda' \in \widehat{L}} e^N_{(\lambda,\lambda')}$.
We compute: $$\begin{array}{rl} e^L_{\lambda}\tau e^L_{\lambda} & \hspace{-2.5pt} = \hspace{3pt} {\displaystyle \sum_{\lambda_1,\lambda_2\in \widehat{L}} e^N_{(\lambda,\lambda_1)}\tau e^N_{(\lambda,\lambda_2)}} \vspace{4pt} \\ & \hspace{-2.5pt} = \hspace{3pt} {\displaystyle \sum_{\lambda_1,\lambda_2\in \widehat{L}} \tau e^N_{\tau^{-1} \triangleright (\lambda,\lambda_1)} e^N_{(\lambda,\lambda_2)}} \vspace{4pt} \\ & \hspace{-2.5pt} = \hspace{3pt} {\displaystyle \sum_{\lambda_1,\lambda_2 \in \widehat{L}} \tau e^N_{(\lambda_1,\lambda\circ \alpha + \lambda_1\circ \gamma)} e^N_{(\lambda,\lambda_2)}} \vspace{4pt} \\ & \hspace{-2.5pt} = \hspace{3pt} \tau e^N_{(\lambda,\lambda \circ (\alpha+\gamma))} \end{array}$$ We replace this in Equation \ref{Eq2}. We get: $$E_{\tau} \hspace{1pt} = \hspace{1pt} \sum_{\sigma \in P} \sum_{\begin{subarray}{c} \lambda\in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda\vert_{L \cap\hspace{1pt} {}_{\tau}\hspace{-1pt}N} =\varepsilon \end{subarray}} \sigma \tau e^N_{(\lambda,\lambda \circ (\alpha+\gamma))} \sigma^{-1}.$$ We next describe $L\cap {}_{\tau}N$. For $(l_1,l_2) \in L \oplus L$, we have: $$(\tau-id) \begin{pmatrix} l_1 \\ l_2\end{pmatrix} = \begin{pmatrix} \alpha(l_2)-l_1 \\ l_1+(\gamma-id)(l_2) \end{pmatrix}.$$ This element belongs to $L$ if and only if $l_1=(id-\gamma)(l_2)$. Then: $$(\tau-id) \begin{pmatrix} l_1 \\ l_2 \end{pmatrix} = \begin{pmatrix} (\alpha+\gamma-id)(l_2) \\ 0 \end{pmatrix}.$$ We thus see that $\lambda \in \widehat{L}$ is trivial on $L\cap {}_{\tau} N$ if and only if $\lambda \circ (\alpha+\gamma-id)=\varepsilon$; or, equivalently, $\lambda \circ (\alpha+\gamma)=\lambda$. In this case, the\vspace{1pt} character $(\lambda,\lambda)$ in $\widehat{N}$ is $\tau$-invariant. We show\vspace{1pt} that the set of characters $\lambda$ satisfying $\lambda \circ (\alpha+\gamma-id)=\varepsilon$ is not empty. Our hypothesis (iv) states that $N^{\tau}\neq \{1\}$. Then, there is a non-trivial $(l_1,l_2)$ such that: $$\begin{pmatrix} 0 & \alpha \\ id & \gamma \end{pmatrix} \begin{pmatrix} l_1 \\ l_2 \end{pmatrix} = \begin{pmatrix} l_1 \\ l_2 \end{pmatrix}.$$ This means that $\alpha(l_2) = l_1$ and $(\alpha+\gamma-id)(l_2)=0$. The non-triviality of $(l_1,l_2)$ implies that $\alpha+\gamma-id:L \to L$ is not invertible. Therefore, there is $\lambda \in \widehat{L}$ such that $\lambda \neq \varepsilon$ and $\lambda \circ (\alpha+\gamma-id)=\varepsilon$. \par \vspace{2pt} Summing this up, we finally arrive at the desired expression for $E_{\tau}$: $$E_{\tau} \hspace{1pt} = \hspace{1pt} \sum_{\sigma \in P} \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} \sigma \tau e^N_{(\lambda,\lambda)} \sigma^{-1}.$$ On the other hand, proceeding in a similar fashion with the cocharacter $c_{\tau^{-1}}:=|M|e^M_{\varepsilon}\tau^{-1}e^M_{\varepsilon}$ we obtain the following element: $$E_{\tau^{-1}} \hspace{1pt} = \hspace{1pt} \sum_{\sigma \in P} \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} \sigma \tau^{-1} e^N_{(\lambda,\lambda)} \sigma^{-1}.$$ Both elements, $E_{\tau}$ and $E_{\tau^{-1}}$, belong to $Y$. The product $E_{\tau}E_{\tau^{-1}}$ belongs to $Y$ as well. 
The next step is to calculate $E_{\tau}E_{\tau^{-1}}$: $$E_{\tau}E_{\tau^{-1}} \hspace{1pt} = \hspace{1pt} \sum_{\sigma_1,\sigma_2 \in P} \hspace{3pt} \sum_{\begin{subarray}{c} \lambda_1 \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda_1 \circ (\alpha+\gamma)=\lambda_1 \end{subarray}} \hspace{3pt} \sum_{\begin{subarray}{c} \lambda_2 \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda_2 \circ (\alpha+\gamma)=\lambda_2 \end{subarray}} \sigma_1 \tau e^N_{(\lambda_1,\lambda_1)}\sigma_1^{-1}\sigma_2 \tau^{-1} e^N_{(\lambda_2,\lambda_2)}\sigma_2^{-1}.$$ We use that the character $(\lambda_2,\lambda_2)$ is $\tau$-invariant. We have: $$E_{\tau}E_{\tau^{-1}} \hspace{1pt} = \hspace{1pt} \sum_{\sigma_1,\sigma_2 \in P} \hspace{3pt} \sum_{\begin{subarray}{c} \lambda_1 \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda_1 \circ (\alpha+\gamma)=\lambda_1 \end{subarray}} \hspace{3pt} \sum_{\begin{subarray}{c} \lambda_2 \in \widehat{L}^{\bullet} \vspace{0.3pt}\\ \lambda_2 \circ (\alpha+\gamma)=\lambda_2 \end{subarray}} \sigma_1 \tau e^N_{(\lambda_1,\lambda_1)} e^N_{(\sigma_1^{-1}\sigma_2) \hspace{0.5pt} \triangleright \hspace{0.5pt} (\lambda_2,\lambda_2)}\sigma_1^{-1}\sigma_2\tau^{-1}\sigma_2^{-1}.$$ The product\vspace{1.5pt} of the idempotents is non-zero if and only if $(\sigma_1^{-1}\sigma_2) \triangleright (\lambda_2,\lambda_2)=(\lambda_1,\lambda_1)$. This implies that $\big((\sigma_1^{-1}\sigma_2)^{-1} \tau (\sigma_1^{-1}\sigma_2)\big) \triangleright (\lambda_2,\lambda_2)=(\lambda_2,\lambda_2)$. Our hypothesis (v) states\vspace{0.5pt} that $\big(\widehat{N}^{\tau}\big) \cap \big(\widehat{N}^{\sigma \tau \sigma^{-1}}\big)=\{\varepsilon\}$ whenever $\sigma \neq 1$. Then,\vspace{1pt} the only non-zero contributions\vspace{1.5pt} to the previous sum occur when $\sigma_1=\sigma_2$. And,\vspace{1pt} in this case, $\lambda_1=\lambda_2$. We obtain: $$E_{\tau}E_{\tau^{-1}} \hspace{2pt} = \hspace{4pt} \sum_{\sigma \in P} \hspace{3pt} \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} \sigma e^N_{(\lambda,\lambda)} \sigma^{-1} \hspace{2pt} = \hspace{4pt} \sum_{\sigma \in P} \hspace{3pt} \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} e^N_{\sigma \hspace{0.5pt} \triangleright \hspace{0.5pt} (\lambda,\lambda)}.$$ Recall that $L$ and $P$ commute by hypothesis (i). For every $l \in L$, we have: $$\big(\sigma \triangleright (\lambda,\lambda)\big)(l) = (\lambda,\lambda)(\sigma^{-1}l\sigma) = (\lambda,\lambda)(l) = \lambda(l).$$ Then, for every $\sigma \in P$, there is $\mu(\sigma) \in \widehat{L}$ such that $\sigma \triangleright (\lambda,\lambda)=(\lambda,\mu(\sigma))$. Since $(\lambda,\lambda)$ is $\tau$-invariant and non-trivial, it cannot be $\sigma$-invariant for $\sigma \in P$ with $\sigma \neq 1$. Otherwise, $(\lambda,\lambda)$ would be $\sigma \tau$-invariant, contradicting\vspace{-1pt} our hypothesis (v). Hence, $\mu(\sigma_1) \neq \mu(\sigma_2)$ when $\sigma_1\neq \sigma_2$. This yields\vspace{0.5pt} that $\mu(\sigma)$ runs one-to-one over all $\widehat{L}$ when $\sigma$ runs in $P$. 
Now we get: $$E_{\tau}E_{\tau^{-1}} \hspace{3pt} = \hspace{2pt} \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} \hspace{3pt} \sum_{\sigma \in P} e^N_{\sigma \hspace{0.5pt} \triangleright \hspace{0.5pt} (\lambda,\lambda)} \hspace{4pt} = \hspace{2pt} \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} \hspace{4pt} \sum_{\lambda' \in \widehat{L}} e^N_{(\lambda,\lambda')} \hspace{4pt} = \hspace{2pt} \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{0.3pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} e_{\lambda}^L.$$ In particular, this gives that $E_{\tau}E_{\tau^{-1}} \in K\hspace{-1pt}L$. Then, $\Delta_J(E_{\tau}E_{\tau^{-1}})=\Delta(E_{\tau}E_{\tau^{-1}})$. Let $\varphi \in \widehat{L}$ be such that $\varphi\circ (\alpha+\gamma)=\varphi$. The next step\vspace{1.5pt} is to compute the element $(\varphi \otimes id)(\Delta_J(E_{\tau}E_{\tau^{-1}}))$. We have: $$\begin{array}{ll} (\varphi \otimes id)(\Delta_J(E_{\tau}E_{\tau^{-1}})) & =\hspace{5pt} {\displaystyle \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{1pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} \hspace{3pt} \sum_{\phi \in \widehat{L}} \varphi(e_{\phi}^L) e_{\phi^{-1}\lambda}^L} \vspace{5pt} \\ & =\hspace{5pt} {\displaystyle \sum_{\begin{subarray}{c} \lambda \in \widehat{L}^{\bullet} \vspace{1pt} \\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} e_{\varphi^{-1}\lambda}^L} \vspace{5pt} \\ & =\hspace{5pt} {\displaystyle \sum_{\begin{subarray}{c} \lambda \in \widehat{L}\backslash\{\varphi^{-1}\hspace{-0.5pt}\} \vspace{1pt}\\ \lambda \circ (\alpha+\gamma)=\lambda \end{subarray}} e_{\lambda}^L.} \end{array}$$ In the last equality, we used that the set $\{\lambda \in \widehat{L} : \lambda \circ (\alpha+\gamma)=\lambda\}$ is a subgroup of $\widehat{L}$. By Proposition \ref{sub}, $Y \cap (K\hspace{-1pt}L)$ is a Hopf order of $K\hspace{-1pt}L$ over $R$.\vspace{1.25pt} The element $(\varphi \otimes id)(\Delta_J(E_{\tau}E_{\tau^{-1}}))$ belongs to $Y \cap (K\hspace{-1pt}L)$ in light of Proposition \ref{character}.\vspace{1.25pt} We exploit the previous equality further with the following calculation: \par \vspace{2pt} $$\prod_{\begin{subarray}{c} \varphi \in \widehat{L}^{\bullet} \vspace{1pt} \\ \varphi \circ (\alpha+\gamma)=\varphi \end{subarray}}(\varphi \otimes id)(\Delta_J(E_{\tau}E_{\tau^{-1}})) \hspace{1pt} = \hspace{1pt} e_{\varepsilon}^L.$$ This yields that $e_{\varepsilon}^L \in Y \cap (K\hspace{-1pt}L)$. Thus, $e_{\varepsilon}^L \in Y$. Propositions \ref{cond:unique} and \ref{squeeze} finally apply to obtain $Y=X$. \end{proof} The following consequence of the uniqueness property is noteworthy: \begin{remark} The Hopf order $X$ constructed in the above proof is the unique Hopf order of $K\hspace{-1pt} G$ such that $J \in X \otimes_R X$. \end{remark} The number of orders in a semisimple Hopf algebra over a number field is finite by \cite[Theorem 1.8]{M}. Theorem \ref{thm:main2} brings to light a different behavior of the number of Hopf orders in twisted group algebras in comparison to that in group algebras of abelian groups. \par \vspace{2pt} For example, the number of Hopf orders of the group algebra of $C_p$, with $p$ prime, tends to infinity when the base number field is enlarged in a suitable way, see \cite[Theorem 3 and page 21]{TO} or the compilation in \cite[Section 3]{CM2}.
However, for twisted group algebras, Theorem \ref{thm:main2} shows that such a number can remain equal to one. This phenomenon already appeared in our study of the Hopf orders of Nikshych's Hopf algebra, see \cite[Theorem 6.15 and Remark 6.13]{CM2}. \section{Examples}\label{example} We illustrate Theorem \ref{thm:main2} with several examples: \subsection{The basic example} Let $\F_q$ be the finite field with $q$ elements. For $n \in \Na$, consider the field extension $\F_{q^n}/\F_q$. By fixing a basis of $\F_{q^n}$ as an $\F_q$-vector space, we get a linear isomorphism $\F_{q^n} \simeq \F_q^n$. The product of $\F_{q^n}$ induces an injective algebra homomorphism $\Phi:\F_{q^n}\to \M_n(\F_q)$. \par \vspace{2pt} Consider the groups $Q=\SL_{2n}(q)$ and $N=\F_q^{2n}$, where $Q$ acts on $N$ in the natural way. Write $B=\{v_1,\ldots, v_{2n}\}$ for the standard basis of $\F_q^{2n}$ as an $\F_q$-vector space. Let $L$ and $L'$ be\vspace{1.25pt} the subspaces of $\F_q^{2n}$ spanned by $\{v_1,\ldots, v_n\}$ and $\{v_{n+1},\ldots, v_{2n}\}$ respectively. Let $P$ be the following subgroup of $Q$: $$P=\bigg\{ \begin{pmatrix} \textrm{Id} & \hspace{-3pt}\Phi(a) \\ 0 & \hspace{-3pt}\textrm{Id} \end{pmatrix} : a \in \F_{q^n}\bigg\}.$$ Finally, we pick in $Q$ the block matrix $$\tau=\begin{pmatrix} \textrm{Id} & \hspace{-3pt}0 \\ \textrm{Id} & \hspace{-3pt}\textrm{Id} \end{pmatrix}.$$ We already have all the prescribed data to apply Theorem \ref{thm:main2}. It is easy to see that $N,Q,L,P,$ and $\tau$ satisfy the hypotheses (i)-(iv). Notice that $N^{\tau}=L'$. \par \vspace{2pt} We next justify that the hypothesis (v) is also satisfied. Firstly, we check that $\widehat{N}^{\sigma\tau} = \{\varepsilon\}$ for every $\sigma \in P$ with $\sigma \neq 1$. We write $\sigma$ as $$\sigma = \begin{pmatrix} \textrm{Id} & \hspace{-3pt}\Phi(a) \\ 0 & \hspace{-3pt}\textrm{Id} \end{pmatrix},$$ with $a \neq 0$. Then, the following determinant is nonzero: $$\det(\sigma\tau -1) = \det \begin{pmatrix} \Phi(a) & \hspace{-3pt}\Phi(a) \\ \textrm{Id} & \hspace{-3pt}0 \end{pmatrix} = \det (\Phi(-a)).$$ The matrix $\sigma\tau-1$ acts on $N$ and this action is invertible. Hence, the action of $\sigma\tau-1$ on $\widehat{N}$ is invertible as well. This yields that $\widehat{N}^{\sigma\tau} = \{\varepsilon\}$. \par \vspace{2pt} Secondly, we check that $\widehat{N}^{\tau} \cap \widehat{N}^{\sigma \tau \sigma^{-1}}=\{\varepsilon\}$ for $\sigma$ as before. We can write every element of $\widehat{N}$ as a pair $(\varphi,\psi)$, with $\varphi \in \widehat{L}$ and $\psi \in \widehat{L'}.$ One can easily\vspace{1pt} show that $\widehat{N}^{\tau} = \big\{(\varphi,\varepsilon): \varphi \in \widehat{L}\big\}$ and $\widehat{N}^{\sigma\tau \sigma^{-1}} = \big\{(\varphi, \Phi(-a)^{-1} \triangleright \varphi) : \varphi \in \widehat{L}\big\}$. Since $\Phi(-a)$ is invertible, we obtain that $\widehat{N}^{\tau}\cap \widehat{N}^{\sigma \tau \sigma^{-1}}=\{\varepsilon\}$. \par \vspace{2pt} We have verified that all the hypotheses of Theorem \ref{thm:main2} hold. We denote the resulting twisted Hopf algebra $\big(K(\F_q^{2n} \rtimes \SL_{2n}(q))\big)_J$ by $H_{q,n}$. The requirement that $K$ is large enough is satisfied in this example if $K$ contains a primitive $p$th root of unity. \par \vspace{2pt} In this way, we constructed an infinite family of semisimple Hopf algebras, parameterized by $q$ and $n$, which admit a unique Hopf order over $R$.
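\par \vspace{2pt} The determinant condition in hypothesis (v) is easy to probe numerically. The following minimal sketch (a sanity check only; it takes $q=p=2$ and $n=2$, and realizes $\Phi$ through the companion matrix of the polynomial $x^2+x+1$, which is irreducible over $\F_2$, so that signs play no role in the determinant identity) confirms that $\sigma\tau-1$ acts invertibly on $N$ for every $a\neq 0$:
\begin{verbatim}
import numpy as np
from itertools import product

p, n = 2, 2                      # q = p; F_{q^n} = F_4 = F_2[x]/(x^2+x+1)
d = 2 * n

C = np.array([[0, 1],            # companion matrix: multiplication by alpha
              [1, 1]])           # in the basis {1, alpha}, alpha^2 = alpha + 1

def Phi(c):
    """Multiplication by a = c[0] + c[1]*alpha, as an n x n matrix over F_p."""
    m = np.zeros((n, n), dtype=int)
    for k, ck in enumerate(c):
        m = (m + ck * np.linalg.matrix_power(C, k)) % p
    return m

Id = np.eye(n, dtype=int)
Z = np.zeros((n, n), dtype=int)
tau = np.block([[Id, Z], [Id, Id]])

def detp(m):                     # determinant mod p
    return round(np.linalg.det(m)) % p

for c in product(range(p), repeat=n):
    if any(c):                   # a != 0
        sigma = np.block([[Id, Phi(c)], [Z, Id]])
        m = ((sigma @ tau) % p - np.eye(d, dtype=int)) % p
        # det(sigma*tau - 1) = det(Phi(-a)) != 0 (over F_2, -a = a)
        assert detp(m) == detp(Phi(c)) != 0
print("sigma*tau - 1 invertible for all a != 0: hypothesis (v), first part")
\end{verbatim}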
\subsection{Variations of the basic example} \enlargethispage{1.5\baselineskip} Observe that the previous construction works similarly for $Q=\GL_{2n}(q)$. \par \vspace{2pt} We next show that it works for $Q=\Sp_{2n}(q)$ as well. Recall that $$\Sp_{2n}(q)=\bigg\{A \in \GL_{2n}(q): A^t\begin{pmatrix} 0 & \hspace{-3pt}\textrm{Id} \\ -\textrm{Id} & \hspace{-3pt}\textrm{0} \end{pmatrix} A= \begin{pmatrix} 0 & \hspace{-3pt}\textrm{Id} \\ -\textrm{Id} & \hspace{-3pt}\textrm{0} \end{pmatrix}\bigg\}.$$ Consider\vspace{0.25pt} the bilinear form $\mathcal{T}:\F_{q^n} \times \F_{q^n} \rightarrow \F_q$ defined by the trace of the field extension $\F_{q^n}/\F_q$. We know\vspace{0.5pt} that this form is non-degenerate and symmetric. (When $q$ is even, we also know that the map $\F_{q^n}\rightarrow \F_q, x \mapsto \textrm{Tr}_{\F_{q^n}/\F_q}(x^2)=\textrm{Tr}_{\F_{q^n}/\F_q}(x)^2$, is nonzero.) A result from\vspace{0.25pt} the theory of bilinear forms ensures the existence of an orthogonal\vspace{0.12pt} basis for $\F_{q^n}$ as an $\F_q$-vector space. Expressing the product of $\F_{q^n}$ with respect to this basis,\vspace{0.12pt} we get an algebra homomorphism $\Psi:\F_{q^n}\to \M_n(\F_q)$ with the additional property that $\Psi(a)$ is symmetric for all $a \in \F_{q^n}$. Now, set $$P=\Bigg\{ \begin{pmatrix} \textrm{Id} & \hspace{-3pt}\Psi(a) \\ 0 & \hspace{-3pt}\textrm{Id} \end{pmatrix} : a \in \F_{q^n}\Bigg\}.$$ One can easily check that $P \subset Q$ and $\tau \in Q$. The hypotheses (i)-(iv) of Theorem \ref{thm:main2} are likewise satisfied. For\vspace{-1pt} the hypothesis (v), one can argue as before to check the required conditions on the invariant subsets of $\widehat{N}$. Or,\vspace{1pt} alternatively, use that $\Psi(a)=D\Phi(a)D^{-1}$ for all $a \in \F_{q^n}$, where $D$ is an invertible matrix. \subsection{Composition of basic examples} Let $n_1,n_2,\ldots,n_k \in \Na$ be such that $n=\sum_i n_i$. For a fixed $q$, consider the family of Hopf algebras $H_{q,n_i}$ and the tensor product Hopf algebra $$\bigotimes_i H_{q,n_i} \simeq K\Big(\prod_i \big(\F_q^{2n_i} \rtimes \SL_{2n_i}(q)\big)\Big)_{\otimes J_i} \simeq K\Big(\F_q^{2n} \rtimes \Big(\prod_i \SL_{2n_i}(q)\Big)\Big)_{\Upsilon},$$ where we write $\Upsilon=\otimes J_i$. Let $X_{q,n_i}$ denote the unique Hopf order of $H_{q,n_i}$. Then, $Y:=\otimes_i \hspace{1pt} X_{q,n_i}$ is a Hopf order of $K\hspace{-1.25pt}\big(\F_q^{2n} \rtimes (\prod_i \SL_{2n_i}(q))\big)_{\Upsilon}$. Since $Y$ contains all the primitive idempotents of $K\F_q^{2n}$, we have that $\Upsilon \in Y\otimes_R Y$ and $Y$ is unique by Proposition \ref{cond:unique}. \par \vspace{2pt} The diagonal embedding $\prod_i \SL_{2n_i}(q) \hookrightarrow \SL_{2n}(q)$ induces an embedding of Hopf algebras $$\bigotimes_i H_{q,n_i} \hookrightarrow K\Big(\F_q^{2n} \rtimes \SL_{2n}(q)\Big)_{\Upsilon}.$$ By intersecting with $\otimes_i \hspace{1pt} H_{q,n_i}$, we see that any Hopf order $X$ of $K\hspace{-1.25pt}\big(\F_q^{2n} \rtimes \SL_{2n}(q)\big)_{\Upsilon}$ contains all the primitive idempotents of $K\F_q^{2n}$ and satisfies $\Upsilon \in X\otimes_R X$. Then, $X$ is also a Hopf order of the group algebra $K\hspace{-1.25pt}\big(\F_q^{2n} \rtimes \SL_{2n}(q)\big)$. Theorem \ref{thm:main1} and Proposition \ref{cond:unique} show that $K\hspace{-1.25pt}\big(\F_q^{2n} \rtimes \SL_{2n}(q)\big)_{\Upsilon}$ has a unique Hopf order. \section{An application}\label{app} In \cite[Section 5]{CM3}, we posed the following question: \begin{question} Let $G$ be a finite non-abelian simple group.
Let $\Omega$ be a non-trivial twist for $\Co G$ arising from a $2$-cocycle on an abelian subgroup of $G$. Can $(\Co G)_{\Omega}$ admit a Hopf order over a number ring? \end{question} The results obtained so far in \cite{CCM} and \cite[Theorem 3.3]{CM3} support a negative answer. In this final section, we provide one more instance of a partial negative answer, via $\PSL_{2n+1}(q)$. The strategy of proof deployed here is different from that in \cite{CCM} and \cite[Theorem 3.3]{CM3}. Here, we embed $H_{q,n}$ in a twist of $K \PSL_{2n+1}(q)$ and exploit the concrete form of the unique Hopf order of $H_{q,n}$. Comparing with \cite[Theorem 6.3]{CCM}, notice that the following proof is constructive and relies neither on the classification of the finite simple groups nor on that of the minimal simple groups. \begin{theorem}\label{PSL2n1} Let $K$ be a number field with ring of integers $R$. Let $p$ be a prime number and $q=p^m$ with $m \geq 1$. Assume that $K$ contains a primitive $p$th root of unity $\zeta$. There exists a twist $J$ for the group algebra $K \PSL_{2n+1}(q)$, arising from a $2$-cocycle on an abelian subgroup, such that $(K \PSL_{2n+1}(q))_J$ does not admit a Hopf order over $R$. \end{theorem} \begin{proof} Let $\pi:\SL_{2n+1}(q) \rightarrow \PSL_{2n+1}(q)$ denote\vspace{0.75pt} the canonical projection. As in Section \ref{example}, write $N=\F_q^{2n}$. We have an embedding $\iota:N \rtimes \SL_{2n}(q) \rightarrow \PSL_{2n+1}(q)$ given by: $$(v,A) \mapsto \pi \hspace{-2pt}\begin{pmatrix} A & v \\ 0 & 1 \end{pmatrix}.$$ We identify $N \rtimes \SL_{2n}(q)$ with its image through $\iota$ and view it as a subgroup of $\PSL_{2n+1}(q)$. The twist $J$ for $K(N \rtimes \SL_{2n}(q))$ used in the construction of $H_{q,n}$ allows us to twist $K\PSL_{2n+1}(q)$ as well. Thus, we can consider $H_{q,n}$ as a Hopf subalgebra of $(K\PSL_{2n+1}(q))_J$. \par \vspace{2pt} We will next prove that $(K\PSL_{2n+1}(q))_J$ does not admit a Hopf order over $R$ by contradiction. Suppose that $(K\PSL_{2n+1}(q))_J$ admits a Hopf order $Y$ over $R$. Then, $Y \cap H_{q,n}$ is a Hopf order of $H_{q,n}$ over $R$. We saw in Section \ref{example} that $H_{q,n}$ admits a unique Hopf order $X$ over $R$. So, $X=Y \cap H_{q,n}$. And we know that $J \in X \otimes_R X$. This implies that $J \in Y \otimes_R Y$ and that $Y$ is a Hopf order of $K\PSL_{2n+1}(q)$. \par \vspace{2pt} We write $u_{\pi(B)}$ for the element $\pi(B) \in \PSL_{2n+1}(q)$ when viewed in the group algebra $K\PSL_{2n+1}(q)$. For $r \in \F_q$, let $x_{ij}(r)$ denote the elementary matrix in $\SL_{2n+1}(q)$ with $r$ in the $(i,j)$-entry. Recall that $X$ (and consequently $Y$) contains the primitive idempotents of $K\hspace{-1pt} N$. Then, for a character $\chi:\F_q \to K^{\times}$ of $(\F_q,+)$, the element $$\frac{1}{q}\sum_{r \in \F_q} \chi(r) u_{\pi(x_{1\hspace{0.33pt}2n+1}(r))}$$ belongs to $Y$. Since $\PSL_{2n+1}(q) \subset Y$, we can multiply that element by elements of $\PSL_{2n+1}(q)$ and produce other elements in $Y$. In particular, $Y$ contains the element $$\frac{1}{q}\sum_{r \in \F_q} \chi(r) u_{\pi(x_{ij}(r))}.$$ Consider now $(\F_p,+)$ as a subgroup of $(\F_q,+)$. Summing over all characters $\chi$ such that $\chi \vert_{\F_p}$ is trivial, we get that $$\frac{1}{p}\sum_{r \in \F_p} u_{\pi(x_{ij}(r))}$$ belongs to $Y$. \par \vspace{2pt} Put $g_1=\pi(x_{12}(1)), g_2=\pi(x_{13}(1)),$ and $g_3=\pi(x_{23}(1))$. The subgroup generated by $g_1,g_2,$ and $g_3$ is a Heisenberg group of order $p^3$.
The previous arguments show that $Y$ contains the element $$\frac{1}{p^2}\sum_{r,s=0}^{p-1} u_{g_1^{r}g_3^{s}} = \Big(\frac{1}{p}\sum_{r \in \F_p} u_{\pi(x_{12}(r))}\Big)\Big(\frac{1}{p}\sum_{s \in \F_p} u_{\pi(x_{23}(s))}\Big).$$ Consider the $p$-dimensional irreducible representation of the Heisenberg group given by: $$\begin{array}{l} g_1 \mapsto E_{12} + E_{23} + \ldots + E_{p-1 p} + E_{p1}, \vspace{3pt} \\ g_2 \mapsto \textrm{diag}(\zeta,\zeta,\ldots,\zeta), \vspace{3pt} \\ g_3 \mapsto \textrm{diag}(1,\zeta,\ldots, \zeta^{p-1}). \end{array}$$ Here, $E_{ij}$ denotes\vspace{1.5pt} the matrix in $\M_p(K)$ with $1$ in the $(i,j)$-entry and zero elsewhere. A direct calculation reveals that $\frac{1}{p^2}\sum_{r,s} u_{g_1^{r}g_3^{s}}$ maps to $\frac{1}{p}\sum_{i=1}^{p} E_{i1}$. Since $\frac{1}{p^2}\sum_{r,s} u_{g_1^{r}g_3^{s}}$ belongs to $Y$, it satisfies\vspace{1.5pt} a monic polynomial with coefficients in $R$. However, $\frac{1}{p}\sum_{i=1}^{p} E_{i1}$ does not, which yields a contradiction. \end{proof} The statement for the complexified group algebra is established as in the proof of \cite[Corollary 2.4]{CM1}: \begin{corollary} There exists a twist $J$ for the group algebra $\Co \PSL_{2n+1}(q)$, arising from a $2$-cocycle on an abelian subgroup, such that the complex semisimple Hopf algebra $(\Co \PSL_{2n+1}(q))_J$ does not admit a Hopf order over any number ring. \end{corollary} \medskip
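\par \vspace{2pt} The key computation in the proof of Theorem \ref{PSL2n1} is easy to reproduce numerically. The following minimal script (a sanity check only, not part of the proof; any prime $p$ works) verifies the Heisenberg relation in the displayed representation, the value of the image of $\frac{1}{p^2}\sum_{r,s} u_{g_1^{r}g_3^{s}}$, and the failure of integrality:
\begin{verbatim}
import numpy as np

p = 5
zeta = np.exp(2j * np.pi / p)

# The p-dimensional irreducible representation given above:
g1 = np.roll(np.eye(p), -1, axis=0)        # E_{12} + E_{23} + ... + E_{p1}
g3 = np.diag(zeta ** np.arange(p))         # diag(1, zeta, ..., zeta^{p-1})

# Heisenberg relation: [g1, g3] = g2 = zeta * Id
comm = g1 @ g3 @ np.linalg.inv(g1) @ np.linalg.inv(g3)
assert np.allclose(comm, zeta * np.eye(p))

# Image of (1/p^2) sum_{r,s} u_{g1^r g3^s}:
A = sum(np.linalg.matrix_power(g1, r) @ np.linalg.matrix_power(g3, s)
        for r in range(p) for s in range(p)) / p**2
target = np.zeros((p, p)); target[:, 0] = 1.0 / p   # (1/p) sum_i E_{i1}
assert np.allclose(A, target)

# A satisfies A^2 = A/p: its minimal polynomial x^2 - x/p is not monic
# with integral coefficients, so A is not integral over R.
assert np.allclose(A @ A, A / p)
print("verified for p =", p)
\end{verbatim}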
\section{Introduction} \label{Introduction} Majorana fermions are zero energy spatially localized states that emerge in topological superconductors as an equal superposition of electrons and holes \cite{kitaev,read,wilczek}. Theoretical predictions for their presence in nanoscale devices, such as spin-orbit coupled wires in proximity to a superconductor \cite{sarma,oreg,weizmann}, were strongly supported by recent experiments \cite{kouwenhoven, heiblum, marcus}. One of the significant properties of a superconducting island hosting Majorana modes is its ground state degeneracy. However, when such an island is in the Coulomb blockade regime this degeneracy is lowered and replaced by two sectors of distinct charge parity, each of which has either an even or an odd number of electrons \cite{fu,xu}. Recent theoretical progress \cite{ BeriCooper,AltlandEgger,Beri,Affleck13,AltlandBeryEggerTsvelik14,Tsvelik,numerics,ZazunovAltlandEgger14,Eriksson14a,Eriksson14, Kashuba15,Pikulin16,Meidan16,Plugge16,mora,michaeli,beek,vanHeck}, particularly the works of B{\'e}ri-Cooper \cite{BeriCooper} and Altland-Egger \cite{AltlandEgger}, have paved the way for the study of such Majorana islands, predicting the emergence of a ``topological Kondo effect'' in the Coulomb valley regime. When the number of electrons in the island is fixed, and the number of lead-coupled Majorana modes exceeds two, $M>2$, the Majorana degrees of freedom non-locally encode an effective quantum impurity spin. This ``spin'' collectively interacts with the leads' electrons, leading to a correlated state characterized by non-Fermi liquid (NFL) behavior that is observable in the electrical conductance. Recently, it was shown that this behavior emerges at much higher temperatures near charge degeneracy points \cite{michaeli,mora}. Motivated by this and following Ref.~\onlinecite{Eriksson14}, we ask the question: what are the consequences of breaking charge conservation on the properties of the system? Our work deals with multi-terminal charge transport through a Majorana island connected to $M>2$ external leads via Majorana tunneling junctions. In addition, the island is coupled via a Josephson junction to a grounded bulk superconductor (SC), see Fig.~\ref{fig1}. As the wires are sufficiently long, we assume that the Majoranas have no direct coupling. While the charge in the island is tuned by a gate voltage, the Josephson coupling allows charge fluctuations in units of $2e$ between the island and the bulk superconductor. The aforementioned topological Kondo state is known to be completely stable against lead asymmetry~\cite{BeriCooper} and gate voltage detuning~\cite{michaeli,mora}. In this paper, however, we show that the Josephson coupling gives rise to an instability of the topological Kondo fixed point. In a charge conserving system and far from a charge degeneracy point, tunneling events between the leads and the island are of the form $\psi_i^\dagger \psi_j$ which merely describes the exchange of charge $e$ between leads $i$ and $j$. Here $\psi_i$ ($\psi_i^\dagger$) annihilates (creates) an electron in lead $i$. However, the lack of charge conservation permits tunneling events of \emph{two} electrons from the leads to the island and then to the bulk SC, leading to anomalous terms of the form $\psi_i \psi_j$ (or $\psi_i^\dagger \psi_j^\dagger$).
As our analysis shows, these terms can be identified with channel anisotropy in the topological Kondo Hamiltonian, and as a result the system is driven towards a new strong-coupling fixed point. The equal combination of these tunneling events, which effectively emerges at low temperature, leads to a correlated Kondo state involving only one Majorana field $\psi_i\pm\psi_i^\dagger$ from each lead. For $M=3$, for example, this is equivalent \cite{Tsvelik1, Tsvelik2} to two-channel Kondo (2CK) physics, allowing for new ways to explore its non-Fermi liquid properties. The full phase diagram of the system can be mapped as a function of the ratio of the Josephson energy $E_J$ to the charging energy $E_c$ and of the temperature $T$. As we show, at $E_J \neq 0$, where the system flows to a new fixed point, the zero-temperature conductance is $\frac{2e^2}{h}$ and is associated with Andreev reflection. Furthermore, low temperature corrections to the conductance have a universal power-law dependence $T^\alpha$ with $\alpha=1$ for $M=3$. This provides an experimental signature of the non-Fermi liquid behavior of the 2CK state. On the other hand, we find $\alpha=2$ for all $M>3$. The rest of the paper is organized as follows: in Sec.~\ref{Model} we present a detailed formulation of the model. Sec.~\ref{Phase diagram} presents the emergent instability of the Kondo fixed point and the parity interaction, and includes the phase diagram of the system. In Sec.~\ref{Low energy conductance} we deal with the calculation of the low energy conductance, its low-$T$ corrections, and the effect of interactions in the leads. We summarize in Sec.~\ref{Summary}. \begin{figure}[pt] \centering \includegraphics[scale=0.3]{fig1.pdf} \caption{Schematics of the device: superconducting island with spatially localized Majorana modes and with charging energy $E_c$, coupled to a bulk superconductor and to $M$ (here $M=3$) external normal leads. A gate voltage tunes the number of electrons in the island. } \label{fig1} \end{figure} \section{Model} \label{Model} Our system is described by the Hamiltonian $H = H_{\rm{c}}+H_{\rm{J}}+ H_{\rm{leads}} +H_{\rm{T}}$. The island is coupled to a gate controlling both its charging energy $E_c$ and its average occupancy, leading to the charging Hamiltonian \begin{align} H_{\rm{c}} = E_c \left(\mathcal{N}-n_g\right)^2, \end{align} where $\mathcal{N}$ is the electron number operator relative to the gate voltage parameter $n_g$. In conventional superconductors, the ground state is expected to have an even number of electrons, due to the superconducting energy gap that an unpaired electron would have to pay. However, the zero energy Majorana modes hosted by the island allow odd occupancy without paying this energy. By tuning the gate voltage the number of electrons in the ground state is fixed to be the integer which is closest to $n_g$, denoted by $N_0$. However, when $n_g$ takes half-integer values the ground state is degenerate and consists of two states whose charges differ by $e$.\\ The Josephson coupling between the island and the bulk superconductor enables Cooper-pair tunneling described by \begin{align} H_{\rm{J}} = -E_J\cos(\phi), \label{HJ} \end{align} where $E_J$ is the Josephson energy. The superconducting phase of the island $\phi$ is canonically conjugate to its electron number $\mathcal{N}$ and satisfies the commutation relation $[\phi,\mathcal{N}]=2i$.
Tunneling of a Cooper pair into the island conserves its parity but changes its charge by $\pm 2e$.\\ In addition, single-electron tunneling between the island and external leads is described by \begin{align} H_{\rm{T}} =\sum_{j=1}^M t_j \psi_j^\dagger(0) \gamma_j e^{-i \phi/2}+{\rm{h.c.}}, \label{HT} \end{align} where $\psi^\dagger_j(0)$ is a creation operator of a single electron at the endpoint of lead $j$. The neutral Majorana operators $\gamma_j$ anti-commute and satisfy $\gamma_j^2=1$. We assume that the Majorana zero modes are far apart and have no direct coupling as the topological quantum wires are sufficiently long. Here, $e^{\pm i \phi/2}$ changes the charge of the island by $\pm e $, i.e., by half of a Cooper pair, and, in contrast to the Josephson coupling, flips its parity. \\ The leads' electrons are modeled by a one dimensional Hamiltonian of non-interacting chiral fermions \begin{align} H_{\rm{leads}} =-i\sum_{j=1}^M \int_{-\infty}^{\infty}{\frac{\rm{dx}}{2\pi} v_F \psi_j^\dagger \partial_x \psi_j}, \label{Hleads} \end{align} where $\{\psi_i(x),\psi^\dagger_j(x') \}=2\pi\delta_{ij}\delta(x-x') $. When $E_J=0$, the model reduces to the extensively studied topological Kondo model. Below, we will study the effect of the Josephson term on the properties of the system. Furthermore, two distinct situations emerge depending on the gate voltage: when $n_g \approx \frac{1}{2}+{\rm{integer}}$, the system is close to a charge degeneracy point and we refer to this as an ``on-resonance" situation, in contrast to the off-resonance case which occurs otherwise. \section{Phase diagram} \label{Phase diagram} In this section we map out the phase diagram as a function of $T$ and the ratio $E_J/E_c$. This will allow us to connect the charging dominated regime $E_c \gg E_J$, where most previous work has been done, with the Josephson-dominated regime, $E_J \gg E_c$. \subsection{Charging dominated regime} As a first step, in this subsection we consider the effect of the Josephson coupling as a perturbation and analyze the stability of the Kondo fixed point of the system for $E_J\ll E_c$. To keep the presentation simple, consider a gate voltage very close to the off-resonance point, $n_g \approx \rm{integer}$ (away from charge degeneracy points), and also equal lead couplings, $t_1=t_2=\ldots=t_M=t$. We apply perturbation theory around zero Josephson coupling, where the unperturbed ground state has a fixed number of electrons $N_0$ (Note that for $M>2$ this is not a unique state, since there are $2^{M/2-1}$ states with fixed charge $N_0$.) However, since $H_J$ does not conserve charge, the true ground state cannot have a fixed number of electrons and instead, it consists of a superposition of different charge states \begin{align} |gs\rangle\approx|N_0\rangle+\frac{E_J}{4E_c}(|N_0+2\rangle+|N_0-2\rangle)+\mathcal{O} \left( \frac{E_J^2}{E_c^2 } \right). \end{align} The parity sector subspace spanned by the two states with $ N_0\pm 1 $ electrons in the island is described by a $2\times 2$ Hamiltonian \begin{align} H = \left( \begin{array}{cc} E_c & -E_J \\ -E_J & E_c \\ \end{array} \right). \end{align} Hence, the two excited states $|ex+\rangle$ and $|ex-\rangle$ are superpositions of two charge states \begin{align} |ex\pm\rangle\approx\frac{1}{\sqrt{2}}(|N_0+1\rangle\pm|N_0-1\rangle)+\mathcal{O} \left( \frac{E_J}{E_c } \right), \end{align} with energies $E_c\mp E_J$.
For weak lead-island coupling $\Gamma=2\pi t^2\nu\ll E_c$, where $\nu$ is the density of states in the leads, the low energy physics of the system is governed by virtual transitions from the ground state to higher charge states. Using a Schrieffer-Wolff transformation, we perform leading order perturbation theory in the lead coupling $t$ to obtain an effective Hamiltonian $H_{\rm{eff}}=\langle gs|H_{\rm{T}}\frac{1}{E-H_c-H_J}H_{\rm{T}}|gs\rangle$. Importantly, the Josephson coupling allows virtual transitions $|N_0+1\rangle\leftrightarrow|N_0-1\rangle$ or $|N_0\rangle\leftrightarrow|N_0\pm 2\rangle$. This gives rise to terms of the form $\sim\psi_{i} \psi_{j}$ (or $\sim\psi_{i}^{\dagger}\psi_{j}^{\dagger}$), where \emph{two} electrons tunnel into (or out of) the island. Taking this into account, the resulting low energy effective Hamiltonian is \begin{align} \label{HEFF0} H_{\rm{eff}}=\frac{t^2}{E_{c}}\sum_{i\neq j}\gamma_{j}\gamma_{i}\left[\psi_{i}^{\dagger}\psi_{j}-\frac{3E_{J}}{2E_{c}}\psi_{i}^{\dagger}\psi_{j}^{\dagger}\right]+\rm{h.c}. \end{align}\ Expressing the fermionic lead operators as $\psi_j(x)=\frac{1}{\sqrt{2}}(\eta_j(x)+i\rho_j(x))$, where $\rho_j(x)=\rho^\dagger_j(x)$ and $\eta_j(x)=\eta^\dagger_j(x)$ are Majorana operators, $H_{\rm{eff}}$ takes the form \begin{align} \label{HEFF1} H_{\rm{eff}}=J_{\eta}\sum_{i\neq j}\gamma_{j}\gamma_{i}\eta_{i}(0)\eta_{j}(0) + J_{\rho}\sum_{i\neq j}\gamma_{j}\gamma_{i}\rho_{i}(0)\rho_{j}(0), \end{align} where $J_{\rho/\eta}=\frac{t^2}{E_c}(1\pm\frac{3E_{J}}{2E_{c}})$. Explicitly, $\psi_{i}^{\dagger}\psi_{j}=\frac{1}{2}(\eta_{i}\eta_{j}+\rho_{i}\rho_{j})+\frac{i}{2}(\eta_{i}\rho_{j}-\rho_{i}\eta_{j})$ and $\psi_{i}^{\dagger}\psi_{j}^{\dagger}=\frac{1}{2}(\eta_{i}\eta_{j}-\rho_{i}\rho_{j})-\frac{i}{2}(\eta_{i}\rho_{j}+\rho_{i}\eta_{j})$; upon summing over $i\neq j$ and adding the Hermitian conjugate, the mixed $\eta\rho$ terms cancel, leading to Eq.~(\ref{HEFF1}). One can see that each Majorana sector $\eta$ and $\rho$ in the leads provides a separate screening channel operator $\eta_{i}(0)\eta_{j}(0)$ or $\rho_{i}(0)\rho_{j}(0)$ coupled to the impurity degree of freedom $\gamma_{j}\gamma_{i}$. Each of these sets of operators separately satisfies an $SO(M)_1$ Kac-Moody algebra~\cite{Tsvelik1}. At $E_J=0$, the effective screening channel operator is the sum $\eta_{i}(0)\eta_{j}(0)+\rho_{i}(0)\rho_{j}(0)$, hence this Hamiltonian is equivalent to the $SO(M)_2$ topological Kondo Hamiltonian \cite{Tsvelik1}. However, Eq.~(\ref{HEFF1}) shows that the Josephson coupling $E_J$ results in channel anisotropy $\Delta J \equiv J_{\rho}-J_{\eta}=\frac{3E_J t^2}{E_c^2}$, breaking the $SO(M)_2$ symmetry down to $SO(M)_1\times SO(M)_1$. For brevity, we shall refer to the topological Kondo phase as the $SO(M)_2$ phase, and to the low energy phase stabilized by the Josephson coupling as the $SO(M)_1$ phase. We will also refer to the latter as the Andreev NFL phase, due to its conductance properties, see below. A related destabilization of the $SO(M)_2$ fixed point was recently reported for $M=3$ (corresponding to a crossover from 4-channel to 2CK states) in a spin chain context in Ref.~\onlinecite{giulianoa}. As is well known, multichannel Kondo effects are destabilized by channel anisotropy; however, while in topological Kondo setups lead-anisotropy remarkably does not yield any channel anisotropy, we see that the Josephson coupling does lead to channel anisotropy at the topological Kondo fixed point, which is hence unstable. We may start drawing the charging dominated side of the phase diagram of the device, see Fig.~\ref{phasediagram}. The effect of channel anisotropy on the Kondo fixed point can be extracted by identifying a relevant operator with scaling dimension $\Delta=2/M$~\cite{remark}, which corresponds to tunneling of charge $2e$. While in a charge conserving system this operator is disregarded, the presence of the Josephson coupling indeed allows it.
Given that this operator involves degrees of freedom from the leads, we expect its dimensionless coupling constant to be $g_0 \sim \frac{\Gamma E_J}{E_c^2}$ to leading order in $E_J$ and in $\Gamma$. Consequently, the system is driven towards a new fixed point, no matter how small $E_J$ initially is. Using standard renormalization group (RG) analysis~\cite{Pustilnik04}, the crossover to strong coupling takes place at the scale \begin{align} \label{Ts} T^*=D_0 g_0^{\frac{M}{M-2}}, \end{align} where $D_0$ is the initial electron bandwidth. Indeed, the dimensionless coupling flows as $g(D)=g_0(D_0/D)^{1-\Delta}$ with $\Delta=2/M$, reaching strong coupling, $g\sim 1$, at the scale of Eq.~(\ref{Ts}). This energy scale $T^*$, which vanishes as $E_J\rightarrow 0 $, is contrasted in Fig.~\ref{phasediagram} with the finite Kondo temperature $T_K\sim E_c e^{-\frac{E_c}{\Gamma}}$ signifying the crossover to the low temperature $SO(M)_2$ topological Kondo phase. As our analysis shows, a second crossover necessarily occurs at lower temperature $T<T^*$ into an $SO(M)_1$ phase whose properties will be discussed in Sec.~\ref{Low energy conductance}. \\ \begin{figure}[pt] \centering \includegraphics[scale=0.38]{phasediagram_v3.pdf} \caption{Schematic phase diagram away from a charge degeneracy point. Different sequences of crossovers occur upon decreasing temperature, depending on $E_J/E_c$. In the charging dominated regime, one obtains first a crossover from free fermion behavior, where the leads are decoupled from the island, to the topological Kondo state, on energy scale $T_K$. This crossover is followed by another one to an Andreev type NFL state, described by $SO(M)_1$, below energy scale $T^*$ given in Eq.~(\ref{Ts}). In the Josephson dominated regime where still $U > \Gamma$, there is a direct crossover from free fermions to the new $SO(M)_1$ phase. When the parity interaction, $U$, is smaller than the tunnel width $\Gamma$, there is first a crossover from free fermions to a non-interacting Majorana resonant state, followed by a second crossover to the $SO(M)_1$ state below $U$. Modification of the phase diagram at a charge degeneracy point is discussed in the text.} \label{phasediagram} \end{figure} \subsection{Parity interaction} In the previous subsection we essentially integrated out the bulk SC and generated effective anomalous couplings between the leads, see Eq.~(\ref{HEFF0}). We now follow the same strategy, this time keeping an internal degree of freedom of the island which can be identified with its parity. Keeping this degree of freedom explicitly is crucial either in the Josephson dominated regime $E_J \gg E_c$ or near resonance $n_g \approx N_0+ \frac{1}{2}$. For the moment, let us focus only on the Hamiltonian of the island together with the Josephson coupling to the bulk SC, $H_{\rm{c}}+H_{\rm{J}}$. As already noted, in the absence of Majorana fermions, the island is a conventional superconductor which is allowed to have only an even number of electrons $\mathcal{N}={\rm{even}}$. The Hamiltonian $H_{\rm{c}}+H_{\rm{J}}$ in this case has a discrete set of eigenstates labeled $m=0,1,2,\ldots$ (which can be expressed in terms of Mathieu functions, see Ref.~\onlinecite{Koch}), depending on the values of $E_J$, $E_c$, and $n_g$, see Fig. ~\ref{fig2}, where these levels are shown as red lines.
Furthermore, the Majorana modes give rise to a degeneracy of $2^{N/2-1}$ in each parity sector (odd or even), where $N \ge M$ is the number of Majorana fermions. We label the energy eigenstates of $H_{\rm{c}}+H_{\rm{J}}$ as $E^p_{m}$, where $m=0,1,\ldots$ and $p=+$ or $-$ for the even and odd sectors of parity respectively. We shall consider temperatures $T \ll \max\{E_c,E_J \}$. This implies temperatures smaller than the excitation gaps of $H_c+H_J$ inside each parity sector, but it allows for two possibilities: (i) a low energy subspace with unique parity. As seen in Fig.~\ref{fig2}, this emerges in the charge dominated regime $E_c \gg E_J$ and away from resonance. This situation was considered in the previous subsection. (ii) Quasi-degenerate low energy states with parity $p=\pm$, realized in the Josephson dominated regime $E_J \gg E_c$, or near a resonance $n_g \approx N_0+\frac{1}{2}$. In the latter case, the Hamiltonian can be projected down to the subspace of the two lowest eigenstates $|+\rangle$ ,$|-\rangle$ of $H_{\rm{c}}+H_{\rm{J}}$, with eigenvalues $E^+_{0}$, $E^-_{0}$ respectively. Denoting them by a pseudo-spin $\sigma^z |\pm\rangle= \pm |\pm\rangle$, the operator $\mathcal{P}=|+\rangle \langle +|+|-\rangle \langle -|$ projects the system to the manifold of these two states. In general, there is a finite energy difference between these states $U=E^+_{0}-E^-_{0}$, see Fig.~\ref{phasediagram}, which is referred to as the \emph{parity} interaction. Thus, the projected Hamiltonian takes the form \begin{align} \label{HU} \mathcal{P}H\mathcal{P}= \sum_{j=1}^M \left[t_j\psi_j^\dagger(0)\gamma_j(A\sigma^-+B\sigma^+)+ \rm{h.c}\right]-\frac{U}{2}\sigma^z, \end{align} where the matrix elements $A=\langle -|e^{-i\phi/2}|+\rangle$ and $B=\langle +|e^{-i\phi/2}|-\rangle$, as well as the parity interaction $U$, are determined by $E_c$, $E_J$, and $n_g$. (Here we return to general $t_j$, which are not necessarily equal.) In the following, we calculate them explicitly for the various regimes. In fact, these matrix elements can be evaluated using Mathieu functions as described in Ref.~\onlinecite{Koch}, see Fig.~\ref{fig_AB}. In the case of $B=0$, corresponding to $E_J=0$, this model was considered in Ref.~\onlinecite{michaeli}. \begin{figure}[pt] \centering \includegraphics[scale=0.26]{fig2.pdf} \caption{Energy levels of $H_{\rm{c}}+H_{\rm{J}}$ for $\mathcal{N}$ even (red) or $\mathcal{N}$ odd (blue). There are two lowest energy states of opposite parity, whose energy splitting $U$ is controlled by the gate voltage $n_g$, see Eqs. (\ref{HP}) and (\ref{HUEC}). Here $E_{01}=\rm{min}_{n_g}(E^+_1-E^+_0)$. } \label{fig2} \end{figure} We begin discussing the Josephson dominated regime, in which, as shown by Ref.~\onlinecite{heck}, the parity interaction emerges from exponentially suppressed tunneling of the phase field. Generally, the two terms $H_{c}$ and $H_J$ compete, as the first tends to fix the number of Cooper pairs, while the second favors charge fluctuations. When $E_J$ is the largest energy scale, the superconducting phase $\phi$ tends to be locked in one of the minima of the cosine term in Eq.~(\ref{HJ}), and thus effectively behaves as a particle in a harmonic potential. However, tunneling events between different minima (instantons) where $\phi \rightarrow \phi + 2\pi$ lead to the effective low energy parity interaction~\cite{heck} \begin{align} U\sim (E_c E_J^3)^{1/4}e^{-\sqrt{8E_J/E_c}}\cos(\pi n_g). \label{HP} \end{align} Note that $U=0$ when $n_g$ reaches a degeneracy point.
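These estimates are straightforward to check numerically. Below is a minimal sketch (for illustration only) that diagonalizes $H_{\rm{c}}+H_{\rm{J}}$ in a truncated charge basis; it assumes the convention $\langle \mathcal{N}\pm2|H_{\rm{J}}|\mathcal{N}\rangle=-E_J/2$ (factors of two in intermediate perturbative coefficients depend on this choice) and takes $e^{-i\phi/2}$ to lower the island charge by one. It returns the parity splitting $U$ and the magnitudes of the matrix elements $A$ and $B$ (their overall signs are a gauge choice):
\begin{verbatim}
import numpy as np

def ground(Ec, EJ, ng, parity, ncut=60):
    """Lowest eigenpair of H = Ec(N - ng)^2 - (EJ/2) sum(|N+2><N| + h.c.)
    restricted to charge states N of fixed parity (0: even, 1: odd)."""
    N = np.arange(-ncut + parity, ncut + 1, 2)
    H = np.diag(Ec * (N - ng) ** 2) \
        - (EJ / 2.0) * (np.eye(len(N), k=1) + np.eye(len(N), k=-1))
    vals, vecs = np.linalg.eigh(H)
    return vals[0], vecs[:, 0]

Ec = 1.0
for EJ in (0.2, 2.0, 20.0):
    # Parity splitting at ng = 0.25, vs. the instanton formula
    # (the latter is valid only for EJ >> Ec, up to an O(1) prefactor):
    Ep, _ = ground(Ec, EJ, 0.25, parity=0)
    Em, _ = ground(Ec, EJ, 0.25, parity=1)
    U = Ep - Em
    U_inst = (Ec * EJ**3) ** 0.25 * np.exp(-np.sqrt(8 * EJ / Ec)) \
             * np.cos(np.pi * 0.25)
    # Matrix elements of e^{-i phi/2} at the resonance ng = N0 - 1/2
    # (N0 = 0), where the charging-dominated limit gives A ~ 1, B ~ EJ/Ec:
    Ep, vp = ground(Ec, EJ, -0.5, parity=0)
    Em, vm = ground(Ec, EJ, -0.5, parity=1)
    A = abs(np.dot(vp[1:], vm))    # <-| e^{-i phi/2} |+>
    B = abs(np.dot(vp[:-1], vm))   # <+| e^{-i phi/2} |->
    print(f"EJ={EJ:5.1f}: U={U:+.3e} (instanton ~{U_inst:.1e}), "
          f"A={A:.3f}, B={B:.3f}")
\end{verbatim}
In the Josephson dominated regime the printed $A$ and $B$ approach unity with an exponentially small difference, consistent with Fig.~\ref{fig_AB}.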
When a single electron tunnels from one of the leads into the island, the parity flips, as realized by the operator $\sigma^x$. One can show that in Eq.~(\ref{HU}) $A=B=1$ up to exponentially small corrections, such that the low energy Hamiltonian reads \begin{align} \label{HU2} \mathcal{P}H\mathcal{P}= \sum_{j=1}^M \left(t_j\psi_j^\dagger(0)\gamma_j \sigma^x+ \rm{h.c}\right)-\frac{U}{2}\sigma^z. \end{align} Assuming for simplicity real $t_j$, this becomes \begin{align} \label{HU3} \mathcal{P}H\mathcal{P}=\sum_{j=1}^M \sqrt{2} i t_j \rho_j(0) \gamma_j - \frac{U}{2} \sigma^z. \end{align} In this case, if furthermore $U=0$, then $\sigma^x$ commutes with the Hamiltonian and the pseudo-spin subspace can be eliminated. One major drawback of the Josephson dominated limit is that the parity energy $U$ is exponentially small in $\sqrt{E_J/E_c}$. However, this is not necessarily the case in the charge dominated regime as we now discuss. Consider the charge dominated regime in two different regimes of gate voltage. First, in the vicinity of a resonance, $n_g\approx N_0-\frac{1}{2}$ (but detuned from the exact degeneracy point), the Hamiltonian can be projected onto the manifold of its two lowest states $E_0^+$ and $E_0^-$. We assume without loss of generality that $N_0$ is even. In order to obtain the effective Hamiltonian we calculate the ground states of the two parity sectors $|+\rangle$ and $|-\rangle$ to first order in $\frac{E_J}{E_c}$, \begin{align} |+\rangle & \approx|N_0\rangle+\frac{E_J}{2E_c}|N_0-2\rangle+\frac{E_J}{6E_c}|N_0+2\rangle, \nonumber \\ |-\rangle &\approx|N_0-1\rangle+\frac{E_J}{2E_c}|N_0+1\rangle+\frac{E_J}{6E_c}|N_0-3\rangle. \end{align} Using Eq.~(\ref{HU}), we obtain $A\approx 1$, $B\approx \frac{E_J}{E_c}$. The projected Hamiltonian then takes the form \begin{align} \label{HUEC} \mathcal{P}H\mathcal{P} &= \sum_{j=1}^M \left[t_j\psi_j^\dagger(0)\gamma_j(\sigma^- +\frac{E_J}{E_c}\sigma^+)+ \rm{h.c}\right]-\frac{U}{2}\sigma^z, \nonumber \\ U&=2E_c(n_g-N_0+\frac{1}{2}). \end{align} As anticipated, in this case the parity interaction is of order $E_c$ (rather than exponentially small). The situation is more complex in the vicinity of the off-resonance point $n_g\approx 0$ (we take $N_0=0$ for concreteness), where additional excited states are very close to the two-state manifold. For $E_J \ll E_c $ the energy gap $E_{01}^{-}\equiv E_{1}^{-}-E_{0}^{-}\approx 2E_{J}$ is small compared to the gap $ U\approx E_{c}$. This enables transitions from the ground state $E_{0}^{+}$ to $E_{1}^{-}$, which is very close to $E_{0}^{-}$. As a result, the picture of the two-state pseudo-spin manifold breaks down. However, for $\frac{E_{J}}{E_{c}}$ of order unity, $E_{01}^{-}$ already exceeds $U$, see Fig.~\ref{fig3}. Consequently, the pseudo-spin picture holds in this regime. Calculating the coefficients, we find $A=B=\frac{1}{\sqrt{2}}$ such that the projected Hamiltonian has exactly the same form as Eq.~(\ref{HU3}), where now the parity energy is of order $E_c$. \begin{figure}[pt] \centering \includegraphics[scale=0.5]{fig_AB.pdf} \caption{Matrix elements $A$ and $B$ appearing in the effective model Eq.~(\ref{HU}). They are calculated using Mathieu functions (see Ref.~\onlinecite{Koch}) in the on-resonant case $n_g=1/2+{\rm{integer}}$. In the charging dominated regime this matches the coefficients in Eq.~(\ref{HUEC}) while in the Josephson dominated regime $A,B \to 1$ with exponentially small difference. } \label{fig_AB} \end{figure} For temperatures lower than $U$ the parity of the Hamiltonian Eq.~(\ref{HU2}) is fixed.
In order to obtain an effective Hamiltonian in this regime one needs to consider processes in which, after a single electron tunnels from one of the leads into the box, a second electron has to tunnel either $\it{in}$ or $\it{out}$ of it. Performing a Schrieffer-Wolff transformation starting from the Hamiltonian Eq.~(\ref{HUEC}), we obtain an effective Kondo Hamiltonian exactly as in Eq.~(\ref{HEFF1}), \begin{align} H_{\rm{eff}}= \sum_{i\neq j} J_{ij}\rho_i(0)\rho_j(0)\gamma_i\gamma_j, \label{HEFF2} \end{align} where the exchange couplings are given by $J_{ij}=\frac{2t_i t_j}{U}$, and the Majorana modes $\eta_j(x)$ of the leads decouple from $H_{\rm{eff}}$. This Hamiltonian coincides with Eq.~(\ref{HEFF1}) in the infinite anisotropy limit when $J_\eta=0$; following the above RG analysis, it is obtained as an effective Hamiltonian starting from Eq.~(\ref{HEFF1}) below the energy scale $T^*$, where one of the two $SO(M)_1$ channels decouples. We now return to the phase diagram, Fig.~\ref{phasediagram}, consider the regime $E_J \gtrsim E_c$, and connect it with the small $E_J$ regime discussed earlier. One can associate a Kondo scale $Ue^{-\frac{\Gamma}{U}}$ at which the coupling Eq.~(\ref{HEFF2}) flows to strong coupling. We identify this crossover with the same scale $T^*$ discussed already at small $E_J$, signaling the flow from the $SO$ into the $SO(M)_1$ phases. Since $U < E_c$, the scale $T^* \sim e^{-\frac{\Gamma}{U}}$ may exceed the Kondo scale for $\Gamma \ll U$. Conversely, as the temperature rises above $U$, the effect of the parity interaction becomes negligible, such that the system is effectively in the on-resonance regime. At $U=0$ the system consists of $M$ non-interacting Majorana fermions. The coupling $\Gamma$ of each Majorana to a corresponding lead gives rise to a Majorana resonant state, which forms for temperatures $T \ll \Gamma$, as denoted in Fig.~\ref{phasediagram}. We briefly speculate on the modification of the phase diagram when the gate voltage is tuned to a charge degeneracy point. In this case the topological Kondo state emerges at the scale $\Gamma$ (which exceeds $T_K$). Since the on- and off-resonant Kondo states are described by the same fixed point~\cite{michaeli,mora}, we conclude that the same instability of the topological Kondo state occurs at the scale $T^*$ given by Eq.~(\ref{Ts}) for small $E_J/ E_c$. For large $E_J / E_c$, even though $U=0$, there is a similar crossover between the phase of $M$ decoupled free Majorana resonances and the $SO(M)_1$ phase, at an exponentially small scale. This energy scale is proportional to the difference $A^2-B^2$ in Eq.~(\ref{HU}), and is identified using the mapping to quantum Brownian motion in a periodic potential below. \section{Low energy conductance} \label{Low energy conductance} We now probe the low energy properties of the system, including its low temperature conductance and its sensitivity to lead asymmetry and to the gate voltage. We begin this section with a brief review of the mapping, which we then apply to obtain the different fixed points of our system, and finally to find their conductance properties. \subsection{Preliminaries} We briefly review the method by Yi and Kane~\cite{YiKane}, mapping our problem to quantum Brownian motion (QBM) of a particle in a periodic potential. As a first step towards the strong coupling analysis, we bosonize the fermionic fields of the leads.
The tunneling part $H_{\rm{T}}$ consists of bi-linears of Majorana and fermionic operators, $\psi_j^\dagger\gamma_j$ (or $\gamma_j\psi_j$), which we bosonize as \begin{align} \psi_j^\dagger(x)\gamma_j \sim e^{i\varphi_j(x)}, \end{align} where $j=1,2,\ldots,M$ (we set the lattice constant to unity). This bosonization procedure is completely equivalent to combining the Majorana operators $\gamma_i$ with the fermionic Klein factors $\xi_i$ of each lead \cite{AltlandEgger, Beri}. Since all of these bi-linears commute with the Hamiltonian, they can be treated as c-numbers that can be absorbed into the tunneling amplitudes. In terms of the bosonic fields, the imaginary time action of the leads has the form \begin{align} S_{\rm{leads}} =\frac{1}{4 \pi}\sum_{j=1}^{M} \int_{-\infty}^\infty dx \int d \tau \partial_x \varphi_j(v_F\partial_x \varphi_j - i \partial_\tau \varphi_j). \end{align} By integrating out all the degrees of freedom away from $x=0$, this action becomes \begin{align} S_{\rm{0}}= \frac{1}{(2\pi)^2}\int d\omega|\omega| |\vec{\varphi}(\omega)|^2. \end{align} The single-electron tunneling is described by \begin{align} S_{\rm{T}}= \sum_{j=1}^M t_j \int_{-\infty}^{\infty}d\tau e^{i\varphi_j(0,\tau)}e^{-i \phi/2} +\rm{h.c.} \end{align} At this point, we follow Yi and Kane \cite{YiKane} and identify $\vec{\varphi}=(\varphi_1,\varphi_2,\ldots,\varphi_M)|_{x=0}$ with the momentum of a particle in a strong periodic potential. In this language, $S_{\rm{T}}$ is a hopping term which generates tunneling events between potential minima, while $S_{\rm{0}}$ describes an Ohmic coupling of the particle to a dissipative bath. The number of electrons in each lead, $(n_1,\ldots,n_M)$, corresponds to the position of the particle in $M$-dimensional space. Starting at weak lead coupling, the particle is located at one of the potential's minima at $n_j=\rm{integer}$, and is able to hop to adjacent minima separated by a vector $\vec{R}_0$ via $S_{\rm{T}}$. Thus, the allowed charge states, corresponding to the potential minima in the QBM space, form a Bravais lattice. The tunneling Hamiltonian can then be expressed as \begin{align} \label{ST} S_{\rm{T}}= \sum_{j=1}^M t_j e^{i\sqrt{2}\vec{\varphi}\cdot \vec{R}^{(j)}_0}e^{-i \phi/2} +\rm{h.c.} \end{align} The vectors $\vec{R}^{(j)}_0$ have $M$ components, of which only the $j$-th is non-vanishing and given by $\frac{1}{\sqrt{2}}$. Following Refs.~\cite{YiKane,michaeli}, our convention is such that the argument of the above exponent is $\sqrt{2}\vec{\varphi}\cdot \vec{R}^{(j)}_0=\varphi_j$, and its scaling dimension is $|R_0|^2=\frac{1}{2}$. Next, we consider the strong coupling limit of the QBM action, where the bosonic phases $\varphi_i$ are pinned. This corresponds to the vanishing of the periodic potential, leading to QBM in free space. The stability of this strong coupling fixed point can be analyzed by examining the effect of a weak periodic potential which has the same periodicity as the original Bravais lattice. Using a Fourier decomposition, it is described by $U(r)=\sum_{\vec{G}}v_{\vec{G}}e^{i\vec{G}\cdot r}$, where the $\{\vec{G}\}$ are reciprocal lattice vectors satisfying $\vec{G}\cdot \vec{R}=\rm{integer}$ for any Bravais vector $\vec{R}$. The scaling dimension of $v_{\vec{G}}$ is given by $|\vec{G}|^2$, where we denote the shortest reciprocal lattice vectors by ${\vec{G}_0}$. The length of ${\vec{G}_0}$ determines the leading temperature corrections to physical quantities, e.g., the conductance, as we discuss below.
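These two scaling rules, dimension $|\vec{R}|^2$ for a hopping operator $e^{i\sqrt{2}\vec{\varphi}\cdot\vec{R}}$ and dimension $|\vec{G}|^2$ for a potential harmonic $v_{\vec{G}}$, are used repeatedly below. A few lines of Python (our illustration; an operator of dimension exactly $1$ is marginal at tree level, and the "marginally relevant" flow quoted below is decided at second order) tabulate them for the hopping vectors encountered in this problem:
\begin{verbatim}
import numpy as np

# Scaling dimension of a QBM hopping operator e^{i sqrt(2) phi . R} is |R|^2;
# it is relevant when the dimension is < 1.
R0     = np.array([1,  0, 0]) / np.sqrt(2)  # single-electron tunneling, Eq. (ST)
R_par  = np.array([1, -1, 0]) / np.sqrt(2)  # charge-conserving hop J_parallel
R_perp = np.array([1,  1, 0]) / np.sqrt(2)  # pair tunneling J_perp

for name, R in [("t_j", R0), ("J_par", R_par), ("J_perp", R_perp)]:
    dim = R @ R
    print(name, dim, "relevant" if dim < 1 else "marginal")
\end{verbatim}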
To conclude this part, our model gives rise to low energy fixed points whose properties will be described using the QBM mapping. Different fixed points correspond to different lattices, yielding different leading irrelevant operators. These various options are described in the next subsection and summarized in Table I. \subsection{Fixed points and leading irrelevant operator} We now explore the low temperature properties of the various regimes presented in the previous sections. First, we consider the charge dominated regime, where virtual charge transitions give rise to an effective Kondo Hamiltonian, see Eqs. (\ref{HEFF0},\ref{HEFF1}). In QBM language, this Hamiltonian reads \begin{align} \label{HQMB1} H_{\rm{eff}} = \sum_{i\neq j}^M (J_{\parallel} e^{i\sqrt{2}\vec{\varphi}\cdot \vec{R}^{(ij)}_\parallel}+J_{\perp}e^{i\sqrt{2}\vec{\varphi}\cdot \vec{R}^{(ij)}_\perp})+\rm{h.c.}, \end{align} where $J_{\parallel}=\frac{t^2}{E_c}$, $J_{\perp}=\frac{3t^2E_J}{2E_c^2}$ ($E_c\gg E_J$), and $\vec{R}_{\perp,\parallel}^{(ij)}$ are defined such that $\sqrt{2}\vec{\varphi}\cdot \vec{R}^{(ij)}_\parallel=\varphi_i-\varphi_j$ and $\sqrt{2}\vec{\varphi}\cdot \vec{R}^{(ij)}_\perp=\varphi_i+\varphi_j$. These $M$-dimensional vectors $\vec{R}^{(ij)}_\parallel$ and $\vec{R}^{(ij)}_\perp$ correspond to two distinct types of particle hopping in the periodic potential. Specifically, $\vec{R}^{(ij)}_\parallel$ corresponds to charge conserving particle hopping $\psi^\dagger_i\psi_j$, such that its components sum to $0$, e.g., $\vec{R}^{(12)}_\parallel=\frac{1}{\sqrt{2}}(1,-1,0,\ldots)$. On the other hand, $\vec{R}^{(ij)}_\perp$ corresponds to two-electron tunneling $\psi^\dagger_i\psi^\dagger_j$, described by the $i$-th and $j$-th coordinates having the same sign, e.g., $\vec{R}^{(12)}_\perp=\frac{1}{\sqrt{2}}(1,1,0,\ldots)$. Note that the vectors $\vec{R}^{(ij)}_\parallel$ are integer linear combinations of the $\vec{R}^{(ij)}_\perp$ (a numerical check is sketched below). At $E_J=0$, $J_{\perp}$ vanishes and as a result the motion of the Brownian particle is restricted to an $M-1$ dimensional space, in which the overall charge of the leads $e \sum_j n_j$ is fixed. As already noted, the allowed charge states in this space form a Bravais lattice; specifically, for $M=3$, the particle's motion is restricted to a two dimensional triangular lattice, see the gray planes in Fig.~\ref{fig3}. In this case there is no hopping between these planes. The analysis of this system using the QBM mapping was performed in Ref.~\onlinecite{Beri}, leading to a triangular reciprocal lattice, with a leading irrelevant operator of dimension $\Delta_M^{(E_J=0)}=2(M-1)/M$. Crucially, at any $E_J\neq 0 $ tunneling events of two electrons into (or out of) the island generate a finite probability of particle hopping between two parallel planes, $\sum_j n_j \rightarrow\sum_j n_j\pm 2$, see the dashed lines in Fig.~\ref{fig3}. For $M=3$ this leads to $\it{three}$ dimensional QBM on an FCC lattice, whose basis vectors are $\frac{1}{\sqrt{2}}\left\{1,1,0\right\}$. One should notice that in the case $E_J \ll E_c$, the hopping between generalized (1,1,1) planes, $J_{\perp}$, is weaker than the hopping within the planes, $J_{\parallel}$. On the other hand, when the system is dominated by the parity interaction, $J_{\parallel} \approx J_{\perp} = \frac{2t^2}{U}$, see Eq.~(\ref{HEFF2}). In either case, both tunneling amplitudes are marginally relevant and flow to the same strong coupling fixed point, which we now analyze.
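The claimed lattice structure can be verified explicitly. For $M=3$, the following sketch (ours, for illustration) expresses each charge-conserving vector $\vec{R}^{(ij)}_\parallel$ in the basis of pair-tunneling vectors $\vec{R}^{(ij)}_\perp$ and finds integer coefficients, confirming that the full set of hops generates the FCC lattice spanned by the $\vec{R}^{(ij)}_\perp$:
\begin{verbatim}
import numpy as np

# Vectors in units of 1/sqrt(2), for M = 3 leads
R_perp = np.array([[1, 1, 0],    # R_perp^{(12)}
                   [1, 0, 1],    # R_perp^{(13)}
                   [0, 1, 1]])   # R_perp^{(23)}
R_par  = np.array([[1, -1, 0],   # R_par^{(12)}
                   [1, 0, -1],   # R_par^{(13)}
                   [0, 1, -1]])  # R_par^{(23)}

# Coefficients c solving c @ R_perp = R_par, row by row
coeff = np.linalg.solve(R_perp.T.astype(float), R_par.T.astype(float)).T
print(coeff)  # all integers, e.g. R_par^{(12)} = R_perp^{(13)} - R_perp^{(23)}
\end{verbatim}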
Turning to the strong coupling limit, the form of the Bravais lattice vectors $\vec{R}^{(ij)}_\perp$ gives the following possible reciprocal lattice vectors: (i) $\vec{G}= \frac{1}{\sqrt{2}}\left(1,1,\ldots,1\right)$, corresponding to the diagonal lattice vector of a (hyper) BCC lattice, with length $|{\vec{G}}|=\sqrt{\frac{M}{2}}$, and (ii) $\vec{G}=\left\{\sqrt{2},0,\ldots\right\}$ with length $|{\vec{G}}|=\sqrt{2}$. Therefore, the shortest reciprocal lattice vector has length $|{\vec{G_0}}|=\sqrt{\frac{M}{2}}$ for $M=2,3$, and $|{\vec{G_0}}|=\sqrt{2}$ for $M>3$. This implies that $v_{\vec{G}}$ is irrelevant for all $M>2$ and marginal for $M=2$. Note that $M=3$ corresponds to the 2CK state, where the well known scaling dimension of the leading irrelevant operator is $\Delta=\frac{3}{2}$. In conclusion, the weak potential $U(r)$ vanishes during the RG flow, resulting in free space QBM. \begin{figure}[pt] \centering \includegraphics[scale=0.7]{lattice_av2.pdf} \caption{Bravais lattice formed for $M=3$ leads in the Coulomb valley. In the absence of Josephson coupling the allowed charge configurations of the leads form triangular lattices, shown as gray planes, within which the charge is fixed. Turning on $E_J$ allows inter-planar charge transitions, shown as dashed lines. Since parity is conserved, the basis vectors are of the form $\sim\left\{1,1,0\right\}$ and thus form an FCC lattice. } \label{fig3} \end{figure} We now turn to the on-resonance case, where $n_g\approx\rm{integer}+1/2$. The charge conserving case, $E_J=0$, was analyzed in Refs.~\cite{michaeli,mora}. Due to the charge degeneracy between states with $N_0$ and $N_0+1$ electrons in the island, the total charge in the leads $\sum_j n_j$ is permitted to fluctuate by $1$. Consequently, the particle is allowed to hop between \emph{two} adjacent lattice planes perpendicular to the direction $\frac{1}{M}(1,1,\ldots,1)$. For $M=3$, the formed lattice is a corrugated honeycomb lattice consisting of two triangular sublattices. Note that for $E_J=0$ in the off-resonance case, the particle hops between sites of the triangular lattice via virtual transitions through the high-energy sublattice. The honeycomb lattice, however, has the same Bravais lattice as each triangular lattice; as a result, the structure of the leading irrelevant operator is the same as in the off-resonance case. \begin{figure}[pt] \centering \includegraphics[scale=0.7]{lattice_bv2.pdf} \caption{Lattice formed for $M=3$ leads in the on-resonant regime. It can be decomposed into triangular planes characterized by even (red dots) or odd (blue dots) integer values of $n_1+n_2+n_3$. The on-site energy of the two sublattices is the same at $U=0$ (on-resonance). The interplanar charge transitions are now staggered, as in Eq.~(\ref{HU}), see thick ($A$) versus dashed ($B$) lines. The unit cell and Bravais lattice are the same as in Fig.~\ref{fig3}, i.e., FCC. Only in the non-interacting limit $E_c=0$ do we have $A=B$, and the lattice becomes simple cubic.} \label{fig3b} \end{figure} At finite $E_J$, the effective tunneling is given by Eq.~(\ref{HU2}). On resonance, $U$ vanishes, allowing fluctuations of the parity. The QBM action then takes the form \begin{align} S = S_{\rm{0}}+ \sum_{j=1}^M [t_j e^{i\sqrt{2}\vec{\varphi}\cdot \vec{R}^{(j)}_0}(A \sigma^-+ B \sigma^+) +\rm{h.c.}]. \end{align} Importantly, at $E_J \neq 0$ tunneling events of $2e$ enable the particle to hop between $(1,1,1)$ planes characterized by any integer $\sum_j n_j$, see Fig.~\ref{fig3b}.
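(As a cross-check of the scaling dimensions quoted above and collected in Table I, the shortest reciprocal vector can also be found by brute force. The sketch below is ours and assumes a particular Bravais basis of the lattice of allowed charge transfers; it dualizes it with the convention $\vec{G}\cdot\vec{R}=\rm{integer}$ and minimizes $|\vec{G}|^2$ over small integer combinations.)
\begin{verbatim}
import numpy as np
from itertools import product

def shortest_reciprocal(M, cmax=2):
    # Bravais basis of the (hyper) FCC lattice of allowed charge transfers,
    # in units where pair hops are {1,1,0,...}/sqrt(2): rows are
    # e_i - e_{i+1} (intra-plane hops) and e_{M-1} + e_M (a pair hop).
    A = np.zeros((M, M))
    for i in range(M - 1):
        A[i, i], A[i, i + 1] = 1.0, -1.0
    A[M - 1, M - 2], A[M - 1, M - 1] = 1.0, 1.0
    A /= np.sqrt(2.0)
    G = np.linalg.inv(A).T        # reciprocal basis: G_k . R_l = delta_kl
    best = np.inf
    for c in product(range(-cmax, cmax + 1), repeat=M):
        if any(c):
            best = min(best, float(np.sum((np.array(c) @ G) ** 2)))
    return best                   # |G_0|^2, the leading irrelevant dimension

for M in (2, 3, 4, 5):
    print(M, shortest_reciprocal(M))  # expect min(M/2, 2): 1.0, 1.5, 2.0, 2.0
\end{verbatim}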
This set of planes may be divided into planes where $\sum_j n_j $ is even or odd, see the red and blue lattice sites in Fig.~\ref{fig3b}. While this set of lattice sites forms a (hyper) cubic lattice, for $A \ne B$ there is a staggered structure in the tunneling between planes, see the thick versus dashed lines in Fig.~\ref{fig3b}. Consequently, the corresponding Bravais lattice remains FCC, as in the off-resonant case. In this case there is a distinction between the Bravais lattice vectors of FCC, see Table~I, and the shortest lattice vectors appearing in the tunneling Hamiltonian, $\vec{R}^{(j)}_0$, with $|R_0|^2=\frac{1}{2}$, corresponding to the (hyper) cubic lattice. Being a relevant perturbation, the tunneling Hamiltonian flows to strong coupling. By analyzing a weak periodic potential whose harmonics live on the reciprocal Bravais lattice, i.e., BCC, we obtain the same scaling dimension of the leading irrelevant operator as in the off-resonance case. The distinction between the (hyper) cubic and FCC lattices is due to the difference between the tunneling amplitudes $t_A=t\cdot A$ and $t_B=t\cdot B$, where $t_j=t$ is isotropic. This difference, however, vanishes at large $E_J/E_c$. In fact, in this regime the superconducting phase $\phi$ is localized in the minima of the cosine potential with a typical localization length (in units of $2 \pi$) of $(E_c/E_J)^{1/4}$. The sensitivity of the wave function in the phase representation to boundary conditions, which is measured by the difference between $A$ and $B$, is exponentially suppressed in $2\pi/(E_c/E_J)^{1/4}$, see Fig.~\ref{fig_AB}. Thus, at temperatures higher than an exponentially small energy scale, similar to $U$ in Fig.~\ref{phasediagram}, QBM takes place essentially on a hyper-cubic lattice. This Bravais lattice, whose reciprocal lattice is again cubic, leads to a leading irrelevant operator of dimension $2$, see Table I. In all discussed cases, where the electron tunneling flows to strong coupling and hence the periodic potential flows to weak coupling, the effect of lead anisotropy is seen to be irrelevant. \begin{table} \begin{tabular}{|p{1.7cm}| p{2.1cm} | l | p{1.7cm} |l|} \hline system & lattice &$\sqrt{2}\vec{R}^{(0)} $ & $\vec{G}_0/\sqrt{2} $ & $\Delta_{M} $\\ \hline $E_J=0$ \newline $n_g \ne \frac{1}{2}+N_0$ & triangular &$\{1,-1,0\} $& $\{-\frac{2}{3},\frac{1}{3},\frac{1}{3}\}$ \newline triangular & ~~$\frac{4}{3}$ \\ \hline $E_J=0$ \newline $n_g = \frac{1}{2}+N_0$ & honeycomb & $\{1,0,0\}^{*}$ & $\{-\frac{2}{3},\frac{1}{3},\frac{1}{3}\}$ \newline triangular & ~~$\frac{4}{3}$ \\ \hline $E_J \ne 0$ \newline $n_g \ne \frac{1}{2}+N_0$ & FCC & $\{1,1,0\}$ & $\{\frac{1}{2},\frac{1}{2},\frac{1}{2}\} $ \newline BCC & ~~$\frac{3}{2}$ \\ \hline $E_J \ne 0$ \newline $n_g = \frac{1}{2}+N_0$ & cubic \newline (FCC Bravais) & $\{1,0,0\}$ & $\{\frac{1}{2},\frac{1}{2},\frac{1}{2}\}$ \newline BCC & ~~$\frac{3}{2}$ \\ \hline $E_J \gg E_c $ \newline $E_c \to 0 $ & cubic & $\{1,0,0\} $ & $\{1,0,0\} $ \newline cubic & $~~2$\\ \hline \end{tabular} \caption{Summary of the various lattices and lattice vectors introduced in the QBM description. For clarity, we restrict our attention to $M=3$ leads. At $E_J=0$ the lattices are two dimensional, while for $E_J \ne 0$ they are three dimensional. $\vec{R}^{(0)}$ refers to the shortest vector, which determines the scaling dimension of $H_T$ via $|\vec{R}^{(0)} |^2$, and which is not necessarily a vector of the Bravais lattice. $\vec{G}_0$ is the shortest reciprocal lattice vector. The (Bravais) reciprocal lattice is denoted below each $\vec{G}_0$ vector.
For the honeycomb lattice vector $\{1,0,0 \}^{*}$, motion is restricted to two neighboring $(1,1,1)$ planes.} \end{table} \subsection{Conductance} We now discuss the conductance, focusing on the new phases stabilized by the Josephson coupling, using the QBM picture applied in the previous sections. In the strong coupling limit, the QBM takes place in $M$-dimensional free space, obtained after the vanishing of the periodic potential during the RG flow. Suppose that we apply a voltage $V_1=V$ to a single lead, $i=1$. In the QBM action, the voltage $V_1$ couples to the electron number in lead $1$, $n_1$, as $-eV_1n_1\equiv \mathcal{V}(n_1)$, corresponding to a linear potential in the particle's coordinate $n_1$, i.e., to an electric field in this direction. This gives rise to a force $F=-\frac{d{\mathcal{V}}(n_1)}{dn_1}$, which, in the presence of dissipation, leads to a steady-state velocity $\dot{n}_1$ via $0=m\frac{d^2 n_1}{dt^2}=F-\frac{m\dot{n}_1}{\tau}$, where $\tau$ is the mean free time of the Brownian particle. Rather than computing $\dot{n}_1$, we use the same method as Ref.~\cite{michaeli} and argue that the steady-state velocity is independent of the dimensionality $M$, due to the fact that the free space QBM is spatially isotropic and decoupled along different directions. Thus we conclude that $I_1$ is independent of $M$ (notice, however, that while in the charge conserving situation~\cite{michaeli} the dimensionality equals $M-1$, in our case the Brownian particle can explore all $M$ dimensions). For $M=1$, the current in lead $1$ is given by $I_1=\frac{2e^2}{h}V$ \cite{alicea2}. Therefore, we find $I_1=\frac{2e^2}{h}V$ for all $M$ at zero temperature and for finite $E_J$. In general one can discuss a conductance matrix $G_{ij}$ such that $I_i = \sum_{j}G_{ij} V_j$. At $T=0$, $G_{ij} =\frac{ 2e^2}{h} \delta_{ij}$ for $E_J > 0$. Low temperature corrections to the conductance are dominated by the leading irrelevant operator of the strong coupling fixed point, summarized in Table I. As already noted, this operator follows from the weak periodic potential, which has (hyper) FCC periodicity, and has scaling dimension $|\vec{G}_0|^2$. Consequently, we obtain \begin{align} G_{ij} = \frac{2e^2}{h}(\delta_{ij}+A_{ij}T^{2\Delta_M-2}), \end{align} where $\Delta_M = \frac{3}{2}$ for $M=3$, or $\Delta_M = 2 $ for $ M>3$, and $A_{ij}$ are non-universal constants depending on the problem's parameters. In the regime $T > U$, denoted as $M$ decoupled Majorana resonant states in Fig.~\ref{phasediagram}, we have $\Delta_M=2$ for any $M$. This universal power law should be contrasted with the result of Eriksson et al.~\cite{Eriksson14}, who find a manifold of fixed points with continuously varying exponents. The latter was achieved (i) in the Josephson dominated regime where the parity interaction is negligible, and (ii) in a special situation where $T_K$ exceeds the tunnel width $\Gamma$ (as opposed to our assumptions, see Fig.~\ref{phasediagram}). \subsection{Interactions} \label{Interactions} \begin{figure}[pt] \centering \includegraphics[scale=0.25]{fig4_v2.pdf} \caption{Phase diagram for $M=3$ (a) in the off-resonance case and (b) near a charge degeneracy point, as a function of the Luttinger parameter $g$ of the leads. The corresponding lattices in the QBM language are shown in Fig.~\ref{fig3} and Fig.~\ref{fig3b}, respectively.
The lower line corresponds to the weak lead coupling limit $t\rightarrow 0$ and the upper line corresponds to the strong coupling regime, where the QBM takes place in a weak periodic potential $v\rightarrow 0$. Stable (unstable) fixed points are marked by solid (dashed) lines.} \label{fig4} \end{center} \end{figure} Using the QBM formulation, the generalization of the previous analysis to interacting leads is straightforward. The interactions are characterized by the Luttinger parameter $g$ of the leads. In order to study their effect, we note the change in the lengths of both the Bravais and the reciprocal lattice vectors, which is given by $|\vec{R}|\rightarrow |\vec{R}|/\sqrt{g} $, $|\vec{G}|\rightarrow \sqrt{g}|\vec{G}| $ \cite{YiKane}. The phase of the system strongly depends on whether the gate voltage is on- or off-resonance. For concreteness, we focus on the case $M=3$. First, if the system is off-resonance, we find a line of intermediate unstable fixed points, see Fig.~\ref{fig4}(a). This line emerges since there is a range of $g$ in which both $t$ and $v$ are irrelevant. Explicitly, the Bravais lattice vectors of FCC and BCC, see Table I, give the relation $|R|^2|G|^2=3/2$, implying that at the marginal point of $t$, $|R|^2=1$, we have $|G|^2=3/2>1$, hence $v$ is irrelevant. On the other hand, on-resonance, the scaling dimension of the tunneling operator $t$ is determined by the non-Bravais vector $R_0$. As seen in Table I, $|\vec{R}_0|^2=|\vec{R}|^2/2$; hence, in this case the marginal point of $t$, $|R_0|^2=1$, gives $|G|^2=3/4<1$, implying that $v$ is relevant. Thus, in the on-resonant case we obtain an intermediate line of stable fixed points, see Fig.~\ref{fig4}(b). \section{Summary} \label{Summary} To conclude, we showed that the Josephson coupling gives rise to a substantial change in the physics of Majorana islands. The full phase diagram of the system as a function of $\frac{E_J}{E_c}$ and $n_g$ has been obtained, predicting universal values of the conductance at $T=0$ and its power-law low temperature corrections. While the original model including the bulk superconductor is more complicated, the effective model in Eq.~(\ref{HU}) may be used to test our predictions using numerical techniques. With the rapid progress in the field, we are optimistic that our predictions will be verified experimentally. {\it Acknowledgements:} We thank R. Egger, C. Mora, K. Michaeli and L. Fu for helpful and interesting discussions. This work is supported by Israel Science Foundation Grant No.~1243/13, and the Marie Curie CIG Grant No.~618188. \bibliographystyle{apsrev}
\section{Introduction} Respiratory motion hampers accurate diagnosis as well as image-guided therapeutics. For example, during radiotherapy, it may lead to poor local tumor control and increased radiation toxicity to the normal organs~\cite{motionlung2018}. It can also exhibit itself as motion artifacts in the acquired images, making it difficult to differentiate nodule/tumor morphology changes from those induced by respiratory motion. This also makes the image registration task across different breathing phases, as well as across different time points, challenging. To validate the image registration accuracy/performance for commissioning solutions available in clinical commercial systems, the American Association of Physicists in Medicine (AAPM) TG-132~\cite{Brock2017TG132} recommended independent quality checks using digital phantoms. Current commercial solutions such as ImSimQA allow creation of synthetic deformation vector fields (DVFs) by user-defined transformations with only a limited degree of freedom. These monotonic transformations cannot capture realistic respiratory motion. For modeling respiratory motion, an intuitive representation is the time-varying DVFs obtained by deformable image registration (DIR) in 4D images acquired over a breathing cycle. Surrogate-driven approaches~\cite{MCCLELLAND201319} employ the DVF as a function of the surrogate breathing signal. However, an exact and direct solution in the high-dimensional space of DVFs is computationally intractable. Still, motion surrogates have been widely studied in the field of radiotherapy for building models establishing the relationship between surrogates and the respiratory motion estimated from the image data~\cite{MCCLELLAND201319}. For example, the 1D diaphragm displacement has been reported as a reliable surrogate for a tumor motion model \cite{Cervi_o_2009} as well as for a PCA (principal component analysis) respiratory motion model to correct CT motion artifacts~\cite{Zhang2007_PCA}. Recently, Romaguera et al.~\cite{ROMAGUERA2020_2DSeq2Seq} used a 2D sequence-to-sequence (Seq2Seq) network~\cite{seq2seq2014} to predict 2D in-plane motion for a single future time point. Krebs et al.~\cite{Krebs2020_cVAE} applied a similar encoder-decoder network in a conditional variational autoencoder (cVAE) framework, in which network parameters were learned to approximate the distribution of deformations in a low-dimensional latent space with the encoder, and the decoder was used to decode the latent features for 2D motion prediction. Romaguera et al.~\cite{ROMAGUERA2021_cVAE} integrated VoxelMorph \cite{balakrishnan2019voxelmorph} to assist the VAE encoder in mapping deformations to a latent space conditioned on anatomical features from 3D images. Temporal information of 2D surrogate cine images from a 2D Seq2Seq network was used to predict the 3D DVF at a single future time point. In this paper, we present a novel deep learning respiratory motion simulator (RMSim) that learns to generate realistic patient-specific respiratory motion, represented by time-varying DVFs at different breathing phases, from a static 3D CT image. For the first time, we also allow modulation of this simulated motion via arbitrary 1D breathing traces given as auxiliary input to create large variations. This in turn creates diverse patient-specific data augmentations while also generating ground truth for DIR validation.
Our work has several differences and advantages over the aforementioned deep learning approaches: (1) we used a 3D Seq2Seq architecture, which, due to GPU memory limitations, had never been attempted before for predicting deformations; (2) we did not use VoxelMorph in its entirety but only its Spatial Transform module, to train our model end-to-end; and (3) as opposed to predicting just a single future time point, we can predict 9 future breathing phases simultaneously (learned from 4D-CT images with 10 3D CT breathing phases) along with their 3D DVFs. We have thoroughly validated our RMSim output with both private and public benchmark datasets (healthy subjects and cancer patients) and demonstrated that adding our patient-specific augmentations to training data can improve the performance/accuracy of state-of-the-art deep learning DIR algorithms. We also showcase breathing trace-modulated respiratory motion simulations for public static radiology scans (in the accompanying \textbf{supplementary video}). The code, pretrained models, and augmented DIR validation datasets will be released at \url{https://github.com/nadeemlab/SeqX2Y}. \begin{figure}[th!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/model_new_figure.pdf} \caption{The schematic of the proposed deep learning model. The Seq2Seq encoder-decoder framework was used as the backbone of the proposed model. The model was built with 3D convolution layers for feature encoding and output decoding, and 3D convolutional Long Short-Term Memory (3D ConvLSTM) layers for the spatial-temporal correlation between time points. The last layer of the decoder was a spatial transform layer to warp the initial phase image with the predicted Deformation Vector Field (DVF). To modulate the respiratory motion, the 1D breathing trace was given as input along with the initial phase image. The dimension of the image volume was 128 $\times$ 128 $\times$ 128 and the input feature to the 3D ConvLSTM is 64 $\times$ 64 $\times$ 64 $\times$ 96 (Depth $\times$ Width $\times$ Height $\times$ Channel). } \label{fig:Model} \end{center} \end{figure} \section{Materials and Methods} \subsection{Datasets} We used an internal lung 4D-CT dataset retrospectively collected and de-identified from 140 non-small cell lung cancer (NSCLC) patients receiving radiotherapy in our institution. The helical and cine mode 4D-CTs were acquired using Philips Brilliance Big Bore and GE Advantage scanners, respectively, and binned into 10 phases using the vendors' proprietary software with breathing signals from bellows or external fiducial markers. The x-ray energy for the CT images was 120 kVp, and the tube current varied case by case according to vendor-specific tube current modulation based on patient size; the mAs range was [100, 400] for GE and [500, 800] for Philips. The image slice dimension was 512$\times$512, while the number of image slices varied patient by patient. We used a 100:40 split for training:testing. We used 20 cases of the Lung Nodule Analysis (LUNA) challenge dataset~\cite{SETIO20171LUNA}, containing 3D radiology CTs for lung tumor screening, to show that our RMSim model trained with the internal dataset can be effectively applied to an external radiology/diagnostic dataset to generate realistic respiratory motion (see the accompanying \textbf{supplementary video}).
For quantitative evaluation of the model generality on an external dataset, we used the POPI dataset~\cite{vandemeulebroucke2011spatiotemporal}, which contains 6 10-phase 4D-CTs with segmented lung masks as well as annotated landmarks on the vessel and airway bifurcations. To validate the effectiveness of data augmentation using synthetic respiratory motion images generated by our RMSim model in the deformable registration task, we used the Learn2Reg 2020 challenge dataset~\cite{hering_alessa_2020_Learn2Reg}. The Learn2Reg dataset consists of 30 subjects (20 for training / 10 for testing) with 3D CT thorax images taken in the inhale and exhale phases. For each of the 20 Learn2Reg inhale/exhale pairs, we generated the other phase images using our RMSim model trained with the internal dataset, thereby increasing the sample size to 200 pairs in total to augment the training of a well-known unsupervised deep learning DIR method, VoxelMorph~\cite{balakrishnan2019voxelmorph}. Unfortunately, the inhale-exhale landmarks of the Learn2Reg dataset are not publicly available for assessing the registration accuracy. For the landmark evaluation in the registration task, we used the POPI dataset. A brief description and the purpose of all the datasets used in this study are given in Table~\ref{table:datasets}. All datasets used in this study were cropped to eliminate the background and resampled to 128$\times$128$\times$128 with 2 mm voxel size due to the GPU memory constraints. \begin{table*}[ht] \centering \caption{Datasets used in this study.} \label{table:datasets} \footnotesize \begin{tabular}{l|p{0.15\linewidth}|p{0.20\linewidth}|p{0.25\linewidth}|p{0.15\linewidth}} \hline \textbf{Dataset} & \textbf{Size} & \textbf{Description} & \textbf{Purpose} & \textbf{Evaluation} \\ \hline Internal 4D-CTs & 140 (100 training, 40 testing) & 10-phase radiotherapy 4D-CTs & Training and testing RMSim & Image similarity\\ \hline LUNA & 20 & Radiology CTs for lung nodule detection & Testing model generality & Visualization and qualitative \\ \hline POPI 4D-CTs & 6 & 10-phase 4D-CTs with landmarks & Testing model generality (evaluating DVF accuracy) & Target Registration Error (TRE) of landmarks \\ \hline Learn2Reg & 30 (20 training, 10 testing) & Inspiration-expiration thorax CT pairs with lung segmentations & Training and testing RMSim-augmented deep learning deformable image registration (VoxelMorph) & Lung segmentation (Dice score) and image similarity \\ \hline \end{tabular} \end{table*} \subsection{Realistic Respiratory Motion Simulation} \begin{figure}[t!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/4dct_surrogate.pdf} \caption{Respiratory motion surrogate extraction using a diaphragm point that has the maximum superior-inferior displacement across the phases. LDDMM was used to register the phase 1 (fixed) image to the other phases (moving) to get the DVFs. The diaphragm point's trajectory along the z-axis (shown in red) across the phases was considered as the breathing trace. The yellow line shows the diaphragm position at phase 1.} \label{fig:RPM} \end{center} \end{figure} Sequence-to-Sequence (Seq2Seq) is a many-to-many network architecture originally developed for natural language processing tasks such as language translation.
Inspired by Seq2Seq, the proposed RMSim, illustrated in Figure~\ref{fig:Model}, is a novel deep learning encoder-decoder architecture that comprises three main parts: 3D convolution layers, ConvLSTM3D (3D Convolutional Long Short-Term Memory) layers, and a spatial transformation layer (adapted from VoxelMorph \cite{balakrishnan2019voxelmorph}). The 3D convolutions in the encoder are used to reduce the matrix dimension and extract salient features from the images. We used a 3$\times$3$\times$3 kernel size and a 2$\times$2$\times$2 stride size to reduce the matrix dimension to 1/8. The number of channels for the 3D convolution layer is 96. An LSTM has a more complex cell structure than a neuron in a classical recurrent neural network (RNN). Apart from the cell state, it contains gate units that decide when to keep or override information in and out of the memory cells, to better handle the vanishing gradient problem of recurrent neural networks; this helps in learning long-term dependencies. ConvLSTM \cite{ShiCovLSTMNIPS2015} replaces the Hadamard products with convolution operators in the input-to-state as well as the state-to-state transitions, to capture the spatial pattern of the feature representations aggregated from different time points. We implemented the ConvLSTM in 3D for handling the 3D phase images from the 4D-CT. We used two stacked ConvLSTM3D layers to make the network deeper, adding levels of abstraction to the input observations, similar to a typical deep neural network. The hidden state output from a ConvLSTM3D layer was fed to both the next layer in the same stack and the ConvLSTM3D layer at the next time point. The output of the ConvLSTM3D in the decoder at each predicted time point was up-sampled to the original input resolution and the output channels were reduced via 3D convolution, resulting in the 3D DVF as the final output. The initial phase CT image was then deformed to the predicted image at each breathing phase using the spatial transformation layer and the predicted 3D DVFs. Moreover, to modulate the predicted motion with a patient-specific pattern, we used an auxiliary input of a 1D breathing trace. In this paper, we considered the amplitude of the diaphragm apex motion as the surrogate of the respiratory signal~\cite{Cervi_o_2009}. The 1D breathing trace for each training case was extracted using the DVFs obtained from large deformation diffeomorphic metric mapping (LDDMM) DIR provided by ANTs (Advanced Normalization Tools). Specifically, using the DVFs, the apex point of the diaphragm was propagated from the phase at the end of inhalation to the other phases to generate the 1D displacement trace. The apex of the diaphragm was determined by finding the lung surface voxel with the maximum superior-inferior (z-axis) displacement among the DVFs. The z-axis displacement of the apex voxel at each phase constitutes the 1D breathing trace. Figure~\ref{fig:RPM} describes the process of preparing the 1D respiratory signal. Feature-wise transformations, e.g., addition or multiplication, are simple and effective mechanisms to incorporate conditioning information from another data source into the features learned in the network (a minimal sketch of this mechanism is given below).
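The following PyTorch sketch is ours and purely illustrative (toy tensor sizes, a single-cell rollout rather than the full stacked encoder-decoder, and all variable names are hypothetical); it shows how such a multiplicative modulation enters a 3D ConvLSTM rollout, as formalized in Eq.~(\ref{eqn:modulation}) below:
\begin{verbatim}
import torch
import torch.nn as nn

class ConvLSTM3DCell(nn.Module):
    """Minimal 3D ConvLSTM cell (illustrative, not the exact RMSim layer)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution producing the input, forget, cell and output gates
        self.gates = nn.Conv3d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = ConvLSTM3DCell(in_ch=8, hid_ch=8)  # toy sizes (paper: 96 ch on 64^3 grid)
feat = torch.randn(1, 8, 16, 16, 16)      # encoded initial-phase features
h = torch.zeros(1, 8, 16, 16, 16)
c = torch.zeros_like(h)
trace = [0.2, 0.5, 0.9, 1.0, 0.8]         # illustrative 1D amplitudes b_t
for b_t in trace:
    h, c = cell(feat, h, c)
    h = b_t * h                           # feature-wise modulation of the state
\end{verbatim}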
In this paper, the hidden state of the ConvLSTM at each phase is modulated by a simple element-wise multiplication with the phase amplitude of the trace: \begin{equation} \label{eqn:modulation} m(H_t,b_t) =b_{t}H_t, \end{equation} where $H_t$ is the hidden state encoded from the sequence of phase images up to phase $t$ and $b_t$ is the amplitude of the breathing trace at phase $t$. The loss function for training includes the mean-squared error between the ground truth phase image and the predicted phase image, and a regularization on the gradient of the DVF promoting its smoothness: \begin{equation} \label{eqn:loss} Loss =\sum_{t>0}[(Y_t-T(X_0,\phi_t))^2 + ||\nabla\phi_t||^2], \end{equation} where $X_0$ is the initial phase image (phase 1 in this paper), $T$ is the spatial transform (adapted from VoxelMorph), $\phi_t$ is the predicted DVF for phase $t$, and $Y_t$ is the ground truth phase image at phase $t$. We developed RMSim using the PyTorch library (version 1.2.0). We used Adam for optimization and set the learning rate to 0.001 (as done in the original ConvLSTM paper~\cite{ShiCovLSTMNIPS2015}). Due to the large data size of a 4D image sequence (10 3D CT phase images constituting a single 4D-CT), the batch size was limited to 1 and the number of feature channels was 96, considering GPU memory and training time. The model was trained and tested on an internal high performance computing cluster with 4 NVIDIA A40 GPUs with 48 GB memory each. Our model consumed 35.2 GB of GPU memory and the training time was approximately 72 hours. The inference time for 9 phases and 40 total test cases from the internal dataset was less than 3 minutes. \subsection{Data augmentation by RMSim} Since RMSim can generate a series of realistic respiratory motion-induced images from a single 3D CT, one of its use cases is data augmentation for training DIR algorithms. For each of the 20 training cases in the Learn2Reg Grand Challenge dataset~\cite{hering_alessa_2020_Learn2Reg}, we randomly selected a 1D breathing trace from our internal dataset to modulate the motion on the Learn2Reg inhalation image and generate 9 additional phase images, increasing the training size 10-fold. We chose a popular deep learning DIR method, VoxelMorph, suitable for unsupervised training, for the purpose of validating the effectiveness of data augmentation. We first trained a VoxelMorph model with the original 20 inhalation-to-exhalation image pairs in the Learn2Reg training set. We then trained another VoxelMorph model with the augmented data including 200 pairs of inhalation-to-phase images. We compared the registrations from the two VoxelMorph models. \subsection{Evaluation Metrics} For image similarity, we used the structural similarity index measure (SSIM) \cite{SSIM2004}, which measures the similarity of two given images based on the degradation of structural information, including luminance, contrast and structure. The closer the SSIM value is to 1, the more similar the two images. SSIM was used for comparing RMSim-predicted phase images and ground truth phase images in the internal test cases.
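For reference, a minimal computation of this metric on a pair of 3D volumes might look as follows (a sketch assuming scikit-image; the file names are hypothetical):
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical file names; both volumes are 128 x 128 x 128 CT images
pred = np.load("phase5_pred.npy")   # phase-1 image warped by the predicted DVF
gt = np.load("phase5_gt.npy")       # ground truth phase-5 image

data_range = float(gt.max() - gt.min())
ssim_sim = structural_similarity(pred, gt, data_range=data_range)
print(f"SSIM(sim) = {ssim_sim:.3f}")
\end{verbatim}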
SSIM was also used for comparing the deformable registration results from VoxelMorph, to validate the data augmentation effectiveness on the Learn2Reg test cases; these were additionally evaluated with the provided lung segmentations, using the Dice score to compare the ground truth and propagated lung contours. For the landmark comparison in the POPI dataset, we used the Target Registration Error (TRE), defined as the Euclidean distance between a spatially transformed landmark position and the target position. \section{Results} For each test case in the internal 4D-CT dataset, we generated 9 simulated phase images from the ground truth phase 1 image by deforming the phase 1 image using the predicted DVF at each phase. We calculated SSIM to measure the image similarity (SSIM\textsuperscript{sim}) between the simulated phase images and the ground truth phase images. For comparison, we also calculated the SSIM (SSIM\textsuperscript{gnd}) between the ground truth phase 1 image and the rest of the ground truth phase images. The average SSIM\textsuperscript{sim} was 0.92$\pm$0.04, compared to 0.86$\pm$0.08 for SSIM\textsuperscript{gnd} ($p <0.01$). We also measured the diaphragm displacement error between the reference respiratory signal and the predicted signal (see Figure~\ref{fig:error}). As can be seen, the error increased from the inhale to the exhale phases, because the prediction accuracy decreases at later time points. However, the overall displacement error was within 3 mm. Adding more realistic respiratory data for training can further reduce this displacement error. \begin{figure}[th!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=0.8\textwidth]{figures/Displacement_error.pdf} \caption{The error between the reference respiratory signal (diaphragm displacement in mm) and the predicted signal.} \label{fig:error} \end{center} \end{figure} To demonstrate the modulation flexibility of the 1D breathing traces, we applied different breathing traces to the same 3D CT image to generate different motion simulations, as shown in Figure~\ref{fig:Results}. The plot on the top illustrates the two 1D breathing traces used for modulation. Breathing trace 1 (BT1), denoted by the orange line, represents the original respiratory signal for the case. BT2, denoted by the gray line, is a trace from another patient that was used to generate the simulated images. The white horizontal line indicates the position of the apex of the diaphragm in the initial phase (the first column). It is used as a reference to show the relative positions of the diaphragm at different phases. The diaphragm in the images of the upper row clearly shows more significant movement, as BT2 has higher amplitudes in the trace. \begin{figure}[!ht] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/2-Traces.pdf} \caption{Two different breathing traces, BT1 and BT2 shown in the plot, were used to simulate the respiratory motion of an internal case, resulting in 2 series of modulated phase images according to the breathing traces. The diaphragm has a larger displacement in the images simulated with BT2 (upper row) than in the images simulated with the shallower BT1 (bottom row). The white horizontal line indicates the position of the apex of the left diaphragm at the initial phase (left-most column).
We also overlay the propagated lung (yellow), heart (red), esophagus (blue) and tumor (green) contours obtained using the predicted DVFs.} \label{fig:Results} \end{center} \end{figure} The amplitude range in our internal dataset was 0.14--40 mm. To validate the prediction performance on out-of-range displacements, we predicted additional sequences using a 5 times larger respiratory amplitude. The prediction using the 5 times larger respiratory signal achieved a higher diaphragm level, meaning that the predicted motion had larger fluctuations than with the original respiratory signal, but it was not proportional to the respiratory signal used for inference (see Figure~\ref{fig:out_of_range}). \begin{figure}[th!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/Respiratory_amplitude.pdf} \caption{The predicted phase 5 images using different 1D respiratory signals. The blue line is the original respiratory signal, the orange line is 3 times the amplitude, and the green line is 5 times the amplitude.} \label{fig:out_of_range} \end{center} \end{figure} The results of propagating anatomical structures using the predicted DVFs are also shown in Figure~\ref{fig:Results}. We propagated the lung, heart, esophagus, and tumor contours from the initial phase image. The propagated contours are well matched with the predicted images and the motion of the structures looks very realistic. We also provide a \textbf{supplementary video} of the simulated 4D-CT along with the ground truth 4D-CT and the 3D volume-rendered visualizations. Specifically, the 3D volume-rendered visualizations of the LUNA challenge dataset as well as the internal lung radiotherapy dataset with structure propagation are included in the accompanying \textbf{supplementary video}, with chained 60-phase predictions for the LUNA challenge (radiology lung nodule) cases and 30-phase predictions for the lung radiotherapy cases. In the POPI dataset, there is only one case that contains lung segmentations for all the phases. For this case, we extracted the 1D breathing trace from the lung segmentations as we did for our internal dataset. RMSim trained with our internal dataset predicted the remaining phases from the inhale phase with the modulation from the 1D breathing trace. The average TRE of the landmarks propagated with our predicted DVFs in this case was 0.92$\pm$0.64 mm, showing that RMSim can accurately predict the patient-specific motion from the patient's 1D breathing trace. Figure \ref{fig:POP4DCTTRE} shows the TRE results for all predicted phases in this case. For the three other 4D-CT cases in POPI there were no lung segmentation masks, so we performed semi-automatic lung segmentation for extracting the 1D breathing traces; the results are shown in Figure~\ref{fig:dir_valid_supp}. \begin{figure}[t!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=0.8\textwidth]{figures/TRE_POPI_4DCT.pdf} \caption{TRE results of all 9 phases from the 4D-CT case in POPI. RMSim trained with the internal dataset was able to achieve sub-mm accuracy in this external case.} \label{fig:POP4DCTTRE} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/supp_fig1.pdf} \caption{Three other 4D-CT POPI cases including 10 phases with landmarks on each phase (TRE plots for the three cases given below).
For each case, we show the original and predicted phase images overlaid with the difference with respect to the original phase 1 input. In the original DIR\_Validation\_03 phase difference image, the diaphragm in the left lung (viewer's right) did not move due to the large tumor, but it does in our prediction (shown in red bounding boxes). This case does not detract from the goals of this paper, i.e., data augmentation and DIR validation. The difference in Case \#1 appears minor because the breathing is shallower (less diaphragm movement), while Case \#2 and Case \#3 have larger differences due to deeper breathing.} \label{fig:dir_valid_supp} \end{center} \end{figure} Additionally, we used RMSim for augmenting the Learn2Reg Challenge dataset. The Dice score of the lung segmentation for the 10 Learn2Reg testing cases using VoxelMorph without augmentation was $0.96$ $\pm$ $0.01$, while for the model trained with RMSim data augmentation it was $0.97$ $\pm$ $0.01$ ($p$ $<$ 0.001 using the \textit{paired t-test}). The SSIM between the warped images and the ground truth images was $0.88$ $\pm$ $0.02$ for the model without augmentation and $0.89$ $\pm$ $0.02$ ($p$ $<$ 0.001) for the model with augmentation. To validate the improvement of DIR using VoxelMorph with augmentation, we propagated the landmark points from the inhale phase to the exhale phase for the 6 cases available in the POPI dataset and computed the TRE. On average, the pre-DIR TRE was 8.05$\pm$5.61 mm and VoxelMorph without augmentation achieved 8.12$\pm$5.78 mm, compared to 6.58$\pm$6.38 mm for VoxelMorph with augmentation ($p$ $<$ 3e-48). The TRE comparison of all 6 cases is shown in Figure \ref{fig:POPI_bar}. \begin{figure}[t!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/TRE_POPI_6cases.pdf} \caption{TRE results on the POPI dataset. VoxelMorph with RMSim augmentation outperformed VoxelMorph without augmentation in all 6 cases.} \label{fig:POPI_bar} \end{center} \end{figure} \section{Discussion} In this work, we presented a 3D Seq2Seq network, referred to as RMSim, to predict patient-specific realistic motion induced/modulated by a 1D breathing trace. We successfully validated our RMSim output with both private and public benchmark datasets (healthy subjects and cancer patients) and demonstrated that adding our patient-specific augmentations to training data can improve the performance/accuracy of state-of-the-art deep learning DIR algorithms. We also showcased breathing trace-modulated respiratory motion simulations for public static radiology scans. In this work, we predicted the motion in one breathing cycle. In the future, we will fine-tune our current model to predict multiple cycles in one shot. Possible solutions include making our model bi-directional and using cross-attention to improve the temporal dynamics in a long sequence. Further research is needed to investigate the impact of training data augmentation on different image modalities such as 4D-MRI. Another application of our work is in external beam radiotherapy treatment planning. The RMSim-simulated 4D-CT can be used to delineate the internal target volume (ITV), which is the union of the target volumes in all respiratory phases. The entire ITV is irradiated in radiation therapy to ensure that all regions of the tumor receive enough radiation. There is a more sophisticated alternative to the ITV, referred to as robust treatment planning, where the key idea is to model the motion and directly incorporate it into the planning \cite{unkelbach_robust_2018}.
This typically can be done by assuming a probability density function (PDF) for the position of the target and performing plan optimization based on that~\cite{lens2017probabilistic,watkins2014multiple}. It is also possible to assume a set of possible motion PDFs to account for uncertainty in breathing and plan accordingly \cite{heath2009incorporating, bortfeld2008robust}. The simulated 4D-CT can be used to extract the motion PDF, or a set of motion PDFs, from varied breathing patterns exhibited by the patient. An additional interesting future direction is the extension of our earlier work on exhaustively simulating physics-based artifacts in CT and CBCT images for more robust cross-modal deep learning translation, segmentation, and motion-correction algorithms \cite{alam2021generalizable,alam2021motion,dahiya2021multitask}, available via our Physics-ArX library (\url{https://github.com/nadeemlab/Physics-ArX}). Specifically, in our previous work we presented a proof-of-concept pipeline for physics-based motion artifact simulation in CT/CBCT images using 4D-CT phases \cite{alam2021motion}. Using the method proposed in the current paper, we can generate and modulate large/diverse 4D-CT phases from any static 3D CT scan using the 1D RPM signal. These generated 4D-CT variations can then be used to produce large realistic motion-artifact variations via our earlier pipeline~\cite{alam2021motion}. \noindent \textbf{Limitations:} For simplicity, we used the maximal displacement on the diaphragm as the surrogate for the clinical breathing trace to drive the modulation. We assume that (1) the breathing pattern is regular, since we extracted the diaphragm displacements from amplitude-binned 4D-CT, and (2) regional DVFs are linearly scaled according to the diaphragm motion. Note that the 1D breathing trace might not represent the actual cardiac motion. Because of the GPU memory constraints, our input and output dimensions were limited to 128$\times$128$\times$128. Nevertheless, precise estimation of the motion is not required for providing realistic motion-induced ground truth DVFs for the validation of DIR algorithms and data augmentation for training DIR algorithms, as shown in this work. To extend our work to tumor tracking during radiation treatment, we will use the signals from an actual external real-time position management (RPM) device to drive the modulation more precisely. We will also explore incorporating 2D MV/kV projections acquired during the treatment to infer more realistic cardiac/tumor motion. \section*{Acknowledgements} This work was supported partially by NCI/NIH P30 CA008748. \section*{Conflict of interest} We have no conflict of interest to declare. \section*{Code Availability Statement} The code, pretrained models, and augmented DIR validation datasets will be released at \url{https://github.com/nadeemlab/SeqX2Y}. \section*{Data Availability Statement} The public datasets used in this study and their URLs are as follows: (1) Learn2Reg Challenge Lung CT dataset (Empire10 Challenge Dataset): \url{https://drive.google.com/drive/folders/1yHWLQEK9c1xzggkCC4VX0X4To7BBDqu5}, (2) LUNA challenge dataset (subset0.zip): \url{https://zenodo.org/record/3723295}, (3) DIR Validation POPI Dataset (6 4D-CT patients with landmarks): \url{https://www.creatis.insa-lyon.fr/rio/dir_validation_data}, and (4) POPI model dataset (one 4D-CT patient dataset with landmarks on all phases as well as a lung segmentation mask): \url{https://www.creatis.insa-lyon.fr/rio/popi-model_original_page}.
\section*{References} \section{Introduction} {Respiratory motion hampers accurate diagnosis as well as image-guided therapeutics. For example, during radiotherapy,} it may lead to poor local tumor control and increased radiation toxicity to the normal organs~\cite{motionlung2018}. It can also exhibit itself as motion artifacts in the acquired images, making it difficult to differentiate nodule/tumor morphology changes from those induced by respiratory motion. This also makes the image registration task across different breathing phases as well as across different time points challenging. To validate the image registration accuracy/performance for commissioning solutions available in clinical commercial systems, the American Association of Physicists in Medicine(AAPM) TG-132~\cite{Brock2017TG132} recommended independent quality checks using digital phantoms. Current commercial solutions such as ImSimQA allow creation of synthetic deformation vector fields (DVFs) by user-defined transformations with only a limited degree of freedom. These monotonic transformations can not capture the realistic respiratory motion. For modeling respiration motion, an intuitive representation of motion is time-varying displacement vector fields (DVFs) obtained by deformable image registrations (DIR) in 4D images, acquired in a breathing cycle. Surrogate-driven approaches~\cite{MCCLELLAND201319} {employ} DVF as a function of the surrogate breathing signal. However, an exact and direct solution in the high-dimensional space of DVFs is computationally intractable. {Still, motion surrogates have been widely studied in the field of radiotherapy for building models establishing the relationship between surrogates and respiratory motion estimated from the image data \cite{MCCLELLAND201319}. For example, the 1D diaphragm displacement has been reported as a reliable surrogate for tumor motion model \cite{Cervi_o_2009} as well as for PCA (principle component analysis) respiratory motion model to correct CT motion artifacts~\cite{Zhang2007_PCA}.} Recently, Romaguera et al.~\cite{ROMAGUERA2020_2DSeq2Seq} used a 2D {sequence-to-sequence (Seq2Seq) network~\cite{seq2seq2014}} to predict 2D in-plane motion for a single future time point. Krebs et al.~\cite{Krebs2020_cVAE} applied a similar encoder-decoder network in a conditional variational autoencoder (cVAE) framework {in which network parameters were learned to approximate the distribution of deformations in low-dimensional latent space with the encoder and decode the latent features} for {2D motion prediction with the decoder.} Romaguera et al.~\cite{ROMAGUERA2021_cVAE} integrated Voxelmorph \cite{balakrishnan2019voxelmorph} for assisting the VAE encoder to map deformations in latent space conditioned on anatomical features from 3D images. Temporal information of 2D surrogate cine images from a 2D Seq2Seq network was used to predict 3D DVF {at a single future time point.} {In this paper, we present a novel deep learning respiratory motion simulator (RMSim) that learns to generate realistic patient-specific respiratory motion represented by time-varying DVFs at different breathing phases from a static 3D CT image. For the first time,} we also allow modulation of this simulated motion via arbitrary 1D breathing traces as auxiliary input to create large variations. This in turn creates diverse patient-specific data augmentations while also generating ground truth for DIR validation. 
Our work has several differences and advantages over the aforementioned deep learning approaches: (1) we use a 3D Seq2Seq architecture which, to our knowledge, has not been attempted before for predicting deformations due to GPU memory limitations, (2) we do not use VoxelMorph in its entirety but only its Spatial Transform module to train our model end-to-end, and (3) as opposed to predicting just a single future time point, we predict 9 future breathing phases simultaneously (learnt from 4D-CT images with 10 3D CT breathing phases) along with their 3D DVFs. We have thoroughly validated our RMSim output with both private and public benchmark datasets (healthy and cancer patients) and demonstrated that adding our patient-specific augmentations to training data can improve the performance/accuracy of state-of-the-art deep learning DIR algorithms. We also showcase breathing trace-modulated respiratory motion simulations for public static radiology scans (in the accompanying \textbf{supplementary video}). The code, pretrained models, and augmented DIR validation datasets will be released at \url{https://github.com/nadeemlab/SeqX2Y}. \begin{figure}[th!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/model_new_figure.pdf} \caption{Schematic of the proposed deep learning model. The Seq2Seq encoder-decoder framework was used as the backbone of the proposed model. The model was built with 3D convolution layers for feature encoding and output decoding, and 3D convolutional Long Short-Term Memory (3D ConvLSTM) layers for spatial-temporal correlation between time points. The last layer of the decoder is a spatial transform layer that warps the initial phase image with the predicted Deformation Vector Field (DVF). To modulate the respiratory motion, the 1D breathing trace is given as input along with the initial phase image. The dimension of the image volume is 128 $\times$ 128 $\times$ 128 and the input feature to the 3D ConvLSTM is 64 $\times$ 64 $\times$ 64 $\times$ 96 (Depth $\times$ Width $\times$ Height $\times$ Channel).} \label{fig:Model} \end{center} \end{figure} \section{Materials and Methods} \subsection{Datasets} We used an internal lung 4D-CT dataset retrospectively collected and de-identified from 140 non-small cell lung cancer (NSCLC) patients receiving radiotherapy in our institution. The helical and cine mode 4D-CTs were acquired using Philips Brilliance Big Bore or GE Advantage, respectively, and binned into 10 phases using the vendor's proprietary software with breathing signals from bellows or external fiducial markers. The x-ray energy for the CT image was 120 kVp, and the tube current varies case by case according to vendor-specific tube current modulation based on patient size; the mAs range is [100, 400] for GE and [500, 800] for Philips. The image slice dimension was 512$\times$512, while the number of image slices varied patient by patient. We used a 100:40 split for training:testing. We used 20 cases of the Lung Nodule Analysis (LUNA) challenge dataset~\cite{SETIO20171LUNA}, containing 3D radiology CTs for lung tumor screening, to show that our RMSim model trained with the internal dataset can be effectively applied to an external radiology/diagnostic dataset to generate realistic respiratory motion (see accompanying \textbf{supplementary video}).
For quantitative evaluation of the model's generality on an external dataset, we used the POPI dataset \cite{vandemeulebroucke2011spatiotemporal}, which contains 6 10-phase 4D-CTs with segmented lung masks as well as annotated landmarks on the vessel and airway bifurcations. To validate the effectiveness of data augmentation with synthetic respiratory motion images generated by our RMSim model in the deformable registration task, we used the Learn2Reg 2020 challenge dataset~\cite{hering_alessa_2020_Learn2Reg}. The Learn2Reg dataset consists of 30 subjects (20 for training / 10 for testing) with 3D CT thorax images taken in the inhale and exhale phases. For each of the 20 Learn2Reg training inhale/exhale pairs, we generated the other phase images using our RMSim model trained with the internal dataset, thereby increasing the sample size to 200 pairs in total to augment the training of a well-known unsupervised deep learning DIR method, VoxelMorph~\cite{balakrishnan2019voxelmorph}. Unfortunately, the inhale-exhale landmarks are not publicly available in the Learn2Reg dataset to assess the registration accuracy. For the landmark evaluation in the registration task, we used the POPI dataset. A brief description of all the datasets used in this study is given in Table~\ref{table:datasets}. All datasets used in this study were cropped to eliminate the background and resampled to 128$\times$128$\times$128 with 2mm voxel size due to the GPU memory constraints. \begin{table*}[ht] \centering \caption{Datasets used in this study.} \label{table:datasets} \footnotesize \begin{tabular}{l|p{0.15\linewidth}|p{0.20\linewidth}|p{0.25\linewidth}|p{0.15\linewidth}} \hline \textbf{Dataset} & \textbf{Size} & \textbf{Description} & \textbf{Purpose} & \textbf{Evaluation} \\ \hline Internal 4D-CTs & 140 (100 training, 40 testing) & 10-phase radiotherapy 4D-CTs & Training and testing RMSim & Image similarity\\ \hline LUNA & 20 & Radiology CTs for lung nodule detection & Testing model generality & Visualization and qualitative \\ \hline POPI 4D-CTs & 6 & 10-phase 4D-CTs with landmarks & Testing model generality (evaluating DVF accuracy) & Target Registration Error (TRE) of landmarks \\ \hline Learn2Reg & 30 (20 training, 10 testing) & Inspiration-expiration thorax CT pairs with lung segmentations & Training and testing RMSim-augmented deep learning deformable image registration (VoxelMorph) & Lung segmentation (Dice score) and image similarity \\ \hline \end{tabular} \end{table*} \subsection{Realistic Respiratory Motion Simulation} \begin{figure}[t!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/4dct_surrogate.pdf} \caption{Respiratory motion surrogate extraction using the diaphragm point that has the maximum superior-inferior displacement across the phases. LDDMM was used to register the phase 1 (fixed) image to the other phases (moving) to get the DVFs. The diaphragm point's trajectory along the z-axis (shown in red) across the phases was taken as the breathing trace. The yellow line shows the diaphragm position at phase 1.} \label{fig:RPM} \end{center} \end{figure} Sequence-to-Sequence (Seq2Seq) is a many-to-many network architecture originally developed for natural language processing tasks such as language translation.
Inspired by Seq2Seq, the proposed RMSim, illustrated in Figure~\ref{fig:Model}, is a novel deep learning encoder-decoder architecture that comprises three main parts: 3D convolution, ConvLSTM3D (3D Convolutional Long Short-Term Memory), and a spatial transformation layer (adapted from VoxelMorph \cite{balakrishnan2019voxelmorph}). The 3D convolutions in the encoder are used to reduce the matrix dimension and extract salient features from the images. We used a 3$\times$3$\times$3 kernel size and a 2$\times$2$\times$2 stride to reduce the matrix dimension to 1/8. The number of channels for the 3D convolution layer is 96. An LSTM has a more complex cell structure than a neuron in a classical recurrent neural network (RNN). Apart from the cell state, it contains gate units that decide when to keep or override information in and out of the memory cells, mitigating the vanishing gradient problem of RNNs. This helps in learning long-term dependencies. ConvLSTM \cite{ShiCovLSTMNIPS2015} replaces the matrix multiplications in the input-to-state and state-to-state transitions with convolution operators to capture the spatial pattern of the feature representations aggregated from different time points. We implemented ConvLSTM in 3D to handle the 3D phase images from the 4D-CT. We used two stacked ConvLSTM3D layers to make the network deeper, adding levels of abstraction to the input observations similar to a typical deep neural network. The hidden state output from a ConvLSTM3D layer was fed to both the next layer in the same stack and the next-timepoint ConvLSTM3D layer. The output of the ConvLSTM3D in the decoder at each predicted time point was up-sampled to the original input resolution and the output channels were reduced via 3D convolution, resulting in the 3D DVF as the final output. The initial phase CT image was then deformed to the predicted phase image at each breathing phase using the spatial transformation layer and the predicted 3D DVFs. Moreover, to modulate the predicted motion with a patient-specific pattern, we used an auxiliary input of a 1D breathing trace. In this paper, we considered the amplitude of the diaphragm apex motion as the surrogate of the respiratory signal~\cite{Cervi_o_2009}. The 1D breathing trace for each training case was extracted using the DVFs obtained from large deformation diffeomorphic metric mapping (LDDMM) DIR provided by ANTs (Advanced Normalization Tools). Specifically, using the DVFs, the apex point of the diaphragm was propagated from the phase at the end of inhalation to the other phases to generate the 1D displacement trace. The apex of the diaphragm was determined by finding the lung surface voxel with the maximum superior-inferior (z-axis) displacement among the DVFs. The z-axis displacement of the apex voxel at each phase constitutes the 1D breathing trace. Figure~\ref{fig:RPM} describes the process of preparing the 1D respiratory signal.
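As a concrete illustration, this extraction step can be sketched in a few lines of NumPy (our own sketch on synthetic arrays; for brevity, a binary lung mask stands in for the lung surface, and all variable names are illustrative):
\begin{verbatim}
import numpy as np

# Synthetic stand-ins: per-phase DVFs (phase 1 -> phases 2..10) and a
# binary lung mask at phase 1; dvfs[t, 0] is assumed to hold the
# superior-inferior (z) displacement component.
rng = np.random.default_rng(0)
n_phases, D, H, W = 9, 32, 32, 32
dvfs = rng.normal(size=(n_phases, 3, D, H, W)).astype(np.float32)
lung = np.zeros((D, H, W), dtype=bool)
lung[8:24, 8:24, 8:24] = True

# Apex = lung voxel with the largest z-displacement across all phases.
z_disp = dvfs[:, 0]                        # (n_phases, D, H, W)
score = np.abs(z_disp).max(axis=0)         # per-voxel max over phases
score[~lung] = -np.inf                     # restrict the search to the lung
apex = np.unravel_index(np.argmax(score), score.shape)

# The z-displacement of the apex voxel per phase is the breathing trace.
trace = z_disp[(slice(None),) + apex]      # shape (n_phases,)
print(trace)
\end{verbatim}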
Feature-wise transformations, e.g. addition or multiplication, are simple and effective mechanisms to incorporate conditioning information from another data source into the features learned by the network. In this paper, the hidden state of the ConvLSTM at each phase is modulated by a simple element-wise multiplication with the phase-amplitude of the trace: \begin{equation} \label{eqn:modulation} m(H_t,b_t) = b_{t}H_t, \end{equation} where $H_t$ is the hidden state encoded from the sequence of phase images up to phase $t$ and $b_t$ is the amplitude of the breathing trace at phase $t$. The loss function for training includes the mean-squared error between the ground truth phase image and the predicted phase image, and a regularization term on the gradient of the DVF that promotes its smoothness: \begin{equation} \label{eqn:loss} Loss =\sum_{t>0}[(Y_t-T(X_0,\phi_t))^2 + ||\nabla\phi_t||^2], \end{equation} where $X_0$ is the initial phase image (phase 1 in this paper), $T$ is the spatial transform (adapted from VoxelMorph), $\phi_t$ is the predicted DVF for phase $t$, and $Y_t$ is the ground truth phase image at phase $t$. We developed RMSim using the PyTorch library (version 1.2.0). We used Adam for optimization and set the learning rate to 0.001 (as done in the original ConvLSTM paper~\cite{ShiCovLSTMNIPS2015}). Due to the large data size of a 4D image sequence (10 3D CT phase images constituting a single 4D-CT), the batch size was limited to 1 and the number of feature channels to 96, considering GPU memory and training time. The model was trained and tested on an internal high performance computing cluster with 4 NVIDIA A40 GPUs with 48GB memory each. Our model consumed 35.2 GB of GPU memory and the training time was approximately 72 hours. The inference time for 9 phases and the 40 test cases from the internal dataset was less than 3 minutes.
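To make Eqs.~\eqref{eqn:modulation} and \eqref{eqn:loss} concrete, a minimal PyTorch sketch of the trace modulation, the spatial transform, and the loss is given below. This is our simplified illustration, not the released implementation: it assumes the predicted displacements are already normalized to the $[-1,1]$ sampling grid of \texttt{grid\_sample}, and all tensor names are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def modulate(h_t, b_t):
    """Eq. (1): scale the ConvLSTM hidden state (B,C,D,H,W) by the
    breathing-trace amplitude b_t (B,) of the current phase."""
    return b_t.view(-1, 1, 1, 1, 1) * h_t

def warp(x0, dvf):
    """Spatial transform T(X_0, phi): warp x0 (B,1,D,H,W) with a DVF
    (B,3,D,H,W) whose channels are assumed normalized (x,y,z) offsets."""
    B, _, D, H, W = x0.shape
    zs, ys, xs = torch.meshgrid(torch.linspace(-1, 1, D),
                                torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack((xs, ys, zs), dim=-1).expand(B, D, H, W, 3)
    grid = grid + dvf.permute(0, 2, 3, 4, 1)   # identity + displacement
    return F.grid_sample(x0, grid, align_corners=True)

def rmsim_loss(x0, dvfs, targets):
    """Eq. (2): image MSE plus a smoothness penalty on each DVF.
    dvfs: (B,T,3,D,H,W), targets: (B,T,1,D,H,W)."""
    loss = 0.0
    for t in range(dvfs.shape[1]):
        phi = dvfs[:, t]
        mse = ((targets[:, t] - warp(x0, phi)) ** 2).mean()
        smooth = sum(g.pow(2).mean()
                     for g in torch.gradient(phi, dim=(2, 3, 4)))
        loss = loss + mse + smooth
    return loss

# Differentiability check on toy tensors (zero DVF = identity warp).
x0 = torch.randn(1, 1, 32, 32, 32)
dvfs = torch.zeros(1, 9, 3, 32, 32, 32, requires_grad=True)
rmsim_loss(x0, dvfs, x0.unsqueeze(1).repeat(1, 9, 1, 1, 1, 1)).backward()
\end{verbatim}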
\subsection{Data augmentation by RMSim} Since RMSim can generate a series of realistic respiratory motion-induced images from a single 3D CT, one of its use cases is data augmentation for training DIR algorithms. For each of the 20 training cases in the Learn2Reg Grand Challenge dataset~\cite{hering_alessa_2020_Learn2Reg}, we randomly selected a 1D breathing trace from our internal dataset to modulate the motion on the Learn2Reg inhalation image and generate 9 additional phase images, increasing the training size 10-fold. We chose a popular deep learning DIR method, VoxelMorph, suitable for unsupervised training, for the purpose of validating the effectiveness of data augmentation. We first trained a VoxelMorph model with the original 20 inhalation-to-exhalation image pairs in the Learn2Reg training set. We then trained another VoxelMorph model with the augmented data including 200 pairs of inhalation-to-phase images. We compared the registrations from the two VoxelMorph models to validate the effectiveness of data augmentation. \subsection{Evaluation Metrics} For image similarity, we used the structural similarity index measure (SSIM) \cite{SSIM2004}, which measures the similarity of two given images based on the degradation of structural information, including luminance, contrast and structure. The closer the SSIM value is to 1, the more similar the two images. SSIM was used for comparing RMSim-predicted phase images and ground truth phase images in the internal test cases. SSIM was also used for comparing deformable registration results from VoxelMorph to validate the data augmentation effectiveness in the Learn2Reg test cases, which additionally were evaluated with the provided lung segmentations using the Dice score to compare the ground truth lung contours and the propagated lung contours. For landmark comparison in the POPI dataset, we used the Target Registration Error (TRE), defined as the Euclidean distance between the spatially transformed landmark position and the target position. \section{Results} For each test case in the internal 4D-CT dataset, we generated 9 simulated phase images from the ground truth phase 1 image by deforming the phase 1 image with the predicted DVF at each phase. We calculated SSIM to measure the image similarity (SSIM\textsuperscript{sim}) between the simulated phase images and the ground truth phase images. For comparison, we also calculated the SSIM (SSIM\textsuperscript{gnd}) between the ground truth phase 1 image and the rest of the ground truth phase images. The average SSIM\textsuperscript{sim} was 0.92$\pm$0.04, compared to 0.86$\pm$0.08 for SSIM\textsuperscript{gnd} ($p<0.01$). We also measured the error in diaphragm displacement between the reference respiratory signal and the predicted signal (see Figure~\ref{fig:error}). As can be seen, the error increased from the inhale to the exhale phases, since the prediction accuracy decreases at later time points. The overall displacement error, however, was within 3 mm. Adding more realistic respiratory data for training can further reduce this displacement error. \begin{figure}[th!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=0.8\textwidth]{figures/Displacement_error.pdf} \caption{The error between the reference respiratory signal (diaphragm displacement in mm) and the predicted signal.} \label{fig:error} \end{center} \end{figure} To demonstrate the modulation flexibility of the 1D breathing traces, we applied different breathing traces to the same 3D CT image to generate different motion simulations, as shown in Figure~\ref{fig:Results}. The plot on the top illustrates the two 1D breathing traces used for modulation. Breathing trace 1 (BT1), denoted by the orange line, represents the original respiratory signal for the case. BT2, denoted by the gray line, is a trace from another patient that was used to generate the simulated images. The white horizontal line indicates the position of the apex of the diaphragm in the initial phase (the first column). It is used as a reference to show the relative positions of the diaphragm at different phases. The diaphragm in the images of the upper row clearly shows more significant movement, as BT2 has higher amplitudes in the trace. \begin{figure}[!ht] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/2-Traces.pdf} \caption{Two different breathing traces, BT1 and BT2 shown in the plot, were used to simulate the respiratory motion of an internal case, resulting in 2 series of modulated phase images according to the breathing traces. The diaphragm has a larger displacement in the images simulated with BT2 (upper row) than in the images simulated with the shallower BT1 (bottom row). The white horizontal line indicates the position of the apex of the left diaphragm at the initial phase (left-most column).
We also overlay the propagated lung (in yellow), heart (in red), esophagus (in blue) and tumor (in green) contours using the predicted DVFs.} \label{fig:Results} \end{center} \end{figure} The amplitude range in our internal dataset was 0.14--40 mm. To validate the prediction performance on out-of-range displacements, we predicted additional sequences using respiratory signals with up to 5 times larger amplitude. The predictions driven by the 5-times-larger signal reach a higher diaphragm level, i.e., the predicted respiratory motion has a larger fluctuation than with the original signal, but the increase was not proportional to the amplitude of the signal used for inference (see Figure~\ref{fig:out_of_range}). \begin{figure}[th!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/Respiratory_amplitude.pdf} \caption{The predicted phase 5 images using different 1D respiratory signals. The blue line is the original respiratory signal, the orange line has 3 times the amplitude, and the green line has 5 times the amplitude.} \label{fig:out_of_range} \end{center} \end{figure} The results of propagating anatomical structures using the predicted DVFs are also shown in Figure~\ref{fig:Results}. We propagated the lung, heart, esophagus, and tumor contours from the initial phase image. The propagated contours are well-matched with the predicted images and the motion of the structures looks very realistic. We also provide a \textbf{supplementary video} of the simulated 4D-CT along with the ground truth 4D-CT and 3D volume-rendered visualizations. Specifically, the 3D volume-rendered visualizations of the LUNA challenge datasets as well as internal lung radiotherapy datasets with structure propagation are included in the accompanying \textbf{supplementary video}, with chained 60-phase predictions for the LUNA challenge (radiology lung nodule) cases and 30-phase predictions for the lung radiotherapy datasets. In the POPI dataset, there is only one case that contains lung segmentations on all the phases. For this case, we extracted the 1D breathing trace from the lung segmentations as we did for our internal dataset. RMSim trained with our internal dataset predicted the remaining phases from the inhale phase with the modulation from the 1D breathing trace. The average TRE of the landmarks propagated with our predicted DVFs in this case was 0.92$\pm$0.64mm, showing that RMSim can accurately predict patient-specific motion from the patient's 1D breathing trace. Figure \ref{fig:POP4DCTTRE} shows the TRE results for all predicted phases in this case. For the three other 4D-CT cases in POPI there were no lung segmentation masks, so we performed semi-automatic lung segmentation to extract the 1D breathing traces; the results are shown in Figure~\ref{fig:dir_valid_supp}. \begin{figure}[t!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=0.8\textwidth]{figures/TRE_POPI_4DCT.pdf} \caption{TRE results of all 9 phases from the 4D-CT case in POPI. RMSim trained with the internal dataset was able to achieve sub-mm accuracy in this external case.} \label{fig:POP4DCTTRE} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/supp_fig1.pdf} \caption{Three other 4D-CT POPI cases including 10 phases with landmarks on each phase (TRE plots for the three cases given below).
For each case, we show the original and predicted phase images overlaid with the difference with respect to the original phase 1 input. In the original DIR\_Validation\_03 phase difference image, the diaphragm in the left lung (viewer's right) did not move due to the large tumor, but it does in our prediction (shown in red bounding boxes). This case does not detract from the goals of this paper, i.e. data augmentation and DIR validation. The difference in Case \#1 appears minor because the breathing is shallower (less diaphragm movement), while Case \#2 and Case \#3 show larger differences due to deeper breathing.} \label{fig:dir_valid_supp} \end{center} \end{figure} Additionally, we used RMSim for augmenting the Learn2Reg Challenge dataset. The Dice score of the lung segmentation for the 10 Learn2Reg testing cases using VoxelMorph without augmentation was $0.96 \pm 0.01$, while for the model trained with RMSim data augmentation it was $0.97 \pm 0.01$ ($p < 0.001$ using the \textit{paired t-test}). The SSIM between the warped images and the ground truth images was $0.88 \pm 0.02$ for the model without augmentation and $0.89 \pm 0.02$ ($p < 0.001$) for the model with augmentation. To validate the improvement of DIR using VoxelMorph with augmentation, we propagated the landmark points from the inhale phase to the exhale phase for the 6 cases available in the POPI dataset and computed the TRE. On average, the pre-DIR TRE was 8.05$\pm$5.61mm and the TRE for VoxelMorph without augmentation was 8.12$\pm$5.78mm, compared to 6.58$\pm$6.38mm for VoxelMorph with augmentation ($p < 3\times10^{-48}$). The TRE comparison for all 6 cases is shown in Figure \ref{fig:POPI_bar}. \begin{figure}[t!] \begin{center} \footnotesize \setlength{\tabcolsep}{3pt} \includegraphics[width=1\textwidth]{figures/TRE_POPI_6cases.pdf} \caption{TRE results on the POPI dataset. VoxelMorph with RMSim augmentation outperformed VoxelMorph without augmentation in all 6 cases.} \label{fig:POPI_bar} \end{center} \end{figure} \section{Discussion} In this work, we presented a 3D Seq2Seq network, referred to as RMSim, to predict patient-specific realistic respiratory motion modulated by a 1D breathing trace. We successfully validated our RMSim output with both private and public benchmark datasets (healthy and cancer patients) and demonstrated that adding our patient-specific augmentations to training data can improve the performance/accuracy of state-of-the-art deep learning DIR algorithms. We also showcased breathing trace-modulated respiratory motion simulations for public static radiology scans. In this work, we predicted the motion in one breathing cycle. In the future, we will fine-tune our current model to predict multiple cycles in one shot. Possible solutions include making our model bi-directional and using cross-attention to improve the temporal dynamics in a long sequence. Further research is also needed to investigate the impact of training data augmentation on different image modalities such as 4D-MRI. Another application of our work is in external radiotherapy treatment planning. An RMSim-simulated 4D-CT can be used to delineate the internal target volume (ITV), which is the union of the target volumes in all respiratory phases. The entire ITV is irradiated in radiation therapy to ensure that all regions of the tumor receive enough radiation. There is a more sophisticated alternative to the ITV, referred to as robust treatment planning, where the key idea is to model the motion and directly incorporate it into the planning \cite{unkelbach_robust_2018}.
This typically can be done by assuming a probability density function (PDF) for the position of the target and optimizing the plan based on it~\cite{lens2017probabilistic,watkins2014multiple}. It is also possible to assume a set of possible motion PDFs to account for uncertainty in breathing and plan accordingly \cite{heath2009incorporating, bortfeld2008robust}. The simulated 4D-CT can be used to extract the motion PDF, or a set of motion PDFs, from varied breathing patterns exhibited by the patient. An additional interesting future direction is the extension of our earlier work on exhaustively simulating physics-based artifacts in CT and CBCT images for more robust cross-modal deep learning translation, segmentation, and motion-correction algorithms \cite{alam2021generalizable,alam2021motion,dahiya2021multitask}, available via our Physics-ArX library (\url{https://github.com/nadeemlab/Physics-ArX}). Specifically, in our previous work we presented a proof-of-concept pipeline for physics-based motion artifact simulation in CT/CBCT images using 4D-CT phases \cite{alam2021motion}. Using the method proposed in the current paper, we can generate and modulate large/diverse 4D-CT phases from any static 3D CT scan using the 1D RPM signal. These generated 4D-CT variations can then be used to produce large realistic motion-artifact variations via our earlier pipeline \cite{alam2021motion}. \noindent \textbf{Limitations:} For simplicity, we used the maximal displacement on the diaphragm as the surrogate of the clinical breathing trace to drive the modulation. We assume that (1) the breathing pattern is regular, since we extracted the diaphragm displacements from amplitude-binned 4D-CT, and (2) regional DVFs are linearly scaled according to the diaphragm motion. Note that a 1D breathing trace might not represent the actual cardiac motion. Because of GPU memory constraints, our input and output dimensions were limited to 128$\times$128$\times$128. Nevertheless, a precise estimation of motion is not required for providing realistic motion-induced ground truth DVFs for the validation of DIR algorithms and data augmentation for training DIR algorithms, as shown in this work. To extend our work to tumor tracking during radiation treatment, we will use the signals from the actual external real-time position management (RPM) device to drive the modulation more precisely. We will also explore incorporating 2D MV/kV projections acquired during the treatment to infer more realistic cardiac/tumor motion. \section*{Acknowledgements} This work was supported partially by NCI/NIH P30 CA008748. \section*{Conflict of interest} We have no conflict of interest to declare. \section*{Code Availability Statement} The code, pretrained models, and augmented DIR validation datasets will be released at \url{https://github.com/nadeemlab/SeqX2Y}. \section*{Data Availability Statement} The public datasets used in this study and their URLs are as follows: (1) Learn2Reg Challenge Lung CT dataset (Empire10 Challenge Dataset): \url{https://drive.google.com/drive/folders/1yHWLQEK9c1xzggkCC4VX0X4To7BBDqu5}, (2) LUNA challenge dataset (subset0.zip): \url{https://zenodo.org/record/3723295}, (3) DIR Validation POPI Dataset (6 4D CT patients with landmarks): \url{https://www.creatis.insa-lyon.fr/rio/dir_validation_data}, and (4) POPI model dataset (one 4D CT patient dataset with landmarks on all phases as well as lung segmentation mask): \url{https://www.creatis.insa-lyon.fr/rio/popi-model_original_page}. \section*{References}
\section{Introduction}\label{sec:intro} Networked systems are ubiquitous and include critical infrastructure networks such as the power grid, gas transmission pipelines, water networks and district heating systems. In such systems, optimization is often leveraged to maximize technical performance or economic efficiency, giving rise to what we will call Optimal Physical Network Flow (OPNF) problems. Optimization of system operation requires a mathematical model of the system. However, in practical systems, imperfect information and forecast errors introduce uncertainty in system operation and planning. If the uncertainty is not accounted for properly during the design and optimization process, the optimized system solution might be vulnerable to uncertainty, with potentially detrimental impacts on system risk. A prominent example is the Optimal Power Flow (OPF) problem in the electric power grid, which minimizes operational cost subject to technical constraints, and is used to clear electricity markets, perform security assessment and guide system expansion planning. The most significant source of uncertainty in the OPF problem is due to imperfect forecasts of renewable generation and loading conditions. System security must be maintained by ensuring that all variables are kept within acceptable values for a range of uncertainty realizations. Problems with a similar structure also arise in other infrastructure networks such as natural gas and water networks. A typical approach to account for uncertainty is to formulate the OPNF as a robust or stochastic program. However, for many of the above-mentioned systems, the physics governing the network flows is given by a set of non-linear equations, such as branch flow equations and nodal conservation laws. This gives rise to non-linear equality constraints, which are inherently non-convex and thus challenging for both deterministic and stochastic optimization algorithms. In addition, the non-linearity significantly complicates the characterization of the uncertainty propagation throughout the system. Most existing robust and stochastic programming methods rely on assumptions of convexity. For practical problems such as the OPF problem, solution methods for robust or stochastic problem formulations typically use linear approximations \cite{roald2017, dallanese2017} or convex relaxations \cite{vrakopoulou2013AC, lorca2017robust, nasri2016} to circumvent the problem of non-convexity. This enables the application of well-known methods for robust \cite{robustoptimization2009} or chance-constrained \cite{campi2006, calafiore2006} programming, at the expense of a reduction in model fidelity and less comprehensive feasibility guarantees for the underlying problem. In this paper, we take a different approach. Instead of approximating or relaxing the non-linear network flow equations, we aim at treating the non-convex problem directly using techniques from polynomial optimization \cite{Henrion09, Magron15, Lasserre17}. The method is applicable to problems where the equality and inequality constraints can be represented as polynomials in both the decision variables and the uncertain parameters. We first formulate the uncertainty-aware problem as a Chance-Constrained OPNF (CC-OPNF) to guarantee that the constraints are satisfied with high probability. Due to the chance constraints, the CC-OPNF is intractable in its original form.
The main contribution of the paper is to develop conservative, tractable approximations of the chance constraints in the form of polynomial constraints. We start from recent results for volume computations of semi-algebraic sets \cite{Henrion09} and projections of semi-algebraic sets \cite{Magron15}, which were shown to be useful for outer approximations of chance constraints \cite{Lasserre17}, and provide two crucial extensions: \begin{enumerate} \item While \cite{Lasserre17} allows for outer approximations of the chance constraints using a hierarchy of SDPs, in practice it is not straightforward to obtain an \emph{inner approximation}, which is typically what is of interest in our setting. Therefore, we use a series of set manipulations to extend the existing methods towards practical inner approximations. \item To improve computational performance, we develop a \emph{two-step approximation procedure}, which allows for better approximations at lower computational overhead. \end{enumerate} Replacing the chance constraints by their respective polynomial approximations yields an Approximate CC-OPNF (ACC-OPNF) problem that is still non-convex, but readily solvable by state-of-the-art non-linear programming solvers. Note that the polynomial chance-constraint approximations, which can be computationally heavy to construct, are determined a priori in a \emph{pre-processing} step to the ACC-OPNF. Based on a small case study for the AC OPF problem, we demonstrate the practical performance of the method. In particular, we demonstrate the value of the extensions to inner approximations and the benefit of the two-step procedure. The remainder of the paper is organized as follows. After discussing the problem formulation in Section \ref{sec:probform}, we explain how to obtain polynomial approximations of the chance constraints in Section \ref{sec:PolApprox}. Section \ref{sec:improved_stokes} describes the rationale behind the two-step procedure, while Section \ref{sec:overall} summarizes the overall approach. In Section \ref{sec:application ccacopf}, we describe how our general framework can be mapped to the CC-AC-OPF, before providing numerical results in Section \ref{sec:numerics}. Finally, Section \ref{sec:concl} summarizes and concludes. \section{Problem formulation}\label{sec:probform} We now present the problem formulation in abstract form for a generic physical network flow problem, as the method can be applied to any problem that has the structure described below. For a concrete example, we refer the reader to Section~\ref{sec:application ccacopf}, where the method is applied to the AC~OPF problem. \subsection{Deterministic Optimal Physical Network Flow} We define the problem variables as $\x = (x_1,\ldots,x_n)$ and $\y=(y_1,\ldots,y_m)$. For multivariate polynomials $f^0_1,\ldots,f^0_m$, $g^0_1,\ldots,g^0_k\in \rxy$ we consider the following Deterministic Optimal Physical Network Flow (D-OPNF) problem: \begin{subequations}\label{eq:abstract OPF} \begin{align} \min_{\x,\y}\;& c(\x,\y)\quad\st \nonumber \\ & f^0_i(\x,\y) = 0 ,\;i=1,\ldots,m, \label{con:f0_deterministic} \\ & g^0_j(\x,\y)\geq0,\;j=1,\ldots,k. \label{con:g0_deterministic} \end{align} \end{subequations} Here, the cost function is given by a polynomial $c\in \rxy$. The polynomial equality constraints $f^0_i(\x,\y)=0$ represent the network flow physics. The polynomial inequality constraints $g^0_j(\x,\y)\geq 0$ represent engineering limits.
To explicitly describe the degree of freedom in the system, we have separated the variables into $\x$ and $\y$. Since the equality constraints $f_i^0(\x,\y) = 0$ eliminate $m$ degrees of freedom, the variables $\y$ are an implicit function of the independent variable $\x$. Note that due to the non-linearity of the $f_i^0$, in general $\y$ might not be determined uniquely by a choice of $\x$. In this paper we make a practical assumption stated below. \begin{assumption} \label{as:deterministic} The engineering limits $g^0_j(\x,\y)\geq 0$ are such that the solution $\y$ to the system of equalities $f_i^0 = 0$, whenever it exists, is unique. As a result, we can write $\y$ as a function of $\x$, i.e. $\y_\x:=\y(\x)$. \end{assumption} The above assumption reflects a feature often encountered in engineered networks. Even though, mathematically, the network physics described by the non-linear system $f^0_i = 0$ can have multiple solutions, there is only one solution that is physically meaningful within the region in which the system is operated. As soon as the variables $\x$ are set, the state of the system is fully determined. Assumption~\ref{as:deterministic} allows us to formalize this notion. \subsection{Chance-Constrained Optimal Physical Network Flow} The aim of this paper is to account for uncertainty in the D-OPNF \eqref{eq:abstract OPF}, and to this end, we formulate the problem as a chance-constrained optimization problem. The chance constraints limit the probability of constraint violations, and can be enforced either as joint chance constraints (several constraints hold jointly with a given probability) or as separate chance constraints (each constraint is assigned its own probability). Due to the underlying physics of the problem, the network flow constraints $f^0_{i}$ must be satisfied jointly: if one of them is violated, the solution is not physically valid and the remaining constraints are meaningless. The probability of not jointly satisfying the network flow constraints can be understood as the probability that the uncertainty realization leads to a situation where the flow problem is unstable and there exists no steady-state operating point (e.g., voltage instability in electric power grids). The engineering limits $g_j^0$ can be satisfied either jointly or separately, depending on the preferred method for risk management. In this paper, we provide a method for enforcing the engineering limits as separate chance constraints. Let $(\Omega,\mu)$ be a probability space. The random variables $\w = (w_1,\ldots,w_\l)$ have zero mean $\boldsymbol{0}\in\R^l$. For every measurable event $A\subseteq\Omega$, denote the probability of $A$ by $\prob(A)=\int_A 1\mu(\d\w)$. Finally, let $f_1,\ldots,f_m,g_1,\ldots,g_k\in \rxyw$ be multivariate polynomials. The notation is motivated by the idea that $f_i^0(\x,\y) = f_i(\x,\y,\boldsymbol{0})$ and $g_j^0(\x,\y) = g_j(\x,\y,\boldsymbol{0})$. Define $f=\sum_{i=1}^m f_i^2$. Then enforcing the system of equations $f_i(\x,\y,\w) = 0, i=1,\ldots,m$ is equivalent to imposing $f=0$. We will use the latter for better readability, although our implementation is based on the system of equalities rather than the single constraint $f=0$.
We state the CC-OPNF problem: \begin{subequations}\label{eq:abstract CCOPF interim} \begin{align} \min_{\x,\y_\x,\y(\w)}\;& c(\x,{\y_\x)}\quad\st \nonumber \\ &\hspace{-0.3cm}{f^0_i(\x,\y_\x) = 0 ,\;i=1,\ldots,m,} \label{con:f0_inter}\\ &\hspace{-0.3cm}{g^0_j(\x,\y_\x)\geq 0,\;j=1,\ldots,k,}\label{con:g0_inter}\\ &\hspace{-0.3cm}\prob\left(f(\x,\y(\w),\w) = 0\right)\geq 1-\varepsilon_1, \label{con:f_inter}\\ &\hspace{-0.3cm} \prob \left({g_j(\x,\y(\w),\w)\geq0}\right) \nonumber \\ &\geq 1-\varepsilon_2,\;j=1,\ldots,k.\label{con:g_inter} \end{align} \end{subequations} In addition to the chance constraints that account for uncertainty, we also keep the constraints \eqref{con:f0_inter}, \eqref{con:g0_inter} from problem \eqref{eq:abstract OPF}. These constraints give a precise meaning to the cost function $c(\x,{\y_\x)}$, which is expressed as the operation cost for the expected realization $\w=\boldsymbol{0}$. We note that the problem as presented in \eqref{eq:abstract CCOPF interim} is a variational optimization problem. The $\x$ variables, however, do not depend on $\w$, which means that once the controllable variables are chosen, they cannot be modified in response to uncertainty. Although the $\y$ variables are a function of $\w$, similar to the D-OPNF \eqref{eq:abstract OPF} the equality constraints eliminate the degrees of freedom for $\y$, and by a direct generalization of Assumption~\ref{as:deterministic}, one can think of $\y$ in \eqref{eq:abstract CCOPF interim} as a function of $(\x,\w)$ within the region defined by $g_j \geq 0$. As a result, the constraints in \eqref{con:g_inter} are simply constraints on the variable $\x$, a property that we exploit in our approach to convert the variational problem in \eqref{eq:abstract CCOPF interim} into a standard optimization problem in $\x$. However, attempting to eliminate the $\y(\w)$ variables creates another issue: unlike in \eqref{eq:abstract OPF}, where the inequalities \eqref{con:g0_deterministic} along with Assumption~\ref{as:deterministic} guarantee uniqueness and physical interpretability, eliminating $\y(\w)$ means that there is no way to enforce that $(\x,\y(\w),\w)$ satisfy all the inequalities in \eqref{con:g_inter}, thus forfeiting the aforementioned guarantees. To circumvent this issue, we introduce a set $Y$ for the $\y$ variables and make the following assumption: \begin{assumption} Restricting the range of $\y$ to a set $Y\subseteq\R^m$, the solution $\y(\w)$ to the system of equalities $f=0$ in \eqref{con:f_inter} is unique whenever it exists. \end{assumption} The set $Y$ can be interpreted as domain-specific knowledge about the system, introduced in order to reduce the feasible space to a region where our physical model is valid and to exclude physically meaningless solutions to $f=0$. We propose the abstract formulation of the CC-OPNF problem below: \begin{subequations}\label{eq:abstract CCOPF} \begin{align} \min_{\x,\y_\x}\;& c(\x,{\y_\x)}\quad\st \nonumber \\ &\hspace{-0.3cm}{f^0_i(\x,\y_\x) = 0 ,\;i=1,\ldots,m,} \label{con:f0}\\ &\hspace{-0.3cm}{g^0_j(\x,\y_\x)\geq 0,\;j=1,\ldots,k,}\label{con:g0}\\ &\hspace{-0.3cm}\prob\left({\exists \y\in Y,}\;f(\x,\y,\w) = 0\right)\geq 1-\varepsilon_1, \label{con:f}\\ &\hspace{-0.3cm} \prob \left({\exists\y\in Y,\;f(\x,\y,\w) = 0\land g_j(\x,\y,\w)\geq0}\right)\nonumber \\ &\geq 1-\varepsilon_2,\;j=1,\ldots,k.\label{con:g} \end{align}\end{subequations} The sole reason for including the constraints $f=0$ in \eqref{con:g} is to implicitly specify $\y$ as a function of $\x$ and $\w$.
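To make the role of the equality constraints concrete, consider a toy instance with $n=m=\ell=1$, constructed here purely for illustration: let $f(x,y,w) = y^3 + y - x - w$ and $g_1(x,y,w) = 1 - y^2$, with $Y=\R$. Since $y\mapsto y^3+y$ is strictly increasing and surjective, $f=0$ has a unique solution $y(x,w)$ for every $(x,w)$, so the chance constraint \eqref{con:f} holds with probability one for any $\varepsilon_1$. Moreover, since $y^3+y\in[-2,2]$ exactly when $y\in[-1,1]$, the constraint \eqref{con:g} reduces to $\prob(|x+w|\leq 2)\geq 1-\varepsilon_2$, which is indeed a constraint on $x$ alone. In realistic instances such as the AC~OPF, no such closed form is available, which motivates the approximations developed in the next sections.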
The main contribution of this paper is to provide \emph{tractable approximations} of the chance constraints \eqref{con:f}, \eqref{con:g}. The details of the approximation, which replaces these chance constraints by a set of polynomial constraints, will be explained over the next sections. \section{Polynomial approximations of chance constraints} \label{sec:PolApprox} In this section, we first review results from the literature that use semi-definite programming (SDP) based methods for computing the volume of a basic semi-algebraic set, since they form the basis of the chance constraint approximations. Using these methods, we then develop inner and outer approximations of the chance constraint formulation in \eqref{eq:abstract CCOPF}. \subsection{Preliminaries} \label{subsec:prelim} Let $B = B_{\x} \times \Omega$, where $B_{\x} \subseteq \R^{n}$ is a hyper-interval or any other simple shape such that the moments with respect to the Lebesgue measure $\lambda_{\x}$ are known. We define a measure space $(B,\mu_{\x\w})$ by endowing $B$ with the product measure $\mu_{\x\w}$ given by $\mu_{\x\w} = \lambda_{\x} \otimes \mu$. Let $K\subseteq B$ be a basic semi-algebraic set, where for each $(\x,\w) \in K$ we interpret $\x$ as the variables and $\w$ as the uncertainty. We call all points $(\x,\w) \in K$ \emph{feasible} points. For a given $\x$, a chance constraint enforces that the probability that $(\x,\w)$ is feasible is larger than a given value, i.e., \begin{align} \prob((\x,\w) \in K) \geq 1-\epsilon, \label{eq:cc_general} \end{align} where the probability is computed using the measure $\mu$ on $\w$. This probability can be interpreted as the volume of the set $K_{\x} := \{\w : (\x,\w) \in K \}$ with respect to the measure $\mu$: \begin{align} \rho(\x) := \prob((\x,\w) \in K) = \int_{\Omega} 1_{K_{\x}} \d\mu, \label{eq:cc indicator} \end{align} where $1_{K_{\x}}$ denotes the indicator function of the set $K_{\x}$. \subsubsection{Approximating the volume of semi-algebraic sets} In \cite{Henrion09}, Henrion et al. propose a hierarchy of semi-definite programs approximating the set $K$ by the level set of some polynomial. The starting point in \cite{Henrion09} is the following infinite-dimensional linear problem: \begin{equation}\label{prob:full size dual} \begin{split} \min_{p\in\rxw} & \int_B p(\x,\w)\d\mu_{\x\w} \\ \st & \quad p-1\geq 0 \mbox{ on } K,\\ & \quad p \geq 0 \mbox{ on } B. \end{split} \end{equation} Every feasible $p$ is an over-estimator of the indicator function of $K$ on $B$. By minimizing the integral of $p$, the optimal solution has to be close to the indicator function of $K$ in the $L^1({\mu_{\x\w}})$-norm. The dual problem to \eqref{prob:full size dual} reads \begin{equation} \label{prob:full size primal} \begin{split} \max_{\substack{\phi\in\meas(K)\\\psi\in\meas(B)}} & \int_K 1\d\phi \quad \st\quad\forall (\alpha,\beta)\in \N_0^{n+\ell}\\ &\quad \int_K \x^\alpha\w^\beta \d\phi + \int_B \x^\alpha\w^\beta \d\psi = \int_B\x^\alpha\w^\beta \d\mu_{\x\w}, \end{split} \end{equation} where the optimization variables $\phi$ and $\psi$ are (positive) Borel measures supported on $K$ and $B$, respectively. As the moments, and in particular the mass, of $\phi$ are bounded by the moments of ${\mu_{\x\w}}$, the optimal solution to \eqref{prob:full size primal} is the restriction of $\mu_{\x\w}$ to $K$. Consequently, the optimal value of \eqref{prob:full size primal} is the volume of $K$ with respect to $\mu_{\x\w}$.
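To convey the flavor of \eqref{prob:full size dual} before introducing the SDP hierarchy, the toy script below solves a crude stand-in on a 1D example: it imposes $p\geq 1$ and $p\geq 0$ only on finite sample grids of $K=[-0.5,0.5]$ and $B=[-1,1]$, giving a linear program instead of the Putinar-certificate-based SDPs actually used in this paper. This is our illustration only, not the proposed method:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Toy 1D indicator-approximation: K = [-0.5, 0.5] in B = [-1, 1],
# vol(K) = 1 with respect to the Lebesgue measure.
deg = 10                                  # degree of p(x) = sum_k c_k x^k
xs_B = np.linspace(-1.0, 1.0, 401)        # sample grid on B
xs_K = xs_B[np.abs(xs_B) <= 0.5]          # samples falling in K
V_B = np.vander(xs_B, deg + 1, increasing=True)
V_K = np.vander(xs_K, deg + 1, increasing=True)

# Objective: exact integral of p over B via Lebesgue monomial moments.
k = np.arange(deg + 1)
moments = np.where(k % 2 == 0, 2.0 / (k + 1.0), 0.0)

# p >= 1 on the K-grid and p >= 0 on the B-grid (grid points only, so
# the value can slightly undershoot the true continuous optimum).
A_ub = np.vstack([-V_K, -V_B])
b_ub = np.concatenate([-np.ones(len(xs_K)), np.zeros(len(xs_B))])
res = linprog(moments, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (deg + 1))
print("over-approximated volume of K:", res.fun)   # close to, above 1
\end{verbatim}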
The infinite-dimensional problems in \eqref{prob:full size primal} and \eqref{prob:full size dual} can be approximated by a hierarchy of semi-definite programs (SDPs) using the method proposed by Lasserre \cite{Lasserre10}, which we briefly summarize. A finite-dimensional problem is obtained by (i) restricting the feasible set of \eqref{prob:full size dual} to polynomials of a fixed degree $2d$, and (ii) replacing the non-negativity condition in the constraints by an algebraic certificate for non-negativity (such as Putinar's theorem) on $K$ and $B$, respectively, which can be expressed by linear constraints on positive semi-definite matrices. The number $d$ is referred to as the relaxation degree or the relaxation order. The interested reader is referred to \cite{Lasserre10} for a full description of this relaxation procedure. For any finite order of relaxation $d$ we obtain a polynomial $p_d\in\rxw$ of degree $2d$ that approximates the indicator function $1_K$ from above, and for any fixed $\x$ approximates the function $1_{K_{\x}}$ from above. Notice that since $p_d(\x,\w) \geq 1_K$ we have \begin{align} \label{eq:non-conservative} \rho(\x) = \int 1_{K_{\x}} \d \mu \leq \int p_d(\x,\w) \d \mu =: h^\ast(\x), \end{align} where the integration is only with respect to $\mu$, i.e., not with respect to $\mu_{\x\w}$. The chance constraint in \eqref{eq:cc_general} can now be replaced by the tractable polynomial inequality \begin{align} \label{eq:general_pover} h^\ast(\x) \geq 1-\epsilon. \end{align} As $h^\ast$ over-approximates $\rho$, the constraint in \eqref{eq:general_pover} serves as an outer approximation of the chance constraint. \subsubsection{Approximating the volume of the projection of semi-algebraic sets} Comparing the generic representation of chance constraints in \eqref{eq:cc_general} to the one presented in \eqref{eq:abstract CCOPF}, we see that in many applications such as the OPNF, the presence of equality constraints introduces additional dependent variables $\y$ that are needed to describe the system. It is straightforward to extend the framework described above by appending the additional variables $\y$ to form the set $K$ in $(\x,\y,\w)$-space and apply the same procedure outlined above. However, since the variables $\y$ are fully specified by $(\x,\w)$, the volume of the set $K$ is zero, leading to ill-conditioned problems when approximating the volume using \eqref{prob:full size primal}. This problem can be addressed by approximating the projection of $K$ onto the $(\x,\w)$-space, where the volume is non-zero, instead of the original set $K$, using the method of Magron et al. \cite{Magron15}. To approximate the indicator function of the projection \[ \pi_{\x\w}(K) :=\{(\x,\w) : \exists \y\in\R^m, (\x,\y,\w)\in K \} \] of $K$ onto the $(\x,\w)$-space, consider the following variant of \eqref{prob:full size dual}: \begin{equation}\label{prob:projection dual} \begin{split} \min_{p\in\rxw} & \int_B p(\x,\w)\d\mu_{\x\y\w} \\ \st & \quad p-1\geq 0 \mbox{ on } K,\\ & \quad p \geq 0 \mbox{ on } B, \end{split} \end{equation} where now $B:=B_\x\times B_\y\times\Omega$ for some sets $B_\x$ and $B_\y$ for which it is easy to compute the moments of the Lebesgue measure, and $\mu_{\x\y\w} = \lambda_{\x} \otimes \lambda_{\y} \otimes \mu$. Note that the optimizing variable $p$ is restricted to be invariant in the $\y$-direction. The constraints guarantee that $p$ is an over-estimator of the indicator function of $\pi_{\x\w}(K)$ on $\pi_{\x\w}(B)=B_\x\times\Omega$.
Similar to the results in \cite{Henrion09}, Magron et al. prove convergence of $p$ to the indicator function of $\pi_{\x\w}(K)$ and of the optimal value to the volume of the projection with respect to the marginal of $\mu_{\x\y\w}$ for the corresponding semi-definite hierarchies. \subsection{Approximations of the CC-OPNF} In this subsection, we describe how to use the methods outlined in Section~\ref{subsec:prelim} to provide outer and inner approximations of the CC-OPNF \eqref{eq:abstract CCOPF}. We specify the feasible set of the chance constraints that we want to approximate by \begin{subequations}\label{eq:formulation CCOPF} \begin{align} \ensuremath{\mathcal{L}^{\x}}&:=\{\x: \prob(\exists \y\in Y, f(\x,\y,\w) = 0)\geq 1-\varepsilon_1, \label{lx:1}\\ &\prob (\exists \y\in Y, f(\x,\y,\w) = 0\land g_j(\x,\y,\w)\geq0)\nonumber\\ &\quad \geq 1-\varepsilon_2,\quad j=1,\ldots,k\}, \label{lx:2} \end{align} \end{subequations} where we assume that $\ensuremath{\mathcal{L}^{\x}}\subseteq B_\x$. As mentioned in Section~\ref{sec:probform}, our goal is to approximate the set $\ensuremath{\mathcal{L}^{\x}}$ by replacing the intractable chance constraints by polynomial constraints. We define the sets for which the constraints remain satisfied as \begin{align}\label{eq:kj outer} K_0&:=\{(\x,\y,\w)\in B: f(\x,\y,\w) = 0 \},\\ K_j&:=\{(\x,\y,\w)\in B: f(\x,\y,\w) = 0\land g_j(\x,\y,\w)\geq 0 \}, \nonumber \\ & \qquad j=1,\ldots,k. \nonumber \end{align} \subsubsection{Outer approximation of the feasible set}\label{sec:outer} An outer approximation of the set $\ensuremath{\mathcal{L}^{\x}}$ can be obtained by applying the method outlined in Section~\ref{subsec:prelim} to each of the sets $K_j$ for $j=0,\ldots,k$. For each $K_j$, we get a polynomial $h^\ast_j\in\rx$ which approximates the function $\x\mapsto\prob(\pi_{\x\w}(K_j))$ from above, leading to an overestimation of the satisfaction probability and an outer approximation of the chance constraints. Consequently, the set \[ \{x\in B_\x: h^\ast_0(\x)\geq1-\varepsilon_1,h^\ast_j(\x)\geq1-\varepsilon_2,j=1,\ldots,k\} \] is an outer approximation of \ensuremath{\mathcal{L}^{\x}}, and the corresponding ACC-OPNF provides a lower bound on the optimal cost of the CC-OPNF. \subsubsection{Inner approximation of the feasible set}\label{sec:inner} In applications where system security is of primary concern, obtaining feasible solutions to \eqref{eq:abstract CCOPF} is more important than obtaining lower bounds on the cost, motivating an investigation of inner approximations of the chance constraints. However, as opposed to the outer approximation, obtaining an inner approximation of \ensuremath{\mathcal{L}^{\x}}\ is more involved. In the following, we propose a modification of \ensuremath{\mathcal{L}^{\x}}\ that we can approximate (almost) from the interior. For $\varepsilon_1<\varepsilon_2$ define the set \begin{subequations} \begin{align} \ensuremath{K^\x}& := \{\x\in B_\x : \nonumber\\ &\prob(\exists \y\in Y, f(\x,\y,\w)= 0) \geq 1-\varepsilon_1, \label{kx:1}\\ &\prob(\exists \y\in Y, (f(\x,\y,\w)=0 \land g_j(\x,\y,\w) \leq 0)) \nonumber\\ &\quad \leq \varepsilon_2-\varepsilon_1,\quad j=1,\ldots,k\}. \label{kx:2} \end{align} \end{subequations} The essential difference between \ensuremath{\mathcal{L}^{\x}}\ and \ensuremath{K^\x}\ is that the probabilities \eqref{kx:2} in \ensuremath{K^\x}\ are bounded from above whereas the probabilities \eqref{lx:2} in \ensuremath{\mathcal{L}^{\x}}\ are bounded from below.
Since the methods discussed in Section~\ref{subsec:prelim} lead to over-estimators of the probability, the reversal of the inequality in the formulation of \ensuremath{K^\x}\ now enables us to approximate the sets described by the chance constraints in \eqref{kx:2} from the interior. The following proposition relates the approximating set \ensuremath{K^\x}\ to \ensuremath{\mathcal{L}^{\x}}. \begin{prop} \ensuremath{K^\x}\ is an inner approximation of \ensuremath{\mathcal{L}^{\x}}. \end{prop} The proof is simple and is given in the appendix. Instead of directly dealing with $\ensuremath{\mathcal{L}^{\x}}$, we attempt to approximate the set $\ensuremath{K^\x}$ from the interior. Using the same procedure, we now compute polynomials $h^\ast_0,\ldots,h^\ast_k$ approximating the functions $\x\mapsto\prob(\pi_{\x\w}(K_j))$, where $K_j$ is now defined by \begin{align}\label{eq:kj inner} \ensuremath{K}_0:=& \{(\x,\y,\w)\in B:f(\x,\y,\w)=0 \},\\ \ensuremath{K}_j:=& \{(\x,\y,\w)\in B:f(\x,\y,\w)=0 \land g_j(\x,\y,\w)\leq 0 \}.\nonumber \end{align} Note that although we are aiming for an inner approximation of \ensuremath{K^\x}, the polynomials $h^\ast_j$ are over-approximators of the probability. The set \ensuremath{K^\x}\ is then approximated by the set \begin{subequations} \begin{align} \tilde\ensuremath{K^\x} := \{ x \in B_\x: &\; h^\ast_0(\x)\geq 1-\varepsilon_1, \label{eq:equality_approx}\\ &\; h^\ast_j(\x)\leq \varepsilon_2-\varepsilon_1,\;j=1,\ldots,k \label{eq:inequality_approx}\}. \end{align} \end{subequations} Since the polynomials $h^\ast_j(\x)$ over-approximate the probabilities in \eqref{kx:2}, the sets defined by the inequalities in \eqref{eq:inequality_approx} are inner approximations of the corresponding sets defined by \eqref{kx:2}. Unfortunately, the same relation does not hold for the sets defined by \eqref{eq:equality_approx} and \eqref{kx:1}, which correspond to the probability of jointly satisfying the equality constraints $f_i(\x,\y,\w) = 0$. Therefore, $\tilde\ensuremath{K^\x}$ is an \emph{approximate} inner approximation of \ensuremath{\mathcal{L}^{\x}}. \section{Improved approximations through Stokes constraints} \label{sec:improved_stokes} The SDP hierarchy for approximating the chance constraints presented in Section~\ref{subsec:prelim} is guaranteed to converge to the optimum as $d$ grows to infinity, but much less is known about the associated rate of convergence. When the number of variables in the polynomial optimization problem is large, the computation times can become prohibitively expensive, since current SDP solvers are not able to handle semi-definite variables of size $>1000$ on a standard computer. Coupled with the fact that the size of the SDP variables at relaxation level $d$ is $\binom{N+d}{d}$, where $N$ is the number of variables of the polynomial optimization problem, it becomes crucial to achieve high approximation accuracy at lower values of $d$. However, convergence of the indicator function approximation tends to be slow due to the so-called Gibbs' phenomenon: if a function has a discontinuity, every overestimating polynomial approximation $p$ overshoots the upper value at the jump \cite{Lasserre17}. In the following subsections, we first review existing results regarding the use of valid constraints generated via the Stokes theorem to speed up the convergence rate, and then describe our approach to generalize this procedure to computing the volume/probability of projections of semi-algebraic sets.
\subsection{Concept of Stokes constraints}\label{subsec:stokes for cc} In \cite{Lasserre17}, Lasserre proposes to improve the convergence of the hierarchy by adding additional constraints to the problem \eqref{prob:full size primal}. When a polynomial $t$ is known to vanish on the boundary of $K$, the optimal measure $\phi^\ast$ satisfies the equality $\int_K\theta(\x,\w)\d\phi^\ast = 0$ for some family of functions $\theta$ depending on $\mu$ and $t$. The equality is a consequence of the Stokes theorem, which is why the constraints are referred to as Stokes constraints. We describe the procedure to generate these constraints below for the case where $\mu$ is the uniform measure. For more general measures we refer to \cite{Lasserre17}. Let $t\in \R[\x,\w]$ be a polynomial that vanishes on the boundary of $K$. Given any $(\alpha,\beta) \in \N_0^{n+\ell}$ and $z \in\{x_1,\ldots,x_n,w_1,\ldots,w_\ell\}$, define the polynomial $\theta_{\alpha\beta}^z$ as \begin{align} \label{eq:theta_def} \theta_{\alpha\beta}^z := \tfrac{\partial}{\partial z}\left(\x^\alpha\w^\beta t(\x,\w) \right). \end{align} Then by the Stokes formula, for all $(\alpha,\beta) \in \N_0^{n+\ell}$ and $z \in\{x_1,\ldots,x_n,w_1,\ldots,w_\ell\}$ we have \begin{align} \label{prob:stokes primal} \int_K \theta_{\alpha\beta}^z \d \phi^\ast= \int_{ K}\tfrac{\partial}{\partial z}\left(\x^\alpha\w^\beta t(\x,\w)\right) \d \phi^\ast = 0. \end{align} Since the optimal measure satisfies all the equality constraints given in \eqref{prob:stokes primal}, we can add these equations as constraints to \eqref{prob:full size primal} without affecting the optimal solution. Adding these redundant constraints has been shown in some cases to greatly improve the rate of convergence of the SDP hierarchy, i.e., enabling higher accuracy at a lower relaxation level $d$. While the faster convergence is beneficial, the dual of \eqref{prob:full size primal} with the addition of the constraints in \eqref{prob:stokes primal} now reads \begin{equation}\label{prob:stokes dual} \begin{split} \min_{\substack{p\in\rxw,\\q_{\theta}\in \R}} & \int_B p(\x,\w)\d\mu_{\x\w} \\ \st & \quad p-1\geq \sum_{\theta\in \Theta}q_\theta\theta \mbox{ on } K,\\ & \quad p \geq 0 \mbox{ on } B, \end{split} \end{equation} where $\Theta$ is the set of all $\theta^z_{\alpha\beta}$ defined in \eqref{eq:theta_def}. Comparing \eqref{prob:stokes dual} to problem \eqref{prob:full size dual}, we observe that the polynomial $p$ in \eqref{prob:stokes dual} is no longer an over-estimator of the indicator function $1_K$ on $B$. \begin{figure} \begin{center} \includegraphics[width = 0.48\textwidth]{Gibbs_Stokes.pdf} \caption{Effect of Stokes constraints on the dual variables. Polynomial $p$ approximating the indicator function on $[-\tfrac{1}{2},\tfrac{1}{2}]$ without (red) and with (blue) Stokes constraints. } \label{fig:stokes effect} \vspace{-12pt} \end{center} \end{figure} This effect is illustrated in Figure~\ref{fig:stokes effect}, where typical shapes of $p$ for problems \eqref{prob:full size dual} (red) and \eqref{prob:stokes dual} (blue) are shown. We observe that the red curve over-approximates the indicator function of $K$ (dashed black). We can also see the mismatches at the discontinuities of the indicator function due to the Gibbs' phenomenon. We note that the $1$-super-level-set of the red curve is a good approximation of the set $K$. In contrast, when applying Stokes constraints, we observe that the $1$-super-level-set of the blue curve does not provide any information about the set $K$.
The integral value of the blue curve, however, is closer to the volume of $K$ than the integral value of the red curve. Moreover, the integral preserves the over-approximation property. Indeed, for any polynomial $p$ feasible for \eqref{prob:stokes dual} we have \begin{align} \label{eq:stokes_volume_upper} \int_K p \d\mu_{\x\w} \geq \int_K 1 \d\mu_{\x\w} +\sum_{\theta\in \Theta}q_\theta\hspace{-6pt}\underbrace{\int_K\hspace{-6pt} \theta\d\mu_{\x\w}}_{=0\;\text{by Stokes}}\hspace{-6pt} {=} \vol(K). \end{align} \subsection{Partial Stokes constraints for chance constraints} \label{sec:partial_stokes} The polynomial $p(\x,\w)$ obtained above cannot be used to approximate the function $\rho(\x)$ in \eqref{eq:cc indicator}. If, however, we only use $\w$ as variables $z$ in \eqref{eq:theta_def}, then for all $\x$ we have \begin{align} \label{eq:partial_stokes} \int_{B_{\x}} p \d\mu \geq \int_{K_{\x}} 1 \d \mu + \sum_{\theta \in \Theta} q_\theta \int_{K_{\x}} \theta \d \mu = \rho(\x). \end{align} Applying Stokes constraints only in the $\w$ direction hence allows us to obtain the improved convergence rates while still retaining an over-estimator of the probability of $K_{\x}$. \subsection{Partial Stokes constraints for projection of sets}\label{sec:two steps} The method in Section~\ref{subsec:stokes for cc} cannot directly be applied to the setting where the feasible set is described by the projection of a semi-algebraic set. This is because in order to be able to add Stokes constraints to the problem in \eqref{prob:projection dual}, we must first find a polynomial $t\in\rxw$ that vanishes on the boundary of the projection $\pi_{\x\w}(K)$ of $K$. Note that in Section~\ref{subsec:stokes for cc}, where there is no projection involved, the polynomial $t\in\rxw$ that vanishes on the boundary of $K$ can be readily obtained as the product of the polynomials that define the semi-algebraic set $K$. For the projection $\pi_{\x\w}(K)$ of a semi-algebraic set $K$ in $(\x,\y,\w)$-space, this trick is not applicable. Our solution to this issue is a two-step procedure: in the first step, we approximate the projection $\pi_{\x\w}(K)$ by the super-level-set of a polynomial $p^{(1)}\in\rxw$. In the second step, we use this super-level-set $S$ to compute a second polynomial $p^{(2)}\in\rxw$ approximating the volume of $S$. This is explained in more detail below. \subsubsection{Step 1: Approximating the projection of $K$}\label{sec:step1} We first apply the method in Section~\ref{subsec:prelim} and solve the problem in \eqref{prob:projection dual} to obtain a polynomial $p^{(1)}$ that is an over-estimator of the indicator function of $\pi_{\x\w}(K)$, i.e. $p^{(1)}\geq1 \text{ on } \pi_{\x\w}(K)$. In particular, the super-level-set given by \begin{align} S:=\{(\x,\w)\in B_\x \times \Omega : p^{(1)}(\x,\w)-1\geq0 \} \label{eq:S_def} \end{align} is an outer approximation of $\pi_{\x\w}(K)$. Fig.~\ref{fig:sketch1} illustrates this step. Numerical experiments have shown that the $1$-super-level-set of the optimizing polynomial is already quite accurate for low relaxation degrees. \subsubsection{Step 2: Probability approximation}\label{sec:step2} After the first step, we replace the actual projection $\pi_{\x\w}(K)$ by its approximation $S$ defined in \eqref{eq:S_def}. In doing so we lose information about $\pi_{\x\w}(K)$, but we gain two important advantages. First, moving from $K$ to $S$ we get a significant reduction in the number of variables, as we eliminate the whole $\y$-space.
This allows us to devote the available computational capacity to higher levels of the SDP relaxation hierarchy and obtain better volume approximations. Second, we now have a polynomial, specifically $p^{(1)}-1$, that vanishes on the boundary of $S$. This crucial difference enables us to use Stokes constraints to improve the volume approximation. Applying the method in Section~\ref{sec:partial_stokes}, we obtain a polynomial $p^{(2)}\in\rxw$ that still preserves the desired over-approximation property: \begin{align} h^\ast(\x):=\int_{\Omega} p^{(2)}(\x,\w)\d\mu \stackrel{(a)}{\geq} \prob(S) \stackrel{(b)}{\geq} \prob(\pi_{\x\w}(K)), \nonumber \end{align} where $(a)$ follows from \eqref{eq:partial_stokes} and $(b)$ follows because $\pi_{\x\w}(K) \subseteq S$. This step is summarized in Fig.~\ref{fig:sketch2}. \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{stepone_no_arrow.pdf_tex} \caption{Step 1: Projection step. The projection of $K$ is approximated as $S$, which is defined by the 1-super-level set of $p^{(1)}$.} \label{fig:sketch1} \end{center} \end{figure} \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{steptwo_no_arrow.pdf_tex} \caption{Step 2: Probability approximation. The probability is approximated by integrating $p^{(2)}$ in the $\Omega$ direction for every $\x$.} \label{fig:sketch2} \end{center} \end{figure} \section{The overall approach}\label{sec:overall} To summarize the overall approach, we first recall the problem formulation \eqref{eq:abstract CCOPF}. Our aim is to eliminate the chance constraints \eqref{con:f} and \eqref{con:g} and replace them by tractable polynomial constraints. The challenge is to (i) ensure the existence of solutions to the equality constraints, (ii) compute inner approximations to the chance constraints, and (iii) enable the use of Stokes constraints to speed up convergence. We address these challenges in the following steps: \begin{enumerate} \item We reformulate the feasible set $\ensuremath{\mathcal{L}^{\x}}$ of the chance constraints as the set \ensuremath{K^\x}, which allows us to obtain inner approximations. \item We eliminate the dependent $\y$ variables by approximating the projection of each $K_j$ defining \ensuremath{K^\x}\ as the super-level set $S$ of a polynomial $p_j^{(1)}$. \item We use the reduced set $S$ to compute the inner approximations to the chance constraints by polynomials $h^\ast_0(\x),\ldots,h^\ast_k(\x)$. To speed up convergence, we add Stokes constraints, which is made possible by the availability of the polynomial $p^{(1)}$. \end{enumerate} Now, the chance constraints in the original problem, \eqref{con:f} and \eqref{con:g}, can be replaced by their approximations to obtain the ACC-OPNF formulation: \begin{subequations}\label{eq:acc-opnf} \begin{align} \min_{\x,\y_\x}\;& c(\x,\y_\x)\quad\st \nonumber \\ &f^0_i(\x,\y_\x) = 0 ,\;i=1,\ldots,m,\\ &g^0_j(\x,\y_\x)\geq 0,\;j=1,\ldots,k,\\ & h^\ast_0(\x) \geq 1-\varepsilon_1,\label{con:h0}\\ & h^\ast_j(\x) \leq \varepsilon_2-\varepsilon_1,\; j=1,\ldots,k.\label{con:hj} \end{align}\end{subequations} Although obtaining the polynomials $h^\ast_0(\x),\ldots,h^\ast_k(\x)$ might be computationally heavy, this procedure is independent of the actual solution process for the resulting ACC-OPNF and can be considered as a pre-processing step to be executed offline. The resulting approximate CC-OPNF, despite remaining non-convex, can readily be solved to local optimality using a local non-linear solver.
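As a schematic illustration of this last step, a toy ACC-OPNF with made-up quadratic surrogates $h^\ast_j$ can be handed to a local solver as follows; the cost function and the polynomials are placeholders, not the ones obtained in the case study below.

\begin{verbatim}
# Toy ACC-OPNF: minimize a cost subject to surrogate chance
# constraints h0(x) >= 1 - eps1 and h1(x) <= eps2 - eps1.
# All functions are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

eps1, eps2 = 0.01, 0.10

def cost(x):
    return x[0]**2 + 2.0 * x[1]**2 + x[0]

def h0(x):   # made-up surrogate for the joint equality constraint
    return 1.0 - 0.05 * (x[0]**2 + x[1]**2)

def h1(x):   # made-up surrogate for one inequality constraint
    return 0.02 + 0.03 * (x[0] - 0.5)**2

cons = [{"type": "ineq", "fun": lambda x: h0(x) - (1.0 - eps1)},
        {"type": "ineq", "fun": lambda x: (eps2 - eps1) - h1(x)}]
res = minimize(cost, x0=np.array([0.2, 0.2]), method="SLSQP",
               constraints=cons)
print(res.x, res.fun)
\end{verbatim}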
Furthermore, methods for global optimization of polynomial problems can be applied \cite{Lasserre2001}. \section{Application to Chance-Constrained AC Optimal Power Flow}\label{sec:application ccacopf} In this section, we present the mapping of a chance-constrained AC optimal power flow (CC-AC-OPF) problem onto the general CC-OPNF problem \eqref{eq:abstract CCOPF}. Motivated by the recent increase in generation uncertainty from renewable energy sources, our CC-AC-OPF formulation attempts to minimize generation cost, subject to engineering constraints, while accounting for the uncertainty in renewable power generation. \subsection{Deterministic Optimal Power Flow} We first formulate the deterministic OPF problem, where we assume perfect knowledge of the system. This problem corresponds to the deterministic OPNF \eqref{eq:abstract OPF}. \subsubsection{Notation} We consider an electric network where $\mathcal{N}$ and $\mathcal{E}$ denote the sets of nodes and edges. Without loss of generality, we assume that there is one generator, one demand and one uncertainty source per bus. Complex power is given by $s=p+j\cdot q$, where $p$ and $q$ are the active and reactive power. Subscripts $_R,~_G$ and $_D$ are for renewable energy sources, conventional generators and loads, respectively. The complex bus voltages are denoted by $v=v_{real}+j\cdot v_{imag}$, and the corresponding voltage magnitudes by $|v|=(v_{real}^2+v_{imag}^2)^{1/2}$. \subsubsection{Problem formulation} Given the above considerations, the OPF problem is given by \begin{subequations} \begin{align} \min_{\substack{p_{G0}, \\ q_{G0},v_0}} ~&\sum_{i\in\mathcal{N}} c_{2,i} p_{G0,i}^2 + c_{1,i} p_{G0,i} + c_{0,i} \label{eq:opfobjective}\\ \text{s.t.} ~~~ &s_{G0,i}+ s_{R,i} - s_{D,i} = \sum_{(i,j)\in\mathcal{E}} s_{0,ij}, && \forall i\in\mathcal{N}, \label{eq:acnodal}\\ &s_{0,ij} = \ensuremath{\mathbf{Y}}_{ij}^* v_{0,i} v_{0,i}^* - \ensuremath{\mathbf{Y}}_{ij}^* v_{0,i} v_{0,j}^*, && \forall (i,j) \!\in \!\mathcal{E}, \label{eq:acflow}\\ &p_{G,i}^{min} \leq p_{G0,i} \leq p_{G,i}^{max}, \quad &&\forall i\in\mathcal{N}, \label{eq:pG}\\ &q_{G,i}^{min} \leq q_{G0,i} \leq q_{G,i}^{max}, \quad &&\forall i\in\mathcal{N}, \label{eq:qG}\\ &|v|^{min} \leq |v_{0,j}| \leq |v|^{max}, \quad &&\forall j\in\mathcal{N}, \label{eq:v}\\ &|s_{0,ij}| \leq |s_{ij}|^{max}, \quad &&\forall (i,j)\!\in\!\mathcal{E}. \label{eq:s0} \end{align} \label{eq:detOPF} \end{subequations} The objective \eqref{eq:opfobjective} is to choose the generation dispatch point, given by the active and reactive power generation $p_{G0},~q_{G0}$ and the complex voltages $v_0$, such that the cost of active power generation, given by the quadratic function in \eqref{eq:opfobjective}, is minimized. The AC power flow equations \eqref{eq:acnodal}, \eqref{eq:acflow} are a set of equality constraints describing the physical laws, with the nodal power balance given by \eqref{eq:acnodal}, and the transmission line flows given by Ohm's law \eqref{eq:acflow}, where $\ensuremath{\mathbf{Y}}$ is the so-called \emph{admittance matrix}. Note that we use the rectangular form of the power flow equations to obtain polynomial constraints. Further, we enforce a set of engineering limits \eqref{eq:pG}-\eqref{eq:s0}. The constraints \eqref{eq:pG}, \eqref{eq:qG} represent bounds on generation capacity, \eqref{eq:v} limits the voltage magnitudes to safe ranges, and \eqref{eq:s0} enforces limits on the apparent power flow.
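For concreteness, the residuals of the power balance \eqref{eq:acnodal} and the flow equation \eqref{eq:acflow} are easily evaluated from complex voltages in rectangular form. The following numpy sketch does so on a made-up 3-bus network; the line admittances and injections are illustrative and are not the data of the later case study.

\begin{verbatim}
# Evaluate the AC power-flow mismatch for given complex voltages.
# Network data below are illustrative only.  For a line with series
# admittance y, the sending-end flow is s_ij = y* v_i v_i* - y* v_i v_j*.
import numpy as np

edges = {(0, 1): 1.0 - 5.0j, (1, 2): 1.0 - 4.0j, (0, 2): 0.5 - 2.5j}

def branch_flow(v, i, j, y):
    return np.conj(y) * v[i] * np.conj(v[i]) - np.conj(y) * v[i] * np.conj(v[j])

def mismatch(v, s_inj):
    """Nodal injection minus the sum of outgoing flows, per bus."""
    out = np.zeros(len(v), dtype=complex)
    for (i, j), y in edges.items():
        out[i] += branch_flow(v, i, j, y)
        out[j] += branch_flow(v, j, i, y)
    return s_inj - out

v = np.array([1.02 + 0.0j, 1.00 - 0.02j, 0.99 - 0.03j])   # rectangular
s_inj = np.array([0.5 + 0.2j, -0.3 - 0.1j, -0.2 - 0.1j])  # s_G+s_R-s_D
print(mismatch(v, s_inj))   # zero at an exact power-flow solution
\end{verbatim}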
Among these constraints, \eqref{eq:acnodal} and \eqref{eq:acflow} correspond to the equality constraints $f_i^0=0$ in the deterministic OPNF \eqref{eq:abstract OPF}, and the remaining constraints correspond to the inequality constraints $g^0_j\geq 0$. \subsection{Chance-Constrained Optimal Power Flow} We now extend the deterministic problem to the setting with uncertainty in the power injections. \subsubsection{Modelling uncertain injections} We model the uncertain active power injections from renewable generators as the sum of the expected value $p_R$ and a fluctuation $\w$. The expected reactive power injection is denoted by $q_R$. The reactive power injections are assumed to adjust such that the power factor, given by $\gamma=q_R/p_R$, remains constant: \begin{equation} s_R(\w) = (p_R + \w) + j \cdot (q_R + \gamma \w). \label{eq:power_factor} \end{equation} We assume that the probability distribution of $\w$ is known. The active and reactive power consumption of the loads, denoted by $p_D,~q_D$, are assumed to be constant, but could also be modelled similarly to \eqref{eq:power_factor}. \subsubsection{Power flow equations under uncertainty} For a non-zero uncertainty realization $\w$, the power flow equations \eqref{eq:acnodal} are adapted to account for $\w$, i.e. \begin{subequations} \label{eq:powerflow} \begin{align} &\!\!\!s_{G,i}(\w) + s_{R,i}(\w) - s_{D,i} \!= \!\!\sum_{(i,j)\in\mathcal{E}} s_{ij}(\w), \!\!&& \!\forall i\in\mathcal{N}, \\ &\!\!\!s_{ij}(\w) \!=\! \ensuremath{\mathbf{Y}}_{ij}^* v_i(\w) v_i^*(\w) \!-\! \ensuremath{\mathbf{Y}}_{ij}^* v_i(\w) v_j^*(\w), \!\!&& \!\forall (i,j)\! \in\! \mathcal{E}. \end{align} \end{subequations} \subsubsection{Response to uncertainty} When the power injections fluctuate, the controllable generators must adjust their generation output $s_{G,i}(\w)$ to ensure that the power balance constraints \eqref{eq:powerflow} are satisfied. We adopt balancing practices typical in power systems operation, which require the definition of so-called $pv$, $pq$ and $v\theta$ (reference) buses. At each node of the network there are four state variables, namely the active power injection $p$, the reactive power injection $q$, and two voltage variables corresponding to the voltage magnitude and angle $|v|,~\theta$ (polar coordinates) or the real and imaginary voltage $v_{real},~v_{imag}$ (rectangular coordinates). The buses are classified according to the quantities that are controllable or specified: (i) $pq$ buses (such as loads) with specified real and reactive power, (ii) $pv$ buses (such as generators) with controllable active power and voltage magnitude, and (iii) the $v\theta$ or reference bus with the voltage angle set to zero. The sets of nodes that correspond to the three categories are denoted by subscripts $\mathcal{N}_{pq},~\mathcal{N}_{pv}$ and $\mathcal{N}_{v\theta}$. Given the above definitions, we assume that the active power injections from generators at $pq$ and $pv$ buses remain constant throughout the fluctuations, and all fluctuations $\w$ are balanced by the generator connected at the slack bus. Similarly, reactive power is balanced by adjusting the reactive power output of the $pv$ and $v\theta$ buses to maintain constant voltage magnitudes, while the reactive power injections at $pq$ buses are kept constant. \subsection{Definition of $\x$ and $\y$ variables} We choose the rectangular coordinate representation in order to be able to employ the semi-algebraic methods described in this paper.
This gives us four variables per bus: $p,~q,~v_{real},~v_{imag}$. However, as described above, the standard model for $pv$ and $v\theta$ buses is based on polar coordinates, where we keep the voltage magnitude constant. We handle these requirements in rectangular coordinates by adding the constraints $v_{imag}=0$ and $v_{real,i}(\w)=v_{real,i}$ for $i\in\mathcal{N}_{v\theta}$, and the constraint $v_{real,i}(\w)^2 + v_{imag,i}(\w)^2 = |v|_i^2$ for $i\in\mathcal{N}_{pv}$. This results in two independent variables per bus, which we choose to also correspond to the quantities that can be controlled by the system operator. In particular, we define the independent $\x$ variables as \begin{align*} & p_{G0,i}, q_{G0,i}, ~ &&\forall i\in\mathcal{N}_{pq}, \\ & p_{G0,i},|v|_{0,i} , ~ &&\forall i\in\mathcal{N}_{pv},\\ & v_{\text{real}0,i}, v_{\text{imag}0,i}, ~ &&\forall i\in\mathcal{N}_{v\theta}. \end{align*} The variables that change as a function of $\w$ are the $\y$ variables in the CC-OPNF formulation \eqref{eq:abstract CCOPF}: \begin{align*} & v_{\text{real},i}(\w), v_{\text{imag},i}(\w) , ~ &&\forall i\in\mathcal{N}_{pq}, \\ & q_{G,i}(\w),v_{\text{real},i}(\w),v_{\text{imag},i}(\w) , ~ &&\forall i\in\mathcal{N}_{pv},\\ & p_{G,i}(\w),q_{G,i}(\w), ~ &&\forall i\in\mathcal{N}_{v\theta},\\ & s_{ij}(\w), ~ && \forall ij\in\mathcal{E}. \end{align*} Note that in the process of solving \eqref{eq:abstract CCOPF}, we are not explicitly assigning a value to these dependent quantities $\y(\w)$. However, the variables $\y_\x$, which correspond to the $\y$ variables at the expected operating point ($\w=0$), are explicitly defined. \subsubsection{Definition of constraints $f=0$ and $g\leq0$} As is evident from \eqref{eq:powerflow}, the generation outputs $p_{G,i}(\w)$ and $q_{G,i}(\w)$, the power flows $s_{ij}(\w)$, and the voltage variables $v_i(\w)$ all change depending on the realization of $\w$. The constraints which incorporate these quantities are therefore enforced as chance constraints. The stochastic power flow equations \eqref{eq:powerflow} correspond to the equality constraints $f(\x,\y,\w) = 0$. When there is no solution to this set of equations, the system is unstable and might collapse at any point, leading to a complete blackout of the electric grid. We hence want the probability of violating any of the equality constraints to be very low, and enforce these constraints jointly as in \eqref{con:f} with a small acceptable violation probability $\varepsilon_1$. The inequality constraints $g_j(\x,\y,\w)\leq 0$ correspond to the engineering limits \begin{subequations} \label{eq:engineeringCC} \begin{align} &p_{G,i}^{min} \leq p_{G,i}(\w) \leq p_{G,i}^{max}, \quad &&\forall i\in\mathcal{N}_{v\theta} \label{eq:pGref}\\ &q_{G,i}^{min} \leq q_{G,i}(\w) \leq q_{G,i}^{max}, \quad &&\forall i\in\mathcal{N}_{pv}, \mathcal{N}_{v\theta} \label{eq:qGpv}\\ &|v_i|^{min} \leq |v_{i}|(\w) \leq |v_i|^{max}, \quad &&\forall i\in\mathcal{N}_{pq} \label{eq:vpq}\\ &v_{real,i}(\w)^2 + v_{imag,i}(\w)^2 = |v|_i^2, \quad &&\forall i\in\mathcal{N}_{pv} \label{eq:vpv}\\ &|s_{ij}|(\w) \leq |s_{ij}|^{max}, \quad &&\forall (i,j)\in\mathcal{E}. \label{eq:s} \end{align} \end{subequations} In contrast to a violation of the power flow equations \eqref{eq:powerflow}, a violation of one of the engineering constraints \eqref{eq:engineeringCC} would typically have a more local impact (e.g. overloading of a component), and can often be tolerated for a certain amount of time (e.g.
violations of thermal capacity limits of transmission lines). We hence enforce \eqref{eq:engineeringCC} as separate chance constraints, and allow for a larger violation probability $\varepsilon_2 > \varepsilon_1$. \subsubsection{Choosing $Y$} The last object we must determine to complete the mapping from the CC-AC-OPF to the generic CC-OPNF problem \eqref{eq:abstract CCOPF} is the set $Y$ from Assumption 2. We would like to choose $Y$ such that solutions to \eqref{eq:powerflow} are unique and have a well-defined physical meaning, which for the OPF problem implies ensuring that low voltage solutions to the power flow equations are excluded. Therefore, we define the set $Y$ by the inequalities \begin{equation}\label{eq:anti low voltage} |v|^{min-} \leq |v_{i}|(\w),\quad\forall i\in\mathcal{N}_{pq}. \end{equation} Here, $|v|^{min-}$ is lower than the standard voltage bound $|v|^{min}$, but sufficiently large to exclude low voltage solutions. \section{Case study}\label{sec:numerics} We first describe the implementation and test system, before presenting the numerical results for the chance constraint approximation and the resulting approximate CC-OPNF. \subsection{Implementation} We now describe our implementation to obtain the ACC-OPNF in Section~\ref{sec:overall} and to evaluate its performance. To obtain the polynomials $h^\ast_0,\ldots,h^\ast_k$ in \eqref{eq:acc-opnf}, we solve SDP relaxations of the infinite dimensional linear problems described in Sections~\ref{sec:step1} and \ref{sec:step2}. We use the GloptiPoly3 Matlab toolbox \cite{gloptipoly} to model the relaxations and Mosek \cite{mosek} to solve the SDPs. The resulting ACC-OPNF is implemented in Julia \cite{julia} with JuMP \cite{JuMP} and PowerModels.jl \cite{PowerModels}, and then solved using the local non-linear solver Ipopt \cite{ipopt}. We also perform Monte Carlo simulations for benchmarking, which requires solving the standard power flow and the AC-OPF; these are implemented using Matpower \cite{zimmermann2011} and PowerModels.jl, respectively. \subsection{Test system} We run our numerical experiments on a modified version of the 4-bus system in \cite{4bus} (case4gs in the Matpower library), illustrated in Figure~\ref{fig:OPF}. The system has two conventional generators at Bus\,1 and Bus\,4, with active and reactive power limits $p_{Gi}^{min}=0,p_{Gi}^{max}=500$ and $q_{Gi}^{min}=-250,q_{Gi}^{max}=500$. Bus\,1 is the reference bus, while all other buses are PQ buses. We assume that the load at Bus\,2 is uncertain, with active power fluctuations $\w$ uniformly distributed on $[-50,50]$. The reactive power fluctuations on Bus\,2 are proportional to the active power fluctuations, with $\gamma \approx 0.62$. We assume a quadratic cost for Bus\,1 with $(c_{2,1},c_{1,1},c_{0,1})=(0.01,30,200)$ and a linear cost for Bus\,4 with $(c_{2,4},c_{1,4},c_{0,4})=(0,25,400)$. \begin{figure} \centering \includegraphics[width = 0.9\columnwidth]{4bussystem.png} \caption{Overview of the $4$-bus system. Generators marked in blue, uncertainty source in green and loads in black. } \label{fig:OPF} \end{figure} \subsection{Numerical results} We first verify the quality of the chance constraint approximation, and then assess the performance of the full CC-AC-OPF problem. \subsubsection{Approximation of chance constraints} We employ the two-step approach described in Section~\ref{sec:two steps} to obtain the chance constraint approximations through the polynomials $h^\ast_0,\ldots,h^\ast_k$ given in \eqref{eq:acc-opnf}.
We investigate the accuracy of this approximation and how the accuracy improves with increasing relaxation order $d$ and with the addition of Stokes constraints. We show results for both the outer approximation (Section~\ref{sec:outer}) and the (approximate) inner approximation (Section~\ref{sec:inner}). To obtain outer and inner approximations, we need to compute the probability of the projections of the sets $K_j$ defined in \eqref{eq:kj outer} and \eqref{eq:kj inner}, respectively, by using the two-step method in Section~\ref{sec:two steps}. For the corresponding SDP relaxations, we choose the relaxation order of the first step to be $d=2$ or $3$ and that of the second step to be $d+5=7$ or $8$. For the first step, a lower degree polynomial is sufficient to approximate the level sets of $K_j$, whereas the second step needs higher orders to obtain a better approximation and to benefit from the Stokes constraints. To assess how close we are to the true feasible set of the chance constraints, we created a large number of grid points to represent $B_\x$, using $100$ grid points for both active and reactive power, for a total of $10'000$ grid points. For each grid point, we sampled 1'000 realizations of $\w$. For each $(\x,\w)$, we solved a standard power flow using Matpower. We then calculated the probability that a constraint holds for fixed \x\ by dividing the number of samples for which the power flow satisfies the constraints by the total number of samples for \w. Figure \ref{fig:comparison eps01} shows the feasible region for $\varepsilon_1=0.01$ and $\varepsilon_2=0.1$. We show both the inner (green) and outer (red) approximations of the feasible region for relaxation orders $d=2,3$, both with and without Stokes constraints. As a benchmark, we also show the feasible region computed through the Monte Carlo simulation (blue). The closer the approximated regions (green and red) are to the benchmark (blue), the better the approximation. We remark that both increasing the relaxation order and introducing Stokes constraints improve the quality of the approximation. The improvement obtained by introducing Stokes constraints is very significant, while increasing the relaxation order only slightly increases the quality of the approximation. \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{comp_eps010.pdf} \vspace{-15pt} \caption{Comparison of the outer (red) and inner (green) approximation with the Monte Carlo simulation (blue) for $\varepsilon_1=0.01$ and $\varepsilon_2=0.1$.} \label{fig:comparison eps01} \end{center} \end{figure} To further assess the quality of the approximation, we report the ratios between the volume of the approximated feasibility regions and the volume computed through the Monte Carlo simulation in Table \ref{tab:ratio}, for $\varepsilon_1=0.01$ and different values of $\varepsilon_2$. The addition of Stokes constraints clearly offers a significant improvement. Interestingly, the quality of the outer approximation does not seem to depend on the choice of $\varepsilon_2$, while the accuracy of the inner approximation decreases with decreasing $\varepsilon_2$. We observe that the outer approximation to the chance constraints is not very tight, and might lead to violation probabilities significantly above the acceptable levels. The extension proposed in this paper to allow for an (approximate) inner approximation provides a significant practical advantage over the previously existing methods in terms of returning safe approximations.
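For reference, the Monte Carlo benchmark described above amounts to the following simple estimator; \texttt{solve\_power\_flow} and \texttt{constraints\_satisfied} are toy stand-ins for the Matpower-based routines actually used.

\begin{verbatim}
# Empirical probability that the constraints hold at a fixed
# dispatch x = (p_G4, q_G4), estimated over samples of w.
import numpy as np

rng = np.random.default_rng(0)

def solve_power_flow(x, w):
    """Toy stand-in: the slack generator balances the fluctuation."""
    p4, q4 = x
    return {"slack_p": 300.0 + w - p4, "q4": q4}

def constraints_satisfied(state):
    """Toy stand-in for the engineering limits."""
    return 0.0 <= state["slack_p"] <= 500.0

def empirical_probability(x, n_samples=1000):
    ok = 0
    for _ in range(n_samples):
        w = rng.uniform(-50.0, 50.0)   # uniform fluctuation on Bus 2
        state = solve_power_flow(x, w)
        if state is not None and constraints_satisfied(state):
            ok += 1
    return ok / n_samples

print(empirical_probability((250.0, 100.0)))
\end{verbatim}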
The inner approximation is also accurate enough to provide non-empty feasible sets, even at low relaxation orders. \begin{table} \begin{center} \def\arraystretch{1.25} \begin{tabular}{|c|cc|cc|cc|cc|} \hline & \multicolumn{4}{c}{outer}&\multicolumn{4}{c|}{inner}\\ & \multicolumn{2}{c}{$d=2$}& \multicolumn{2}{c|}{$d=3$}& \multicolumn{2}{c}{$d=2$}& \multicolumn{2}{c|}{$d=3$}\\ $\varepsilon_2$ & -- & Stokes & -- & Stokes & -- & Stokes & -- & Stokes \\ \hline 0.20 & 175\% & 165\% & 141\% & 124\% & 53\% & 79\% & 56\% & 79\% \\ 0.15 & 179\% & 168\% & 143\% & 126\% & 49\% & 75\% & 53\% & 76\% \\ 0.10 & 182\% & 171\% & 144\% & 126\% & 43\% & 69\% & 47\% & 70\% \\ 0.05 & 185\% & 173\% & 144\% & 125\% & 30\% & 56\% & 36\% & 58\% \\ \hline \end{tabular} \vspace{6pt} \caption{Ratio of approximated vs.\ real volume for different values of $\varepsilon_2$.} \label{tab:ratio} \vspace{-24pt} \end{center} \end{table} \subsection{Solving an instance of a CC-AC-OPF} We assess the performance of the ACC-OPNF formulation in \eqref{eq:acc-opnf} by evaluating the cost of the optimal generation dispatch and the empirical constraint violation probability, and by relating them to the deterministic AC-OPF. For this experiment, we use the best inner approximation, with relaxation order $d=3$ as well as the Stokes constraints, to approximate the CC-AC-OPF \eqref{eq:abstract CCOPF}. We solve both the deterministic AC-OPF and the approximation of the CC-AC-OPF for different values of $\varepsilon_2$. We then compare the power injections, the cost, and the maximal empirical violation probability of the individual chance constraints $\varepsilon_2^\ast$, which is computed through another Monte Carlo simulation at the obtained solution point using 1'000 samples of $\w$. Table \ref{tab:cc-ac-opf} summarizes the results. In column \emph{Det.}\ we show the results for the deterministic AC-OPF. The other columns are labeled by the acceptable violation probability $\varepsilon_2$ of the individual chance constraints. The joint violation probability is $\varepsilon_1=0.01$ for all experiments. The variables $p_{G0,4}$ and $q_{G0,4}$ are the independent variables $\x$ in our problem formulation, corresponding to the active and the reactive power of the generator at Bus $4$ in the test case. The power injections at the slack bus generator, $p_{G0,1}$ and $q_{G0,1}$, are among the dependent $\y_\x$ variables. Since this generator adjusts its output based on the realization of $\w$, we report the expected values in the table. Further, we list the cost of the operating point and the maximum empirical violation probability $\varepsilon_2^\ast$ among all individual constraints. We do not show results for the empirical violation probability of the joint chance constraint, $\varepsilon^\ast_1$, as it was consistently $0\%$ for all optimal operating points. This is expected, since the engineering limits are typically more restrictive than the power flow solvability conditions. As the violation probability $\varepsilon_2$ decreases, more and more of the system load must be covered by the more expensive slack generator, resulting in a higher value of $p_{G0,1}$ and a higher expected cost. Considering the violation probabilities of the individual chance constraints, we see that the optimal solution to the deterministic AC-OPF violates at least one of these constraints with a probability of almost $40\%$.
For the approximations of the CC-AC-OPF, the empirical violation probability $\varepsilon_2^\ast$ of the individual chance constraints is always below the requested probability $\varepsilon_2$, reflecting the fact that we indeed obtain a true inner approximation. While the empirical violation probability is quite close to the acceptable level for $\varepsilon_2=20\%$ and $\varepsilon_2=15\%$, the approximation is significantly more conservative for lower values of $\varepsilon_2$. For $\varepsilon_2=5\%$, no violations are observed. \begin{table} \begin{center} \def\arraystretch{1.5} \begin{tabular}{|l|r|r|r|r|r|} \hline & Det.\hspace{0.5cm} & $\varepsilon_2= 20\%$& $\varepsilon_2=15\%$ & $\varepsilon_2=10\%$& $\varepsilon_2=5\%$\\ \hline $p_{G0,1}$ & 8.5 & 30.1 & 36.3 & 44.7 & 58.8 \\ $q_{G0,1}$ & 158.4 & 168.0 & 168.2 & 168.6 & 169.1 \\ $p_{G0,4}$ & 500.0 & 477.6 & 471.2 & 462.4 & 447.9 \\ $q_{G0,4}$ & 149.5 & 135.4 & 134.0 & 132.1 & 129.1 \\ \hline cost & 13\,357 & 13\,452 & 13\,481 & 13\,523 & 13\,596 \\ $\varepsilon_2^\ast$& 39.8\% & 18.2\% & 12.1\% & 3.7\% & 0.0\% \\ \hline \end{tabular} \vspace{6pt} \caption{Optimal values and solutions to \eqref{eq:abstract OPF} and \eqref{eq:acc-opnf} for $\varepsilon_1=0.01$ and different values of $\varepsilon_2$.} \vspace{-24pt} \label{tab:cc-ac-opf} \end{center} \end{table} \section{Conclusion}\label{sec:concl} In this paper, we develop a new approach to handle chance constrained optimization problems in non-linear physical networks. The method is based on Semidefinite Programming (SDP) techniques to compute the volume of semi-algebraic sets, from which polynomial approximations of the chance constraints are obtained. To make existing results applicable in our practical setting, we (i) propose a set reformulation in order to enable inner approximations, and (ii) develop a two-step procedure to improve the approximation quality at lower computational overhead. The method is applicable to any problem with polynomial equality and inequality constraints, and we demonstrate it numerically on the chance constrained AC Optimal Power Flow. In our experiments, the polynomial approximations were shown to provide sufficiently accurate representations of the feasible domain, and the resulting approximate CC-AC-OPF was able to provide safe operating points with limited violation probability. The method presented is a powerful and novel technique to handle chance constrained optimization for non-linear systems. Although in its current form the method is applicable only to small systems, it has the potential for multiple extensions and improvements. One promising future direction is to exploit the sparsity structure of networks to scale the method to larger instances. \section*{Acknowledgment} The first author is grateful to the Los Alamos National Laboratory for hosting him during summer 2017. His work is supported by ERC-Advanced Grant \#666981 TAMING. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} Protein-mediated regulation of membrane curvature occurs during many cellular processes such as cargo trafficking, cell motility, cell growth, and division \cite{McMahon2005, Zimmerberg2006, Jarsch2016, Bassereau2018}. Recently, several classes of proteins capable of curvature generation have been identified~\cite{Farsad2003, Voeltz2007, Shibata2009}. Dynamin and proteins with the crescent-shaped Bin Amphiphysin Rvs (BAR) domain were found to generate curvature by the scaffolding mechanism \cite{Peter2004}. On the other hand, the epsin protein with an N-terminal helix generates curvature using the hydrophobic insertion mechanism~\cite{Bhatia2009, Hatzakis2009}. These curvature generating proteins are also capable of sensing membrane curvature~\cite{Antonny2011}. Curvature sensing refers to the ability of proteins to bind onto membranes depending on the local curvature. Recent experiments have reported that membrane curvature provides a cue for the localization of proteins in bacteria and viruses~\cite{Wasnik2015, Gill2015, Martyna2016, Draper2017}. This phenomenon is believed to be exploited by cells during the processes of budding and fission. For example, in clathrin-mediated membrane fission, the narrow neck between the clathrin-bound bud and the parent membrane preferentially recruits the dynamin proteins responsible for membrane scission. Thus, it is important to understand curvature sensing and generation to gain insight into many of these cellular processes. Biophysical experimental setups such as Single Liposome Curvature (SLiC) assays and tethers pulled from giant unilamellar vesicles (GUVs) have been extensively used to quantify curvature sensing~\cite{Baumgart2011, Aimon2014}. These two methods are schematically illustrated in Fig.~\ref{fig:schematic}. Considering their low throughput, \textit{in vivo} alternatives have also been used~\cite{Rosholm2017}. In the tether pulling experiments, a narrow membrane tube, with a radius of a few tens of nanometers, is pulled from a GUV with a radius of a few microns. Curvature-sensitive proteins are introduced to these two membrane surfaces with very different curvatures. The relative binding fraction of proteins on the two surfaces is then measured based on the intensity of fluorescently tagged proteins. On the other hand, in SLiC assays, proteins are introduced into a medium containing liposomes of different radii~\cite{Bhatia2009}. As in the tether pulling experiments, the intensity of fluorescently tagged proteins is utilized to estimate the binding fraction of proteins on liposome surfaces. Several quantitative analytical models have also been proposed to study the phenomena of curvature sensing~\cite{Zhu2012, Bozic2015, Svetina2015}. Currently, there exist two thermodynamic models for curvature sensing/generation: the spontaneous curvature model and the curvature mismatch model. The two models differ in their treatment of the membrane elastic energy due to the deformation induced by proteins. Although both models have been shown to fit various experimental data, it is not clear which of the two is more suitable for studying the curvature sensing/generation behavior of a particular protein. In the present work, we describe the two models and compare the results obtained using analytical calculations as well as Monte Carlo simulations.
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{./schematic.png} \caption{Preferential binding of proteins to highly curved membrane surfaces. Schematic of typical biophysical experimental setups used to study curvature sensing phenomena. }\label{fig:schematic} \end{figure} The article is organized as follows. Section~\ref{sec:models} introduces the two thermodynamic models and presents analytical results. In sec.~\ref{sec:simulations}, we discuss the sensing/generation behaviour of the two models studied using Monte Carlo (MC) simulations. The article ends with a few concluding remarks in sec.~\ref{sec:conclusions}. \section{Models for curvature sensing} \label{sec:models} The curvature sensing ability of proteins is a consequence of the interaction between the proteins and the membrane. Although the specific interactions between membrane patches and protein domains are quite complicated, their effects can be understood in terms of a few coarse-grained interaction parameters. At mesoscopic length scales, several quantitative analytical models have been proposed to study the phenomena of curvature sensing. Below, we describe and compare the two most commonly used models. \subsection{Spontaneous Curvature Model} \label{sec:spcur_model} The Spontaneous Curvature (SC) model assumes that the only effect of the bound protein on the elastic energy is to induce a preferred local curvature of the membrane. This model has been employed previously to study the sorting of amphiphysin in tube pulling assays~\cite{Sorre2012}, as well as in modeling lipopolysaccharide binding on synthetic lipid vesicles~\cite{Mally2017}. In this model, the energy of the membrane surface is given by the spontaneous curvature form of the Helfrich free energy~\cite{Helfrich1973}, \begin{align} \mathcal{H} = \int \textrm{d}A \frac{\kappa}{2} \left(2H - C_0 \right)^2, \label{eq:ci_energy} \end{align} where $\kappa$ is the bending rigidity and $C_0$ is the membrane spontaneous curvature. The integral is over the entire area of the membrane surface. The spontaneous curvature is usually assumed to be linearly dependent on the protein area fraction $\phi$~\cite{Markin1981, Leibler1986}, \begin{align} C_0 = C_p \phi, \end{align} where $C_p$ is the intrinsic curvature of the protein. In essence, this model assumes that the protein sets a preferred local curvature on the membrane depending on its bound density. \begin{figure*} \centering \includegraphics[width=\textwidth]{./theory} \caption{Protein binding on a non-deformable spherical vesicle studied using the spontaneous curvature model and the curvature mismatch model. Adsorption isotherms of proteins with different $C_p$ on a vesicle of size $R = 21$ in (a) the SC model and (c) the CM model. Curvature sensing curves for proteins of various $C_p$ values at $\mu = -4$ for (b) the SC model and (d) the CM model.}\label{fig:analytical} \end{figure*} We consider the vesicle as a triangulated surface with $N_v$ vertices. A discretized Hamiltonian for this surface can be written as \begin{align} \mathcal{H}_{\textrm{SC}} &= \frac{\kappa}{2} \sum_{i=1}^{N_{v}} (2H_i - C_p\phi_{i})^2 A_{i} - \mu \sum_{i=1}^{N_{v}} \phi_{i}, \label{eq:sc_hamiltonian} \end{align} where $H_{i}$ and $\phi_{i}$ are, respectively, the mean curvature and the protein-bound state at vertex $i$. The concentration of proteins in the bulk is taken into account indirectly through the binding affinity parameter $\mu$.
The parameter $\mu$ is the free energy of the proteins in the reservoir for binding onto the membrane surface. It depends on the interaction energy between the membrane and the protein, and also on the concentration of the protein in the bulk ($c_{\textrm{bulk}}$), through the relation \begin{align} \mu = \mu_0 + \log \frac{c_{\textrm{bulk}}}{c_0}, \end{align} where $\mu_0$ and $c_0$ are, respectively, the standard state protein chemical potential and concentration~\cite{Sachin2019}. For small bound fractions, the proteins do not significantly affect the membrane curvature if they are homogeneously distributed over the surface. Therefore, we can simplify the expression for the free energy by assuming a perfectly spherical surface with each vertex having the same curvature ($2H$). For such a uniformly spherical surface, the mean curvature at each vertex is simply the inverse of the vesicle radius, \textit{i.e.} $H_{i} = H = 1/R$. The variable $\phi_i$ takes the value one at vertices with a bound protein and zero otherwise. In this model, protein-bound vertices have minimum energy when the local curvature matches the protein's intrinsic curvature. Further, if the area at each vertex $A_{i}$ is the same, say $a$, we can write an effective free energy per vertex as a function of the protein bound fraction $\rho = N_{p} / N_{v}$ as \begin{align} f_{\textrm{SC}} (\rho) &= \frac{\kappa a}{2} \left[ \left(2H\right)^2 (1 - \rho) + \left(2H - C_p\right)^2 \rho \right] - \mu \rho \nonumber \\ &+ k_{\textrm B}T \left[ \rho \log (\rho) + (1 - \rho) \log (1 - \rho) \right], \label{eq:effective_ci_energy} \end{align} where $N_p=\sum_{i=1}^{N_v} \phi_i$ is the total number of vertices occupied by the protein field. The first term is obtained by separating the sums for vertices with and without proteins in Eq.~(\ref{eq:sc_hamiltonian}). The last term in Eq.~(\ref{eq:effective_ci_energy}) represents the mixing free energy of proteins on the discretized surface; such a mixing free energy arises from the exclusion interaction of the proteins. The protein bound fraction in equilibrium is obtained by minimizing the effective free energy with respect to $\rho$ as \begin{align} \rho_{\textrm{eq}} = \frac{1}{1 + e^{-\beta\left[ \mu - \frac{\kappa a C_{p}}{2} \left(C_p - 4H \right) \right]}}. \label{eq:shifted_langm} \end{align} When $C_p = 0$, the above equation takes the form of the standard Langmuir isotherm. For non-zero $C_p$ values, the Langmuir isotherm is recovered by defining an effective chemical potential, \begin{align} \mu' = \mu - \frac{\kappa a C_p}{2} \left( C_p - 4H \right). \end{align} The adsorption isotherms for different $C_p$ in the SC model are shown in Fig.~\ref{fig:analytical}a. The isotherms for non-zero spontaneous curvatures are shifted Langmuir isotherms, as predicted by Eq.~(\ref{eq:shifted_langm}). Experiments have reported that the adsorption of some proteins on vesicles follows the Langmuir isotherm~\cite{Bhatia2009}. The preferential binding of proteins to vesicles of various sizes is characterized using a curvature sensing curve, wherein the bound fraction of protein is plotted against the vesicle size at a particular binding affinity. The curvature sensing curve at $\mu = -4$ is shown in Fig.~\ref{fig:analytical}b. When $C_p = 0$, the protein bound fraction does not depend on the vesicle radius, as there is no coupling between the mean curvature $H$ and the protein bound fraction $\phi$ in Eq.~(\ref{eq:ci_energy}).
Therefore, within the SC model, $C_p = 0$ corresponds to the case where protein binding is insensitive to the membrane curvature. For non-zero $C_p$, the bound fraction increases with decreasing vesicle radius, approaching the maximum of $1$ as $R \rightarrow 0$ (or $H \rightarrow \infty$ in Eq.~(\ref{eq:shifted_langm})). Note that, in the SC model, the protein bound fraction monotonically decreases with increasing vesicle size. \subsection{Curvature Mismatch Model} \label{sec:mismatch_model} The Curvature Mismatch (CM) model supposes (a) an energy penalty when the local membrane curvature differs from the protein curvature, and (b) a membrane curvature stiffness that depends on the local protein concentration. It has successfully reproduced the preferential binding of I-BAR proteins to negatively curved membranes~\cite{Prevost2015}, the sorting of the potassium channel KvAP~\cite{Bozic2015}, and the sorting of transmembrane proteins in live cell filopodia~\cite{Rosholm2017}. In the CM model, the Hamiltonian is of the form \begin{align} \mathcal{H} = \int \textrm{d} A \left[ \frac{\kappa}{2} \left(2H\right)^2 + \frac{\bar{\kappa}}{2} \left(2H - C_p \right)^2 \phi \right]. \label{eq:cm_energy} \end{align} Here the first term is the Helfrich energy for the membrane surface and the second term is the curvature mismatch energy. The parameter $\bar{\kappa}$ sets the strength of the mismatch penalty. In regions where there are no bound proteins, $\phi = 0$, and only the first term in Eq.~(\ref{eq:cm_energy}) contributes to the energy. In this limit of no bound proteins, the SC and CM models have the same Hamiltonian. In order to compare the CM model with the SC model discussed previously, we derive the equilibrium protein bound fraction on a non-deformable vesicle. The discretized form of the CM free energy is given by \begin{align} \mathcal{H}_{\textrm{CM}} &= \sum_{i=1}^{N_{v}} \left[ \frac{\kappa}{2} (2H_i)^2 + \frac{\bar{\kappa}}{2} (2H_i - C_p)^2 \phi_i \right] A_{i} - \mu \sum_{i=1}^{N_{v}} \phi_{i}. \label{eq:cm_hamiltonian} \end{align} In this model, protein-bound vertices have the same energy as unbound vertices when the local curvature matches the protein curvature. As in Eq.~(\ref{eq:effective_ci_energy}), the effective free energy for the CM model in terms of the bound fraction $\rho$ is \begin{align} f_{\textrm{CM}} (\rho) &= \frac{\kappa a}{2} (2H)^2 + \frac{\bar{\kappa}a}{2} (2H - C_p)^2 \rho - \mu \rho \nonumber \\ &+ k_{\textrm B} T \left[ \rho \log (\rho) + (1 - \rho) \log (1 - \rho) \right]. \end{align} For simplicity, we assume that $\kappa = \bar{\kappa}$ in the rest of the discussion. The equilibrium bound fraction obtained after minimizing the effective free energy with respect to $\rho$ is \begin{align} \rho_{\textrm{eq}} = \frac{1}{1 + e^{-\beta\left[ \mu - \frac{\kappa a}{2} \left(2H - C_p \right)^2 \right]}}. \end{align} Here again, the binding assumes the form of a shifted Langmuir isotherm. However, the effective binding affinity is different from that obtained for the SC model. For the CM model, the effective binding affinity takes the form \begin{align} \mu' = \mu - \frac{\kappa a}{2} \left( 2H - C_p \right)^2. \end{align} Here, the effective binding affinity is quadratic in the vesicle curvature, with an additional term $-2 \kappa a H^2$. This is unlike the SC model, where the dependence on the curvature is linear.
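The two closed-form isotherms are easy to compare numerically; the following minimal sketch evaluates both expressions over a range of vesicle radii (parameter values are chosen for illustration only).

\begin{verbatim}
# Equilibrium bound fraction versus vesicle radius for the SC and CM
# models (non-deformable sphere), using the closed-form isotherms.
import numpy as np

kappa, a, beta = 20.0, 1.0, 1.0   # bending rigidity, vertex area, 1/kT
Cp, mu = 0.6, -4.0                # protein curvature, binding affinity

def rho_sc(R):
    H = 1.0 / R
    mu_eff = mu - 0.5 * kappa * a * Cp * (Cp - 4.0 * H)
    return 1.0 / (1.0 + np.exp(-beta * mu_eff))

def rho_cm(R):
    H = 1.0 / R
    mu_eff = mu - 0.5 * kappa * a * (2.0 * H - Cp) ** 2
    return 1.0 / (1.0 + np.exp(-beta * mu_eff))

R = np.linspace(1.0, 40.0, 200)
print("SC bound fraction is maximal at R =", R[np.argmax(rho_sc(R))])
print("CM bound fraction is maximal at R =", R[np.argmax(rho_cm(R))])
# SC: maximum at the smallest radius; CM: maximum near R = 2 / Cp.
\end{verbatim}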
For large vesicle radius (small $H$), we see no difference in the adsorption isotherm obtained with the two models (see Fig.~\ref{fig:analytical}a and Fig.~\ref{fig:analytical}c). The additional quadratic term in the effective binding affinity of the CM model becomes relevant when the vesicle size is small (large $H$). Consequently, the curvature sensing curves predicted using the two models differ significantly, as seen in Fig.~\ref{fig:analytical}b and Fig.~\ref{fig:analytical}d. While the SC model predicts a monotonic inverse relation between the protein bound fraction and the vesicle radius, the CM model predicts a non-monotonic dependence. Since the additional term is negligible at small vesicle curvatures ($H \ll C_p$), the predictions from the two models are similar for larger vesicles. \begin{figure*} \centering \includegraphics[width=\textwidth]{./simulation.pdf} \caption{Analysis of protein binding on a deformable sphere modeled using the spontaneous curvature (SC) model and the curvature mismatch (CM) model. Adsorption isotherms for vesicles of various sizes at $C_p = 0.6$ for (a) the SC model and (c) the CM model. Curvature sensing curves for different protein spontaneous curvatures (b) at $\mu = -4$ for the SC model and (d) at $\mu = 0$ for the CM model.}\label{fig:simulation} \end{figure*} One can ask: what size of vesicle shows maximum protein binding for proteins with a fixed intrinsic curvature ($C_p$) at a given concentration ($\mu$)? We see that for the SC model, the protein bound fraction is maximum as $H \to \infty$, or in other words, for the smallest vesicle. On the other hand, the CM model predicts that the maximum binding occurs when the vesicle radius is $2C_p^{-1}$, \textit{i.e.} $H = C_p / 2$. Essentially, the observed difference between the two models can be attributed to the fact that, in the CM model, a bound protein, in addition to inducing curvature, also adds to the membrane stiffness. The analysis presented above is restricted to vesicles of fixed size and shape; in other words, the shape of the vesicle is assumed not to change upon protein binding. Curvature generation by proteins is completely neglected, because analytical minimization of the free energy is complicated when both the local mean curvature ($H$) and the protein bound state ($\phi$) are allowed to vary. Therefore, in the subsequent section, we use computer simulations to perform this minimization, where both curvature sensing and curvature generation by proteins are accounted for. \section{Curvature sensing and generation} \label{sec:simulations} We employed dynamic triangulation Monte Carlo (DTMC) simulations with protein binding, in the grand canonical ensemble, as described in Ref.~\cite{Sachin2019}. At any instant of the simulation, vesicles are represented by a triangulated surface, whereas proteins are represented by an occupation number defined at the vertices of the triangulated surface. The simulations are carried out using both the SC and CM models. The adsorption isotherm obtained using the SC model is shown in Fig.~\ref{fig:simulation}a. At low $\mu$, we see that the protein bound fraction depends on the vesicle size at a fixed binding affinity. This is referred to as the curvature sensing regime. At high values of $\mu$, the protein bound fraction is independent of the vesicle size. This is the curvature generation regime~\cite{Sachin2019}. The adsorption curve for the CM model with $\bar{\kappa} = 10$ is shown in Fig.~\ref{fig:simulation}c.
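For reference, on a non-deformable sphere with equal vertex areas, the grand-canonical binding move of such a simulation reduces to a Metropolis flip of the occupation variables. A minimal sketch of the SC-model version is given below; it omits the vertex-move and link-flip updates of the full DTMC scheme, and all parameter values are illustrative.

\begin{verbatim}
# Grand-canonical Metropolis sweeps for protein occupation variables
# on a fixed sphere of radius R (SC model, equal vertex areas).
import numpy as np

rng = np.random.default_rng(1)
Nv, R = 1000, 10.0
kappa, a, Cp, mu, beta = 20.0, 1.0, 0.6, 0.0, 1.0
H = 1.0 / R
phi = np.zeros(Nv, dtype=int)

def vertex_energy(occupied):
    c0 = Cp if occupied else 0.0
    return 0.5 * kappa * a * (2.0 * H - c0) ** 2 - (mu if occupied else 0.0)

for sweep in range(200):
    for _ in range(Nv):
        i = rng.integers(Nv)
        dE = vertex_energy(not phi[i]) - vertex_energy(bool(phi[i]))
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            phi[i] ^= 1               # accept the flip

print("bound fraction:", phi.mean())  # matches the shifted isotherm
\end{verbatim}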
Although the adsorption isotherm appears to be Langmuir-like at small binding affinities, it significantly deviates from the Langmuir behavior at higher $\mu$ values. For the SC model, curvature sensing happens at low binding affinity, whereas for the CM model, curvature sensing is stronger at higher binding affinities. Curvature sensing is quantitatively measured using the equilibrium bound fraction of proteins for different vesicle sizes at the same binding affinity. The curvature sensing curve from the SC model monotonically increases with decreasing radius (see Fig.~\ref{fig:simulation}b), which is qualitatively similar to the predictions for non-deforming vesicles. At $C_p=0$, the bound fraction is independent of the vesicle radius, \textit{i.e.} there is no curvature sensing. For non-zero $C_p$, the bound fraction is maximum in the limit of zero radius. On the other hand, in the curvature sensing curve for the CM model, shown in Fig.~\ref{fig:simulation}d, proteins with $C_p = 0.0$ are also coupled to the membrane curvature and sense it, with more binding on larger vesicles. The simulation results show that, for $C_p\ne0.0$, protein binding is maximum at a finite non-zero vesicle radius. When $C_p = 0.3$, there is a clear maximum at $R_0\approx 7.0$. We expect that such maxima exist for other non-zero $C_p$ values; however, they fall outside the range of vesicle radii studied in our simulations. Here again, the curvature sensing curves are qualitatively similar to the curves obtained using the analytical model. \section{Concluding remarks} \label{sec:conclusions} The main differences in the results from the two models can be summarized as follows: \begin{itemize} \item The SC model has a monotonic curvature sensing behavior, while the CM model has a non-monotonic sensing curve. \item The SC model has a curvature sensing regime at low $\mu$ and a curvature generation regime at high $\mu$, whereas the CM model shows curvature sensing for all $\mu$ values explored here. \item The $C_p = 0$ case does not sense curvature in the SC model, while in the CM model, proteins show sensing behavior at all $C_p$ values. \end{itemize} The curvature sensing behavior is observed when the membrane is stiff. In the case of deformable vesicles, the binding of proteins leads to a softening of the membrane in both the SC and CM models. In the CM model, there is also a term that rescales the effective bending modulus of the membrane with protein binding (see Eq.~(\ref{eq:cm_hamiltonian})). Thus, the softening is significantly higher for the SC model than for the CM model. Consequently, for the same $\mu$, protein binding is always higher for the SC model than for the CM model. At high $\mu$ in the SC model, the membrane is soft enough to conform to any protein curvature and hence we do not see curvature sensitivity. On the other hand, for the CM model, the membrane does not become soft enough to allow curvature generation, even at high $\mu$. In the SC model, the coupling between the protein density and the membrane elasticity is only through the parameter $C_p$ (see Eq.~(\ref{eq:sc_hamiltonian})), which serves as the source for curvature generation and sensing. At low $C_p$ values, curvature sensing and generation are weak due to the weak coupling. Such a model is probably adequate for peripheral proteins that generate curvature through the hydrophobic insertion mechanism, where the strength of the coupling and the curvature generated are directly related.
In the CM model, on the other hand, the parameter $C_p$ contributes to the membrane elasticity in two ways. As in the SC model, here too $C_p$ serves as the coupling strength between the membrane curvature and the protein density. In addition, it couples the membrane stiffness to the protein concentration through the $\bar{\kappa}$ term in Eq.~(\ref{eq:cm_hamiltonian}). Experimentally, such a scenario arises when the dominant interaction with the membrane comes from a laterally extended region of the protein, such as a charged region leading to electrostatic binding. A recent finite element analysis of curvature generation on a 3D linear elastic membrane has proposed that electrostatic interaction is essential for curvature generation by BAR domains~\cite{Mahata2017}. Thus, the CM model may be more appropriate for modeling peripheral proteins that generate curvature through scaffolding or other mechanisms, such as oligomerization or steric repulsion, as well as for modeling transmembrane proteins. \section*{Acknowledgement} TVSK thanks IIT Palakkad for hospitality and computational resources. The authors thank the Department of Biotechnology, Ministry of Science and Technology, Govt.\ of India for the financial support through grant no. BT/PR8025/BRB/10/1023/2013.
\section{Introduction} \subsection{$\widehat{HF}(Y)$, an invariant associated with $\pi_1(Y)$} Heegaard Floer homology is an invariant associated with a closed oriented three manifold which was introduced by Ozsv\'{a}th and Szab\'{o} in \cite{OS}. There are four versions of this invariant: the hat, plus, minus, and infinity Heegaard Floer homology groups. In this paper we work with the hat version with coefficients in $\mathbb{Z}_2$. This is, in fact, an invariant associated with the fundamental group of the three manifold. To see this, first note that there is the following K\"{u}nneth formula for this invariant: \begin{proposition}(c.f. \cite[Theorem 1.5]{OS2})\label{Pr01} Let $Y_1$ and $Y_2$ be a pair of three manifolds, equipped with $\text{Spin}^c$ structures $\mathfrak{s}_1$ and $\mathfrak{s}_2$. Then, there is an identification $$\widehat{HF}_k(Y_1\sharp Y_2,\mathfrak{s}_1\sharp\mathfrak{s}_2)=\bigoplus_{i+j=k}\widehat{HF}_i(Y_1,\mathfrak{s}_1)\otimes_{\mathbb{Z}_2}\widehat{HF}_j(Y_2,\mathfrak{s}_2).$$ \end{proposition} On the other hand, the following theorems imply that the fundamental group of a three manifold determines it up to indeterminacy arising from lens spaces. \begin{theorem}(c.f. \cite[Theorem 1]{Ml})\label{Th01} Every compact 3-manifold $Y$, which is not isomorphic to $S^3$, is isomorphic to a sum $Y_1\sharp\dots\sharp Y_k$, of prime manifolds. The summands $Y_i$ are uniquely determined up to order and isomorphism. \end{theorem} \begin{theorem}(c.f. \cite[Theorem 2.1.1]{Fr})\label{Th0.1} Let $Y$ be a closed, oriented 3-manifold. If $\pi_1(Y)=\Gamma_1*\Gamma_2$, then there exist closed, oriented 3-manifolds $Y_1$ and $Y_2$ with $\pi_1(Y_i)=\Gamma_i$, for $i = 1,2$, and $Y=Y_1\sharp Y_2$. \end{theorem} \begin{theorem}(c.f. \cite[Theorem 2.1.2]{Fr})\label{Th02} Let $Y_1$ and $Y_2$ be two closed, prime 3-manifolds with $\pi_1(Y_1)=\pi_1(Y_2)$. Then either $Y_1$ and $Y_2$ are homeomorphic, or $Y_1$ and $Y_2$ are both lens spaces. \end{theorem} From Proposition \ref{Pr01}, Theorem \ref{Th01}, Theorem \ref{Th0.1}, Theorem \ref{Th02}, and the fact that the Heegaard Floer homology of a lens space only depends on its fundamental group, we have: \begin{corollary}\label{C01} Let $Y_1$ and $Y_2$ be two closed 3-manifolds. If $\pi_1(Y_1)=\pi_1(Y_2)$ then $\widehat{HF}(Y_1)=\widehat{HF}(Y_2)$. \end{corollary} This observation suggests that there must be a way to compute the hat Heegaard Floer homology group of a three manifold from its fundamental group. Corollary \ref{C01} also implies that, for a given presentation of the fundamental group of a three manifold $Y$ which arises from a Heegaard diagram, $\widehat{HF}(Y)$ is invariant under \emph{stable Andrews-Curtis transformations}. These transformations are extended Nielsen transformations, along with a stabilization transformation, that act on a group presentation and result in another presentation for the same group (c.f. \cite{A-C,A-C2} and Section \ref{s2} for definitions). Motivated by these facts, in this paper we present a plan to associate a Heegaard Floer homology group with a family of finite group presentations, and we take several steps towards fulfilling this plan. Certain technical parts remain incomplete; we hope to carry these out in future work.
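To fix ideas, the stable Andrews-Curtis transformations are purely combinatorial operations on relator words. A minimal Python sketch, with relators stored as tuples of signed generator indices and following the list of moves given in Section \ref{s2}, is:

\begin{verbatim}
# Stable Andrews-Curtis moves on a balanced presentation.  A relator
# is a tuple of nonzero ints: +k / -k stand for a_k / a_k^{-1}.
def invert(w):
    return tuple(-g for g in reversed(w))

def multiply(rels, i, j):      # move 1: b_i -> b_i b_j  (i != j)
    rels = list(rels); rels[i] = rels[i] + rels[j]; return rels

def invert_relator(rels, i):   # move 2: b_i -> b_i^{-1}
    rels = list(rels); rels[i] = invert(rels[i]); return rels

def append_pair(rels, i, g):   # move 3: b_i -> b_i g g^{-1}
    rels = list(rels); rels[i] = rels[i] + (g, -g); return rels

def stabilize(rels, n):        # move 4: add a_{n+1} as a relator
    return list(rels) + [(n + 1,)]

def cancel_pair(rels, i):      # move 5: remove a trailing g g^{-1}
    rels = list(rels)
    w = rels[i]
    assert len(w) >= 2 and w[-2] == -w[-1]
    rels[i] = w[:-2]
    return rels

# Example: a balanced presentation with two generators.
rels = [(1, 2, 1, -2, -1), (2, 1, 2, -1, -2)]
print(invert_relator(multiply(rels, 0, 1), 1))
\end{verbatim}

Only the action on the relators is modeled here; the dual moves on $P^*$ and the bookkeeping of the correspondences $\mathcal{F}$ introduced below are omitted.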
\subsection{Summary of results} In this paper, a Heegaard diagram is a triple $(\Sigma, \boldsymbol{\alpha},\boldsymbol{\beta})$ where $\Sigma$ is a surface and $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ are collections of disjoint oriented simple closed curves on the surface $\Sigma$. We assume that the number of curves in $\boldsymbol{\alpha}$ is equal to the number of curves in $\boldsymbol{\beta}$, and for each component $\Sigma_i$ of $\Sigma$, $\Sigma_i-\boldsymbol{\alpha}$ and $\Sigma_i-\boldsymbol{\beta}$ are connected. Let $\mathcal{H}$ be a Heegaard diagram for a closed oriented three manifold $Y$. There is a balanced presentation for the fundamental group of $Y$ which arises naturally from $\mathcal{H}$; see Example \ref{ex1}. Let $G$ be a group which has a balanced presentation $P$. Modulo some extra choices and technical assumptions, we associate a Heegaard Floer homology group $\widehat{HF}_P(G)$ with the pair $(G,P)$. We show independence of some of these extra choices, while two technical steps remain unsettled (see Claim \ref{claim2} and Claim \ref{claim3}). Let $[P]$ denote the set containing all presentations $P'$ for $G$ which result from the action of a sequence of stable Andrews-Curtis transformations on $P$. Assuming the aforementioned two claims, we prove: \begin{claim}\label{claim1} Let $P$ be a balanced presentation for the group $G$. The homology group $\widehat{HF}_P(G)$ is an invariant associated with $G$ and $[P]$. Moreover, if $G$ is the fundamental group of a closed oriented three manifold $Y$ and $P$ is a presentation associated with a Heegaard diagram of $Y$, then we have $\widehat{HF}_P(G)=\widehat{HF}(Y)$. \end{claim} Claim \ref{claim1} suggests a method to approach the stable Andrews-Curtis conjecture. This conjecture states that: \begin{conjecture1}(c.f. \cite{A-C}) Every balanced group presentation for the trivial group may be changed to the trivial presentation by a finite sequence of stable Andrews-Curtis transformations. \end{conjecture1} This conjecture has topological interpretations and consequences, which are studied in \cite{A-C,WR}. A group-theoretical and a topological survey of the conjecture can be found in \cite{Bu-Mac,Hog-Met}. In \cite{Freedman,Kirby}, the relation between this conjecture and the smooth 4-dimensional Poincar\'{e} conjecture is discussed. It is widely believed that the stable Andrews-Curtis conjecture is false. There are several potential counterexamples; amongst them one can mention \cite{Akb,Miller,Shpil}. Assuming Claim \ref{claim1}, in order to disprove the stable Andrews-Curtis conjecture it suffices to find a group presentation $P$ for the trivial group $I$ such that $\widehat{HF}_P(I)\neq\mathbb{Z}_2$. \subsection{Organization} In Section \ref{s2}, we associate a \textit{dual} presentation with a given balanced presentation $P$ of a group $G$. We also define a restricted version of stable Andrews-Curtis transformations which act on a presentation $P$ and its dual presentation at the same time. In Section \ref{s3}, we explain how one can associate a Heegaard diagram with a presentation $P$ and its dual presentation. In Section \ref{s4}, we describe how the corresponding Heegaard diagram changes when the presentation $P$ and its dual presentation undergo the transformations defined in Section \ref{s2}. In Section \ref{s5}, we associate a Heegaard Floer homology group with the Heegaard diagram constructed in Section \ref{s3}.
\subsection{Organization} In Section \ref{s2}, we associate a \textit{dual} presentation with a given balanced presentation $P$ of a group $G$. We also define a restricted version of stable Andrews-Curtis transformations which act on a presentation $P$ and its dual presentation at the same time. In Section \ref{s3}, we explain how one can associate a Heegaard diagram with a presentation $P$ and its dual presentation. In Section \ref{s4}, we describe the changes of this corresponding Heegaard diagram when the presentation $P$ and its dual presentation undergo the transformations defined in Section \ref{s2}. In Section \ref{s5}, we associate a Heegaard Floer homology group with the Heegaard diagram constructed in Section \ref{s3}. Moreover, we present a proof of Claim \ref{claim1} in this section. \section{Dual presentations and AC-moves}\label{s2} First, we recall some elementary concepts from group theory. \begin{definition} (cf. \cite{John}) Let $X$ be a set, $F=F(X)$ denote the free group on $X$, and $R$ be a subset of $F$. \begin{itemize} \item[$\bullet$] The group $G=\langle X|R\rangle$ is defined as the quotient group $F/N$ where $N$ is the smallest normal subgroup of $F$ which contains $R$. $(X,R)$ is called a \emph{free presentation}, or simply a \emph{presentation} of $G$. The elements of $X$ are called the \emph{generators} and those of $R$ the \emph{relators}. \item[$\bullet$] A group $G$ is called \emph{finitely presented} if it has a presentation with both $X$ and $R$ finite sets. \end{itemize} \end{definition} A finite presentation $\langle X|R\rangle$ is called a \emph{balanced presentation} if we have $|X|=|R|$. \begin{definition}\label{def01} Let $P=\langle a_1,\dots,a_d|b_1,\dots,b_d\rangle$ and $P^*=\langle b_1^*,\dots,b_d^*|a_1^*,\dots,a_d^*\rangle$ be two balanced presentations. We say $P$ and $P^*$ are dual presentations if, possibly after rearranging the indices, there exist bijections \begin{equation*} f_{ij}:A_{ij}\rightarrow \overline{A}^*_{ji}\ \ \text{and}\ \ \overline{f}_{ij}:\overline{A}_{ij}\rightarrow A^*_{ji} \end{equation*} where \begin{align*} &A_{ij}=\{k|a_i\ \text{is the}\ k^{th}\ \text{letter in}\ b_j,\ 1\leq k\leq |b_j|\},\\ &\overline{A}_{ij}=\{k|a^{-1}_i\ \text{is the}\ k^{th}\ \text{letter in}\ b_j,\ 1\leq k\leq |b_j|\},\\ &A^*_{ij}=\{k|b_i^*\ \text{is the}\ k^{th}\ \text{letter in}\ a_j^*,\ 1\leq k\leq |a_j^*|\},\\ &\overline{A}^*_{ij}=\{k|b_i^{*-1}\ \text{is the}\ k^{th}\ \text{letter in}\ a_j^*,\ 1\leq k\leq |a_j^*|\}, \end{align*} for $1\leq i,j\leq d$. Here $|b_j|$ denotes the number of letters in the word $b_j$. We denote the dual presentations $P$ and $P^*$ together with the family $\mathcal{F}=\{f_{ij},\overline{f}_{ij}\}_{i,j}$ of correspondences by $(P,P^*)_\mathcal{F}$. \end{definition} \begin{remark}\label{R2} For each $i$, the maps $f_{ij}$ and $\overline{f}_{ij}$, $1\leq j\leq d$, induce a cyclic ordering on all occurrences of the letter $a_i$ in the relators, independent of its sign. In fact, the elements of the sets $f_{ij}(A_{ij})$ and $\overline{f}_{ij}(\overline{A}_{ij})$, $j=1,\dots,d$, are distinct and mark different occurrences of the letters $b_j^*$ and $b_j^{*-1}$ in the relator $a_i^*$. Therefore \begin{equation*} \bigcup_{j=1}^d\big(f_{ij}(A_{ij})\cup\overline{f}_{ij}(\overline{A}_{ij})\big)=\{1,\dots,|a_i^*|\}. \end{equation*} In other words, $f_{ij}$ and $\overline{f}_{ij}$, $j=1,\dots,d$, induce a correspondence between all occurrences of the letter $a_i$ in the relators $b_j$ and the elements of $\{1,\dots,|a_i^*|\}$. Now the natural cyclic ordering of the elements of $\{1,\dots,|a_i^*|\}$ induces the desired cyclic ordering. \end{remark} \begin{remark} There might be more than one dual presentation for a given presentation, and they may present different groups. For the trivial presentation $T=\langle a|a\rangle$ of the trivial group, we have $T^*=T$. \end{remark}
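The following example may help fix the notation; it is included only as an illustration. \begin{example} Fix $p\geq 1$ and let $P=\langle a_1|a_1^p\rangle$, a balanced presentation of $\mathbb{Z}_p$. Here $A_{11}=\{1,\dots,p\}$ and $\overline{A}_{11}=\varnothing$, so the conditions of Definition \ref{def01} force the relator of a dual presentation to consist of $p$ letters $b_1^{*-1}$ and no letters $b_1^*$; that is, $P^*=\langle b_1^*|b_1^{*-p}\rangle$, with $\overline{A}^*_{11}=\{1,\dots,p\}$ and $A^*_{11}=\varnothing$. Any bijection $f_{11}:A_{11}\rightarrow\overline{A}^*_{11}$, together with the empty map $\overline{f}_{11}$, gives a family $\mathcal{F}$ making $(P,P^*)_\mathcal{F}$ a pair of dual presentations. \end{example}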
Andrews-Curtis transformations are defined on a presentation $P=\langle a_1,\dots,a_n|b_1,\dots,b_m\rangle$ of a group $G$ as follows: \begin{itemize} \item[1.] Replace $b_i$ with $b_ib_j$ for some $j\neq i$; \item[2.] Replace $b_i$ with $b_i^{-1}$; \item[3.] Replace $b_i$ with $b_igg^{-1}$, where $g$ is one of $a_j$ or its inverse; \end{itemize} Moreover, we allow the stabilization transformation: \begin{itemize} \item[4.] Add$/$remove $a_{n+1}$ as both a generator and a relator. \end{itemize} These four transformations are called the stable Andrews-Curtis transformations. It is clear that each stable Andrews-Curtis transformation on a presentation $P$ of the group $G$ gives another presentation for the group $G$. Let $(P,P^*)_\mathcal{F}$ be a pair of dual presentations as in Definition \ref{def01}. Corresponding to each transformation of types 1-4, we associate a dual transformation which acts on the dual presentation $P^*$ as follows: \begin{itemize} \item[1$^*$.] Replace all $b_j^*$s (resp. $b_j^{*-1}$s) in $a_k^*$, $k=1,\dots,d$, with $b_j^*b_i^*$ (resp. with $b_i^{*-1}b_j^{*-1}$), for $i\neq j$; \item[2$^*$.] Replace $b_i^*$ with $b_i^{*-1}$ in all the relators; \item[3$^*$.] Replace $a_j^*$ with $a_j^*b_i^{*}b_i^{*-1}$; \item[4$^*$.] Add$/$remove $b_{d+1}^*$ as both a generator and a relator. \end{itemize} Define the inverse of the third Andrews-Curtis transformation and its dual as follows: \begin{itemize} \item[$5$.] Replace a relator $b_i=b_i'gg^{-1}$ with $b_i'$, where $g$ is one of $a_j$ or its inverse; \item[5$^*$.] Remove $b_i^{*}b_i^{*-1}$ from the relator $a_j^*$; here, if $g=a_j$, the removed $b_i^*$ is the $\overline{f}_{ji}(|b_i|)^{th}$ letter and the removed $b_i^{*-1}$ is the $f_{ji}(|b_i|-1)^{th}$ letter in $a_j^*$ (the case $g=a_j^{-1}$ is similar). \end{itemize} \begin{remark} Although transformation $5$ is not listed among the stable Andrews-Curtis transformations, it is the inverse of transformation $3$. Note that, corresponding to transformation $5$, the dual transformation $5^{*}$ is not always possible, since the letters $b_i^{*}$ and $b_i^{*-1}$ need not be adjacent in $a_j^*$. \end{remark} For the dual pair of presentations $(P,P^*)_\mathcal{F}$, we always assume that Andrews-Curtis moves come in pairs. This makes the Andrews-Curtis moves for dual pairs more restricted than the classical Andrews-Curtis moves. Nevertheless, we will see later that there are detours around this extra restriction. \begin{definition}\label{def02} An AC-move for the pair $(P,P^*)_\mathcal{F}$ is one of the transformations 1-5 along with its corresponding dual transformation. \end{definition}
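To illustrate Definition \ref{def02}, consider again the pair $(P,P^*)_\mathcal{F}$ with $P=\langle a_1|a_1^p\rangle$ and $P^*=\langle b_1^*|b_1^{*-p}\rangle$ from the example above (again, this is only an illustration). The AC-move consisting of transformation $2$ and its dual $2^*$ replaces $b_1=a_1^p$ with $b_1^{-1}=a_1^{-p}$ and replaces $b_1^*$ with $b_1^{*-1}$ in the relator, turning $a_1^*=b_1^{*-p}$ into $b_1^{*p}$. The resulting pair $P_1=\langle a_1|a_1^{-p}\rangle$ and $P_1^*=\langle b_1^*|b_1^{*p}\rangle$ is again dual: now $\overline{A}'_{11}=\{1,\dots,p\}$ corresponds to ${A^{*}_{11}}'=\{1,\dots,p\}$ under $\overline{f}'_{11}$, in agreement with the index formulas for AC-2 given in the proof of Lemma \ref{L3-1} in Section \ref{s4}.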
\section{Dual presentations and the associated Heegaard diagram}\label{s3} The following example describes a pair of dual presentations $P$ and $P^*$ for the fundamental group of a closed oriented three manifold. \begin{example}\label{ex1} Let $Y$ be a closed oriented three manifold and $$\mathcal{H}=(\Sigma,\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_g\},\boldsymbol{\beta}=\{\beta_1,\dots,\beta_g\})$$ be a Heegaard diagram for $Y$. This diagram gives a balanced presentation $$P=\langle a_1,\dots,a_g|b_1,\dots,b_g\rangle$$ for $\pi_1(Y)$ as follows. Fix an orientation for each one of the curves $\alpha_1,\dots,\alpha_g$ and $\beta_1,\dots,\beta_g$. Let $\alpha_1^*,\dots,\alpha_g^*$ denote oriented simple closed curves in $\Sigma$ based at the point $p\in\Sigma-\boldsymbol{\alpha}-\boldsymbol{\beta}$ such that each $\alpha_i^*$ positively intersects $\alpha_i$ in one point and stays disjoint from the rest of the $\alpha$ curves. We call $\alpha_i^*$ a \emph{dual curve} for $\alpha_i$, see Figure \ref{fig:dual}. \begin{figure}[h] \def\svgwidth{8cm} \begin{center} \input{dual.pdf_tex} \caption{$\alpha_i^*$ is a dual curve for $\alpha_i$} \label{fig:dual} \end{center} \end{figure} Attach 2-handles $D_1,\dots,D_g$ to $\Sigma\times[0,1]$ along the curves $\alpha_i\times\{0\}$, and then attach a $3$-ball to the resulting two-sphere boundary. This results in a handlebody $H_1$. We have $\pi_1(H_1)\cong\langle a_1,\dots,a_g\rangle$ where $a_i$ is the homotopy class of $\alpha_i^*$ in $H_1$. Consider a small cylindrical neighborhood $N_j$ of each $\beta_j$ in $\Sigma$ and let $p_j$ be a point on $\beta_j-\alpha_1-\dots-\alpha_g$. Now start from $p_j$ on $\beta_j$ and traverse $\beta_j$ in its direction. In this path, let $\alpha_{k_i}$ be the $i^{th}$ $\alpha$ curve in the neighborhood $N_j$ which intersects $\beta_j$. If the intersection number of $\alpha_{k_i}$ with $\beta_j$ is $\circ_i$, for $i=1,\dots,n$, we obtain a word $b_j=a_{k_1}^{\circ_1}a_{k_2}^{\circ_2}\dots a_{k_n}^{\circ_n}$ with $\circ_1,\circ_2,\dots,\circ_n\in\{\pm1\}$ (see Figure \ref{fig:curve}). This is called the \textit{relator associated with the curve $\beta_j$}. \begin{figure}[h] \def\svgwidth{12cm} \begin{center} \input{curve.pdf_tex} \caption{Neighborhood of $\beta_j$ and its associated relator $b_j$.} \label{fig:curve} \end{center} \end{figure} Consider a corresponding curve $\tilde{\beta}_{j}={\alpha^{*}_{k_{1}}}^{\circ_{1}}{\alpha^*_{k_{2}}}^{\circ_{2}}\dots{\alpha^*_{k_{n}}}^{\circ_{n}}$, where $\alpha_i^{*-1}$ denotes the curve $\alpha_i^*$ with the reverse orientation. This curve is homotopic to $\beta_j$ in $H_{1}$. Therefore, the homotopy class of $\beta_{j}$ in $\pi_{1}(H_{1})$ is $b_{j}$. Attach 2-handles $\widetilde{D}_1,\dots,\widetilde{D}_g$ to $H_1$ along the curves $\beta_j\times\{1\}$ (curves in the boundary $\Sigma\times\{1\}$ of $H_1$) and denote the resulting space by $\widetilde{H}_1$. By the van Kampen theorem, we have \begin{equation*} \pi_1(\widetilde{H}_1)\cong\langle a_1,\dots,a_g\rangle/N \end{equation*} where $N$ is the normal subgroup of $\pi_1(H_1)$ generated by $\{b_1,\dots,b_g\}$. Therefore, we have $\pi_1(\widetilde{H}_1)\cong\langle a_1,\dots,a_g|b_1,\dots,b_g\rangle$. $\widetilde{H}_1$ embeds in $Y$ and its complement in $Y$ is an open three-ball. Again by the van Kampen theorem, we have \begin{equation*} \pi_1(Y)\cong\pi_1(\widetilde{H}_1)\cong\langle a_1,\dots,a_g|b_1,\dots,b_g\rangle. \end{equation*} If we use the dual Heegaard diagram for $Y$, i.e. $$\mathcal{H}^*=(\Sigma,\boldsymbol{\beta}=\{\beta_1,\dots,\beta_g\},\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_g\}),$$ another presentation for $\pi_1(Y)$ is obtained, which is denoted by: $$P^*=\langle b_1^*,\dots,b_g^*|a_1^*,\dots,a_g^*\rangle.$$ Here the generators $b_i^*$ are in correspondence with dual curves for $\beta_i$ and the relators $a_j^*$ are obtained from the $\alpha_j$ curves by the same method as above, after fixing a point $q_j$ on $\alpha_j-\beta_1-\dots-\beta_g$. With the notation of Definition \ref{def01}, we define $f_{ij}:A_{ij}\rightarrow\overline{A}_{ji}^*$ and $\overline{f}_{ij}:\overline{A}_{ij}\rightarrow A_{ji}^*$, $1\leq i,j\leq g$, as follows. Let $a_i$ be the $k^{th}$ letter in $b_j$. This means that if we start from $p_j$ on $\beta_j$ and traverse $\beta_j$ in its direction, $\alpha_i$ is the $k^{th}$ $\alpha$ curve which intersects $\beta_j$, and the intersection number of $\alpha_i$ with $\beta_j$ is $+1$. Now start from $q_i$ on $\alpha_i$ and traverse $\alpha_i$ in its direction. Let this intersection of $\beta_j$ with $\alpha_i$, which corresponds to a letter $b_j^{*-1}$, be the $l^{th}$ letter in $a_i^*$. We define $f_{ij}(k)=l$. $\overline{f}_{ij}$ is defined similarly. If we set $\mathcal{F}=\{f_{ij},\overline{f}_{ij}\}_{i,j}$, then $(P,P^*)_\mathcal{F}$ is a pair of dual presentations.
\end{example} We call $P$ and $P^*$ a pair of dual presentations associated with the diagram $\mathcal{H}$. In Example \ref{ex1}, we may use a Heegaard diagram $\mathcal{H}$ which does not correspond to a three manifold. In fact, the method of this example can be used to assign such a dual pair $(P,P^*)$ to any Heegaard diagram. The following proposition gives a semi-inverse construction. \begin{proposition}\label{P01} Let $(P,P^*)_\mathcal{F}$ be a pair of dual presentations (with the notation of Definition \ref{def01}). There is a unique associated Heegaard diagram $\mathcal{H}_{(P,P^*)_\mathcal{F}}=(\Sigma, \boldsymbol{\alpha},\boldsymbol{\beta})$ such that all regions in $\Sigma-\boldsymbol{\alpha}-\boldsymbol{\beta}$ are polygons and its associated pair of dual presentations is $(P,P^*)_\mathcal{F}$. \end{proposition} \begin{proof} Let $\beta_j$ denote an oriented circle with $|b_j|$ marked points on it which are numbered $1,\dots,|b_j|$. Similarly, let $\alpha_i$ denote an oriented circle with $|a_i^*|$ marked points on it which are numbered $1,\dots,|a_i^*|$. $\mathcal{F}$ gives an identification of marked points on $\alpha_i$ with marked points on $\beta_j$. Construct a 4-regular graph from the circles $\alpha_i$ and $\beta_j$ using the correspondence $\mathcal{F}$ by identifying these marked points. The vertices are then the intersection points of $\alpha$ curves with $\beta$ curves. By an $\alpha$-edge, we mean an edge of the graph which is part of an $\alpha$ curve. Similarly, a $\beta$-edge is an edge of the graph which is part of a $\beta$ curve. Let $Q=A_1B_1A_2B_2\dots A_nB_nA_{n+1}$ be a sequence of $\alpha$-edges and $\beta$-edges such that $A_i$ is adjacent to $B_i$ and $B_i$ is adjacent to $A_{i+1}$, for $1\leq i\leq n$, with $A_{n+1}=A_1$. Let $A_i\cap B_i=\{v_i\}$ and $B_i\cap A_{i+1}=\{w_i\}$, for $i=1,\dots,n$. Let $A_i$ be part of $\alpha_{j_i}$ and $B_i$ be part of $\beta_{k_i}$. Set an orientation on the edge $A_{i}$ from $w_{i-1}$ to $v_{i}$, $i=1,\dots,n$, with $w_0=w_{n}$. If this orientation is the same as the orientation of $\alpha_{j_i}$, set $\epsilon_{A_i}=1$. Otherwise set $\epsilon_{A_i}=-1$. Also, set an orientation on the edge $B_i$ from $v_i$ to $w_i$, $i=1,\dots,n$. If this orientation is the same as the orientation of $\beta_{k_i}$, set $\epsilon_{B_i}=1$. Otherwise set $\epsilon_{B_i}=-1$. Corresponding to $v_i$, there is a term $a_{j_i}^{\circ_i}$ in the relator $b_{k_i}$. Define $\epsilon_{v_i}=\epsilon_{A_i}\epsilon_{B_i}\circ_{i}$. Also, corresponding to $w_i$, there is a term $a_{j_{i+1}}^{\circ'_i}$ in the relator $b_{k_i}$. Define $\epsilon_{w_i}=-\epsilon_{A_{i+1}}\epsilon_{B_i}\circ'_{i}$. We say $Q$ is a \emph{good sequence} if $\epsilon_{v_i}=\epsilon_{w_i}=\epsilon_{v_{i+1}}$, $i=1,\dots,n$, with $v_{n+1}=v_1$. Each good sequence $Q$ determines an oriented polygon. Each pair of successive letters in $a_k^*$ corresponds to an $\alpha$-edge which appears in two oriented polygons, and the corresponding polygons may be glued along these edges. Similarly, polygons may be glued along $\beta$-edges. These gluings of polygons give the surface $\Sigma$. \end{proof}
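As an illustration of this construction (we only sketch the verification, and only for suitable choices of $\mathcal{F}$), consider the dual pair $P=\langle a_1|a_1^p\rangle$, $P^*=\langle b_1^*|b_1^{*-p}\rangle$ from Section \ref{s2}. The graph consists of one $\alpha$ circle and one $\beta$ circle, each carrying $p$ marked points, glued at $p$ vertices, so $V=p$ and $E=2p$. For suitable bijections $f_{11}$, every good sequence is a rectangle and there are $F=p$ of them; then $\chi(\Sigma)=V-E+F=0$, the surface $\Sigma$ is a torus, and $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ is the standard genus-one Heegaard diagram of a lens space. Which lens space occurs, and whether all regions are rectangles at all, depends on the choice of $\mathcal{F}$.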
The following corollary is immediate from the above construction. \begin{corollary}\label{C02} If $(P,P^*)_\mathcal{F}$ is a dual pair of presentations associated with a Heegaard diagram $\mathcal{H}=(\Sigma,\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_d\},\boldsymbol{\beta}=\{\beta_1,\dots,\beta_d\})$ for which all regions of $\Sigma-\boldsymbol{\alpha}-\boldsymbol{\beta}$ are polygons, then we have $\mathcal{H}=\mathcal{H}_{(P,P^*)_\mathcal{F}}$. \end{corollary} \section{AC-moves on dual pairs of presentations}\label{s4} Let $(P_0,P_0^*)_\mathcal{F}$ be a pair of dual presentations as in Definition \ref{def01}. \begin{lemma}\label{L3-1} If an AC-move acts on the pair $(P_0,P_0^*)_\mathcal{F}$, it results in a pair $(P_1,P_1^*)_{\mathcal{F}_1}$ of dual presentations where $P_0$ and $P_1$ (resp. $P_0^*$ and $P_1^*$) are presentations of the same group. \end{lemma} \begin{proof} Let AC-$n$ denote the $n^{th}$ AC-move. First, assume that $n\neq4$. Let $$P_1=\langle a_1,\dots,a_d|b_1',\dots,b_d'\rangle,\ \ P_1^*=\langle b_1^*,\dots,b_d^*|{a_1^{*}}',\dots,{a_d^{*}}'\rangle.$$ For each AC-$n$, there is a family of correspondences, denoted by $\mathcal{F}_1=\{{f_{ij}}',{\overline{f}_{ij}}'\}_{i,j}$, ${f_{ij}}':{A}'_{ij}\rightarrow {\overline{A}_{ji}^{*}}'$ and ${\overline{f}_{ij}}':\overline{A}'_{ij}\rightarrow {A_{ji}^{*}}'$ (the sets $A'_{ij}$, ${\overline{A}_{ji}^{*}}'$, $\overline{A}'_{ij}$, and ${A_{ji}^{*}}'$ are defined for each AC-$n$ below), which is induced from the correspondences given by $\mathcal{F}$ in an obvious way, such that $P_1$ and $P_1^*$ are dual presentations with the correspondences in $\mathcal{F}_1$. We will thus leave the identification of the correspondence to the reader. For AC-1, let $b_i'=b_ib_j$ and $b_k'=b_k$ for $k\neq i$. Let ${a_l^*}'$ be obtained from $a_l^*$ by replacing all $b_j^*$s (resp. $b_j^{*-1}$s) with $b_j^*b_i^*$ (resp. with $b_i^{*-1}b_j^{*-1}$). If we substitute the generator $b_j^*$ in $P_1^*$ with ${b_j^*}'=b_j^*b_i^*$, then $P_1^*$ is changed to $P_0^*$. Let $A_{jl}^*\cup\overline{A}_{jl}^*=\{k_1,\dots,k_m|k_v< k_{v+1},v=1,\dots,m-1\}$, $k_0=0$, and $A+v=\{k+v|k\in A\}$. Set \begin{flalign*} &A_{li}'=A_{li}\cup (A_{lj}+|b_i|),&&\\ &{\overline{A}_{il}^*}'=\{k+v|k_v< k\leq k_{v+1},v=0,\dots,m-1,k\in\overline{A}_{il}^*\}\cup\{k_v+v-1|k_v\in\overline{A}_{jl}^*\}, \end{flalign*} \begin{flalign*} &\overline{A}_{li}'=\overline{A}_{li}\cup (\overline{A}_{lj}+|b_i|),&&\\ &{A_{il}^*}'=\{k+v|k_v< k\leq k_{v+1},v=0,\dots,m-1,k\in A_{il}^*\}\cup\{k_v+v|k_v\in A_{jl}^*\}, \end{flalign*} \begin{flalign*} &A_{lj}'=A_{lj},&&\overline{A}_{lj}'=\overline{A}_{lj},&&&\\ &{\overline{A}_{jl}^*}'=\{k_v+v|k_v\in\overline{A}_{jl}^*\},&&{A_{jl}^*}'=\{k_v+v-1|k_v\in A_{jl}^*\},&&&\\\\ &A_{lt}'=A_{lt},\ t\neq i,j,&&\overline{A}_{lt}'=\overline{A}_{lt},\ t\neq i,j,&&&\\ &{\overline{A}_{tl}^*}'=\{k+v|k_v< k\leq k_{v+1},\ k\in\overline{A}_{tl}^*\},&&{A_{tl}^*}'=\{k+v|k_v< k\leq k_{v+1},\ k\in A_{tl}^*\},\ t\neq i,j. \end{flalign*} For AC-2, let $b_i'=b_i^{-1}$ and $b_j'=b_j$ for $j\neq i$. Let ${a_l^{*}}'$ be obtained from $a_l^*$ by replacing all $b_i^*$s (resp. $b_i^{*-1}$s) with $b_i^{*-1}$ (resp. with $b_i^{*}$). If we substitute the generator $b_i^*$ in $P_1^*$ with ${b_i^*}'=b_i^{*-1}$, then $P_1^*$ is changed to $P_0^*$.
Set \begin{flalign*} &A_{li}'=|b_i|-\overline{A}_{li}+1,\ \ \ \ \ \ \ \ &&\overline{A}_{li}'=|b_i|-A_{li}+1,&&&\\ &{\overline{A}_{il}^*}'=A_{il}^*,&&{A_{il}^*}'=\overline{A}_{il}^*,&&&\\\\ &A_{lj}'=A_{lj},\ j\neq i,&&\overline{A}_{lj}'=\overline{A}_{lj},\ j\neq i,&&&\\ &{\overline{A}_{jl}^*}'=\overline{A}_{jl}^*,\ j\neq i,&&{A_{jl}^*}'=A_{jl}^*,\ j\neq i. \end{flalign*} For AC-3, let $g=a_j$ (the case for $g=a_j^{-1}$ is similar), ${b_i}'=b_ia_ja_j^{-1}$, and $b_k'=b_k$ for $k\neq i$. Let ${a_j^{*}}'=a_j^*b_i^{*}b_i^{*-1}$ and ${a_k^*}'=a_k^*$ for $k\neq j$. Set \begin{flalign*} & A_{ji}'=A_{ji}\cup\{|b_i|+1\},&&\overline{A}_{ji}'=\overline{A}_{ji}\cup\{|b_i|+2\},&&&\\ &{\overline{A}_{ij}^*}'=\overline{A}_{ij}^*\cup\{|a_j^*|+2\},&&{A_{ij}^*}'=A_{ij}^*\cup\{|a_j^*|+1\},\\\\ &A_{kl}'=A_{kl},\ k\neq j\ \text{or}\ l\neq i,&&\overline{A}_{kl}'=\overline{A}_{kl},\ k\neq j\ \text{or}\ l\neq i,\\ &{\overline{A}_{lk}^*}'=\overline{A}_{lk}^*,\ k\neq j\ \text{or}\ l\neq i,&&{A_{lk}^*}'=A_{lk}^*,\ k\neq j\ \text{or}\ l\neq i. \end{flalign*} For AC-5, let $g=a_j$ (the case for $g=a_j^{-1}$ is similar), $b_i'$ be the relator obtained from $b_i=b_i'a_ja_j^{-1}$ by removing $a_ja_j^{-1}$, and $b_k'=b_k$ for $k\neq i$. Let ${a_j^{*}}'$ be the relator obtained from $a_j^*$ by removing $b_i^{*}b_i^{*-1}$, and ${a_k^*}'=a_k^*$ for $k\neq j$. Let $A_{ij}^*=\{k_1,\dots,k_m\}$, where the removed $b_i^*$ is the $k_s^{th}$ letter in $a_j^*$, and $\overline{A}_{ij}^*=\{l_1,\dots,l_n\}$, where the removed $b_i^{*-1}$ is the $l_t^{th}$ letter in $a_j^*$ (note that $k_s=\overline{f}_{ji}(|b_i|)$ and $l_t=f_{ji}(|b_i|-1)$ in AC-5). Set \begin{flalign*} &A_{ji}'=A_{ji}-\{|b_i|-1\},&&\overline{A}_{ji}'=\overline{A}_{ji}-\{|b_i|\},&&&\\ &{\overline{A}_{ij}^*}'=\{l_1,\dots,l_{t-1},l_{t+1}-2,\dots,l_n-2\},&&{A_{ij}^*}'=\{k_1,\dots,k_{s-1},k_{s+1}-2,\dots,k_m-2\}, \end{flalign*} \begin{flalign*} &A_{kl}'=A_{kl},\ k\neq j\ \text{or}\ l\neq i,&&\overline{A}_{kl}'=\overline{A}_{kl},\ k\neq j\ \text{or}\ l\neq i,&&&\\ &{\overline{A}_{lk}^*}'=\overline{A}_{lk}^*,\ k\neq j\ \text{or}\ l\neq i,&&{A_{lk}^*}'=A_{lk}^*,\ k\neq j\ \text{or}\ l\neq i. \end{flalign*} For AC-4, let $a_{d+1}$ be added as both a generator and a relator (the removal of a generator and a relator is similar). Let $$P_1=\langle a_1,\dots,a_d,a_{d+1}|b_1,\dots,b_d,a_{d+1}\rangle,\ \ P_1^*=\langle b_1^*,\dots,b_d^*,b_{d+1}^*|a_1^{*},\dots,a_d^{*},b_{d+1}^*\rangle.$$ Set \begin{flalign*} &{A_{d+1,d+1}}'=\{1\},&&\\ &{\overline{A}_{d+1,d+1}^*}'=\{1\}, \end{flalign*} \begin{flalign*} &{A_{kl}}'=\varnothing,\ k\neq d+1,\ l=d+1\ \text{or}\ k=d+1,\ l\neq d+1,&&\\ &{\overline{A}_{lk}^*}'=\varnothing,\ k\neq d+1,\ l=d+1\ \text{or}\ k=d+1,\ l\neq d+1, \end{flalign*} \begin{flalign*} &{\overline{A}_{kl}}'=\varnothing,\ k\neq d+1,\ l=d+1\ \text{or}\ k=d+1,\ l\neq d+1,&&\\ &{A_{lk}^*}'=\varnothing,\ k\neq d+1,\ l=d+1\ \text{or}\ k=d+1,\ l\neq d+1, \end{flalign*} \begin{flalign*} &A_{kl}'=A_{kl},\ k,l\neq d+1,&&\overline{A}_{kl}'=\overline{A}_{kl},\ k,l\neq d+1,&&&\\ &{\overline{A}_{lk}^*}'=\overline{A}_{lk}^*,\ k,l\neq d+1,&&{A_{lk}^*}'=A_{lk}^*,\ k,l\neq d+1. \end{flalign*} For each AC-$n$ move, it is clear that $P_0$ and $P_1$ (and likewise $P_0^*$ and $P_1^*$) are presentations of the same group. \end{proof} \begin{lemma}\label{L3} If an AC-move acts on the pair $(P,P^*)_\mathcal{F}$ and gives the pair $(P_1,P_1^*)_{\mathcal{F}_1}$, then $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$ is obtained from $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ by one of the following changes: \begin{itemize} \item[$\bullet$]A Heegaard move (i.e.
a handleslide or an isotopy); \item[$\bullet$]Attaching a 1-handle to the Heegaard surface plus a Heegaard move; \item[$\bullet$]Changing the orientation of a $\beta$ curve; \item[$\bullet$]Adding$/$removing a component which is a standard genus one Heegaard diagram for the three sphere; \item[$\bullet$]A Heegaard move plus removing a 1-handle from the Heegaard surface. \end{itemize} \end{lemma} \begin{proof} Let $\mathcal{H}_{(P,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_d\},\boldsymbol{\beta}=\{\beta_1,\dots,\beta_d\})$. We discuss each AC-move separately. \begin{itemize} \begin{figure}[h] \def\svgwidth{13cm} \input{handleslide.pdf_tex} \caption{ For the first AC-move, $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$ is obtained from $\mathcal{H}_{(P,P^*)_{\mathcal{F}}}$ by a handleslide (Part $\mathsf{A}$) or attaching a 1-handle plus a handleslide (Parts $\mathsf{B}$, $\mathsf{C}$, and $\mathsf{D}$). Each 1-handle is considered as $S^1\times[0,1]$ whose boundaries are identified, via orientation preserving homeomorphisms, with the two circles in Parts $\mathsf{B}$, $\mathsf{C}$, or $\mathsf{D}$. } \label{fig:handleslide} \end{figure} \item[1.] Let $p_i\in\beta_i-\bigcup_{k=1}^d\alpha_k$ be such that if we start from $p_i$ and write the associated relator for $\beta_i$, it results in $b_i$, for $i=1,\dots,d$. Let us first assume that $p_i$ and $p_j$ are on the edges of one polygon $\mathsf{P}$ and let $\mathsf{P}'$ be a polygon adjacent to $\mathsf{P}$ via the edge containing $p_j$. Depending on the orientations of $\beta_i$ and $\beta_j$, either slide $\beta_i$ over $\beta_j$ through $\mathsf{P}$ (see Figure \ref{fig:handleslide}-$\mathsf{A}$) or connect $\mathsf{P}$ and $\mathsf{P}'$ by a 1-handle and slide $\beta_i$ over $\beta_j$ through the handle (see Figure \ref{fig:handleslide}-$\mathsf{B}$). Alternatively, if $p_i$ and $p_j$ are on the edges of two distinct polygons $\mathsf{P}_1$ and $\mathsf{P}_2$, respectively, let $\mathsf{P}_2'$ be a polygon adjacent to $\mathsf{P}_2$ via the edge containing $p_j$. Depending on the orientations of $\beta_i$ and $\beta_j$, either connect $\mathsf{P}_1$ and $\mathsf{P}_2$ (see Figure \ref{fig:handleslide}-$\mathsf{C}$) or $\mathsf{P}_1$ and $\mathsf{P}_2'$ (see Figure \ref{fig:handleslide}-$\mathsf{D}$) by a 1-handle and slide $\beta_i$ over $\beta_j$ through the handle. In both cases, slide $\beta_i$, using an edge containing $p_i$, over an edge of $\beta_j$ which contains $p_j$. This gives a diagram $\mathcal{H}$ for which all regions are polygons and whose associated pair of dual presentations is $(P_1,P_1^*)_{\mathcal{F}_1}$. Therefore, Corollary \ref{C02} implies that $\mathcal{H}=\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$. \item[2.] For the second move, simply change the orientation of $\beta_i$ to obtain $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$. \begin{figure}[h] \def\svgwidth{13cm} \begin{center} \input{isotopy.pdf_tex} \caption{ For the third AC-move, $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$ is obtained from $\mathcal{H}_{(P,P^*)_{\mathcal{F}}}$ by an isotopy (Part $\mathsf{A}$) or attaching a 1-handle plus an isotopy (Parts $\mathsf{B}$, $\mathsf{C}$, and $\mathsf{D}$). } \label{fig:isotopy} \end{center} \end{figure} \item[3.] Choose $p_i$ as in part $1$, and let $\mathsf{P}$, $\mathsf{P}'$, $\mathsf{P}_1$, $\mathsf{P}_2$, $\mathsf{P}_2'$ be as before.
Depending on the orientations of $\beta_i$ and $\alpha_j$, either isotope $\beta_i$ over $\alpha_j$ through $\mathsf{P}$ (see Figure \ref{fig:isotopy}-$\mathsf{A}$), or connect $\mathsf{P}$ and $\mathsf{P}'$ by a 1-handle and isotope $\beta_i$ over $\alpha_j$ through the handle (see Figure \ref{fig:isotopy}-$\mathsf{B}$), or connect $\mathsf{P}_1$ and $\mathsf{P}_2$ (see Figure \ref{fig:isotopy}-$\mathsf{C}$) or $\mathsf{P}_1$ and $\mathsf{P}_2'$ (see Figure \ref{fig:isotopy}-$\mathsf{D}$) by a 1-handle and isotope $\beta_i$ over $\alpha_j$ through the handle. In all cases, isotope the edge containing $p_i$ over the edge containing $p_j$. This gives a diagram $\mathcal{H}$ for which all regions are polygons and whose associated pair of dual presentations is $(P_1,P_1^*)_{\mathcal{F}_1}$. Therefore Corollary \ref{C02} implies that $\mathcal{H}=\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$. \item[4.] The fourth move is equivalent to adding or removing a component, which is the standard genus one Heegaard diagram for the three sphere, to or from $\mathcal{H}_{(P,P^*)_{\mathcal{F}}}$. Note that the Heegaard surface may be disconnected. The resulting diagram is $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$. \begin{figure}[h] \def\svgwidth{14cm} \begin{center} \input{removehandle.pdf_tex} \caption{The fifth AC-move is equivalent to an isotopy through a bigon (middle figure) or an isotopy through a bigon plus removing a 1-handle (figure on the right). } \label{fig:removehandle} \end{center} \end{figure} \item[5.] Consider the fifth AC-move, which removes $a_ja_j^{-1}$ from the relator $b_i$ and removes $b_i^{*}b_i^{*-1}$ from the relator $a_j^*$. This means that there is a bigon in $\mathcal{H}_{(P,P^*)_{\mathcal{F}}}$ such that its $\beta$-edge is determined by $b_i^{*-1}b_i^*$ and its $\alpha$-edge is determined by $a_ja_j^{-1}$. Let $\mathsf{P}_1$ and $\mathsf{P}_2$ denote the polygons in $\mathcal{H}_{(P,P^*)_{\mathcal{F}}}$ which have vertices but no edges in common with the bigon (see Figure \ref{fig:removehandle}). After an isotopy through the bigon, the two intersections between $\beta_i$ and $\alpha_j$ disappear. If $\mathsf{P}_1$ and $\mathsf{P}_2$ are two distinct polygons (see Figure \ref{fig:removehandle} in the middle), denote the resulting diagram by $\mathcal{H}$. If $\mathsf{P}_1$ and $\mathsf{P}_2$ are the same polygon (see Figure \ref{fig:removehandle} on the right), then the isotopy changes $\mathsf{P}_1$ and $\mathsf{P}_2$ to a cylinder which may be thought of as a 1-handle. Remove this 1-handle from the diagram and denote the resulting diagram by $\mathcal{H}$. All regions of $\mathcal{H}$ are polygons and its associated pair of dual presentations is $(P_1,P_1^*)_{\mathcal{F}_1}$. Therefore Corollary \ref{C02} implies that $\mathcal{H}=\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$. \end{itemize} \end{proof} \section{Heegaard Floer homology}\label{s5} We restrict our attention to presentations $P$ of a group $G$ which admit a dual presentation $P^*$ with a correspondence $\mathcal{F}$ such that the Heegaard diagram $$\mathcal{H}_{(P,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta})$$ has the following properties. \begin{itemize} \item[(A)] For each component $\Sigma_i$ of $\Sigma$, $\Sigma_i-\boldsymbol{\alpha}$ and $\Sigma_i-\boldsymbol{\beta}$ are connected; \item[(B)] The diagram has a set of \textit{completing curves} $\boldsymbol{\alpha^c}$, as defined in Definition \ref{def03} below.
\end{itemize} From Lemma \ref{L3}, the first property is preserved under AC-moves on the pair $(P,P^*)_\mathcal{F}$. Assuming Claim \ref{claim3}, we show (via Lemma \ref{L6}) that the second property is also preserved under AC-moves. \subsection{Completing curves} Let $P=\langle a_1,\dots,a_d|b_1,\dots,b_d\rangle$ and $P^*=\langle b_1^*,\dots,b_d^*|a_1^*,\dots,a_d^*\rangle$ be a pair of dual presentations for $G$ with a family of correspondences given by $\mathcal{F}$ and $$\mathcal{H}_{(P,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_d\},\boldsymbol{\beta}=\{\beta_1,\dots,\beta_d\}).$$ \begin{definition}\label{def03} Let $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ be as above. A set of marked, oriented, disjoint, simple closed curves $\boldsymbol{\alpha^c}=\{\alpha_{d+1},\dots,\alpha_g\}$ in $\Sigma$ is called a set of \emph{completing curves} for $\Sigma$ (or for $\mathcal{H}_{(P,P^*)_\mathcal{F}}$) if \begin{itemize} \item[1.] $\boldsymbol{\alpha}\cap\boldsymbol{\alpha^c}=\varnothing$; \item[2.] For each component $\Sigma_i$ of $\Sigma$, $\Sigma_i-\{\alpha_1,\dots,\alpha_g\}$ is a punctured sphere; \item[3.] The relators associated with $\alpha_i$, $i=d+1,\dots,g$, are trivial in $\langle b_1^*,\dots,b_d^*|a_1^*,\dots,a_d^*\rangle$. \end{itemize} \end{definition} We assume that, for each $i=d+1,\dots,g$, there is an arc, denoted by $\beta_i$, which intersects $\alpha_i$ in a single point, namely the marked point on $\alpha_i$, and is disjoint from all other $\alpha$ and $\beta$ curves. We denote the set of these $\beta$ arcs by $\boldsymbol{\beta^a}$.
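For instance (a simple illustration, not needed in the sequel), if $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ is a genus-one lens space diagram with $d=1$, then $\Sigma$ is a torus and $\Sigma-\alpha_1$ is an annulus, i.e. a twice-punctured sphere; hence conditions 1-3 hold with $\boldsymbol{\alpha^c}=\varnothing$, and no $\beta$ arcs are required. In general, $g-d$ completing curves are needed, and condition 3 is the essential constraint: the words associated with the candidate curves must be trivial in the group presented by $P^*$.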
\subsection{Heegaard Floer homology groups for diagrams with $\beta$-arcs} Let $\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}=\{\alpha_1,\dots,\alpha_g\},\boldsymbol{\beta}\cup\boldsymbol{\beta^a}=\{\beta_1,\dots,\beta_g\},\mathbf{z})$, where the marked points $\mathbf{z}$ are in $\Sigma-\alpha_1-\dots-\alpha_g-\beta_1-\dots-\beta_g$ and each component of $\Sigma$ contains exactly one marked point. The following proposition is similar to \cite[Proposition 7.1]{OS}. \begin{proposition}\label{P3} Two Heegaard diagrams \begin{displaymath} (\Sigma,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}=\{\alpha_1,\dots,\alpha_g\},\boldsymbol{\beta}\cup\boldsymbol{\beta^a}=\{\beta_1,\dots,\beta_g\},\mathbf{z}) \end{displaymath} \begin{displaymath} (\Sigma,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}=\{\alpha_1,\dots,\alpha_g\},\boldsymbol{\beta}\cup\boldsymbol{\beta^a}=\{\beta_1,\dots,\beta_g\},\mathbf{w}) \end{displaymath} completing $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ with different choices of the set of marked points are related by a finite sequence of pointed Heegaard moves (i.e. Heegaard moves supported in the complement of the marked points). \end{proposition} Let $\mathbb{T}_\alpha=\alpha_1\times\dots\times\alpha_g$ and $\mathbb{T}_\beta=\beta_1\times\dots\times\beta_g$ denote the subspaces of the symmetric product $Sym^g(\Sigma)$, where $\mathbb{T}_\alpha$ is a torus and $\mathbb{T}_\beta$ is an open subset of a torus. For $\mathbf{x},\mathbf{y}\in\mathbb{T}_\alpha\cap\mathbb{T}_\beta$, let $\pi_2(\mathbf{x},\mathbf{y})$ denote the set of homotopy classes of Whitney disks connecting $\mathbf{x}$ and $\mathbf{y}$. Let $J(T)$ denote a generic path of complex structures on $\Sigma$ such that each boundary component of a fixed neighborhood $\mathcal{N}_i$ of each $\alpha_i$, $i=d+1,\dots,g$, is pinched to a point as $T$ goes to infinity. Let $\widehat{\mathcal{M}}_{J(T)}(\phi)$ denote the moduli space of $J(T)$-holomorphic representatives of the Whitney class $\phi$ (modulo the action of $\mathbb{R}$). \begin{theorem}\label{Thm1} Let $J(T_k)$ denote a path of almost complex structures associated with $\Sigma$ as above, with $T_k\rightarrow\infty$ as $k\rightarrow\infty$. For every $\phi\in\pi_2(\mathbf{x},\mathbf{y})$ with $n_z(\phi)=0$, the number of solutions in $\widehat{\mathcal{M}}_{J(T_k)}(\phi)$ (counted with sign) becomes stable for sufficiently large values of $k$. \end{theorem} To prove this theorem, we use the cylindrical reformulation of Heegaard Floer homology (cf. \cite{RL}). \begin{proof} We claim that there is some $N$ such that for any $k\geq N$, the part of the boundary of any holomorphic disk in $\mathcal{M}_{J(T_k)}(\phi)$ which uses $\beta_i$ does not leave the stretched neighborhood $\mathcal{N}_i$. If this is not true, then there is some $d+1\leq i_0\leq g$ and a sequence of holomorphic disks $u_k\in\mathcal{M}_{J(T_k)}(\phi)$ such that some part of the boundary of $u_k$ which uses $\beta_{i_0}$ leaves the neighborhood $\mathcal{N}_{i_0}$. Corresponding to each $u_k$, there is a surface $S_k$ (with boundary) which is constructed from $\mathcal{D}(u_k)$, the domain associated with $u_k$, and $u_k$ is a holomorphic map from $S_k$ to $\Sigma\times\mathbb{D}$. Here $\mathbb{D}$ is the unit disk and the almost complex structure on $\Sigma$ is $J(T_k)$. Let $u_k^\Sigma:S_k\longrightarrow\Sigma$ and $u_k^{\mathbb{D}}:S_k\longrightarrow\mathbb{D}$ denote the projections of this map to $\Sigma$ and $\mathbb{D}$, respectively. There are parts of the boundary components of $S_k$ which are mapped to $\alpha_i$ and $\beta_i$, $i=d+1,\dots, g$, by $u_k^\Sigma$. Since $\{u_k\}_k$ is a sequence of holomorphic curves with bounded energy, it has a weak limit. Let $\tilde{u}:S\longrightarrow\Sigma\times\mathbb{D}$ denote a component of this weak limit such that part of a boundary component of $S$ is mapped into $\beta_{i_0}\setminus\mathcal{N}_{i_0}$ by $\tilde{u}^\Sigma$, the projection of $\tilde{u}$ to $\Sigma$. Therefore, there is a boundary component of $S$ which is mapped to $\beta_{i_0}$ and no part of it is mapped to $\alpha_{i_0}$. Since $\beta_{i_0}$ only intersects $\alpha_{i_0}$, the whole boundary component of $S$ is mapped to $\beta_{i_0}$. If this boundary component is projected by $\tilde{u}^\mathbb{D}$ to the whole boundary of $\mathbb{D}$, then all the boundary components of $S$ are projected to $\beta$ curves and $\beta$ arcs by $\tilde{u}^\Sigma$. On the other hand, if this boundary component is projected by $\tilde{u}^\mathbb{D}$ to a single point (i.e. a point with negative real coordinate on the boundary of $\mathbb{D}$), then the maximum principle implies that $S$ is mapped to this single point by $\tilde{u}^\mathbb{D}$. Once again, this means that all the boundary components of $S$ are mapped to $\beta$ curves and $\beta$ arcs by $\tilde{u}^\Sigma$. In both cases, we conclude that $\mathcal{D}(\tilde{u})$ is a periodic domain which crosses the marked point. This is in contradiction with the assumption $n_z(\phi)=0$. \end{proof} The path $J(T)$, for $T$ sufficiently large so that the condition of Theorem \ref{Thm1} holds, will be called sufficiently pinched near $\alpha_{d+1},\alpha_{d+2},\dots,\alpha_g$.
Let $\widehat{CF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})$ be the free $\mathbb{Z}_2$-module generated by the $g$-tuples $\mathbf{x} = \{x_1,\dots,x_{g}\}\in\mathbb{T}_\alpha\cap\mathbb{T}_\beta$ such that $x_i$ is an intersection point of $\alpha_i$ with some $\beta_{\sigma(i)}$, where $\sigma$ is a permutation on $g$ letters. For sufficiently large values of $T_i$, as stated in Theorem \ref{Thm1}, let $$\partial_{J(T_i)}:\widehat{CF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})\rightarrow\widehat{CF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})$$ be the map defined by $$\partial_{J(T_i)}\mathbf{x}=\sum_{\mathbf{y}\in\mathbb{T}_\alpha\cap\mathbb{T}_\beta}\sum_{\{\phi\in\pi_2(\mathbf{x},\mathbf{y})|\mu(\phi)=1,\ n_z(\phi)=0\}}\#\widehat{\mathcal{M}}_{J(T_i)}(\phi)\mathbf{y}.$$ A small modification of standard arguments in Heegaard Floer theory implies that $$(\widehat{CF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}}),\partial_{J(T_i)})$$ is a chain complex. We define the Floer homology group $\widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})$ to be the homology group associated with the chain complex $(\widehat{CF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}}),\partial_{J(T_i)})$. Similar to \cite[Theorem 7.3 and Theorem 9.5]{OS}, we can prove the following proposition. \begin{proposition}\label{P4} Let $\mathcal{H}_1=(\Sigma,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}=\{\alpha_1,\dots,\alpha_g\},\boldsymbol{\beta}\cup\boldsymbol{\beta^a}=\{\beta_1,\dots,\beta_g\},\mathbf{z})$ and suppose that $\boldsymbol{\tilde{\alpha}}\cup\boldsymbol{\tilde{\alpha}^c}$ (resp. $\boldsymbol{\tilde{\beta}}$) is obtained from $\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}$ (resp. $\boldsymbol{\beta}$) by a sequence of handleslides and isotopies, and let $\mathcal{H}_2=(\Sigma,\boldsymbol{\tilde{\alpha}}\cup\boldsymbol{\tilde{\alpha}^c},\boldsymbol{\tilde{\beta}}\cup\boldsymbol{\beta^a},\mathbf{w})$. Then we have $\widehat{HF}(\mathcal{H}_1)\cong\widehat{HF}(\mathcal{H}_2)$. \end{proposition} \subsection{Attaching a one handle} Let $\mathcal{H}=(\Sigma,\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_g\},\boldsymbol{\beta}=\{\beta_1,\dots,\beta_g\},\mathbf{z})$ be a Heegaard diagram, possibly with $\beta$-arcs. In a component of $\Sigma$, we connect two regions of this diagram by a 1-handle and denote the new surface by $\Sigma_1$. Let $\alpha_{g+1}$ be the meridian of this 1-handle and $\beta_{g+1}$ be an arc which intersects only $\alpha_{g+1}$, in a single point. Let $\mathcal{H}_1=(\Sigma_1,\boldsymbol{\alpha}\cup\{\alpha_{g+1}\},\boldsymbol{\beta}\cup\{\beta_{g+1}\},\mathbf{z})$. $\widehat{CF}(\mathcal{H}_1)$ is then the free $\mathbb{Z}_2$-module generated by the $(g+1)$-tuples $\mathbf{x} = \{x_1,\dots,x_{g+1}\}$ such that $x_i$ is an intersection point of $\alpha_i$ with a $\beta_{\sigma(i)}$, where $\sigma$ is a permutation on $g+1$ letters. Clearly $x_{g+1}$ is the unique intersection point between the curve $\alpha_{g+1}$ and the arc $\beta_{g+1}$. Corresponding to each generator $\mathbf{x}$, we consider a $g$-tuple $\mathbf{\bar{x}}=\{x_1,\dots,x_g\}$ for the diagram $\mathcal{H}$. Let $D_1,\dots,D_m$ denote the domains of $\mathcal{H}_1$, i.e. the closures of the components of $\Sigma_1-\boldsymbol{\alpha}\cup\{\alpha_{g+1}\}-\boldsymbol{\beta}$, such that $D_1$ and $D_2$ contain $\alpha_{g+1}$ in their boundaries. Let $\bar{D}_1$ and $\bar{D}_2$ be the domains obtained from $D_1$ and $D_2$ by attaching disks to their $\alpha_{g+1}$ boundaries.
Therefore, the domains of $\mathcal{H}$ are $\bar{D}_1,\bar{D}_2,D_3,\dots,D_m$. Let $J$ be a path of complex structures on $\Sigma$ and $J(T)$ denote a path of complex structures on $\Sigma_1$ which is sufficiently pinched near $\alpha_{g+1}$, so that as $T$ goes to infinity, each boundary component of a tubular neighborhood of $\alpha_{g+1}$ is pinched to a point. Similar to \cite[Proposition 5.1]{Ak}, one can prove: \begin{proposition}\label{P1} Let $J(T_i)$ denote a path of complex structures associated with $\Sigma_1$ such that $\alpha_{g+1}$ is pinched as $T_i\rightarrow\infty$. Choose $\phi_1\in\pi_2(\mathbf{x},\mathbf{y})$ with $\mathcal{D}(\phi_1)=\sum_{i=1}^ma_iD_i$ and let $\phi$ be a corresponding Whitney disk in $\mathcal{H}$ connecting the corresponding generators $\mathbf{\bar{x}}$ and $\mathbf{\bar{y}}$ with $\mathcal{D}(\phi)=a_1\bar{D}_1+a_2\bar{D}_2+\sum_{i=3}^ma_iD_i$. We have $\mu(\phi_1)=\mu(\phi)$, and if $\mathcal{M}(\phi_1)$ is nonempty for the sequence $J(T_i)$ of almost complex structures, then $\mathcal{M}(\phi)$ is also nonempty. Moreover, if $\mu(\phi_1)=1$, then we have $\widehat{\mathcal{M}}(\phi_1)\cong\widehat{\mathcal{M}}(\phi)$ for sufficiently large $T_i$. \end{proposition} \subsection{Invariance} Let $\boldsymbol{\alpha^c_0}$ be a set of completing curves for $\mathcal{H}_{(P,P^*)_\mathcal{F}}$. If $\boldsymbol{\alpha^c_1}$ is obtained from $\boldsymbol{\alpha^c_0}$ by a sequence of isotopies and handleslides over curves in $\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c_0}$, it clearly determines a set of completing curves and, from Proposition \ref{P4}, $\widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})$ is invariant under these changes of completing curves. The sets of completing curves are thus partitioned into equivalence classes, where the sets of completing curves in each equivalence class are related to each other by isotopies and handleslides. $\widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})$ remains invariant on each equivalence class. However, it is not clear that different equivalence classes give the same Heegaard Floer homology group. This is the main unknown part of the proof of Claim \ref{claim1}. \begin{claim}\label{claim2} $\widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})$ remains invariant under different choices of completing curves for $\boldsymbol{\alpha}$. \end{claim} Assuming this claim is valid, we denote $\widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}})$ by $\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})$. The above claim is used in the arguments of this section. \begin{lemma}\label{L6} Let $(P_1,P_1^*)_{\mathcal{F}_1}$ be a pair of dual presentations obtained from $(P,P^*)_\mathcal{F}$ by an AC-$i$ move, $1\leq i\leq 4$. If there is a set of completing curves for $\mathcal{H}_{(P,P^*)_\mathcal{F}}$, then $\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}$ also has a set of completing curves and $$\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}).$$ \end{lemma} \begin{proof} According to the proof of Lemma \ref{L3}, $\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}$ is obtained from $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ by a Heegaard move, or by attaching a 1-handle plus a Heegaard move, or by changing the orientation of a $\beta$ curve, or by adding$/$removing a component which is the standard genus one Heegaard diagram for $S^3$.
For the third and fourth cases, it is clear that $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ and $\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}$ have the same set of completing curves and $\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}})$. Let \begin{align*} &\mathcal{H}_{(P,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_d\},\boldsymbol{\beta}=\{\beta_1,\dots,\beta_d\})\ \text{and}\\ &\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}=\{\alpha_1,\dots,\alpha_g\},\boldsymbol{\beta}\cup\boldsymbol{\beta^a}=\{\beta_1,\dots,\beta_g\},\mathbf{z}). \end{align*} Let $\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}$ be obtained from $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ by a Heegaard move. The proof of Lemma \ref{L3} implies that this Heegaard move is a handleslide of a $\beta_i$ curve over a $\beta_j$ curve or an isotopy of a $\beta_i$ curve over an $\alpha_j$ curve. Therefore, $\overline{\mathcal{H}}_{(P_1,P_1^*)_{\mathcal{F}_1}}$ is obtained from $\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}}$ by Heegaard moves, and the lemma is proved in this case by standard arguments in Heegaard Floer theory. Let $\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}$ be obtained from $\mathcal{H}_{(P,P^*)_\mathcal{F}}$ by attaching a 1-handle to $\Sigma$ and doing an isotopy of $\beta_i$ over $\alpha_j$, or a handleslide of $\beta_i$ over $\beta_j$, $i,j\in\{1,\dots,d\}$, through this handle. Let $\Sigma_1$ denote the new surface. First suppose that the new handle connects two different components of $\Sigma$ and two different components of $\Sigma-\boldsymbol{\beta}$. In this case, we may assume that the polygons containing the legs of the 1-handle contain marked points from $\mathbf{z}$. In fact, each component of $\Sigma$ contains a marked point and, according to Proposition \ref{P3}, there is a finite sequence of Heegaard moves which relates different choices of marked points in each component of $\Sigma$ and which (by Proposition \ref{P4}) results in the same Heegaard Floer homology group. Then, we have \begin{equation*} \overline{\mathcal{H}}_{(P_1,P^*_1)_{\mathcal{F}_1}}=(\Sigma_1,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c},\boldsymbol{\beta_1}\cup\boldsymbol{\beta}^a,\mathbf{z_1}), \end{equation*} where $\boldsymbol{\beta_1}$ is obtained from $\boldsymbol{\beta}$ by doing a Heegaard move and $\mathbf{z_1}$ is the same set as $\mathbf{z}$ except that the two marked points next to the legs of the 1-handle are identified. Let $\mathcal{H}=(\Sigma_1,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c},\boldsymbol{\beta}\cup\boldsymbol{\beta^a},\mathbf{z_1})$. We then have $\widehat{HF}(\overline{\mathcal{H}}_{(P_1,P_1^*)_{\mathcal{F}_1}})\cong\widehat{HF}(\mathcal{H})$. The two diagrams $\mathcal{H}$ and $\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}}$ have the same set of generators, and since the Whitney disks in $\mathcal{H}$ do not use the handle, the Whitney disks in $\mathcal{H}$ and the corresponding moduli spaces are in correspondence with the Whitney disks in $\overline{\mathcal{H}}_{(P,P^*)_\mathcal{F}}$ and the corresponding moduli spaces. From here, $\widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}})\cong\widehat{HF}(\mathcal{H})$ and the lemma follows in this case from Claim \ref{claim2}. \begin{figure}[!h] \def\svgwidth{10cm} \begin{center} \input{case3.pdf_tex} \caption{The new handle connects two polygons $D_1$ and $D_2$.
This handle is attached to a component of $\Sigma-\boldsymbol{\alpha}$ but connects two different components of $\Sigma-\boldsymbol{\beta}$. } \label{fig:case3} \end{center} \end{figure} Suppose now that the new handle is attached to a single component of $\Sigma$. Let $\alpha_{g+1}$ be the meridian of this 1-handle and $\beta_{g+1}$ be an arc which only intersects $\alpha_{g+1}$, in a single point. We may assume that one of the polygons containing the legs of the 1-handle contains a marked point $z\in\mathbf{z}$. In fact, according to Proposition \ref{P3}, there is a finite sequence of pointed Heegaard moves which relates different choices of marked points and which (by Proposition \ref{P4}) results in the same Heegaard Floer homology group. Let $$\mathcal{H}=(\Sigma_1,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}\cup\{\alpha_{g+1}\},\boldsymbol{\beta}\cup\boldsymbol{\beta^a}\cup\{\beta_{g+1}\},\mathbf{z}).$$ There is a correspondence between the generators of $\mathcal{H}$ and $\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}}$. In fact, if $x_{g+1}$ denotes the intersection point of $\alpha_{g+1}$ and $\beta_{g+1}$, then each generator of $\mathcal{H}$ is of the form $\mathbf{x}\cup\{x_{g+1}\}$ where $\mathbf{x}$ is a generator of $\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}}$. Let $D_1,D_2,D_3,\dots,D_m$ denote the domains of $\mathcal{H}$, i.e. the closures of the components of $$\Sigma_1-\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}\cup\{\alpha_{g+1}\}-\boldsymbol{\beta}\cup\boldsymbol{\beta^a}\cup\{\beta_{g+1}\},$$ such that $D_1$ and $D_2$ contain $\alpha_{g+1}$ in their boundaries and $D_1$ contains the marked point $z$. Let $\bar{D}_1$ and $\bar{D}_2$ be the domains obtained from $D_1$ and $D_2$ by attaching disks to their $\alpha_{g+1}$ boundaries. Therefore, the domains of $\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}}$ are $\bar{D}_1,\bar{D}_2,D_3,\dots,D_m$. Let $\phi\in\pi_2^0(\mathbf{x},\mathbf{y})$, where $\pi_2^0(\mathbf{x},\mathbf{y})$ denotes the space of homotopy classes of Whitney disks in $\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}}$, connecting $\mathbf{x}$ and $\mathbf{y}$, which do not cross the marked point. Further assume that $\mathcal{D}(\phi)=c_2\bar{D}_2+\sum_{i=3}^mc_iD_i$. Now a corresponding disk $\tilde{\phi}\in\pi_2(\{x_{g+1}\}\cup\mathbf{x},\{x_{g+1}\}\cup\mathbf{y})$ is determined by the domain $\sum_{i=2}^mc_iD_i$. From Proposition \ref{P1}, we have \begin{displaymath} \widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}})\cong\widehat{HF}(\mathcal{H}) \end{displaymath} and the lemma is proved in this case. After doing the isotopy or handleslide through the handle (corresponding to the AC-move), the relator associated with $\alpha_{g+1}$ is $b_i^*b_i^{*-1}$ or $b_i^{*-1}b_i^*$; therefore the curves in $\boldsymbol{\alpha^c}\cup\{\alpha_{g+1}\}$ satisfy condition 3 in Definition \ref{def03}. Clearly the curves in $\boldsymbol{\alpha^c}\cup\{\alpha_{g+1}\}$ also satisfy conditions 1 and 2 of Definition \ref{def03}. Let $$\overline{\mathcal{H}}_{(P_1,P^*_1)_{\mathcal{F}_1}}=(\Sigma_1,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}\cup\{\alpha_{g+1}\},\boldsymbol{\beta_1}\cup\boldsymbol{\beta^a}\cup\{\beta_{g+1}\},\mathbf{z}),$$ where $\boldsymbol{\beta_1}$ is obtained from $\boldsymbol{\beta}$ after doing the Heegaard move. From Proposition \ref{P4}, we have $\widehat{HF}(\overline{\mathcal{H}}_{(P_1,P_1^*)_{\mathcal{F}_1}})\cong\widehat{HF}(\mathcal{H})$, and therefore $\widehat{HF}(\overline{\mathcal{H}}_{(P_1,P_1^*)_{\mathcal{F}_1}})\cong\widehat{HF}(\overline{\mathcal{H}}_{(P,P^*)_{\mathcal{F}}})$.
\end{proof} The second technical step which remains unsettled in this paper is the following. \begin{claim}\label{claim3} Let $(P_1,P_1^*)_{\mathcal{F}_1}$ be a pair of dual presentations obtained from $(P,P^*)_\mathcal{F}$ by the fifth AC-move. If there is a set of completing curves for $\mathcal{H}_{(P,P^*)_\mathcal{F}}$, then $\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}$ also has a set of completing curves and $$\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P_1,P^*_1)_{\mathcal{F}_1}}).$$ \end{claim} Assuming Claim \ref{claim2} and Claim \ref{claim3}, the invariance under several other choices involved in the construction of the groups $\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})$ may be proved. \begin{lemma}\label{L7} Let $(P,P^*)_\mathcal{F}$ and $(P,P^*)_\mathcal{G}$ be two pairs of dual presentations. Then we have $\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{G}})$. \end{lemma} \begin{proof} First consider the simplest case where the two families $\mathcal{F}=\{f_{ij},\overline{f}_{ij}\}_{i,j}$ and $\mathcal{G}=\{g_{ij},\overline{g}_{ij}\}_{i,j}$ are the same except that $f_{11}$ is obtained from $g_{11}$ by composing with a transposition (the case where $\overline{f}_{11}$ is obtained from $\overline{g}_{11}$ by composing with a transposition is similar). Let $b_1^{*-1}$ be the $l_1^{th}$ and $l_2^{th}$ letters in the word $a_1^*$ and $a_1$ be the $k_1^{th}$ and $k_2^{th}$ letters in the word $b_1$. Let $f_{11}$ send $k_1$ to $l_1$ and $k_2$ to $l_2$, and $g_{11}$ send $k_1$ to $l_2$ and $k_2$ to $l_1$. Consider a Heegaard diagram $\mathcal{H}_{(P,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha}=\{\alpha_1,\dots,\alpha_d\},\boldsymbol{\beta}=\{\beta_1,\dots,\beta_d\})$. Figure \ref{fig:corres1} (on the top) shows part of this diagram. Let $\mathcal{H}'=(\Sigma_1,\boldsymbol{\alpha},\boldsymbol{\beta})$ where $\Sigma_1$ is a surface obtained from $\Sigma$ by adding two 1-handles as in Figure \ref{fig:corres1} on the bottom. Here two parts of $\beta_1$ are connected by two 1-handles. One may imagine each handle as an $S^1\times [0,1]$ whose boundaries are identified with the two oriented circles of the same color, via orientation preserving homeomorphisms. Let us first assume that all regions in $\mathcal{H}'$ are polygons. Then, by Corollary \ref{C02}, $\mathcal{H}'=\mathcal{H}_{(P,P^*)_\mathcal{G}}$. Let $\boldsymbol{\alpha^c}=\{\alpha_{d+1},\dots,\alpha_g\}$ denote the completing curves for $\mathcal{H}_{(P,P^*)_\mathcal{F}}$. If $\alpha_i$, for $i=g+1,g+2$, denote the two simple closed curves illustrated in Figure \ref{fig:corres2-1}, then clearly $\boldsymbol{\alpha^c}\cup\{\alpha_{g+1},\alpha_{g+2}\}$ is a set of completing curves for $\mathcal{H}_{(P,P^*)_\mathcal{G}}$. \begin{figure}[h] \def\svgwidth{12cm} \begin{center} \input{corres1.pdf_tex} \caption{The figure on the top is part of a diagram determined by $(P,P^*)_\mathcal{F}$ and the figure on the bottom is part of a diagram determined by $(P,P^*)_\mathcal{G}$. In these figures, the boxes $A$, $B$, and $C$ show other possible $\alpha$ curves which intersect $\beta_1$. } \label{fig:corres1} \end{center} \end{figure} \begin{figure}[h] \def\svgwidth{12cm} \begin{center} \input{corres2-1.pdf_tex} \caption{$\alpha_{g+1}$ and $\alpha_{g+2}$, along with the curves in $\boldsymbol{\alpha^c}$, are completing curves for the diagram $\mathcal{H}_{(P,P^*)_\mathcal{G}}$.
} \label{fig:corres2-1} \end{center} \end{figure} Let $\beta_i$, for $i=g+1,g+2$, denote the two arcs illustrated on the top of Figure \ref{fig:corres2-1} and $\beta_i'$, $i=g+1,g+2$, denote the two simple closed curves illustrated on the bottom of Figure \ref{fig:corres2-1}. Let $$\mathcal{H}=(\Sigma_1,\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c},\boldsymbol{\beta}\cup\{\beta'_{g+1},\beta'_{g+2}\}).$$ For each diagram in Figure \ref{fig:corres2-1}, starting from the part of $\alpha_1$ which intersects $\beta_1$ between the $\alpha$ curves in the boxes $A$ and $B$, handleslide $\alpha_1$ over $\alpha_{g+1}$, then handleslide the $\alpha$ curves in the box $B$, first over $\alpha_{g+2}$ and then over $\alpha_{g+1}$. Finally, starting from the part of $\alpha_1$ which intersects $\beta_1$ between the $\alpha$ curves in the boxes $B$ and $C$, handleslide $\alpha_1$ over $\alpha_{g+2}$. After doing these handleslides, we obtain the two diagrams on the top of Figure \ref{fig:corres2-1-1}. Now, isotope $\beta_1$ in the first diagram of Figure \ref{fig:corres2-1-1}, and isotope $\beta'_{g+1}$, $\beta'_{g+2}$ and $\beta_1$ in the second diagram of Figure \ref{fig:corres2-1-1}, to obtain the two diagrams on the bottom of Figure \ref{fig:corres2-1-1}, respectively. \begin{figure}[h] \def\svgwidth{12cm} \begin{center} \input{corres2-1-1.pdf_tex} \caption{Doing a sequence of handleslides of some curves $\alpha\in\boldsymbol{\alpha}\cup\boldsymbol{\alpha^c}$ over $\alpha_{g+1}$ and $\alpha_{g+2}$, we obtain the two diagrams on the top. Doing isotopies of $\beta_1$ (and of $\beta'_{g+1}$ and $\beta'_{g+2}$ in the second diagram), we obtain the two diagrams on the bottom. } \label{fig:corres2-1-1} \end{center} \end{figure} Let $x_i$, $i=g+1,g+2$, denote the unique intersection point of $\alpha_i$ with the arc $\beta_i$ for the third diagram in Figure \ref{fig:corres2-1-1} and \begin{equation*} \mathbf{x}=\{x_1,\dots,x_g,x_{g+1},x_{g+2}\},\ \ \mathbf{y}=\{y_1,\dots,y_g,x_{g+1},x_{g+2}\} \end{equation*} be two generators for this diagram, where $x_i,y_i\in\alpha_i\cap\beta_{\sigma(i)}$, $i=1,\dots,g$, and $\sigma$ is a permutation on $g$ letters. Let $x'_i$, $i=g+1,g+2$, denote the unique intersection point of $\alpha_i$ with $\beta'_i$, let $x_0$ (resp. $x'_0$) be the intersection point of $\alpha_{g+1}$ (resp. $\alpha_{g+2}$) with $\beta_1$ as denoted in Figure \ref{fig:corres2-1-1} on the bottom, and let \begin{equation*} \mathbf{x'}=\{x_1,\dots,x_g,x'_{g+1},x'_{g+2}\},\ \ \mathbf{y'}=\{y_1,\dots,y_g,x'_{g+1},x'_{g+2}\} \end{equation*} be the two corresponding generators for this diagram. Consider the class of a Whitney disk $\phi\in\pi_2(\mathbf{x'},\mathbf{y'})$ in this diagram and let $m_i$ and $k_i$, $i=1,2$, denote the local coefficients of $\phi$ on the two sides of $\alpha_{g+1}$ and $\alpha_{g+2}$, as denoted in Figure \ref{fig:corres2-1-1} on the bottom. A computation of the coefficients of the disk $\phi$ around $x_0$ and $x'_0$ shows that \begin{align*} m_1+k_1=m_2+k_1\ \ \text{and}\ \ m_1+k_1=m_1+k_2, \end{align*} that is, $m_1=m_2$ and $k_1=k_2$. This means that for each disk $\phi$ as above, the coefficients of $\phi$ around $x'_{g+1}$ (resp. $x'_{g+2}$) are the same. As a result, for the two diagrams in Figure \ref{fig:corres2-1-1} on the bottom, the generators and the Whitney disks are in correspondence. An argument similar to the proof of Theorem \ref{Thm1} proves that the moduli spaces of holomorphic disks in the two diagrams are also in correspondence. Therefore, from Proposition \ref{P4}, $\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{G}})\cong\widehat{HF}(\mathcal{H})$.
\begin{figure}[!h] \def\svgwidth{11cm} \begin{center} \input{corres2.pdf_tex} \caption{The figure shows how a sequence of handleslides and isotopies removes the two handles which are determined by the meridians $\alpha_{g+1}$ and $\alpha_{g+2}$. } \label{fig:corres2} \end{center} \end{figure} On the other hand, for the diagram $\mathcal{H}$, after doing a handleslide of $\beta_1$ over $\beta'_{g+1}$ (see diagram $\mathsf{B}$ in Figure \ref{fig:corres2}) and then doing isotopies on $\beta_1$ (see diagram $\mathsf{C}$ in Figure \ref{fig:corres2}), no $\beta$ curve intersects $\alpha_{g+2}$ and we can remove the 1-handle which has $\alpha_{g+2}$ as its meridian (see diagram $\mathsf{D}$ in Figure \ref{fig:corres2}). Now, doing a second handleslide of $\beta_1$ over $\beta'_{g+1}$ (see diagram $\mathsf{E}$ in Figure \ref{fig:corres2}) and then doing some isotopies of $\beta_1$ (see diagram $\mathsf{F}$ in Figure \ref{fig:corres2}), no $\beta$ curve intersects $\alpha_{g+1}$ and we can remove the 1-handle which has $\alpha_{g+1}$ as its meridian (see diagram $\mathsf{G}$ in Figure \ref{fig:corres2}). In this diagram, the associated relator for $\beta_1$ is $b_1$ and the correspondences are given by $\mathcal{F}$. Therefore, from Lemma \ref{L6}, we have $\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H})$, which proves $$\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{G}}).$$ Now let us consider the case where some regions of the diagram $\mathcal{H}'$ are not polygons. By doing isotopies in the diagram $\mathcal{H}_{(P,P^*)_\mathcal{F}}$, which are equivalent to the first AC-move on the pair $(P,P^*)_{\mathcal{F}}$, we can assume that $\mathsf{P}_i$ and $\mathsf{P}'_i$, $i=1,\dots,4$, are disjoint polygons (see Figure \ref{fig:corres1}). Let $(P_1,P_1^*)_{\mathcal{F}_1}$ be obtained from $(P,P^*)_{\mathcal{F}}$ by these AC-moves. Also, let $(P_1,P_1^*)_{\mathcal{G}_1}$ be obtained from $(P,P^*)_{\mathcal{G}}$ by the corresponding AC-moves. Since $\mathsf{P}_i$ and $\mathsf{P}'_i$, $i=1,\dots,4$, are disjoint polygons in the diagram $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$, the diagram $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{G}_1}}$ is obtained as above by adding two 1-handles to $\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$ and two completing curves. Therefore, from the above discussion we have $$\widehat{HF}(\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}})\cong\widehat{HF}(\mathcal{H}_{(P_1,P_1^*)_{\mathcal{G}_1}}).$$ Also from Lemma \ref{L6} and Claim \ref{claim3}, we have \begin{equation*} \widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}),\ \ \widehat{HF}(\mathcal{H}_{(P,P^*)_\mathcal{G}})\cong\widehat{HF}(\mathcal{H}_{(P_1,P_1^*)_{\mathcal{G}_1}}) \end{equation*} which proves the lemma in this case. For general families of correspondences $\mathcal{F}=\{f_{ij},\overline{f}_{ij}\}_{i,j}$ and $\mathcal{G}=\{g_{ij},\overline{g}_{ij}\}_{i,j}$, note that each map $f_{ij}$ (resp. $\overline{f}_{ij}$) is a composition of $g_{ij}$ (resp. $\overline{g}_{ij}$) with some transpositions. This proves the lemma in the general case. \end{proof} \begin{lemma}\label{Inofdual} Let $(P_1,P^*)_\mathcal{F}$ and $(P_2,P^*)_\mathcal{G}$ be two pairs of dual presentations with the same dual presentation $P^*$ for the two presentations $P_1$ and $P_2$.
Then we have $$\widehat{HF}(\mathcal{H}_{(P_1,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P_2,P^*)_\mathcal{G}}).$$ \end{lemma} \begin{proof} Let $P_1=\langle a_1,\dots,a_d|b_1,\dots,b_d\rangle$ and $P_2=\langle a_1,\dots,a_{d'}|b'_1,\dots,b'_{d'}\rangle$. Let \begin{equation*} A_{ij}=\{k\ |\ a_i\ \text{is the}\ k^{\text{th}}\ \text{letter in}\ b_j,\ 1\leq k\leq |b_j|\}, \end{equation*} \begin{equation*} A_{ij}'=\{k\ |\ a_i\ \text{is the}\ k^{\text{th}}\ \text{letter in}\ b'_j,\ 1\leq k\leq |b'_j|\}, \end{equation*} and \begin{equation*} \overline{A}_{ij}=\{k\ |\ a^{-1}_i\ \text{is the}\ k^{\text{th}}\ \text{letter in}\ b_j,\ 1\leq k\leq |b_j|\}, \end{equation*} \begin{equation*} \overline{A}_{ij}'=\{k\ |\ a^{-1}_i\ \text{is the}\ k^{\text{th}}\ \text{letter in}\ b'_j,\ 1\leq k\leq |b'_j|\}, \end{equation*} as in Definition \ref{def01}. Since $P_1$ and $P_2$ have the same dual presentation, from Definition \ref{def01} we have $d=d'$, $|A_{ij}|=|A'_{ij}|$, and $|\overline{A}_{ij}|=|\overline{A}'_{ij}|$, for each $1\leq i,j\leq d$. This means that for each $j$, $j=1,2,\dots,d$, using a permutation on the letters of the relation $b_j$, we can obtain the relation $b'_j$. Let us consider the simplest case where $b_j=b'_j$ for $j=2,\dots,d$, and $b_1'$ is obtained from $b_1$ by a transposition which permutes two letters, say $a_1$ and $a_2$. Figure \ref{fig:Inofdual} on the left shows part of the diagram for $\mathcal{H}_{(P_1,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta})$ where the curves $\alpha_1$ and $\alpha_2$ correspond to $a_1$ and $a_2$. If we connect two parts of $\beta_1$ with two 1-handles, as we did in the proof of Lemma \ref{L7}, we obtain a Heegaard diagram $\mathcal{H}=(\Sigma_1,\boldsymbol{\alpha},\boldsymbol{\beta})$ where $\Sigma_1$ is obtained from $\Sigma$ by adding two 1-handles, see Figure \ref{fig:Inofdual} on the right. Similar to the discussion in Section \ref{s2}, we can associate the pair of dual presentations $(P_2,P^*)_{\mathcal{G}'}$ with this diagram, for some correspondence $\mathcal{G}'$. If all the regions in the right diagram of Figure \ref{fig:Inofdual} are polygons, then from Corollary \ref{C02}, we have $\mathcal{H}=\mathcal{H}_{(P_2,P^*)_{\mathcal{G}'}}$. An argument similar to the proof of Lemma \ref{L7} shows that $$\widehat{HF}(\mathcal{H}_{(P_1,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P_2,P^*)_{\mathcal{G}'}}).$$ From Lemma \ref{L7}, we have $\widehat{HF}(\mathcal{H}_{(P_2,P^*)_\mathcal{G}})\cong\widehat{HF}(\mathcal{H}_{(P_2,P^*)_{\mathcal{G}'}})$, which proves the lemma in this case. \begin{figure}[h] \def\svgwidth{12cm} \begin{center} \input{Inofdual.pdf_tex} \caption{On the left, a part of the diagram for $\mathcal{H}_{(P_1,P^*)_\mathcal{F}}=(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta})$. On the right, two parts of $\beta_1$ are connected with two 1-handles. In this diagram, the two specified parts of $\alpha_1$ and $\alpha_2$ from the left diagram are interchanged. } \label{fig:Inofdual} \end{center} \end{figure} Otherwise, if connecting two parts of $\beta_1$ in the diagram $\mathcal{H}_{(P_1,P^*)_\mathcal{F}}$ makes some regions non-polygons, we follow a strategy similar to the proof of Lemma \ref{L7}. We do some isotopies in the diagram $\mathcal{H}_{(P_1,P^*)_\mathcal{F}}$, which are equivalent to the first AC-move on the pair $(P_1,P^*)_{\mathcal{F}}$, and do the corresponding isotopies for the diagram $\mathcal{H}$ to obtain diagrams $\mathcal{H}_{(P'_1,P'^*)_{\mathcal{F}'}}$ and $\mathcal{H}'$, respectively.
Let $(P'_1,P'^*)_{\mathcal{F}'}$ be obtained from $(P_1,P^*)_\mathcal{F}$ by these AC-moves and suppose that the associated pair of dual presentations of $\mathcal{H}'$ is $(P'_2,P'^*)_{\mathcal{G}''}$, which is obtained from $(P_2,P^*)_{\mathcal{G}'}$ by the corresponding AC-moves. $\mathcal{H}'$ has the property that all regions are polygons. Therefore, from Corollary \ref{C02}, we have $\mathcal{H}'=\mathcal{H}_{(P'_2,P'^*)_{\mathcal{G}''}}$ and from the proof of Lemma \ref{L7}, we have $\widehat{HF}(\mathcal{H}_{(P'_1,P'^*)_{\mathcal{F}'}})\cong\widehat{HF}(\mathcal{H}_{(P'_2,P'^*)_{\mathcal{G}''}})$. From Lemma \ref{L6}, \begin{equation*} \widehat{HF}(\mathcal{H}_{(P_1,P^*)_\mathcal{F}})\cong\widehat{HF}(\mathcal{H}_{(P'_1,P'^*)_{\mathcal{F}'}}),\ \ \widehat{HF}(\mathcal{H}_{(P_2,P^*)_{\mathcal{G}'}})\cong\widehat{HF}(\mathcal{H}_{(P'_2,P'^*)_{\mathcal{G}''}}). \end{equation*} From Lemma \ref{L7}, $\widehat{HF}(\mathcal{H}_{(P_2,P^*)_\mathcal{G}})\cong\widehat{HF}(\mathcal{H}_{(P_2,P^*)_{\mathcal{G}'}})$. This proves the lemma in this case. For general presentations, note that each $b'_j$ is obtained from $b_j$ by composition with some transpositions. \end{proof} \begin{remark}\label{R1} Note that if $(P,P^{*})_{\mathcal{F}}$ is a pair of dual presentations with $\mathcal{F}={\{f_{ij},\overline{f}_{ij}\}}_{i,j}$, then clearly $(P^{*},P)_{\overline{\mathcal{F}}}$ with $\overline{\mathcal{F}}={\{f^{-1}_{ij},\overline{f}_{ij}^{-1}\}}_{i,j}$ is also a pair of dual presentations. \end{remark} Remark \ref{R1} and Lemma \ref{Inofdual} state that $\widehat{HF}_{(P,P^*)_\mathcal{F}}$ is independent of the choice of dual presentation $P^*$, and from Lemma \ref{L7}, $\widehat{HF}_{(P,P^*)_\mathcal{F}}$ is independent of the choice of the correspondence $\mathcal{F}$. We may thus denote $\widehat{HF}_{(P,P^*)_\mathcal{F}}$ by $\widehat{HF}_P(G)$. \begin{proof}[Proof of Claim \ref{claim1} based on Claim \ref{claim2} and Claim \ref{claim3}] Let $P^*$ be a dual presentation for $P$ with a correspondence $\mathcal{F}$ which satisfies conditions $A$ and $B$ at the beginning of Section \ref{s5}. If the Andrews-Curtis moves 1-4 act on $P$, the corresponding AC-moves 1-4 act on the pair $(P,P^*)_\mathcal{F}$. From Lemma \ref{L6}, $\widehat{HF}_{(P,P^*)_\mathcal{F}}$ is invariant under AC-moves. Therefore $\widehat{HF}_P(G)$ is invariant under Andrews-Curtis moves 1-4. Let the Andrews-Curtis move $5$, which is the inverse of the Andrews-Curtis move $3$, act on $P$. Change $P^*$ such that $b_j^*$ and $b_j^{*-1}$ are consecutive letters in the relation $a_i^*$ (note that from Lemma \ref{Inofdual} and Remark \ref{R1}, we are allowed to change $P^*$ as described). Also from Lemma \ref{L7}, one can change the correspondence $\mathcal{F}$ such that these two letters $b_j^*$ and $b_j^{*-1}$ are consecutive in the cyclic ordering determined by $\mathcal{F}$ (see Remark \ref{R2}). Now, the fifth AC-move can be used on this new pair $(P,P^*)_\mathcal{F}$ of dual presentations. Based on Claim \ref{claim3}, $\widehat{HF}_{(P,P^*)_\mathcal{F}}$ is invariant under the fifth AC-move. Therefore $\widehat{HF}_P(G)$ is invariant under the inverse of the third Andrews-Curtis move. To prove the second part of the claim, let $(P,P^*)$ be a pair of dual presentations associated with a Heegaard diagram $\mathcal{H}=(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta})$ of $Y$ (see Example \ref{ex1}). $P$ is a presentation for $G=\pi_1(Y)$.
Since $\Sigma\setminus\boldsymbol{\alpha}$ is a punctured sphere, we have $\boldsymbol{\alpha^c}=\emptyset$ and by doing isotopies in the diagram $\mathcal{H}$, which are equivalent to the first AC-move on the pair $(P,P^*)_{\mathcal{F}}$, we can obtain a Heegaard diagram $\mathcal{H}_1$ in which all regions are polygons. Let $(P_1,P_1^*)_{\mathcal{F}_1}$ be a pair obtained from $(P,P^*)_{\mathcal{F}}$ by these AC-moves. $(P_1,P_1^*)_{\mathcal{F}_1}$ is a pair of dual presentations associated with the diagram $\mathcal{H}_1$, therefore, from Corollary \ref{C02}, we have $\mathcal{H}_1=\mathcal{H}_{(P_1,P_1^*)_{\mathcal{F}_1}}$. $\widehat{HF}_{P_1}(G)$ is defined as the Heegaard Floer homology group associated with the chain complex $(\widehat{CF}(\overline{\mathcal{H}}_1),\partial_{J(T_i)})$ which is the same as $\widehat{HF}(Y)$. From the first part of the claim, $\widehat{HF}_{P}(G)\cong\widehat{HF}_{P_1}(G)$. This completes the proof. \end{proof}
\section{Introduction} One definition of the epoch of galaxy formation is when galaxies first began to form stars. Different types of galaxies (e.g., ellipticals and spirals) or different parts of the same type of galaxies (e.g., bulge and disk of spirals) may have formed at different epochs. An important goal of observational cosmology is to identify these different formation epochs. Conventional wisdom suggests that ellipticals and the spheroidal component of spirals formed very early, followed by disks. The population of galaxies at $z>3$ identified using the Lyman limit drop-out technique may very well be the progenitors of the spheroidal component of massive galaxies [1,2]. Here we discuss evidence for a sharp rise in the metallicity distribution of damped Ly$\alpha$ absorbers at $z\leq 3$, which may signify the onset of star formation in galactic disks. \section{Results} Damped Ly$\alpha$ (DLA) absorption systems seen in spectra of background quasars are widely accepted to be the progenitors of present-day galaxies [3], although their exact nature (dwarfs, spheroids, or disks?) is still unclear. A program is being carried out using the Keck telescopes to study the chemical compositions of the absorbing gas in DLA systems. One of the goals is to (hopefully) identify the epoch of the first episode of star formation in these galaxies, hence constraining theories of galaxy formation. Figure 1 shows the distribution of [Fe/H] in DLA systems as a function of redshift. Detailed descriptions of the data and analyses are given in refs [4,5]. The low metallicities of DLAs testify to the youth of these galaxies: they have yet to make the bulk of their stars. Remarkably, all six of the highest redshift absorbers have [Fe/H]$\leq -2$, while many absorbers have reached ten times higher metallicity at just slightly lower redshifts. This indicates an epoch of rapid star formation at $z\sim 3$. The effect is likely to be real: if DLA systems at $z>3$ have [Fe/H] that is uniformly distributed between $-1$ and $-2.5$ (i.e., similar to the distribution at $2<z<3$), then the {\it posterior} probability for all six of the highest redshift systems to have [Fe/H]$\leq -2$ by chance is $1.4\times 10^{-3}$. \begin{figure} \psfig{figure=figure1.ps,height=2.5in,width=3.5in} \caption{Metallicity distribution of damped Ly$\alpha$ absorbers. \label{fig:figure 1}} \end{figure} Coincidentally, the metallicities of DLA systems at $z>3$, [Fe/H]$=-2$ to $-2.5$, are identical (within the uncertainties) to those found for the IGM clouds at similar redshifts, as inferred from the C IV absorption associated with Ly$\alpha$ forest clouds [6,7,8]. This coincidence suggests that the metals in DLA galaxies at $z>3$ may simply reflect those in the IGM, however they were made (e.g., by Pop III stars, or ejection from protogalaxies); DLA galaxies did not start making their own stars (hence metals) until $z\sim 3$. \section{Discussion} The implications of the above result for the general question of galaxy formation and evolution depend on the nature of the DLA galaxies. It was suggested [9] that DLA systems may represent the progenitors of disk galaxies. This is supported by the very recent finding [10] that the kinematics of DLA absorbers, as inferred from the metal absorption line profiles, appear to be dominated by rotation with large circular velocities ($>200$ km s$^{-1}$). However, the mean metallicities of DLAs at $z>1.6$ are significantly below that of the Milky Way disk at the corresponding epoch [4,11].
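The quoted chance probability follows from a one-line binomial argument: under the uniform assumption, each absorber independently has probability $0.5/1.5=1/3$ of falling at or below [Fe/H]$=-2$, giving $(1/3)^6\approx 1.4\times 10^{-3}$. The short script below is our illustrative sketch of this arithmetic, not part of the original analysis: \begin{verbatim}
# Sketch: chance probability that all six z > 3 absorbers have
# [Fe/H] <= -2 if [Fe/H] were uniform on [-2.5, -1].
p_single = 0.5 / 1.5        # fraction of [-2.5, -1] at or below -2
n_systems = 6
print(f"{p_single ** n_systems:.1e}")   # prints 1.4e-03
\end{verbatim}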
The problem with the metallicity distribution may be lessened if DLAs represent a thick disk phase of galaxies [3], or if low surface brightness disk galaxies (which have substantially sub-solar metallicities) make up a significant fraction of DLA absorbers [12]. {\it If} the disk hypothesis for DLA absorbers is correct, we may have identified the epoch of initial star formation in disk galaxies. Alternatively, DLAs may represent dwarf galaxies or the spheroidal component of massive galaxies; this conjecture stems from the similarity between the metallicity distribution of DLAs and those of halo globular clusters and local gas-rich dwarf galaxies [4]. In this case, however, one has to explain the kinematics of DLAs [10] by other means. \section*{Acknowledgments} LL appreciates support from a Hubble Fellowship (HF1062-01-94A). WWS was supported by NSF grant AST95-29073. \section*{References}
\section{Introduction} Nonequilibrium molecular dynamics techniques are widely employed in the study of molecular fluids under steady flow. Periodic boundary conditions (PBCs) are employed to study bulk properties of a fluid, but standard PBCs with a fixed simulation box are incompatible with a homogeneous linear background flow $A = \nabla u \in \mathbb R^{3 \times 3}$, such as shear or elongational flow. In such a simulation, the periodic replicas of a particle have different velocities, consistent with the background flow. If we let $$ L_t = \bigg[ {\mathbf v}^1_t \ {\mathbf v}^2_t \ {\mathbf v}^3_t \bigg] \in \mathbb R^{3 \times 3}, t \in [0, \infty) $$ denote the time-dependent lattice basis vectors defining the simulation box, then a particle with phase coordinates $({\mathbf Q}, {\mathbf V})$ has periodic replicas with coordinates at $({\mathbf Q} + L_t {\mathbf n}, {\mathbf V} + A L_t {\mathbf n})$ for all integer triples ${\mathbf n} \in {\mathbb Z}^3.$ The velocity relations \begin{equation*} \frac{d}{dt} ({\mathbf Q} + L_t {\mathbf n}) = {\mathbf V} + A L_t {\mathbf n} \text{ for all } {\mathbf n} \in {\mathbb Z}^3 \end{equation*} imply that the simulation box must move with the flow, \begin{equation} \label{Lt} \frac{d}{dt} L_t = A L_t, \text{ which has solution } L_t = e^{A t} L_0. \end{equation} For general flows, depending on the orientation of $L_0$ the simulation box can become quite elongated so that a particle is approached by its periodic replicas, which causes numerical instability in the simulation. For example, a planar elongational flow whose contraction is parallel to one of the simulation box edges ${\mathbf v}^i_0$ has one periodic direction that shrinks exponentially fast. This puts a finite limit on the simulation stability~\cite{houn92, bara95}. While these time periods are sometimes long enough to allow for the accurate computation of statistical observables in simple molecular fluids, there is need for boundary conditions without time limitations for the simulation of complex molecular systems. For shear flow, the Lees-Edwards boundary conditions~\cite{lees72} allow for time-periodicity in the deforming simulation box itself. For planar elongational flow, the Kraynik-Reinelt (KR) boundary conditions~\cite{kray92, todd98, todd99, bara99} achieve time periodicity in the simulation box by carefully choosing the vectors defining the initial simulation box. In particular, the box is rotated so that the edges form an angle of approximately 31.7 degrees with respect to the background flow. However, the KR formalism does not apply to general three dimensional flows, in particular it cannot treat uniaxial or biaxial flow~\cite{kray92}. In this paper, we generalize the KR boundary conditions to handle any homogeneous, incompressible, three-dimensional flow whose velocity gradient is a nondefective matrix (see Section~\ref{sec:model} for a precise description of the flow types handled). We greatly enlarge the class of flows handled, including uniaxial and biaxial flows. The proposed algorithm gives an initial orientation for the lattice vectors $L_0,$ evolves the vectors according to the differential equation~\eqref{Lt}, and remaps the vectors in a fashion that preserves the periodic lattice structure and keeps the total deformation bounded for all time. 
Unlike Lees-Edwards and Kraynik-Reinelt boundary conditions, the boundary conditions do not in general have a time-periodic simulation box; however, the deformation of the simulation box is kept bounded and particle replicas remain separated by a minimum distance. In Section~\ref{sec:kr} we review the KR boundary conditions and describe them in a framework useful for the generalization later. In Section~\ref{sec:genkr} the new boundary conditions are derived and explained theoretically. Section~\ref{sec:algo} contains a self-contained description of the algorithm with default choices for parameters given. We note that the boundary conditions described here are not tied to a particular choice of nonequilibrium dynamics. Typically, the flow in a nonequilibrium simulation is driven by a specialized dynamics, for example, the deterministic SLLOD~\cite{evan07,edbe86} or g-SLLOD~\cite{tuck97,edwa06} dynamics or nonequilibrium stochastic dynamics such as those in~\cite{mcph01,dobs12}. \section{Flow Types and Automorphisms} \label{sec:model} Since the background flow treated here is incompressible, $A$ is a trace-free matrix. Let $J = S^{-1} A S$ denote the real Jordan canonical form for $A,$ which for trace-free $3 \times 3$ matrices falls into one of four possible cases, \begin{align} \label{eq:J_nondef}J_1 = \left[ \begin{array}{rrr} \varepsilon_1 & 0 & 0 \\ 0 & \varepsilon_2 & 0 \\ 0 & 0 & - \varepsilon_1 - \varepsilon_2 \end{array}\right], J_2 = \left[ \begin{array}{rrr} \varepsilon & -r & 0 \\ r & \varepsilon & 0 \\ 0 & 0 & -2 \varepsilon \end{array}\right], \\ \label{eq:J_def} J_3 = \left[ \begin{array}{rrr} \varepsilon & 1 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & - 2 \varepsilon \end{array}\right], \text{ or } J_4 = \left[ \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right]. \end{align} The form $J_1$ includes several standard matrices, for example planar elongational flow (PEF) where $\varepsilon_1 = -\varepsilon_2,$ uniaxial stretching flow (USF) where $\varepsilon_1 = \varepsilon_2 < 0,$ and biaxial stretching flow (BSF) where $\varepsilon_1 = \varepsilon_2 > 0.$ The matrix $J_2$ arises in the case of complex eigenvalues, corresponding to a rotational flow (which may be an inward spiral $\varepsilon < 0,$ an outward spiral $\varepsilon > 0,$ or a center $\varepsilon = 0$). Both $J_3$ and $J_4$ are defective matrices, since they have rank-deficient eigenspaces. The generalized KR boundary conditions apply to any matrix of the form $J_1$ or $J_2$. For $J_3,$ if $\varepsilon = 0,$ then this is a case of planar shear flow and the Lees-Edwards boundary conditions can be employed. Likewise, similar boundary conditions can be employed for the case $J_4;$ however, we have not been able to extend the boundary conditions described here to the $J_3$ case for nonzero $\varepsilon.$ In the following, we transform the lattice $L_t$ with elements of $\SLZ{3},$ the matrix group of orientation-preserving linear lattice automorphisms. This is the set of all three by three matrices with integer entries whose determinant is 1. By Cramer's rule, such a matrix has an inverse with integer entries. For any $M \in \SLZ{3},$ the lattices generated by $L_t$ and $L_t M$ are identical, and thus, the two sets of particles $\{ {\mathbf Q}_i + L_t {\mathbf n} \, | \, {\mathbf n} \in {\mathbb Z}^3\}$ and $\{ {\mathbf Q}_i + L_t M {\mathbf n} \, | \, {\mathbf n} \in {\mathbb Z}^3 \}$ are identical. Applying such an automorphism transforms the simulation box without changing the simulated dynamics.
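These properties are straightforward to check numerically. The snippet below is a minimal sketch (variable names ours; the particular $M$ is an arbitrary unimodular example, not one prescribed by the method): \begin{verbatim}
import numpy as np

# Sketch: check that a candidate M is an orientation-preserving lattice
# automorphism (integer entries, det(M) = 1, hence an integral inverse
# by Cramer's rule), and that L and L M generate the same lattice.
M = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
assert np.isclose(np.linalg.det(M), 1.0)     # determinant one
Minv = np.linalg.inv(M)
assert np.allclose(Minv, np.round(Minv))     # integral inverse

L = np.random.rand(3, 3)                     # arbitrary lattice basis
# Columns of L M are integer combinations of columns of L, and vice versa:
for A, B in [(L, L @ M), (L @ M, L)]:
    coeffs = np.linalg.solve(A, B)
    assert np.allclose(coeffs, np.round(coeffs))
\end{verbatim}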
Through the careful choice of initial simulation box $L_0$ and automorphisms, we can simulate a system where all particles maintain a minimum distance from their periodic replicas for all time. \section{KR boundary conditions and planar flows} \label{sec:kr} We first present a review of the KR boundary conditions for planar elongational flow along with a summary of techniques for other planar flows. \subsection{KR boundary conditions for planar elongational flow} Consider a diagonal flow of the form \begin{equation*} A = \left[ \begin{array}{rrr} {\varepsilon}&0 &0\\ 0&-{\varepsilon} &0\\ 0&0&0 \end{array}\right], \end{equation*} where ${\varepsilon} >0$. The KR boundary conditions~\cite{kray92} consist in choosing a basis for the unit cell such that after a finite time, the elongational flow maps the lattice generated by the unit cell onto itself. That is, one finds a basis $L$ and time $t_* > 0$ such that $$e^{A t_*} L = L M,$$ for some $M \in \SLZ{3}.$ The mapping $M$ is a parameter of the algorithm. The method was first described in~\cite{kray92}, where the authors showed how to find reproducible square and hexagonal lattices in planar elongational flow. In~\cite{todd98,bara99,todd99} the authors employed these reproducible lattices in nonequilibrium molecular dynamics simulations by using them to describe the periodicity of groups of particles. Choose $M \in \SLZ{3}$ with positive eigenvalues, other than the identity matrix. For example, the choice $$ M = \left[ \begin{array}{rrr} 2 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]$$ is common, and it has been shown to give a system with the largest possible minimal spacing between periodic replicas~\cite{kray92}. Let $V$ denote a matrix of eigenvectors for $M$, and let $$\Lambda = \left[ \begin{array}{rrr} \lambda & 0 & 0 \\ 0 & \lambda^{-1} & 0 \\ 0 & 0 & 1 \end{array} \right] $$ denote the matrix of corresponding eigenvalues, so that \begin{equation*} M V = V \Lambda. \end{equation*} We order the eigenvalues so that $\lambda > 1.$ The fact that the eigenvalues are inverses of one another follows from $\det(M) = 1.$ We define the lattice time period \begin{equation} \label{tzero} t_* = \frac{\log(\lambda)}{{\varepsilon}} \end{equation} so that $e^{{\varepsilon} t_*} = \lambda.$ Let $L_0 = V^{-1}$ be the matrix of initial lattice vectors. Note that while it is typical to choose eigenvectors to have norm one, the vectors in $V$ should be scaled so that the unit cell $L_0 = V^{-1}$ has the desired volume for the simulation box. If one chooses the vectors of $V$ to have the same length, then the vectors of $V$ are orthogonal, and $V^{-1} = \frac{1}{\det(V)^2} V^T.$ Since the lattice vectors move with the flow as in~\eqref{Lt}, at time $t_*$, they satisfy \begin{equation*} L_{t_*} = e^{A t_*} V^{-1} = \Lambda V^{-1} = L_0 M. \end{equation*} Thus, the lattice vectors $L_{t_*}$ generate the same lattice as $L_0$, demonstrating the time periodicity of the lattice. In simulations, the simulation box is remapped by setting $$L_{t_*^+} := L_0$$ to avoid the use of highly elongated basis vectors. This transformation does not move any of the periodic replicas of the particles in the simulation; however, since the basis vectors have changed, the periodic boundary conditions need to be applied on stored particle positions so that the stored particle displacements fall within the simulation box. 
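As a concrete numerical check of this construction, the following sketch (our code with our naming conventions, not taken from the original references) builds $L_0 = V^{-1}$ for the standard KR automorphism and verifies the reproducibility condition $e^{A t_*} L_0 = L_0 M$ for planar elongational flow: \begin{verbatim}
import numpy as np

# Sketch: verify the KR condition e^{A t_*} L_0 = L_0 M for PEF.
eps = 1.0                                    # elongation rate
M = np.array([[ 2., -1., 0.],
              [-1.,  1., 0.],
              [ 0.,  0., 1.]])
lams, V = np.linalg.eigh(M)                  # ascending: 1/lambda, 1, lambda
lams, V = lams[[2, 0, 1]], V[:, [2, 0, 1]]   # reorder to (lambda, 1/lambda, 1)
t_star = np.log(lams[0]) / eps               # so that e^{eps t_*} = lambda
L0 = np.linalg.inv(V)
expAt = np.diag(np.exp(np.array([eps, -eps, 0.0]) * t_star))
assert np.allclose(expAt @ L0, L0 @ M)       # lattice reproduces itself
print(f"t_* = {t_star:.4f}")                 # log((3+sqrt(5))/2) ~ 0.9624
\end{verbatim}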
\subsection{General planar flows} As mentioned in~\cite{kray92} and implemented for mixed flow in~\cite{hunt10}, the above algorithm can be applied to certain nondiagonal matrices $A$ by diagonalization. However, in~\cite{kray92}, it is shown by consideration of the characteristic polynomial for members of $\SLZ{3}$ that there is no reproducible lattice for either USF or BSF. Suppose now that $A$ denotes a general incompressible planar flow, that is, all nonzero entries of the matrix act on a two-dimensional eigenspace. This corresponds to cases $J_1$ with $\varepsilon_1 = -\varepsilon_2$, $J_2$ with $\varepsilon = 0,$ or $J_3$ with $\varepsilon=0$ in~\eqref{eq:J_nondef} and~\eqref{eq:J_def}. There are three cases to consider: two nonzero real eigenvalues, two purely imaginary eigenvalues, or only zero eigenvalues. \subsubsection{Elongational flow} \label{sec:2D_elong} If the eigenvalues of $A$ are real and distinct, then $A$ is diagonalizable and corresponds to an elongational flow. Let $S$ denote a matrix of eigenvectors and $D$ denote the matrix of eigenvalues for $A$ so that $A S = S D.$ Then, upon choosing the basis $L_0 = S V^{-1},$ we have \begin{equation*} \begin{split} L_{t_*} &= e^{A t_*} L_0 = e^{A t_*} S V^{-1} = S e^{D t_*} V^{-1} = S V^{-1} M = L_0 M. \end{split} \end{equation*} We note that this includes the mixed flow case treated in~\cite{hunt10}. \subsubsection{Rotational flow} If the eigenvalues are purely imaginary, the flow is rotational. Writing $A$ in real Jordan normal form, we choose real $S$ so that \begin{equation*} S^{-1} A S = \left[\begin{array}{rrr} 0 & r & 0\\ -r & 0 & 0\\ 0 & 0 & 0 \end{array}\right]. \end{equation*} We define $L = S$ and then have that $e^{A t} L = L R_t,$ where $R_t$ is a rotation for all $t$. There is no need to reset the simulation box in this case. \subsubsection{Shear flow} \label{sec:2D_shear} The final case of all zero eigenvalues corresponds to shear flow. We note that in this case, there is a $t_*$ and $S$ such that $e^{A t_*} S = S M,$ for \begin{equation*} M = \left[ \begin{array}{rrr} 1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array} \right]. \end{equation*} This is the Lagrangian rhomboid scheme, which is equivalent to the Lees-Edwards boundary conditions~\cite{lees72, evan07}. \section{Generalized KR boundary conditions} \label{sec:genkr} In this section we generalize the boundary conditions to nondefective incompressible linear flows in three dimensions. In the following, rather than find a time $t_0$ such that $L_{t_0} = L_0 M$ for a single automorphism $M \in \SLZ{3},$ we consider the successive application of two different automorphisms $M_1, M_2 \in \SLZ{3}$ to $L_t$ in order to keep the total deformation of the unit cell small for all times. Suppose that $M_1, M_2 \in \SLZ{3}$ are a pair of commuting, symmetric automorphisms.
Then the matrices are simultaneously diagonalizable by an orthogonal matrix $V.$ Let \begin{equation*} \Lambda_{i}=V^{-1} M_i V \end{equation*} denote the matrix of eigenvalues corresponding to $M_i,$ whose diagonal entries are denoted by $ \lambda_{i, 1}, \lambda_{i, 2}, \lambda_{i, 3}.$ We define the logarithm of the ordered spectrum for each operator \begin{equation} \label{logspec} \hat{\omega}_i = \left[ \begin{array}{c} \log \lambda_{i,1} \\ \log \lambda_{i,2} \\ \log \lambda_{i,3} \\ \end{array} \right] = \log \diag( V^{-1} M_i V ), \end{equation} where $\diag(M)$ denotes the column vector made up of the diagonal entries of the matrix $M.$ We assume the following about $M_1$ and $M_2.$ \begin{assumption} \label{spec_ass} We assume that $M_1, M_2 \in \SLZ{3}$ are symmetric, commute, and have positive eigenvalues. We assume that $\hat{\omega}_1$ and $\hat{\omega}_2,$ defined in~\eqref{logspec}, are linearly independent. \end{assumption} An example of such a pair of matrices is given in Section~\ref{sec:algo}. Note that the choice of $M_1$ and $M_2$ does not depend on the matrix $A$. We describe the technique first in the diagonal case before discussing in turn the four possible cases for three dimensional flows. After the derivation given here, the main algorithm is presented in a concise form in Section~\ref{sec:algo}. \subsection{Diagonal case} Let us first consider a diagonal flow of the form \begin{equation} \label{adiag} A = \left[ \begin{array}{rrr} \varepsilon_1 & & \\ & \varepsilon_2 & \\ & & \varepsilon_3 \\ \end{array} \right], \end{equation} where $\varepsilon_1 + \varepsilon_2 + \varepsilon_3 = 0.$ Then the matrix exponential \begin{equation} \label{expAt} e^{A t} = \left[ \begin{array}{rrr} e^{\varepsilon_1 t} & & \\ & e^{\varepsilon_2 t} & \\ & & e^{\varepsilon_3 t} \\ \end{array} \right], \end{equation} is diagonal for all time $t.$ Let $M_1$ and $M_2$ satisfy Assumption~\ref{spec_ass}. We choose initial lattice basis $L_0 = V^{-1},$ where $V$ diagonalizes $M_1$ and $M_2.$ Applying the transformation $M_i$ to $L_t$ gives \begin{equation*} \begin{split} L_t M_i &= e^{A t} V^{-1} M_i \\ &= \left[ \begin{array}{rrr} e^{\varepsilon_1 t} & & \\ & e^{\varepsilon_2 t} & \\ & & e^{\varepsilon_3 t} \end{array} \right] \left[ \begin{array}{rrr} \lambda_{i, 1} & & \\ & \lambda_{i, 2} & \\ & & \lambda_{i, 3} \end{array} \right] V^{-1} \\ &= \exp\left( \left[ \begin{array}{rrr} {\varepsilon_1 t} + \log\lambda_{i, 1} & & \\ & {\varepsilon_2 t} + \log\lambda_{i, 2} & \\ & & {\varepsilon_3 t} + \log\lambda_{i, 3} \end{array} \right] \right) V^{-1}. 
\end{split} \end{equation*} Similarly, if we apply multiple transformations at once, we have \begin{equation} \label{mult_trans} \begin{split} L_t M_1^{n_1} M_2^{n_2} = \exp\left( \left[ \begin{array}{rrr} {\varepsilon_1 t} & & \\ & {\varepsilon_2 t} & \\ & & {\varepsilon_3 t} \end{array} \right] + \sum_{i=1}^2 n_i \left[ \begin{array}{rrr} \log\lambda_{i, 1} & & \\ & \log\lambda_{i, 2} & \\ & & \log\lambda_{i, 3} \end{array} \right] \right) V^{-1} \end{split} \end{equation} where $n_1, n_2 \in {\mathbb Z}.$ The idea of the algorithm presented in Section~\ref{sec:algo} is to apply automorphisms so that the argument of the exponential in~\eqref{mult_trans} stays bounded for all times $t >0.$ We define a vector that equals the diagonal part of the stretch, \begin{equation*} \widehat{\varepsilon}_t = \left[ \begin{array}{c} \varepsilon_1 t \\ \varepsilon_2 t \\ \varepsilon_3 t \end{array} \right], \end{equation*} and note that $\widehat{\varepsilon}_t,$ $\hat{\omega}_1,$ and $\hat{\omega}_2$ belong to the two dimensional subspace $\SS \subset \mathbb R^3$ of mean-zero vectors. The vectors $\hat{\omega}_1$ and $\hat{\omega}_2$ generate a lattice in $\SS,$ \begin{equation*} \mathcal L = \left\{ \left(n_1 - \frac{1}{2}\right) \hat{\omega}_1 + \left(n_2 - \frac{1}{2}\right) \hat{\omega}_2 \ | \ n_1, n_2 \in {\mathbb Z} \right\}, \end{equation*} where we have added an offset of $1/2$ so that the unit cell $$ \widehat{\Omega} = \left\{ \theta_1 \hat{\omega}_1 + \theta_2 \hat{\omega_2} \ | \ \theta_1, \theta_2 \in \left(-\frac{1}{2},\frac{1}{2}\right] \right\} $$ is centered at the origin. At each time $t > 0,$ by applying powers of the automorphisms to the lattice, we can transform so that the remapped simulation box $$\widetilde{L}_t = L_t M_1^{n_1} M_2^{n_2}$$ has a small stretch vector $\tilde{\varepsilon}_t = \hat{\varepsilon}_t + n_1 \hat{\omega_1} + n_2 \hat{\omega_2}.$ \subsection{Diagonalizable flow} Suppose that $A$ is diagonalizable, \begin{equation*} A = S D S^{-1}. \end{equation*} As pointed out for the planar case in Section~\ref{sec:2D_elong}, we can extend the above algorithm, by choosing $L_0 = S V^{-1}.$ We then have \begin{equation*} L_t M_1^{n_1} M_2^{n_2} = e^{A t} S V^{-1} M_1^{n_1} M_2^{n_2} = S e^{D t} V^{-1} M_1^{n_1} M_2^{n_2}. \end{equation*} The automorphisms act to bound the stretch vector corresponding to the diagonal term $e^{D t}.$ We note that since $S$ is not orthogonal if $A$ is nonsymmetric, the original lattice vectors $L_0$ are not orthogonal in that case. \subsection{Complex eigenvalues} It is also possible that $A$ has a pair of complex eigenvalues and a single real eigenvalue. We denote the spectrum of A as $\{ \varepsilon + i r, \varepsilon - i r, -2 \varepsilon\}.$ In this case, we write the real Jordan normal form for the matrix, \begin{equation*} A = S J_2 S^{-1}, \end{equation*} where $S$ is real and $J_2$ is the block-diagonal matrix \begin{equation*} J_2 = \left[ \begin{array}{rrr} \varepsilon & r & 0 \\ -r & \varepsilon & 0 \\ 0 & 0 & -2\varepsilon \end{array} \right]. \end{equation*} We decompose $J_2 = D + B$ where \begin{equation*} D = \left[ \begin{array}{rrr} \varepsilon & 0 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & -2\varepsilon \end{array} \right] \text{ and } B = \left[ \begin{array}{rrr} 0 & r & 0 \\ -r & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]. 
\end{equation*} We note that since $D B = B D,$ the matrix exponential splits into a rotation and a stretch, giving \begin{equation*} e^{A t} = S e^{J_2 t} S^{-1} = S e^{B t} e^{D t} S^{-1}, \end{equation*} where $e^{B t}$ is a rotation matrix. We again take initial lattice vectors $L_0 = S V^{-1}$ and control the size of the stretch vector \begin{equation*} \widehat{\varepsilon}_t = \left[ \begin{array}{r} \varepsilon t\\ \varepsilon t\\ -2 \varepsilon t \end{array} \right], \end{equation*} using the automorphisms $M_1$ and $M_2.$ No effort is made to undo the effect of $e^{B t}$ since it is simply a rotation. \subsection{Defective matrices} The final possible case is when $A$ is a defective matrix, that is, it has a repeated eigenvalue whose eigenspace does not have full rank. In three dimensions, a defective matrix can only occur for a matrix with a real spectrum, and so the only possible Jordan forms, up to rearrangement of the blocks, are \begin{equation*} J_3 = \left[ \begin{array}{rrr} \varepsilon & 1 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & -2\varepsilon \end{array}\right] \text{ or } J_4 = \left[ \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right]. \end{equation*} We can treat the $J_4$ case very similarly to the shear flow case in Section~\ref{sec:2D_shear}, using the identity \begin{equation*} e^{J_4 t} = \left[ \begin{array}{rrr} 1 & t & \frac{t^2}{2} \\ 0 & 1 & t \\ 0 & 0 & 1 \end{array}\right]. \end{equation*} We choose initial lattice basis $L_0 = S$ and note that at time $t_0 = 2,$ we have \begin{equation*} \begin{split} L_{t_0} &= e^{2 A} S \\ &= S e^{2 J_4} \\ &= S \left[ \begin{array}{rrr} 1 & 2 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{array} \right]\\ &= S M, \end{split} \end{equation*} where $M \in \SLZ{3}.$ We have not been able to generalize our algorithm to the case of $J_3,$ when $\varepsilon \neq 0.$ The difficulty lies with the off-diagonal terms of the matrix exponential $$ e^{ J_3 t} = \left[ \begin{array}{rrr} e^{\varepsilon t} & t e^{\varepsilon t} & 0 \\ 0 & e^{\varepsilon t} & 0 \\ 0 & 0 & e^{-2 \varepsilon t} \end{array} \right]. $$ One approach we considered is to find matrices $M_j \in \SLZ{3}$ and a common matrix $V$ such that $V^{-1} M_j V$ is upper triangular, in order to control the diagonal and off-diagonal terms at the same time, but we have not had success in such a construction. \section{Algorithm} \label{sec:algo} We now provide an explicit construction of the generalized KR boundary conditions algorithm. The following two matrices are in $\SLZ{3}$ and they commute: \begin{equation*} M_1 = \left[ \begin{array}{rrr} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{array} \right] \qquad M_2 = \left[ \begin{array}{rrr} 2 & -2 & 1 \\ -2 & 3 & -1 \\ 1 & -1 & 1 \end{array} \right]. \end{equation*} We choose the initial lattice vectors $L_0 = a V^{-1},$ where $V$ denotes the matrix of eigenvectors for $M_1$ and $M_2,$ and $a^3$ is the volume of the simulation box.
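These claims are easy to verify numerically. The following sketch (ours) checks that $M_1$ and $M_2$ commute, lie in $\SLZ{3}$, and have positive eigenvalues with linearly independent, mean-zero log-spectra; the printed $V^{-1}$ can be compared with the digits quoted next, with agreement up to the ordering and signs of the eigenvectors: \begin{verbatim}
import numpy as np

# Sketch: verify the stated properties of the automorphisms M_1 and M_2.
M1 = np.array([[1., 1., 1.], [1., 2., 2.], [1., 2., 3.]])
M2 = np.array([[2., -2., 1.], [-2., 3., -1.], [1., -1., 1.]])
assert np.allclose(M1 @ M2, M2 @ M1)         # commuting
assert np.isclose(np.linalg.det(M1), 1.0)    # in SL(3,Z)
assert np.isclose(np.linalg.det(M2), 1.0)

# M1 is symmetric with distinct eigenvalues, so its orthonormal
# eigenbasis V simultaneously diagonalizes the commuting M2.
_, V = np.linalg.eigh(M1)
omega1 = np.log(np.diag(V.T @ M1 @ V))       # log-spectrum of M1
omega2 = np.log(np.diag(V.T @ M2 @ V))       # log-spectrum of M2
assert np.isclose(omega1.sum(), 0.0) and np.isclose(omega2.sum(), 0.0)
assert np.linalg.matrix_rank(np.column_stack([omega1, omega2])) == 2
print(np.linalg.inv(V).round(3))             # compare with V^{-1} below
\end{verbatim}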
We fix the choice of ordering for the eigenvectors by giving the first few digits of $V^{-1}$, \begin{equation*} V^{-1} = \left[ \begin{array}{rrr} 0.591 & -0.737 & 0.328 \\ 0.737 & 0.328 &-0.591 \\ 0.328 & 0.591 & 0.737 \end{array} \right]. \end{equation*} Direct computation shows that the ordered spectra of the two operators are positive and the corresponding $\hat{\omega}_i,$ given by \begin{equation*} \hat{\omega}_1 \approx \left[ \begin{array}{r} - 1.178 \\ 1.619 \\ - 0.441 \end{array} \right] \qquad \hat{\omega}_2 \approx \left[ \begin{array}{r} 1.619 \\ - 0.441 \\ - 1.178 \\ \end{array} \right] \end{equation*} are linearly independent. Suppose that $A$ is written in real Jordan normal form $A = S J S^{-1}$ and $J$ is decomposed as $J = D + B$ where \begin{equation*} D = \left[ \begin{array}{rrr} \varepsilon_1 & 0 & 0 \\ 0 & \varepsilon_2 & 0 \\ 0 & 0 & \varepsilon_3 \end{array} \right] \text{ and } B = \left[ \begin{array}{rrr} 0 & r & 0 \\ -r & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]. \end{equation*} This encompasses both diagonalizable flow (where $B = 0$) and the case of complex eigenvalues (where $r \neq 0$ and necessarily $\varepsilon_1 = \varepsilon_2$), but does not include the defective matrix case~\eqref{eq:J_def}. For time $t \geq 0$, we define the reduced stretch $\widetilde{\varepsilon}_t$ as follows \begin{equation*} \frac{d}{d t} \, \widetilde{\varepsilon}_t = \left[ \begin{array}{r} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \end{array} \right], \qquad \widetilde{\varepsilon}_0 = \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right], \end{equation*} where $\widetilde{\varepsilon}_t$ is restricted to be within the unit cell \begin{equation*} \widehat{\Omega} = \left\{ \theta_1 \hat{\omega}_1 + \theta_2 \hat{\omega}_2 \ | \ \theta_1, \theta_2 \in \left(-\frac{1}{2}, \frac{1}{2}\right] \right\}, \end{equation*} by periodic boundary conditions. An example curve $\widetilde{\varepsilon}_t$ is depicted in Figure~\ref{fig:stretch}. The lattice basis vectors for the simulation are then defined to be \begin{equation*} \widetilde{L}_t = S e^{B t} e^{\widetilde{\varepsilon}_t} V^{-1}, \end{equation*} where we define $$e^{\widetilde{\varepsilon}_t} = \exp\left( \left[ \begin{array}{rrr} \widetilde{\varepsilon}_{t,1} & & \\ & \widetilde{\varepsilon}_{t,2} & \\ & & \widetilde{\varepsilon}_{t,3} \end{array} \right] \right).$$ This process can be carried out for arbitrarily long times, and the stretch $\widetilde{\varepsilon}_t$ stays bounded for all times. This gives the following pseudocode for the discretized version of the NEMD system (a short NumPy transcription of this loop appears in the next section): \\ Given $S, D, B,$ and the time step $\Delta t$, compute $(\delta_1, \delta_2)$ so that $\delta_1 \hat{\omega}_1 + \delta_2 \hat{\omega}_2 = [\varepsilon_1, \varepsilon_2, \varepsilon_3]^T.$ \\ For each time step do: \begin{enumerate} \item $\theta_i \leftarrow \theta_i + \delta_i \Delta t$ \item $\theta_i \leftarrow \theta_i - \mathrm{round}(\theta_i)$ \item $\tilde{\varepsilon}_t \leftarrow \theta_1 \hat{\omega}_1 + \theta_2 \hat{\omega}_2$ \item $\widetilde{L}_t \leftarrow S e^{B t} e^{\widetilde{\varepsilon}_t} V^{-1}$ \end{enumerate} Note that we recompute the lattice basis vectors at each step, and we do not explicitly apply automorphisms nor do we directly reset the lattice vectors. \begin{figure}[tb] \centerline{\input{figs/stretch.tex}} \caption{\label{fig:stretch} As the simulation progresses, $\widetilde{\varepsilon}_t$ traces a curve in the unit cell $\widehat{\Omega}$ within $\SS.$ Here, the unit cell $\widehat{\Omega}$ of the lattice in stretch space has been projected into the xy plane.
The lines within the parallelogram denote the evolution of $\widetilde{\varepsilon}_t$ during a simulation of uniaxial stretching flow. The depicted unit cell corresponds to the example automorphisms given in Section~\ref{sec:algo}.} \end{figure} \subsection{Minimum replica distance} The boundary conditions above limit the stretch $\widetilde{\varepsilon}_t$ to live within the unit cell $\widehat{\Omega}$ which is defined by the vectors~\eqref{logspec}. The minimum distance between a particle and a periodic replica within the simulation is given by \begin{equation*} d = \min_{\substack{{\mathbf n} \in {\mathbb Z}^3 \setminus \{ 0 \}\\ t \in \mathbb R^{\geq 0}} } \| {\mathbf q}_i + \widetilde{L}_t {\mathbf n} - {\mathbf q}_i \| \geq \min_{\substack{{\mathbf n} \in {\mathbb Z}^3 \setminus \{ 0 \} \\ \widetilde{\varepsilon} \in \widehat{\Omega} }} \| S e^{\widetilde{\varepsilon}} V^{-1} {\mathbf n}\|. \end{equation*} Using the boundedness of $\widehat{\Omega},$ we can limit the search to a small number of ${\mathbf n} \in {\mathbb Z}^3,$ and the minimization over $\widehat{\Omega}$ is then a quick computation. For the matrices in Section~\ref{sec:algo}, if the vectors of $S$ are orthogonal, the minimum distance is $d \approx 0.8198 a,$ where we recall that $a^3$ is the volume of the simulation box. \section{Numerics} In the following, we test the consistency of our algorithm by comparing computations for a WCA fluid under three-dimensional elongation to those presented in~\cite{bara95}. In previous works, the simulation time was restricted by the elongation of the unit cell, though the authors in~\cite{bara95} proposed a doubling scheme that increased the size of the unit cell to increase the simulation time. This came at the cost of additional computation. In the following, we show that our simulations using the generalized KR boundary conditions converge to the same macroscopic quantities even after several cell resets. We use the WCA potential~\cite{week71}, which is given by \begin{equation*} \phi(r) = \begin{cases} \displaystyle 4 \left[ \frac{1}{r^{12}} - \frac{1}{r^6} \right] + 1, &r \leq 2^{1/6}, \\ 0, &r > 2^{1/6}. \end{cases} \end{equation*} We simulate $N=512$ particles at the scaled temperature $T=0.722$ and fluid density $\rho = 0.8442.$ For consistency with previous works~\cite{bara95, hunt10}, we employ the SLLOD equations of motion~\cite{evan84} with a Gaussian (isokinetic) thermostat~\cite{evan07}, which is given by \begin{equation*} \begin{split} \frac{d {\mathbf q}}{dt} &= {\mathbf v}, \\ \frac{d {\mathbf v}}{dt} &= M^{-1} {\mathbf f} + A A {\mathbf q} - \alpha ({\mathbf v} - A {\mathbf q}), \\ \alpha &= \frac{(M^{-1} {\mathbf f} - A {\mathbf v} + A A {\mathbf q}) \cdot ({\mathbf v} - A {\mathbf q})}{({\mathbf v} - A {\mathbf q}) \cdot ({\mathbf v} - A {\mathbf q})}, \end{split} \end{equation*} where ${\mathbf q} \in \mathbb R^{3N}$ denotes the vector of all particle positions, ${\mathbf v} \in \mathbb R^{3N}$ denotes the corresponding velocities, and ${\mathbf f} \in \mathbb R^{3N}$ denotes the interaction forces on the particles. The factor $\alpha$ ensures that the relative kinetic energy $ \frac{1}{2} ({\mathbf v} - A {\mathbf q})^2$ is exactly preserved by the dynamics.
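Before turning to the results, we give a minimal NumPy transcription of the four-step boundary-condition update of Section~\ref{sec:algo}. This is a sketch with our variable names, specialized to a diagonal USF flow with $S = I$ and $B = 0$; it is not the production code used for the simulations: \begin{verbatim}
import numpy as np

# Sketch of the generalized KR lattice update for diagonal USF flow.
M1 = np.array([[1., 1., 1.], [1., 2., 2.], [1., 2., 3.]])
M2 = np.array([[2., -2., 1.], [-2., 3., -1.], [1., -1., 1.]])
_, V = np.linalg.eigh(M1)
Vinv = np.linalg.inv(V)
w1 = np.log(np.diag(V.T @ M1 @ V))           # omega_1
w2 = np.log(np.diag(V.T @ M2 @ V))           # omega_2

eps = np.array([1.0, -0.5, -0.5])            # USF velocity gradient diagonal
# Solve delta_1 w1 + delta_2 w2 = eps (exact: eps lies in span{w1, w2}).
delta = np.linalg.lstsq(np.column_stack([w1, w2]), eps, rcond=None)[0]

dt, theta = 2e-3, np.zeros(2)
for step in range(100000):                   # arbitrarily long run
    theta += delta * dt                      # step 1: advance
    theta -= np.round(theta)                 # step 2: remap into (-1/2, 1/2]
    eps_t = theta[0] * w1 + theta[1] * w2    # step 3: reduced stretch
    L_t = np.diag(np.exp(eps_t)) @ Vinv      # step 4: lattice basis
    assert np.abs(theta).max() <= 0.5        # deformation stays bounded
\end{verbatim}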
We run our simulations up to time $t_{\rm max}=20,$ with time step $\Delta t = 0.002.$ The initial positions are on a lattice with random velocities that are scaled so that the system has the temperature $T=0.722.$ We allow the system to decorrelate from the initial conditions up to time $t=2,$ and then average the desired observables until $t_{\rm max}.$ For the largest strains, the unit cell is remapped approximately $15$ times over the course of the simulation. We run ten realizations for each type of flow. We compute the virial stress tensor, \begin{equation} \label{virial} { {\sigma}} = - \frac{1}{\det L_t} \sum_{i =1}^{N} \left(M ({\mathbf v}_i- A {\mathbf q}_i) \otimes ({\mathbf v}_i- A {\mathbf q}_i) + \frac{1}{2} \sum_{\substack{i,j = 1\\j \neq i}}^{N} ( {\mathbf q}_i - {\mathbf q}_j) \otimes f^{(i j)}\right) \end{equation} where $$f^{(i j)} = - \phi'(|{\mathbf q}_i - {\mathbf q}_j|) \frac{{\mathbf q}_i - {\mathbf q}_j}{| {\mathbf q}_i - {\mathbf q}_j|}.$$ We also use the pressure tensor, $P = - \sigma.$ In Figure~\ref{fig:pressures}, we plot the pressures for three different elongational flow types, planar elongational flow (PEF), uniaxial stretching flow (USF), and biaxial stretching flow (BSF), which have the respective velocity gradients \begin{equation*} A_{PEF} = \left[ \begin{array}{rrr} \varepsilon & & \\ & -\varepsilon & \\ & & 0 \\ \end{array} \right] \quad A_{USF} = \left[ \begin{array}{rrr} \varepsilon & & \\ & -\varepsilon / 2 & \\ & & -\varepsilon / 2 \\ \end{array} \right] \quad A_{BSF} = \left[ \begin{array}{rrr} - \varepsilon & & \\ & \varepsilon / 2 & \\ & & \varepsilon / 2 \\ \end{array} \right] \end{equation*} where $\varepsilon > 0.$ In Figure~\ref{fig:pressures}(a) the pressure in the extensional direction is plotted versus $\varepsilon$, and in Figure~\ref{fig:pressures}(b) the pressure in the compression direction is plotted versus $\varepsilon$. These plots show close agreement with the plots~\cite[Fig.~8 and Fig.~9]{bara95}. \begin{figure} \centerline{ \input{figs/pressure_vs_extension.tex} \input{figs/pressure_vs_extension_2.tex} } \centerline{ (a)\hspace{3in} (b)} \caption{\label{fig:pressures}Pressures for PEF, USF, and BSF flows. Components of the pressure tensor (which is the negative stress~\eqref{virial}) are plotted against the largest magnitude component of the velocity gradient tensor. In (a) the pressure in the direction of extension is plotted, while in (b) the pressure in the direction of contraction is plotted. These plots show close agreement with the plots~\cite[Fig.~8 and Fig.~9]{bara95}.} \end{figure} For a given velocity gradient $A,$ we define $ \gamma = A + A^T,$ and define the generalized viscosity~\cite{houn92} \begin{equation*} \eta = \frac{ \sigma : \gamma}{ \gamma : \gamma}, \end{equation*} where $A : B = \sum_{i,j} A_{ij} B_{ij}$ denotes the contraction product of a pair of tensors. In Figure~\ref{fig:viscosity} we plot the viscosity against the square root of $\varepsilon.$ \begin{remark} We note that the WCA fluid we simulate is a simple fluid, with short decorrelation time, so that it is possible to use finite duration simulations. Our algorithm has more practical application for complex molecular systems where the decorrelation time is longer than allowed by traditional, time-restricted simulations. The above numerics are to show consistency of the computational results in a simple case. \end{remark} \begin{figure} \centerline{\input{figs/viscosity_vs_extension.tex}} \caption{\label{fig:viscosity}Viscosity for PEF, USF, and BSF flows.
These plots show close agreement with the plots~\cite[Fig.~6]{bara95}.} \end{figure} \section{Conclusion} We have generalized the KR boundary conditions to handle all homogeneous, incompressible three-dimensional flows whose velocity gradient is a nondefective matrix. In particular, they can treat the cases of uniaxial and biaxial flow, which could not be treated with the original KR boundary conditions. The boundary conditions allow the simulations to continue for arbitrarily long times, which is important for the simulation of complex fluids with large decorrelation times. \section*{Acknowledgements} The author would like to thank Gabriel Stoltz for a careful reading of an early manuscript, as well as Bob Kohn for helpful discussions.
\section{Introduction} Lasers emit light over a range of wavelengths described by the laser line shape function.\cite{Csele,Milonni,Pedrotti} For a HeNe laser operating under normal conditions, the main source of laser line shape broadening is Doppler broadening in the lasing medium, resulting in a Gaussian gain profile (see Fig.~\ref{fig:long_modes2}). The laser does not emit a continuous spectrum of wavelengths over this Gaussian gain-permitted wavelength range; rather, it can only lase when there is resonance in the lasing cavity. For the TEM$_{00}$ mode there exists an integer number, $N$, of half wavelengths between the mirrors of the laser cavity, resulting in the allowed resonance wavelengths \begin{equation} \lambda_N = \frac{2nL}{N}, \end{equation} where $L$ is the length of the laser cavity and $n$ is the index of refraction of the medium filling the laser cavity. The laser output consists of discrete wavelength peaks with power dictated by the Gaussian line shape envelope and the unsaturated gain threshold (see Fig.~\ref{fig:long_modes2}). These peaks are called longitudinal cavity modes. When the laser cavity supports more than one peak (that is, where the gain is greater than the losses for those peaks), the laser output consists of multiple discrete wavelengths. If the light from these multiple modes is projected onto a detector (for example, a photodiode), then the photocurrent will oscillate at the difference frequency, producing a beat signal. The beat frequency of interest is at the frequency due to the spacing between adjacent longitudinal modes. The frequency of the $N$th mode can be derived from Eq.~(1) to be $f_N = N(c/2nL)$. Thus the beat frequency is given by \begin{equation} \Delta f = {c\over{\lambda_{N+1}}} - {c\over{\lambda_{N}}} = {c \over 2nL}, \end{equation} and therefore $L = c/2n\Delta f$, indicating that the cavity length is directly proportional to the reciprocal of the beat frequency.\cite{Razdan} Observing the variation in beat frequency between adjacent longitudinal modes with the cavity length $L$ gives the speed of light. \begin{figure}[h!] \centering \includegraphics[width=0.37\textwidth]{DOrazio_Fig01} \caption{Schematic illustration of the longitudinal cavity modes and gain bandwidth of a laser. In the situation shown, the net gain minus losses is sufficient for laser output at only two longitudinal cavity modes. The beat frequency that we observe to measure the speed of light is the spacing between these adjacent modes.} \label{fig:long_modes2} \end{figure} Accurate measurements of the beat frequency are accomplished inexpensively by directing the output of the laser onto a high-speed photodetector\cite{Detectors} monitored with an RF spectrum analyzer or frequency counter.\cite{CSA,Phillips,Conroy} This approach has been demonstrated in Ref.~\onlinecite{Brickner} in an undergraduate experiment with the goal of measuring the speed of light using the relation in Eq.~(2) for a single laser cavity length and single corresponding beat frequency. The method is easily understood because it is analogous to investigations of waves on a string. 
It has a drawback, however: the inability to obtain a precise measurement of the cavity length (from the inner-cavity side of the output coupler to the inner-cavity side of the back mirror) inevitably leads to results that are only marginally better than those obtained with standard time-of-flight or Foucault methods commonly used in undergraduate physics laboratories, which typically yield measurements accurate to within $\approx \pm 1 \%$.\cite{Bates,Fiber,Foucault} Minor improvements on this method can be made by collecting data for multiple lasers of different lengths and plotting the beat frequency as a function of cavity length. In addition to the uncertainty in length between the mirrors, there is also the problem of not knowing a precise (and constant) value for the index of refraction inside the gas tube. These obstacles can be overcome by using the laser as a simple light source, amplitude modulated at the intermode beat frequency, and measuring the phase difference between detectors placed at two different locations along the laser path.\cite{Barr} This modulation technique improves the measurement of the speed of light by an order of magnitude, but at the cost of increasing the conceptual complexity. The introduction of the adjustable-length HeNe laser significantly reduces the consequences of uncertainty in mirror location and the index of refraction, and improves the measurement by an order of magnitude over the modulation technique, while retaining the conceptual simplicity of the original study of Ref.~\onlinecite{Brickner}. \section{Methods} Figure~\ref{fig:HeNe_set-up} represents a schematic of the experimental set-up. The laser has an adjustable, open-cavity design with a 28\,cm HeNe plasma tube terminated on one side with a mirror and on the other with a Brewster window. The Brewster window suppresses modes with polarization orthogonal to the Brewster plane, so that all supported modes have the same polarization and thus mix effectively in the photodetector.\cite{Csele, Milonni} The experiment can be conducted without a Brewster window, but due to mode competition, adjacent longitudinal modes are typically polarized orthogonal to each other and do not mix in the photodetector, resulting in an observed signal with twice the expected frequency.\cite{Tang} If a Brewster window is not present, the situation can be remedied by placing a linear polarizer in front of the photodetector to project the polarizations of adjacent modes onto a common axis. \begin{figure}[h!] \centering \includegraphics[width=0.37\textwidth]{DOrazio_Fig02} \caption{A schematic of the experimental set-up. The length of the cavity can be adjusted over a range of approximately 16\,cm by sliding the output coupler along an optical track. The mode structure of the laser output is monitored using a scanning Fabry-Perot interferometer with a free spectral range of 1.5\,GHz and a finesse of 250. The mode structure is controlled via an adjustable iris in the cavity. The portion of the beam that is not analyzed by the Fabry-Perot is incident on a fast photodetector (1\,ns rise time), which is coupled to an RF spectrum analyzer on which the beat signal between adjacent longitudinal modes is observed.
(NPBS = non-polarizing beam splitter.)} \label{fig:HeNe_set-up} \end{figure} The variable-length cavity system has been reported and widely used in undergraduate labs to explore laser cavity modes and stability.\cite{Brandenberger,Polik,Jackson,Melles} The output coupler is a 0.60\,m radius-of-curvature mirror held in a gimbal mount. It is attached to a sliding track, allowing the cavity length to be changed from $\approx 38$\,cm (lower bound limited by the length of the plasma tube) up to $\approx 54$\,cm (upper bound restricted by laser losses). Typically we see two or three longitudinal modes separated by about 300\,MHz within the 1.5\,GHz gain bandwidth of the HeNe medium.\cite{Pedrotti} Inside the cavity, between the output coupler and the plasma tube, is an iris used to restrict gain in the region away from the optical axis of the cavity and thus force the laser to emit in the TEM$_{00}$ (Gaussian) mode. Restricting the laser to a single transverse mode is necessary because higher-order modes produce additional beat frequencies that complicate the RF spectrum. The allowed frequencies for the TEM$_{00}$ mode are given by Eq.~(1), and the allowed frequencies for higher-order TEM$_{ij}$ modes are given by \begin{equation} f_{Nij} = {c \over{2L}} \left[ N + {1 \over{\pi}}(i + j + 1)\cos^{-1}(\sqrt{g_1g_2}) \right], \end{equation} where $N$ is the same mode number as in Eq.~(1) and $g_1g_2$ is the resonator stability.\cite{Milonni, Goldsborough} Thus if TEM$_{00}$ and TEM$_{ij}$ are allowed to exist simultaneously in the cavity, beat frequencies will exist at $c/2L$ and $(c/2L)\left[1 \pm (1/\pi)(i + j)\cos^{-1}\sqrt{g_1g_2}\,\right]$. These additional beat frequencies could provide an interesting method for measuring the resonator stability, $g_1g_2$, for a fixed cavity length. \subsection{Cavity Length Measurement} As noted in Sec.~I, we cannot accurately measure the entire laser cavity length due to the uncertainty of the position of the mirror in the HeNe tube. In addition, the index of refraction within the He- and Ne-filled tube is different from that in the rest of the cavity, which is filled with air (and a small length of glass at the window). Because we do not know the index of refraction inside the laser plasma tube, we modify Eq.~(2) by splitting $L$ into the two main regions within the laser cavity that have different indices of refraction. Let $n_{\mathrm{HeNe}}$ be the index of refraction inside the laser plasma tube and $n_{\mathrm{air}}$ be the index of refraction of air between the Brewster window and the output coupler. Then, $nL = n_{\mathrm{HeNe}}L_{\mathrm{HeNe}} + n_{\mathrm{air}}L_{\mathrm{air}}$, where additional fixed components such as the glass window and dielectric mirror coatings are assumed in the first term. In practice neither of these $L$ values is simple to measure accurately, and thus we split $L_{\mathrm{air}}$ further into two arbitrary pieces (a fixed length and a measured variable length) such that $nL = n_{\mathrm{HeNe}}L_{\mathrm{HeNe}} + n_{\mathrm{air}}[L_{\mathrm{fixed}} + \Delta L]$ (see Fig.~\ref{fig:HeNe_set-up}). We substitute this expression into Eq.~(2) and obtain \begin{equation} \Delta L = {c \over{2n_{\mathrm{air}} \Delta f}} - {\frac{n_{\mathrm{HeNe}}}{n_{\mathrm{air}}}}L_{\mathrm{HeNe}} - L_{\mathrm{fixed}}, \end{equation} which is the equation of a line with slope $c/ 2n_{\mathrm{air}}$.
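To illustrate how the speed of light is extracted from this linear relation, the following sketch fits synthetic data generated from Eq.~(4). The numbers are illustrative assumptions, not our measurements, and a plain unweighted fit stands in for the weighted regression described in Sec.~III: \begin{verbatim}
import numpy as np

# Sketch: extract the speed of light in air from the slope of
# Delta L versus 1/Delta f, Eq. (4).  Numbers are illustrative only.
c_air = 2.9971e8                        # m/s, value we try to recover
rng = np.random.default_rng(0)

dL = np.linspace(0.0, 0.16, 28)         # 28 cavity-length settings (m)
offset = 0.40                           # stands in for the fixed-length terms
df = c_air / (2.0 * (offset + dL))      # beat frequencies, ~270-375 MHz
df += rng.normal(scale=9e3, size=df.size)   # ~9 kHz pushing/pulling jitter

# Eq. (4): dL = (c_air/2)*(1/df) - const, so the slope gives c_air/2.
slope, intercept = np.polyfit(1.0 / df, dL, 1)
print(f"c_air = {2.0 * slope:.4e} m/s")     # recovers ~2.997e8 m/s
\end{verbatim}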
Equation (4) allows us to measure the cavity length to an arbitrarily chosen reference point fixed between the laser plasma tube output and the output coupler. In practice we measure $\Delta L$ from a fixed block near the sliding track to the base of the output coupler using digital vernier calipers. The speed of light is then found from the slope of a $\Delta L$ versus $1/\Delta f$ plot. The unknown details of $n_{\mathrm{HeNe}}$, $L_{\mathrm{HeNe}}$, and similar terms for the glass window are gathered in the $y$-intercept. This algebraic trick works only when the laser is in the TEM$_{00}$ mode, and would not work if the laser were in transverse TEM$_{Nij}$ modes (where $i$ and $j$ are nonzero), as represented in Eq.~(3). More elegantly, we are taking the derivative of Eq.~(2) in the region of air where we are free to move the output coupler as shown: \begin{equation} \frac{dL}{d(\frac{1}{\Delta f})} = \frac{c}{2n_{\rm air}}. \end{equation} \subsection{Frequency Measurement} For the range of laser cavity lengths in the set-up ($\approx 0.54$\,m to 0.38\,m), the beat frequency varies from $\approx 280$\,MHz to 390\,MHz, a change of 110\,MHz over 16\,cm. The signal from the photodetector was analyzed with an RF spectrum analyzer with a maximum span of 3\,GHz and a minimum resolution bandwidth of $10$\,Hz.\cite{Detectors,CSA} A frequency counter could in principle be used, but would not provide insight into additional beat frequencies from transverse mode contributions. In addition to analyzing the laser output with the photodetector and spectrum analyzer, we split off a portion of the laser output to a scanning Fabry-Perot interferometer to observe its longitudinal mode structure.\cite{Fabry} The Fabry-Perot spectrum shows the number of modes and their amplitudes (and therefore the amplitude of the gain curve). The amplitude of the modes provides information on frequency pulling and pushing, which cause small but statistically significant shifts in the beat frequency. \textit{Frequency pulling} refers to a change in the spacing of longitudinal modes under a gain curve resulting from the different indices of refraction experienced by each mode. Across the range of frequencies that lie within the laser gain curve, the index of refraction varies steeply near the resonance transition, being lower or higher for frequencies below or above the resonance transition. From Eq.~(1) we see that the allowed frequencies below the gain peak then occur at higher frequencies than would be expected and vice versa. The result is a ``pulling'' of the longitudinal modes toward the center of the gain curve, effectively decreasing the difference frequency between the two. The amount by which the modes are pulled together and the beat frequency is lowered is a function of the relative intensity of the two heterodyning modes. For a given gain curve amplitude we find that the beat frequency varies over $\approx 30$ to 40\,kHz for the full range of mode relative intensities, in agreement with other studies.\cite{Lindberg} \textit{Frequency pushing} refers to the increase of the difference frequency between longitudinal modes as the field intensity in the laser cavity increases.\cite{Siegman, Shimoda} As the gain in the cavity is increased, the beat frequency also increases. We observe this increase in our set-up; when two adjacent longitudinal modes are observed with identical intensities, for a $\approx 10$\% change in total amplitude of the gain curve, there is a $\approx 9$\,kHz change in beat frequency.
Figure~\ref{fig:pushing} shows this effect over a wide range of amplitudes, revealing a linear relation between the change in the intensity of the modes and the frequency-pushing shift. When taking data to measure the speed of light, we are able to hold the amplitude fluctuation to within $\pm 10$\%. To minimize inconsistencies due to frequency pulling, we use the Fabry-Perot to ensure that each measurement (that is, the beat frequency at each cavity length) is taken for two longitudinal modes at the same relative intensities (see Fig.~\ref{fig:lmodes_neq}). The refractive index within the laser tube is then the same for both modes and very similar for all beat-frequency measurements, reducing the pulling effect. More complex methods of ensuring that the two longitudinal modes are symmetric about the frequency of the emission line have been implemented in other studies.\cite{Balhorn} These involve using a laser without Brewster windows and subtracting the outputs of the two orthogonally polarized modes detected with two photodetectors and a polarizing beam splitter. This difference signal controls an electronic feedback loop that makes slight adjustments to the length of the cavity. We have not attempted such elaborate feedback schemes. Instead, students make the necessary adjustments by applying gentle pressure to the optical table, which changes the cavity length on the micron scale. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{DOrazio_Fig03} \caption{A sample plot of beat frequency as a function of gain-curve amplitude, as read from the Fabry-Perot transmission, showing the effects of frequency pushing. The uncertainty in the gain-curve amplitude of $\pm 10\%$ corresponds to an 18\,kHz total frequency variation, equivalent to a $\pm 9$\,kHz uncertainty in the beat frequency. The $0\%$ mark in this figure refers to the desired amplitude at which the frequency measurement is to be taken.} \label{fig:pushing} \end{figure} To counteract inconsistencies due to frequency pushing, we use the Fabry-Perot to ensure that each measurement is taken with the longitudinal modes at the same total amplitude and thus at the same laser intensity (see Fig.~\ref{fig:lmodes_eq}). The laser power is controlled by adjusting the intra-cavity iris to change the cavity loss. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{DOrazio_Fig04} \caption{Screen shots from the oscilloscope showing the transmission of the scanning Fabry-Perot interferometer. The laser output power is the same in both cases. (a) shows an instance where the two mode intensities are asymmetrical around the center of the gain curve, whereas (b) shows the two modes when they have equal intensities. Due to frequency pulling, the two instances will produce beat-frequency values differing by a few kHz.} \label{fig:lmodes_neq} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{DOrazio_Fig05} \caption{Screen shots from the oscilloscope showing the transmission of the scanning Fabry-Perot interferometer. Both show two longitudinal modes at the same relative intensity, and thus each exhibits the same frequency-pulling induced effects. (a) Two modes when the laser is operating at a higher gain setting than in (b). Due to frequency pushing, the beat frequency produced by the modes in (a) is higher than the beat frequency produced by the modes in (b).
} \label{fig:lmodes_eq} \end{figure} \section{Data Analysis and Results} Figure~\ref{fig:HeNe_Data_Plot} presents experimental data for 28 cavity lengths. The uncertainty in our $\Delta L$ measurement is $\pm 1 \times 10^{-5}$\,m, dictated by the measurement limit of the digital vernier calipers. The uncertainty in our beat frequencies is dominated by frequency variability due to frequency pulling and pushing and has been minimized with the use of the Fabry-Perot interferometer. Due to frequency pulling and pushing, a change in the relative or total intensities of the heterodyning longitudinal modes corresponds to a change in the beat frequency. Thus the uncertainty in the beat frequency is found by estimating the precision to which we can achieve both the desired relative mode intensity and the desired gain-curve amplitude. Using the Fabry-Perot interferometer, we find that we can steadily hold the two longitudinal modes at equal relative intensities, resulting in a negligible uncertainty of $\approx \pm 2$\,kHz due to frequency pulling. Most of the uncertainty comes from frequency pushing, because it is not as simple to hold the total amplitude of the gain curve at a fixed value. To estimate this uncertainty, the precision to which the amplitudes of the modes can be held constant is converted into an uncertainty in frequency from the spread of beat frequencies observed on the spectrum analyzer. We observe that by adjusting the position and aperture size of the iris in the resonator, we can manipulate the output to have two longitudinal modes with equal intensity and an overall gain amplitude that is constant to within $\pm 10\%$. Figure~\ref{fig:pushing} shows the beat frequency as a function of the total mode amplitude for our system. A $\pm 10\%$ variation in the total mode amplitude corresponds to an uncertainty in a single measurement of the beat frequency of $\pm 9$\,kHz. The uncertainty in the frequency measurement, $\sigma_{\Delta f}$, and the uncertainty in the length measurement, $\sigma_{\Delta L}$, are the same for every data point, but the uncertainty in the reciprocal beat frequency, $\sigma_{1/\Delta f}=\sigma_{\Delta f}/(\Delta f)^2$, varies from point to point because it depends on $\Delta f$. Additionally, the equivalent uncertainty in $\Delta L$ due to the uncertainty in $\Delta f$ is of the same order of magnitude as $\sigma_{\Delta L}$. That is, \begin{equation} {{d (\Delta L)} \over{d ({1 \over{\Delta f}})}} \sigma_{1/\Delta f} \approx \sigma_{\Delta L}. \end{equation} For this reason, a weighted least-squares regression incorporating uncertainty in both variables is performed for the $\Delta L$ versus $1/\Delta f$ data.\cite{Bevington} The final result for the speed of light in air, based on the data plotted in Fig.~\ref{fig:HeNe_Data_Plot}, is \begin{equation} c = (2.9972 \pm 0.0002)\times10^8\,\mbox{m/s}. \end{equation} The uncertainty of $\pm 0.0002\times10^8$\,m/s is small enough to discriminate between the speed of light in air ($2.9971 \times 10^8$\,m/s for $n_{\rm air} = 1.00027$) and the speed of light in a vacuum ($2.9979 \times 10^8$\,m/s). \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{DOrazio_Fig06} \caption{The 28 data points are fit using a weighted least-squares regression. Error bars are too small to display on this scale.
We find a slope of $c/2n_{\rm air} = (1.4986 \pm 0.0001) \times 10^8$\,m/s.} \label{fig:HeNe_Data_Plot} \end{figure} The measured speed of light yields an index of refraction for the air in our lab of $n_{\rm air} = 1.00024 \pm 0.00006$. We compare this value to the index of refraction of air as a function of temperature, wavelength, pressure, and humidity. At conditions of $20^{\circ}$C, 632.8\,nm, 1\,atm, and 40\% relative humidity, the accepted index of refraction of air is 1.00027.\cite{NIST} No realistic changes in relative humidity, room temperature, or atmospheric pressure would significantly affect this comparison. Therefore, the method described here does not have the precision needed to demonstrate the effects of atmospheric fluctuations on the index of refraction. \section{Conclusion} This experiment exposes students to a variety of experimental and mathematical techniques, demonstrates the importance of uncertainty in measurement, provides a meaningful context for using weighted regression, and familiarizes the student with three ubiquitous instruments: the laser, the Fabry-Perot interferometer, and the RF spectrum analyzer. In addition, the experiment yields satisfying results, allowing measurement of the speed of light to a precision that differentiates between the speed of light in air and the speed of light in a vacuum. The precision of the measurement is limited by both the precision of our length measurement and our ability to minimize uncertainties due to frequency pushing and pulling. One could improve the length measurements with a precision linear stage, and one could lock the HeNe laser so that the longitudinal modes are held at the same amplitude, but both of these improvements would be beyond the scope of an intermediate physics laboratory course. \begin{acknowledgments} The authors would like to thank all of the Juniata College students who have performed this measurement in the Advanced Physics Lab over the past five years. The students who were particularly instrumental in improving the experimental technique or data analysis have been included as authors. We also thank the reviewers for their insightful comments. This work has been supported by the von Liebig Foundation and NSF PHY-0653518. \end{acknowledgments}
\section{Introduction} Correlations in ultracold atomic gases arise from the interplay of quantum statistics, interactions, and thermal and quantum fluctuations. Recently, a lot of progress has been made experimentally to probe and characterize these correlations \cite{stenger99,altman04,bloch:885,gericke08}. The one-dimensional Bose gas is a particularly interesting system, as quantum correlations generally play a larger role than in three-dimensional Bose-Einstein condensates, and regimes with very different correlation properties can be probed experimentally~\cite{paredes04,weiss04}. In these experiments, elongated ``spaghetti'' traps are created by optical lattices, which confine the atomic motion in the transverse dimensions to zero-point quantum oscillations~\cite{goerlitz01}. Thus, the systems become effectively one-dimensional. Theoretically, interactions of the rarefied atoms in one-dimensional waveguides are well described by effective $\delta$-function interactions \cite{olshanii98}. The resulting model of a one-dimensional Bose gas is an archetype of an integrable but non-trivial many-body system that has been receiving long-standing interest from physicists and mathematicians alike. The model was first solved with the Bethe ansatz by Lieb and Liniger \cite{lieb63:1,lieb63:2}, who calculated the ground-state and excitation energies. Depending on the value of the dimensionless coupling strength, the Lieb-Liniger model describes various regimes with the corresponding correlations. Although exactly solvable, the model does not admit a complete analytic solution for the correlation functions. To date, this remains a complicated and challenging problem in 1D physics~\cite{korepin93:book,giamarchi04:book}. Dynamical density-density correlations can be measured in cold atoms by two-photon Bragg scattering \cite{stenger99,ozeri04}. Theoretically, they are described by the dynamic structure factor (DSF) \cite{pitaevskii03:book} \begin{equation} S(k,\omega)=L\int \frac{\d t\d x}{2\pi\hbar}\,e^{i(\omega t-k x)} \langle0|\delta\hat{\rho}(x,t)\delta\hat{\rho}(0,0)|0\rangle , \label{eqn:dsfdef} \end{equation} where $\delta\hat{\rho}(x,t)\equiv\hat{\rho}(x,t)-n$ is the operator of density fluctuations and $n=N/L$ is the equilibrium density of particles. We consider zero temperature, where $\langle0|\ldots|0\rangle$ denotes the ground-state expectation value. The DSF is proportional to the probability of exciting a collective mode from the ground state with the transfer of momentum $k$ and energy $\hbar\omega$, as can be seen from the energy representation of Eq.~(\ref{eqn:dsfdef}) \begin{equation} S(k,\omega)=\sum_m |\langle0|\delta\hat{\rho}_k|m\rangle|^2\delta(\hbar\omega-E_m+E_0), \label{eqn:dsfenergy} \end{equation} where $\delta\hat{\rho}_k=\sum_{j}e^{-i k x_j}-N\Delta(k)$ is the Fourier component of $\delta\hat{\rho}(x)$, and $\Delta(k)=1$ at $k=0$ and $\Delta(k)=0$ otherwise. Once the DSF is known, the static structure factor $S(k)$ and the pair distribution function $g(x)$ can be calculated by integration, as discussed in Sec. III.B. Previously known results for the DSF of the one-dimensional Bose gas come from Luttinger liquid theory, which predicts a power-law behavior of the DSF at low energies in the vicinity of the momenta $k=0,2\pi n, 4\pi n,\ldots$ and yields universal values for the exponents \cite{haldane81,castro_neto94,astrakharchik04}.
In the regime of strong interactions, we have previously derived perturbatively valid expressions covering arbitrary energies and momenta at zero \cite{brand05} and finite temperature~\cite{cherny06}. For finite systems, it is possible to compute the correlation functions numerically, using the results of algebraic Bethe ansatz calculations \cite{caux06,caux07}. Finally, the exact power-law behavior along the limiting dispersion curve of the collective modes has recently been calculated in Refs.~\cite{khodas07,imambekov08}. These exponents differ from those predicted by Luttinger liquid theory, raising the question of whether the different results are compatible with each other. We address this question in Sec.~\ref{sec:apprexpr} of this paper, where we show that the results can be reconciled by taking appropriate limits. The apparent difference between the edge exponents valid along the dispersion curves and the Luttinger liquid result in the limit of vanishing energy can be traced back to the fact that the dispersion relations are curved and not straight, as is presumed by Luttinger liquid theory. The exact values of the exponents found in Refs.~\cite{khodas07,imambekov08} are of importance; however, they are not sufficient for practical estimates of the DSF as long as the prefactors are not known. In this paper we construct an approximate formula for the DSF \cite{noteconf} based on the exponents of Refs.~\cite{khodas07,imambekov08}. Within the proposed scheme, the prefactor can be found using the well-known $f$-sum rule (see, e.g., Ref.~\cite{pitaevskii03:book}). The result turns out to be consistent with numerical results by Caux and Calabrese \cite{caux06}. It is also compatible with the results of Luttinger liquid theory \cite{haldane81,castro_neto94,astrakharchik04} and perturbation theory \cite{brand05}. The approximate formula, in effect, takes into account single quasiparticle-quasihole excitations but neglects multiparticle excitations. We also present approximate expressions for the static structure factor and for the density-density correlation function, which are derived from the approximation for the DSF. \section{Exact results for dynamic structure factor in Lieb-Liniger model} \label{sec:exact} We model cold bosonic atoms in a waveguide-like micro trap by a simple 1D gas of $N$ bosons with point interactions of strength $g_{\rm B}>0 $ \begin{equation} H = \sum_{i=1}^N -\frac{\hbar^2}{2 m}\frac{\partial^2}{\partial x_i^2} + g_{\mathrm{B}} \sum_{1\leqslant i<j\leqslant N} \delta(x_i - x_j) \label{LLham} \end{equation} and impose periodic boundary conditions on the wave functions. The strength of interactions can be measured in terms of the dimensionless parameter $\gamma= m g_{\mathrm{B}}/(\hbar^2 n)$. In the limit of large $\gamma$, the model is known as the Tonks-Girardeau (TG) gas. In this limit, it can be mapped onto an ideal \emph{Fermi} gas, since infinite contact repulsions emulate the Pauli principle. In the opposite limit of small $\gamma$, we recover the Bogoliubov model of weakly interacting bosons. \subsection{DSF expansion in $1/\gamma$} For finite $\gamma$, the model can also be mapped onto a Fermi gas \cite{cheon99} with local interactions, inversely proportional to $g_{\mathrm{B}}$ \cite{girardeau04,granger04,brand05,cherny06}. Using the explicit form of the interactions, one can develop a time-dependent Hartree-Fock scheme \cite{brand05,cherny06} in the strong-coupling regime with small parameter $1/\gamma$.
The scheme yields the correct expansion of the DSF up to first order \cite{brand05,cherny06} \begin{equation} S(k,\omega)\frac{\varepsilon_{\mathrm F}}{N}= \frac{k_{\mathrm F}}{4 k}\left(1+\frac{8}{\gamma}\right) +\frac{1}{2\gamma}\ln \frac{\omega^{2}-\omega_{-}^{2}} {\omega_{+}^{2}-\omega^{2}}+ O\left(\frac{1}{\gamma^2}\right), \label{DSFlinear} \end{equation} for $\omega_{-}(k)\leqslant\omega\leqslant\omega_{+}(k)$, and zero elsewhere \cite{note1}. The symbol $O(x)$ denotes terms of order $x$ or smaller. Here $\omega_\pm(k)$ are the limiting dispersions \cite{note} that bound the quasiparticle-quasihole excitations (see Fig.~\ref{fig:omplmi}); in the strong-coupling regime they take the form \begin{equation} \omega_\pm(k)={\hbar |2 k_{\mathrm F} k \pm k^2|}(1-4/\gamma)/{(2 m)} +O(1/\gamma^2). \label{ompmstrong} \end{equation} By definition, $k_{\mathrm F}\equiv\pi n$ and $\varepsilon_{\mathrm F}\equiv\hbar^{2}k_{\mathrm F}^{2}/(2m)$ are the Fermi wave vector and energy of the TG gas, respectively. \subsection{Link to Luttinger liquid theory} \label{LLth} Luttinger liquid theory describes the behavior of the DSF at low energies for arbitrary strength of interactions \cite{haldane81,astrakharchik04}. In particular, one can show \cite{astrakharchik04,castro_neto94} that in the vicinity of the ``umklapp'' point ($k=2\pi n$, $\omega =0$) it is given by \begin{equation} \frac{S(k,\omega)}{N}=\frac{n c}{\hbar\omega^{2}} \left(\frac{\hbar\omega}{m c^{2}}\right)^{2K} A(K)\left(1-\frac{\omega^{2}_{-}(k)}{\omega^2}\right)^{K-1} \label{pitdsf} \end{equation} for $\omega\geqslant\omega_{-}(k)$, and zero otherwise. Within Luttinger liquid theory, the dispersion is \emph{linear} near the umklapp point: $\omega_{-}(k)\simeq c|k-2 \pi n|$. By definition, \begin{equation} K\equiv \hbar\pi n/(m c) \label{Kdef} \end{equation} and $c$ is the sound velocity. For repulsive bosons, the value of the parameter $K$ lies between $1$ (TG gas) and $+\infty$ (ideal Bose gas). In the strong-coupling regime, the linear behavior of the dispersions (\ref{ompmstrong}) at small momentum determines the sound velocity, which allows us to calculate the value of the Luttinger parameter \begin{equation} K=1 +4/\gamma + O(1/\gamma^2). \label{Kstrong} \end{equation} The coefficient $A(K)$ is model-dependent; in the Lieb-Liniger model, it is known in two limiting cases: $A(K)=\pi/4$ at $K=1$ and $A(K)\simeq 8^{1-2K}\exp(-2\gamma_{\mathrm{c}}K)\pi^{2}/\Gamma^{2}(K)$ for $K\gg 1$ \cite{astrakharchik04}, where $\gamma_{\mathrm{c}}=0.5772\ldots$ is the Euler constant and $\Gamma(K)$ is the gamma function. By comparing the first-order expansion (\ref{DSFlinear}) in the vicinity of the umklapp point with Eq.~(\ref{pitdsf}) and using the expansion (\ref{Kstrong}), one can easily obtain the model-dependent coefficient at large but \emph{finite} interactions, when $K-1\ll 1$: \begin{align} \label{akseries} A(K) =\frac{\pi}{4}[1 - \left(1+4\ln 2\right)(K-1)] + O\left((K-1)^2\right). \end{align} Note that the relation (\ref{pitdsf}) leads to different exponents precisely at the umklapp point and outside of it: \begin{equation} \label{pitdsfexp} S(k,\omega)\sim \left\{\begin{array}{ll} \omega^{2(K-1)},& k=2\pi n,\\ (\omega-\omega_{-})^{K-1},& k\not=2\pi n. \end{array}\right.
\end{equation} \subsection{Exact edge exponents from the Lieb-Liniger solutions} \begin{figure}[tb] \includegraphics[width=.8\columnwidth]{dsf_c10_caux.eps} \caption{\label{fig:omplmi} (Color online) Numerical values of the DSF (\ref{eqn:dsfenergy}) for the coupling parameter $\gamma = 10$ \cite{caux06}. The dimensionless value of the rescaled DSF $S(k,\omega)\varepsilon_{\mathrm F}/N$ is shown in shades of gray between zero (white) and 1.0 (black). The upper and lower solid (blue) lines represent the dispersions $\omega_+(k)$ and $\omega_-(k)$, respectively, limiting the single ``particle-hole'' excitations in the Lieb-Liniger model at $T=0$. The dispersions are obtained numerically by solving Lieb and Liniger's system of integral equations (see Appendix \ref{sec:LL}). The gray-scale plot of the DSF demonstrates that the main contribution to the DSF comes from the single particle-hole excitations, lying inside the region $\omega_-(k)\leqslant\omega\leqslant\omega_+(k)$ (see also Fig.~\ref{fig:dsf}). } \end{figure} \begin{figure}[tb] \includegraphics[width=.6\columnwidth]{mu_plmi_c10.eps} \caption{\label{muplmi} (Color online) Typical behavior of the exact exponents in Eq.~(\ref{glazexp}). The diagram shows $\mu_\pm$ for $\gamma=10$, obtained numerically using the method of Ref.~\cite{imambekov08} described in Appendix \ref{sec:LL}. } \end{figure} As was shown in Refs.~\cite{khodas07,imambekov08} (see also \cite{cheianov08}), within the Lieb-Liniger model the DSF exhibits the following power-law behavior near the borders of the spectrum $\omega_\pm(k)$: \begin{equation} S(k,\omega)\sim \big|\omega-\omega_{\pm}(k)\big|^{\mp \mu_{\pm}(k)}. \label{glazexp} \end{equation} The positive exponents $\mu_\pm$ \cite{note} are related to the quasiparticle scattering phase and can be calculated in the thermodynamic limit by solving a system of integral equations \cite{imambekov08}. In particular, Imambekov and Glazman \cite{imambekov08} found the following right limit \begin{equation} \label{mumiumklapp} \lim_{k\to 2 \pi n^-}\mu_{-}(k)=2\sqrt{K}(\sqrt{K}-1), \end{equation} which is different from the Luttinger liquid exponent (\ref{pitdsfexp}). However, Imambekov and Glazman's result (\ref{mumiumklapp}) is accurate in the immediate vicinity of $\omega_{-}$, provided that the finite curvature of $\omega_{-}(k)$ is taken into consideration. Thus the difference in the exponents can be attributed \cite{imambekov08} to the linear-spectrum approximation within Luttinger liquid theory. Note, however, that the thin ``strip'' in the $\omega$-$k$ plane where the exponents differ vanishes at the point $k=2\pi n$; hence, the Luttinger exponent $2(K-1)$ becomes exact there. A typical behavior of the exponents is shown in Fig.~\ref{muplmi}. As described in Appendix \ref{sec:LL}, the exponents can easily be evaluated by solving equation (\ref{LLshitTL}) for the shift function and using Eq.~(\ref{IGmu}). \subsection{Algebraic Bethe ansatz} Recent progress in the computation of correlation functions within the Lieb-Liniger model and other 1D models has been achieved through the algebraic Bethe ansatz \cite{caux06}. In this method, the matrix elements of the density operator involved in Eq.~(\ref{eqn:dsfenergy}) are calculated with the algebraic Bethe ansatz. They are given by the determinant of a matrix, which can be evaluated numerically for a finite number of particles. The method is thus based on combining integrability and numerics.
The results of the numerical calculations of Ref.~\cite{caux06} are shown in Figs.~\ref{fig:omplmi} and \ref{fig:dsf}. \begin{figure}[tb] \begin{center} \noindent\includegraphics[width=\columnwidth]{dsf_c10_q1.eps}\\ \includegraphics[width=\columnwidth]{dsf_c10_q2.eps} \end{center} \caption{\label{fig:dsf} (Color online) The dynamic structure factor (DSF) in the thermodynamic limit. The proposed approximation (\ref{dsfapp1}) (line) is compared to numerical data from Caux and Calabrese \cite{caux06} (open dots). The dashed (red) line shows the data of Eq.~(\ref{dsfapp1}) convoluted in frequency with a Gaussian of width $0.07\varepsilon_{\mathrm F}/\hbar$ in order to simulate the smearing that was used in generating the numerical results of Ref.~\cite{caux06}. The numerical data of Ref.~\cite{caux06} suggest that contributions from multi-particle excitations for $\omega>\omega_+$ (sharp line in parts a and b) are very small. Such contributions are not accounted for by the formula (\ref{dsfapp1}). Inset: DSF at the umklapp point on a logarithmic scale. The graph shows that the DSF behaves as predicted by Luttinger liquid theory (\ref{glazdsfexp}), with the exponent $2(K-1)$, where $K=1.402\ldots$ at $\gamma=10$.} \end{figure} \section{Approximate expression for dynamic structure factor} \subsection{Approximate expression for arbitrary values of interaction strength} \label{sec:apprexpr} Here we suggest a phenomenological expression that is consistent with all the above-mentioned results. It reads \begin{equation} S(k,\omega)=C \frac{(\omega^{\alpha}-\omega_{-}^{\alpha})^{\mu_{-}}} {(\omega_{+}^{\alpha}-\omega^{\alpha})^{\mu_{+}}} \label{dsfapp1} \end{equation} for $\omega_{-}(k)\leqslant\omega\leqslant\omega_{+}(k)$, and zero otherwise. It follows from energy and momentum conservation that $S(k,\omega)$ is exactly equal to zero below $\omega_{-}(k)$ for $0\leqslant k \leqslant 2 \pi n$. In the other regions, $\omega > \omega_{+}$ and $\omega < \omega_{-}$ (for $k > 2 \pi n$), possible contributions can arise due to coupling to multi-particle excitations \cite{lieb63:2}. However, these contributions are known to vanish in the Tonks-Girardeau ($\gamma \to \infty$) and Bogoliubov ($\gamma \to 0$) limits and are found to be very small numerically for finite interactions \cite{caux06}. In Eq.~(\ref{dsfapp1}), $C$ is a normalization constant, $\mu_{+}(k)$ and $\mu_{-}(k)$ are the exponents of Eq.~(\ref{glazexp}), and $\alpha\equiv 1+1/\sqrt{K}$. From the definition of $K$ (\ref{Kdef}), one can see that for repulsive spinless bosons $K\geqslant 1$, and, hence, $1<\alpha\leqslant 2$. The normalization constant depends on the momentum but not the frequency and can be determined from the $f$-sum rule \cite{pitaevskii03:book} \begin{equation} \int_{0}^{+\infty} \d\omega\, \omega S(k,\omega)= N\frac{k^{2}}{2m}. \label{fsum} \end{equation} In Eq.~(\ref{dsfapp1}) we assume that the value of the exponent $\mu_{-}(k=2\pi n)$ coincides with its limiting value (\ref{mumiumklapp}) in the vicinity of the umklapp point. The most general way of obtaining $\omega_{\pm}(k)$, $\mu_{\pm}(k)$, and $K$ is to solve numerically the corresponding integral equations (see Appendix \ref{sec:LL}).
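Once $\omega_{\pm}(k)$, $\mu_{\pm}(k)$, and $K$ are available, fixing the constant $C$ in Eq.~(\ref{dsfapp1}) via the $f$-sum rule (\ref{fsum}) reduces to a one-dimensional quadrature. The following minimal Python sketch illustrates this step in units $\hbar=m=1$; the input dispersions and exponents are illustrative placeholders rather than solutions of the integral equations.
\begin{verbatim}
# Sketch: fix the normalization C of Eq. (dsfapp1) from the f-sum rule.
# Units hbar = m = 1; w_minus, w_plus, mu_minus, mu_plus, K are placeholders.
import numpy as np
from scipy.integrate import quad

def dsf_normalization(k, w_minus, w_plus, mu_minus, mu_plus, K, N=1.0):
    alpha = 1.0 + 1.0 / np.sqrt(K)
    def shape(w):   # unnormalized spectral shape of Eq. (dsfapp1)
        return ((w**alpha - w_minus**alpha)**mu_minus
                / (w_plus**alpha - w**alpha)**mu_plus)
    # integrable endpoint singularity at w_plus (0 < mu_plus < 1)
    integral, _ = quad(lambda w: w * shape(w), w_minus, w_plus)
    return N * k**2 / (2.0 * integral)

C = dsf_normalization(k=np.pi, w_minus=2.0, w_plus=7.0,
                      mu_minus=0.3, mu_plus=0.5, K=1.4)
print(f"C = {C:.4f}")
\end{verbatim}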
Note that the sum rule for the isothermal compressibility \cite{pitaevskii03:book} \begin{equation} \label{m-1} \lim_{k\to 0}\int_{0}^{+\infty} \frac{S(k,\omega)\,\d\omega}{N \omega}=\frac{1}{2 mc^2} \end{equation} is satisfied by virtue of Eq.~(\ref{relexpzero}) and the phonon behavior of the dispersions at small momentum: $\omega_{\pm}(k)\simeq c k$ (see Fig.~\ref{fig:omplmi}). Now one can see from (\ref{dsfapp1}) that \begin{equation} \label{glazdsfexp} S(k,\omega)\sim \left\{\begin{array}{ll} \omega^{2(K-1)},& k=2\pi n,\\ (\omega-\omega_{-})^{\mu_{-}(k)},& k\not=2\pi n. \end{array}\right. \end{equation} Thus, the suggested formula (\ref{dsfapp1}) is consistent with both the Luttinger liquid behavior at the umklapp point and Imambekov and Glazman's power-law behavior in the vicinity of it, as it should be. In the strong-coupling regime, Eq.~(\ref{dsfapp1}) yields the correct first-order expansion (\ref{DSFlinear}). In order to show this, it is sufficient to use the strong-coupling values of $K$ (\ref{Kstrong}), the exponents (\ref{mustrong}), and the frequency dispersions (\ref{ompmstrong}). Comparison with the numerical data by Caux and Calabrese \cite{caux06} (Fig.~\ref{fig:dsf}) shows that the suggested formula works well in the regimes of both weak and strong coupling. Let us discuss how the Bogoliubov approximation arises in the weak-coupling regime, in spite of the absence of Bose-Einstein condensation in one dimension even at zero temperature \cite{book:bogoliubov70,hohenberg67}. At small $\gamma$, the upper dispersion curve $\omega_{+}(k)$ is described well \cite{lieb63:2} by the Bogoliubov relation \cite{bogoliubov47} \begin{equation} \hbar\omega_k=\sqrt{T_k^{2}+4 T_k \varepsilon_{\mathrm F} \gamma/\pi^{2}}, \label{bogdisp} \end{equation} where $T_{k}=\hbar^{2}k^{2}/(2m)$ denotes the usual one-particle kinetic energy. Besides, when $k$ is finite and $\gamma\to0$, the associated exponents $\mu_{+}$ approach the limiting value (\ref{relexp}), which in turn is very close to one. This implies that the DSF has a strong singularity near $\omega_{+}$, and, hence, it is localized almost completely within a small vicinity of the upper branch (see Fig.~\ref{fig:approx2}). Thus, the behavior of the DSF mimics a $\delta$-function spike. One can simply put $S_{\mathrm{Bog}}(k,\omega)=C \delta(\omega-\omega_{k})$ and determine the constant $C$ from the $f$-sum rule (\ref{fsum}) \begin{equation} S_{\mathrm{Bog}}(k,\omega)=N\frac{T_{k}}{\hbar\omega_k} \delta(\omega-\omega_{k}). \label{dsfbog} \end{equation} \subsection{Simplified analytic approximation for intermediate and large strength of interactions} One can further simplify the expression for the DSF by replacing the parameter $\alpha$ in Eq.~(\ref{dsfapp1}) with its limiting value $\alpha=2$ for the Tonks-Girardeau gas, which turns out to be a good approximation even for intermediate coupling strength $\gamma\gtrsim 1$. This replacement allows us to write down the normalization constant explicitly. From the $f$-sum rule we obtain \begin{align} S(k,\omega)= &N\frac{k^{2}}{m}\frac{\Gamma(2+\mu_{-}-\mu_{+})} {\Gamma(1+\mu_{-})\Gamma(1-\mu_{+})} \frac{(\omega^{2}-\omega_{-}^{2})^{\mu_{-}}} {(\omega_{+}^{2}-\omega^{2})^{\mu_{+}}}\nonumber \\ &\times (\omega_{+}^{2}-\omega_{-}^{2})^{\mu_{+}-\mu_{-}-1} \label{dsfc} \end{align} for $\omega_{-}(k)\leqslant\omega\leqslant\omega_{+}(k)$, and zero otherwise.
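Because Eq.~(\ref{dsfc}) is fully explicit, its normalization can be verified directly against the $f$-sum rule (\ref{fsum}). A short numerical check (units $\hbar=m=N=1$, with illustrative placeholder values for the dispersions and exponents):
\begin{verbatim}
# Sketch: verify that Eq. (dsfc) satisfies the f-sum rule,
# integral over omega of omega*S = k^2/2 (units hbar = m = N = 1).
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def dsf_simplified(w, k, wm, wp, mum, mup):
    pref = (k**2 * gamma(2 + mum - mup)
            / (gamma(1 + mum) * gamma(1 - mup))
            * (wp**2 - wm**2)**(mup - mum - 1))
    return pref * (w**2 - wm**2)**mum / (wp**2 - w**2)**mup

k, wm, wp, mum, mup = np.pi, 2.0, 7.0, 0.3, 0.5   # illustrative values
lhs, _ = quad(lambda w: w * dsf_simplified(w, k, wm, wp, mum, mup), wm, wp)
print(f"f-sum integral = {lhs:.6f},  k^2/2 = {k**2 / 2:.6f}")
\end{verbatim}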
This approximation preserves all the properties of the DSF mentioned in Sec.~\ref{sec:exact}, except for the Luttinger liquid predictions in the close vicinity of the umklapp point (see the discussion in Sec.~\ref{LLth}). However, outside the umklapp point, it agrees well with the Caux and Calabrese numerical data (see Figs.~\ref{fig:dsf} and \ref{fig:approx2}). \begin{figure}[tb] \begin{center} \noindent\includegraphics[width=\columnwidth]{dsf_c1_q2_uni_int.eps}\\ \end{center} \caption{\label{fig:approx2} (Color online) Comparison of the two approximations for the DSF. The solid (blue) line represents the ``universal'' approximation (\ref{dsfapp1}). The dashed (red) line is the approximation (\ref{dsfc}). The two curves coincide almost everywhere except near the umklapp point ($\omega=0$, $k=2\pi n$). The ``universal'' approximation reproduces the correct power-law behavior of the Luttinger liquid theory, $S(k,\omega)\sim \omega^{2(K-1)}$ with $K=3.425\ldots$ at $\gamma=1$, see the inset. However, the difference in absolute values is negligible due to a strong suppression of the DSF outside the close vicinity of the upper branch.} \end{figure} From the explicit formula (\ref{dsfc}) one can find analytic expressions for the static structure factor and the dynamic polarizability. The static structure factor $S(k)\equiv \langle\hat{\rho}_{k} \hat{\rho}_{-k} \rangle/N$ contains information about the static correlations of the system and is directly related to the pair distribution function \cite{pitaevskii03:book,pines66:book} \begin{equation} g(x)=1+\int_{0}^{+\infty} \frac{\d k}{\pi n}\,\cos(k x)\big[S(k)-1\big]. \label{sgx} \end{equation} The static structure factor can be obtained by integrating the DSF over the frequency \begin{align} S(k)= \frac{\hbar}{N}\int_{0}^{+\infty}S(k,\omega)\,\d\omega. \label{ssfgen} \end{align} Note that the ``phonon'' behavior of both dispersions ensures the correct behavior of the static structure factor at small momentum. Indeed, it follows from the general expression (\ref{dsfapp1}) that $S(k)\simeq\hbar k/(2mc)$. In the large-momentum limit, we have $\omega_{+}/\omega_{-}\simeq 1$, which leads to the correct asymptotics $S(k)\to 1$ as $k\to+\infty$. Equations (\ref{dsfc}) and (\ref{ssfgen}) yield \begin{align} S(k)=&{}_{2}F_{1}\Big(\frac{3}{2}\!+\!\mu_{-}\!-\!\mu_{+},1\!+\!\mu_{-},2\!+\!\mu_{-}\!-\!\mu_{+},1\!-\!\frac{\omega_{-}^{2}}{\omega_{+}^{2}}\Big) \nonumber \\ &\times\frac{\hbar k^{2}}{2m\omega_{+}}\Big(\frac{\omega_{-}}{\omega_{+}}\Big)^{1+2\mu_{-}}, \label{ssf} \end{align} where ${}_{2}F_{1}$ is the hypergeometric function. The results for the static structure factor are plotted in Fig.~\ref{fig:ssf}. One can see that the formula for the static structure factor works well even for weak coupling. This is due to the smallness of the DSF contribution to the static structure factor at the umklapp point for small $\gamma$. Thus, the approximate formula provides good accuracy for arbitrary strength of interactions. In the weak-coupling regime, one can obtain a good approximation for the static structure factor from the Bogoliubov formula (\ref{dsfbog}) for the DSF \begin{equation} S(k)=\frac{T_{k}}{\hbar\omega_k}. \label{ssfbog} \end{equation} \begin{figure}[tb] \noindent\includegraphics[width=.9\columnwidth]{ssf_caux_analyt1.eps}\\ \includegraphics[width=.9\columnwidth]{ssf_analyt_numeric.eps} \caption{\label{fig:ssf} (Color online) The static structure factor versus wavenumber for different values of the coupling constant $\gamma$.
(a) The numerical data by Caux and Calabrese \cite{caux06} (open circles) are compared with the proposed analytical formula (\ref{ssf}) (solid lines). The dashed (red) line shows the static structure factor in the Bogoliubov limit (\ref{ssfbog}). (b) The static structure factor obtained with Eq.~(\ref{ssfgen}) from the general formula for the DSF (\ref{dsfapp1}) is shown by the solid line. These data are consistent with the analytical formula (\ref{ssf}) (dashed line). This indicates that the analytical formula for the static structure factor can be used even for small values of $\gamma$. } \end{figure} \begin{figure}[tb] \noindent\includegraphics[width=.9\columnwidth]{gx_analyt.eps} \caption{\label{fig:gx} (Color online) The pair distribution function $g(x)$ versus the distance in dimensionless units $1/k_{\mathrm F}$ ($k_{\mathrm F}\equiv\pi n$) for different values of the coupling constant $\gamma$. The solid (blue) line represents the pair distribution function (\ref{sgx}) obtained using the approximation (\ref{ssf}). The Bogoliubov approximation (\ref{ssfbog}) is indicated by the dashed (red) line, and the strong-coupling approximation (\ref{gxfirstour}) by the dotted (green) line. The values of $g(0)$ are consistent with the results of Ref.~\cite{gangardt03}.} \end{figure} The behavior of the pair distribution function (\ref{sgx}) in the Lieb-Liniger model was studied at large \cite{korepin93:book} and short \cite{gangardt03,cherny06,sykes08} distances in various regimes. For $\gamma\ll 1$, one can obtain from Eqs.~(\ref{sgx}) and (\ref{ssfbog}) the analytical expression \cite{sykes08} \begin{equation} g(x)=1-\sqrt{\gamma}\big[\bm{L}_{-1}(2\sqrt{\gamma}\,k_{\mathrm F} x/\pi) -I_1(2\sqrt{\gamma}\,k_{\mathrm F} x/\pi)\big], \label{gxbog} \end{equation} where $\bm{L}_{-1}(x)$ is a modified Struve function and $I_1(x)$ is a modified Bessel function. In the opposite limit, $\gamma\gg 1$, one can directly use the strong-coupling expression (\ref{DSFlinear}) for the DSF and obtain \cite{cherny06} \begin{align} g&(x)=\ 1-\frac{\sin^{2}z}{z^{2}} -\frac{2\pi}{\gamma}\frac{\partial}{\partial z}\frac{\sin^{2}z}{z^{2}} -\frac{4}{\gamma}\frac{\sin^{2}z}{z^{2}}\nonumber\\ & +\frac{2}{\gamma}\frac{\partial}{\partial z} \left[\frac{\sin z}{z}\int_{-1}^{1}\d\eta\,\sin(\eta z) \ln\frac{1+\eta}{1-\eta}\right] + O(\gamma^{-2}), \label{gxfirstour} \end{align} where $z=k_{\mathrm F} x=\pi n x$. The last equation implies that $g(x=0)$ vanishes not only in the TG limit but also to first order in $\gamma^{-1}$, which is consistent with the results of Refs.~\cite{lieb63:1,gangardt03}. The behavior of $g(x)$ obtained from the formula (\ref{ssf}) is shown in Fig.~\ref{fig:gx}. It is consistent with both the weak- and strong-coupling limits. The dynamic polarizability determines the linear response of the density to an external field \cite{pitaevskii03:book,pines66:book}. It can be calculated using the DSF \begin{align} \chi(k,z)=\int_{0}^{+\infty}\frac{2\omega'S(k,\omega')}{{\omega'}^{2}-z^{2}}\d\omega'. \label{chigen} \end{align} On substituting Eq.~(\ref{dsfc}) into (\ref{chigen}), we get \begin{align} \chi(k,z)=&{}_{2}F_{1}\Big(1,1+\mu_{-},2+\mu_{-}\!-\mu_{+},\frac{\omega_{+}^{2}-\omega_{-}^{2}}{z^{2}-\omega_{-}^{2}}\Big)\nonumber \\ &\times N\frac{k^{2}}{m}\frac{1}{\omega_{-}^{2}-z^{2}}. \label{chi} \end{align} For a retarded response, we set $z=\omega+i\varepsilon$. At zero temperature, the relation $S(k,\omega) =\mathrm{Im}\, \chi(k,\omega +i\varepsilon)/\pi$ holds.
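A quick numerical check of the Tonks-Girardeau limit discussed next: inserting the TG inputs $\omega_{\pm}(k)=k_{\mathrm F}k\pm k^{2}/2$ [Eq.~(\ref{ompmstrong}) with $\gamma\to\infty$, units $\hbar=m=n=1$] and $\mu_{\pm}=0$ into Eq.~(\ref{ssf}) should reproduce the exact TG result $S(k)=k/(2k_{\mathrm F})$ for $k\leqslant 2k_{\mathrm F}$. A minimal Python sketch:
\begin{verbatim}
# Sketch: Tonks-Girardeau check of Eq. (ssf).  With omega_plus = kF k + k^2/2,
# omega_minus = |kF k - k^2/2| and mu_pm = 0, Eq. (ssf) must give k/(2 kF).
import numpy as np
from scipy.special import hyp2f1

kF = np.pi                            # units hbar = m = n = 1
k = np.linspace(0.1, 2 * kF - 0.1, 9)
wp = kF * k + k**2 / 2.0
wm = np.abs(kF * k - k**2 / 2.0)
mum = mup = 0.0                       # TG exponents vanish

Sk = (hyp2f1(1.5 + mum - mup, 1.0 + mum, 2.0 + mum - mup,
             1.0 - (wm / wp)**2)
      * k**2 / (2.0 * wp) * (wm / wp)**(1.0 + 2.0 * mum))
print(np.allclose(Sk, k / (2.0 * kF)))   # -> True
\end{verbatim}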
The obtained relations (\ref{ssf}) and (\ref{chi}) successfully reproduce the Tonks-Girardeau limit, considered in detail in Refs.~\cite{brand05,cherny06}. \section{Conclusion} We have discussed an approximate formula [Eq.\ (\ref{dsfapp1})] for the DSF of the one-dimensional Bose gas at zero temperature, which can be used for a wide range of momenta, energies, and coupling strengths. In effect, it neglects only the multiparticle excitations, whose contribution outside the bounds given by the dispersion curves $\omega_\pm$ is small in any case. Our formula is consistent with the predictions of Luttinger liquid theory. It gives the exact exponents at the edge of the spectrum and the correct first-order expansion in the strong-coupling regime, and it shows good agreement with the available numerical data. For intermediate and large values of the interaction strength, $\gamma\gtrsim 1$, and outside the close vicinity of the umklapp point ($\omega=0$, $k=2\pi n$), the further simplified analytic formulas for the DSF (\ref{dsfc}) and the dynamic polarizability (\ref{chi}) provide excellent accuracy. The analytic expression (\ref{ssf}) for the static structure factor works well even for weak interactions. Our results provide a reference against which experimental measurements of static and dynamic density correlations in the one-dimensional Bose gas can be tested. They further provide a basis for future work on the consequences of correlations in this interesting system. \begin{acknowledgments} The authors are grateful to Lev Pitaevskii for valuable discussions, to Jean-Sebastien Caux for making the data of the numerical calculations of Ref.~\cite{caux06} available to us, and to Thomas Ernst for checking our numerical results. JB is supported by the Marsden Fund Council (contract MAU0706) from Government funding, administered by the Royal Society of New Zealand. AYuCh thanks Massey University for its hospitality. \end{acknowledgments}
\section{Introduction} \label{sec:intro} The strength and variation of observed solar activity are governed by the spatio-temporal dependence of flow fields in the convective envelope \citep{charbonneau05,fan09}. Thus, understanding the physics that governs the evolution and sustenance of the activity cycle of the Sun necessitates imaging its internal layers. While differential rotation has the most significant imprint on Dopplergram images \citep{schou98}, signatures due to weaker effects, such as meridional circulation \citep{giles97,BasuAntia99,zhao04,gizon2020_sci} and magnetic fields \citep{gough90,goode04,AntiaChitre13}, are also noticeable. The ability to image these weaker effects therefore critically depends on an accurate measurement of the dominant flows. This makes inferring the strength of the dominant flows, along with assigning appropriate statistical uncertainties, an important area of study. Differences between normal modes of the Sun and those predicted using standard solar models may be used to constrain solar internal properties. The standard models are typically adiabatic, hydrodynamic, spherically symmetric, and non-rotating, also referred to as SNRNMAIS \citep{lavely92,jcd}. The usual labelling convention, using three quantum numbers $(n,\ell,m)$, where~$n$ denotes the radial order,~$\ell$ the angular degree, and~$m$ the azimuthal order, is used to uniquely identify normal modes. Departures of solar structure from the SNRNMAIS model are modelled as small perturbations \citep{jcd_notes_orig}, which ultimately manifest themselves as observable shifts (or splittings) in the eigenfrequencies and distortions in the eigenfunctions \citep{woodard89}. The distorted eigenfunctions may be expressed as a linear combination of reference eigenfunctions and are said to be coupled with respect to the reference. Observed cross-spectra of spherical-harmonic time series corresponding to full-disk Dopplergrams are used to measure eigenfunction distortion. In the present study, we use observational data from the \emph{Helioseismic and Magnetic Imager} (HMI) onboard the \emph{Solar Dynamics Observatory} \citep{schou-hmi-2012}. Different latitudes of the Sun rotate at different angular velocities, with the equator rotating faster than the poles \citep{howard84,ulrich1988}. To an observer in a frame co-rotating at a specific rotation rate $\bar{\Omega}$ of the Sun, this latitudinal rotational shear is the most significant perturbation to the reference model. This large-scale toroidal flow $\Omega\,(r,\theta)$ is well approximated as being time-independent \citep[shown to vary by less than 5\% over the last century in][]{gilman74,howard84,basu_antia_2003} and zonal, with variations only along the radius $r$ and co-latitude $\theta$. Very low-degree modes ($\ell \leq 5$) penetrate the deepest layers of the Sun and were used in earlier attempts to constrain the rotation rate in the core and radiative interior \citep{claverie81,chaplin99,eff-darwich02, couvidat03,chaplin04}. However, observed solar activity is believed to be governed by the coupling of differential rotation and magnetic fields in the bulk of the convection zone \citep{miesch05}. Subsequently, studies using intermediate-degree modes ($\ell \leq 100$) \citep{duvall84,brown_morrow87,brown89,libbrecht89,Duvall1996} and modes with relatively high degrees ($\ell \leq 250$) \citep{thompson96,kosovichev97_mdi, schou98} yielded overall convergent results for the rotation profile.
Among other features of the convection zone \citep{howe09}, these studies established the presence of shear layers at the base of the convection zone (the tachocline) and below the solar surface. Most of these studies used measurements of frequency splittings in a condensed convention known as $a$-coefficients \citep{ritzwoller}. The azimuthal and temporal independence of differential rotation makes it particularly amenable to inversion via $a$-coefficients. The assumption behind this formalism is that multiplets, identified by~$(n,\ell)$, are well separated in frequency from each other, known as the `isolated multiplet approximation'. This assumption holds true when differential rotation is the sole perturbation under consideration \citep{lavely92}, even at considerably high~$\ell$. We therefore state at the outset that estimates of $a$-coefficients determined from frequency splittings serve as reliable measures of differential rotation \citep{chatterjee-antia-2009}. Nevertheless, the estimation of non-axisymmetric perturbations requires a rigorous treatment honoring the cross coupling of multiplets \citep{hanasoge17_etal,sbdas20}. In such cases, measuring changes to the eigenfunctions is far more effective than, for instance, the $a$-coefficient formalism. As a first step, it is therefore important to explore the potential of eigenfunction corrections to infer differential rotation. The theoretical modeling of eigenfunction corrections for given axisymmetric -- zonal and meridional -- flow fields may be traced back to \cite{woodard89}, and was followed up by further investigations \citep{Woodard00,Gough10,vorontsov07,schad11}. \cite{schad13} and \cite{schad20} used observables in the form of mode-amplitude ratios to infer meridional circulation and differential rotation, respectively. In this study, we adopt the closed-form analytical expression for correction coefficients first proposed by \cite{vorontsov07} and subsequently verified to be accurate up to angular degrees as high as~$\ell = 1000$ \citep{vorontsov11}, henceforth V11. The method of using cross-spectral signals to fit eigenfunction corrections was first applied by \cite{woodard13}, henceforth W13, to infer differential rotation and meridional circulation. A simple least-squares fit, assuming a unit covariance matrix, was used for the inversions in W13. Their results for odd $a$-coefficients (which encode differential rotation), even though qualitatively similar, show a considerably larger spread than the results from frequency splittings. Moreover, the authors of W13 note that the inferred meridional flow was ``less satisfactory'' [than their zonal flow estimates]. Cross spectra are dominated by differential rotation, a much larger perturbation than meridional circulation. Although zonal and meridional flows are measured in different cross-spectral channels, the inference of meridional flow is affected by differential rotation through leakage. Thus, the accurate determination of odd $a$-coefficients is critical to the inference of meridional flow.
The relatively large spread in inferences of differential rotation obtained by W13 may be due to (a) a poorly conditioned minimizing function with multiple local minima surrounding the expected (frequency-splitting) minimum, (b) a relative insensitivity of various modes to differential rotation, resulting in a flat minimizing function close to the expected minimum, (c) an inaccurate estimation of the minimizing function on account of assuming a unit data covariance matrix, and/or (d) eigenfunction corrections only yielding accurate results in the limit of large $\ell$ ($>250$), where the isolated-multiplet approximation starts to break down. In this study, we investigate the above issues and explore the potential of using eigenfunction corrections as a means to infer differential rotation using tools from Bayesian statistics. We apply the Markov Chain Monte Carlo (MCMC) algorithm \citep{metropolis-ulam-1949,metropolis-etal-1953} using a minimizing function calculated in the L2 norm, adequately weighted by the data variance. We do not bias the MCMC sampler in light of any previous measurement, effectively using an uninformative prior. The inferred results, therefore, are an independent measurement constrained only by the observed cross spectra. Since Bayesian inference is a probabilistic approach to parameter estimation, we obtain joint probability-density functions in the $a$-coefficient space. This allows us to rigorously compute uncertainties associated with the measurements. We compare and qualify the results obtained with independent measurements from frequency splittings and those obtained using similar cross-spectral analysis in W13. Further, we report the inadequacy of this method for low angular-degree modes on account of the poor sensitivity of spectra to rotation via $a$-coefficients. The structure of this paper is as follows. We establish mathematical notation and describe the basic physics of normal-mode helioseismology in Section~\ref{sec:basic_framework}. The governing equations that we use for modeling cross spectra using eigenfunction-correction coefficients are outlined in Section~\ref{sec:vorontsov_theory}. Section~\ref{sec:data_analysis} elaborates the steps for computing the observed cross spectra, building the misfit function, and estimating the data variance for performing the MCMC. Results are discussed in Section~\ref{sec:results}. Using the $a$-coefficients inferred from the MCMC, cross spectra are reconstructed in Section~\ref{sec:reconst-spectra}. A discussion of the sensitivity of the current model to the model parameters is presented in Section~\ref{sec:a-coeff-sens}. The conclusions from this work are reported in Section~\ref{sec:conclusion}.
\section{Theoretical Formulation} \label{sec:theory} \subsection{Basic Framework and Notation} \label{sec:basic_framework} For inferring flow profiles in the solar interior, we begin by considering the system of coupled hydrodynamic equations, namely, \begin{eqnarray} \partial_t \rho &=& - \ensuremath{\bm{\nabla}} \cdot (\rho\, \mathbf{v}), \label{eqn: HD1} \\ \rho (\partial_t \mathbf{v} + \mathbf{v}\cdot \ensuremath{\bm{\nabla}} \mathbf{v} ) &=& - \ensuremath{\bm{\nabla}} P - \rho \ensuremath{\bm{\nabla}} \phi, \label{eqn: HD2}\\ \partial_t P &=& - \mathbf{v}\cdot \ensuremath{\bm{\nabla}} P - \gamma\, P\, \ensuremath{\bm{\nabla}} \cdot \mathbf{v} \label{eqn: HD3}, \end{eqnarray} where $\rho$ is the mass density, $\mathbf{v}$ the material velocity, $P$ the pressure, $\phi$ the gravitational potential, and $\gamma$ the ratio of specific heats determined by an adiabatic equation of state. The eigenstates of the Sun are modeled as linear combinations of the eigenstates of a standard solar model; here we use model S \citep{jcd} as this reference state. In the absence of background flows, $\tilde{{\mathbf{v}}} = {\bf 0}$, the zeroth-order hydrodynamic equations trivially reduce to hydrostatic equilibrium, $\ensuremath{\bm{\nabla}} \tilde{P} + \tilde{\rho} \ensuremath{\bm{\nabla}} \tilde{\phi} = 0$. Hereafter, all zeroth-order static fields and the unperturbed mode eigenfrequencies, eigenfunctions, and amplitudes corresponding to the reference model will be indicated using a tilde (to maintain consistency with the notation used in W13). In response to small perturbations to the static reference model, the system exhibits oscillations $\mbox{\boldmath $\bf \xi$} (\mathbf{r},t)$. These oscillations may be decomposed into resonant ``normal modes'' of the system, labeled by the index $k$, with characteristic frequency $\tilde\omega_k$ and spatial pattern $\tilde\mbox{\boldmath $\bf \xi$}_k$, as follows: \begin{equation} \boldsymbol{\xi} (\boldsymbol{r},t) = \sum_k \tilde\Lambda_k(t)\,\tilde\boldsymbol{\xi}_k(\boldsymbol{r})\exp(i\tilde\omega_k t), \end{equation} where $\tilde\Lambda_k$ are the respective mode amplitudes and $\boldsymbol{r} = (r,\theta,\phi)$ denotes spherical-polar coordinates. Linearizing Eqns.~(\ref{eqn: HD1})--(\ref{eqn: HD3}) about the hydrostatic background model gives \citep[for a detailed derivation refer to][]{jcd_notes_orig} \begin{equation} \label{eqn:sol_wave_eqn} \mathcal{L}_0 \tilde{\boldsymbol{\xi}}_{k} = - \ensuremath{\bm{\nabla}} (\tilde{\rho} c_s^2\, \ensuremath{\bm{\nabla}} \cdot \tilde{\boldsymbol{\xi}}_{k} - \tilde{\rho} g\, \tilde{\boldsymbol{\xi}}_{k} \cdot \ev{r}) - g \,\ev{r} \ensuremath{\bm{\nabla}} \cdot (\tilde{\rho}\, \tilde{\boldsymbol{\xi}}_{k}) = \tilde{\rho} \,\tilde{\omega}_{k}^2\, \tilde{\boldsymbol{\xi}_{k}}. \end{equation} Here $\tilde{\rho}(r)$, $c_s(r)$, and $g(r)$ denote the density, sound speed, and gravity (directed radially inward), respectively, of the reference solar model, and $\mathcal{L}\,_0$ is the self-adjoint unperturbed wave operator. Self-adjointness ensures that the eigenfrequencies $\tilde{\omega}_{k}$ are real and the eigenfunctions $\tilde{\mbox{\boldmath $\bf \xi$}}_{k}$ are orthogonal.
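As a toy illustration of this property (entirely outside the solar context), one may discretize a simple self-adjoint operator and verify that the resulting symmetric matrix has a real spectrum and orthonormal eigenvectors, in direct analogy with $\tilde{\omega}_{k}$ and $\tilde{\mbox{\boldmath $\bf \xi$}}_{k}$:
\begin{verbatim}
# Toy sketch: discretize -d^2/dx^2 with Dirichlet boundaries (self-adjoint)
# and confirm real eigenvalues and orthonormal eigenvectors.
import numpy as np

n = 200
h = 1.0 / (n + 1)
L0 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2        # symmetric matrix

w, v = np.linalg.eigh(L0)                          # real spectrum
print(np.all(np.isreal(w)))                              # True
print(np.allclose(v.T @ v, np.eye(n), atol=1e-10))       # orthonormal
print(w[:3] / np.pi**2)   # ~ 1, 4, 9: approaches (k*pi)^2
\end{verbatim}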
Introducing flows and other structure perturbations (e.g., magnetic fields or ellipticity) through the operator $\delta \mathcal{L}\,$ modifies the unperturbed wave equation~(\ref{eqn:sol_wave_eqn}) to \begin{equation} \label{eqn:total_wave_op} \tilde{\rho} \,\omega_k^2\, \boldsymbol{\xi}_k = \left(\mathcal{L}\,_0 + \delta \mathcal{L}\, \right) \boldsymbol{\xi}_k, \end{equation} where $\omega_k = \tilde{\omega}_{k} + \delta \omega_k$ and $\mbox{\boldmath $\bf \xi$}_k = \sum_{k'} c_{k'} \tilde{\mbox{\boldmath $\bf \xi$}}_{k'}$ are the eigenfrequency and eigenfunction associated with the perturbed wave operator $\mathcal{L}\,_0 + \delta \mathcal{L}\,$. The Sun, a predominantly hydrodynamic system, is treated as a fluid body with vanishing shear modulus \citep{DT98}. Such a body cannot sustain shear waves, and therefore the eigenfunctions of the reference model are very well approximated as spheroidal \citep{kendall}, \begin{eqnarray} \tilde{\boldsymbol{\xi}}_k(r,\theta,\phi) = {}_nU{}_{\ell}(r) \,Y_{\ell m} (\theta,\phi) \,\ev{r} + {}_nV{}_{\ell}(r) \, \ensuremath{\bm{\nabla}}_1 Y_{\ell m}(\theta,\phi), \label{eqn: xi_exp} \end{eqnarray} where $\boldsymbol{\ensuremath{\bm{\nabla}}}_1 = \ev{\theta}\,\partial_{\theta} + \ev{\phi}\,(\sin\theta)^{-1}\partial_{\phi}$ is the dimensionless lateral covariant derivative operator. Suitably normalized eigenfunctions $\tilde{\mbox{\boldmath $\bf \xi$}}_k$ and $\tilde{\mbox{\boldmath $\bf \xi$}}_{k'}$, where $k' = (n',\ell',m')$, satisfy the orthonormality condition \begin{equation} \label{eqn: orthonormality} \int_{\odot} \mathrm{d}^3\mathbf{r}\, \rho\,\boldsymbol{\tilde{\xi}}_{k'}^* \cdot \boldsymbol{\tilde{\xi}}_{k} = \delta_{n'n}\, \delta_{\ell' \ell}\, \delta_{m' m}. \end{equation} Since we observe only half the solar surface, orthogonality cannot be used to extract each mode separately. Windowing in the spatial domain results in spectral broadening, where contributions from neighbouring modes seep into the observed mode signal $\varphi^{\ell m}(\omega)$, as described by the leakage matrix \citep{schou94}, \begin{equation} \label{eqn: leakage} \varphi^{\ell m}(\omega) = \sum_{k'} L^{\ell m}_{k'} \, \Lambda^{k'}(\omega) = \sum_{k'} \tilde{L}^{\ell m}_{k'} \,\ \tilde{\Lambda}^{k'}(\omega). \end{equation} Here, the leakage matrices $L^{\ell m}_{k'}, \tilde{L}^{\ell m}_{k'}$ and amplitudes $\Lambda^{k'}(\omega), \tilde{\Lambda}^{k'}(\omega)$ of the observed surface velocity field ${\mathbf{v}}(\omega)$ correspond to the bases of perturbed ($\mbox{\boldmath $\bf \xi$}_{k'}$) and unperturbed eigenfunctions ($\tilde{\mbox{\boldmath $\bf \xi$}}_{k'}$), respectively, \begin{equation} {\mathbf{v}} = \sum_{k'} \Lambda^{k'} \mbox{\boldmath $\bf \xi$}_{k'} = \sum_{k'} \tilde{\Lambda}^{k'} \tilde{\mbox{\boldmath $\bf \xi$}}_{k'}. \end{equation} Since leakage falls rapidly with increasing spectral distance $(|\ell - \ell'|, |m-m'|)$, Eqn.~(\ref{eqn: leakage}) demonstrates the entangling of modes in spectral proximity to $(\ell,m)$. The presence of a zeroth-order flow field $\tilde{{\mathbf{v}}}$ in Eqns.~(\ref{eqn: HD1})--(\ref{eqn: HD3}) gives rise to perturbed eigenfunctions $\mbox{\boldmath $\bf \xi$}_{k}$ and therefore introduces correction factors $c_{k}^{k'}$ with respect to the unperturbed eigenfunctions $\tilde{\mbox{\boldmath $\bf \xi$}}_{k'}$: \begin{equation} \label{eqn: eigfn_corr} \mbox{\boldmath $\bf \xi$}_k = \sum_{k'} c_k^{k'} \tilde{\mbox{\boldmath $\bf \xi$}}_{k'}.
\end{equation} \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{efn_pert.pdf} \caption{Differential rotation induces 3D distortions in the radial eigenfunctions of the unperturbed mode $(n, \ell)=(2, 150)$ for $m=10, 75, 140$ at radii $r/R_\odot = 0.95, 1.0$. Each column in the \textit{upper} panel corresponds to 2D surfaces for the undistorted eigenfunctions $\tilde{\vec{\xi}}_{n\ell m}$ in the \textit{left} slice and differences between distorted and undistorted eigenfunctions $\hat{r} \cdot (\vec{\xi}_{n\ell m} - \tilde{\vec{\xi}}_{n\ell m})$ in the \textit{right} slice. The \textit{middle} panel shows the difference in the radial variation of eigenfunctions for a chosen $(\theta_0,\phi_0) = (67.8^{\circ},177.6^{\circ})$. The \textit{lower} panels indicate the magnitudes of the coupling coefficients that induce eigenfunction distortion, as in Eqn.~(\ref{eqn: eigfn_corr}). The self-coupling coefficients $c^{\ell m}_{\ell m}$ (i.e., $p=0$), being the most dominant, are not shown, in order to highlight the contributions of the cross-coupling coefficients ($p\neq 0$).} \label{fig:efn_pert} \end{figure} Using this, the statistical expectation of the cross-spectral measurement is expressed as in Eqns.~(14)--(17) of W13, \begin{equation} \label{eqn: mode_coupling} \langle \varphi^{\ell' m'} \varphi^{\ell m} \rangle = \sum_{i,j,k} \tilde{L}^{\ell'm'}_{j} \, \tilde{L}^{\ell m *}_{k} \, c^j_i \, c^{k*}_i \, \langle |\Lambda^i(\omega)|^2 \rangle, \end{equation} where $ \langle |\Lambda^i(\omega)|^2 \rangle$ denotes Lorentzians centered at the resonant frequencies $\omega = \omega_i$ corresponding to the perturbed modes $\mbox{\boldmath $\bf \xi$}_{i}$. \subsection{Eigenfunction corrections due to axisymmetric flows} \label{sec:vorontsov_theory} This study uses the fact that the eigenfunction-correction factors $c_k^{k'}$ in Eqn.~(\ref{eqn: eigfn_corr}) carry information about the flow field $\tilde{{\mathbf{v}}}$. Although this problem was first addressed by \cite{woodard89}, a rigorous treatment using a perturbative analysis of mode coupling was only presented in V11. In this section, we outline the governing equations for the eigenfunction-correction factors $c_k^{k'}$ due to differential rotation and meridional circulation, as derived in V11. Upon introducing flows, the model-S eigenfunctions are corrected as follows: \begin{equation} \label{eqn:voront-pert} \mbox{\boldmath $\bf \xi$}_{\ell} = \sum_{\ell'} c_{\ell}^{\ell'} \, \tilde{\mbox{\boldmath $\bf \xi$}}_{\ell'} + \delta \mbox{\boldmath $\bf \xi$}_\ell = \sum_{p = 0,\pm 1, \pm 2, ...} c_{\ell}^{\ell+p} \, \tilde{\mbox{\boldmath $\bf \xi$}}_{\ell+p} + \delta \mbox{\boldmath $\bf \xi$}_\ell, \end{equation} where $p = \ell' - \ell$ labels the offset (in angular degree) of the neighbouring mode contributing to the distortion of the eigenfunction $\mbox{\boldmath $\bf \xi$}_{\ell}$ --- a visual illustration may be found in Figure~\ref{fig:efn_pert}. Only correction factors $c_{\ell}^{\ell+p}$ from modes with the same radial order and azimuthal order are considered in Eqn.~(\ref{eqn:voront-pert}), and therefore the labels $n$ and $m$ are suppressed. $c_{\ell,m}^{\ell+p,m'} = 0$ for $m \neq m'$, since differential rotation and meridional circulation are axisymmetric (see the selection rules imposed by the Wigner 3-$j$ symbols in Appendix~A of V11). Corrections from modes belonging to a different radial order $n$ are accumulated in $\delta \mbox{\boldmath $\bf \xi$}$.
Following V11 and W13, the subsequent treatment ignores the terms in $\delta \boldsymbol{\xi}_\ell$, since they are of the order of the perturbation $\delta \mathcal{L}$ or smaller (rendering their contributions at least second order in perturbed quantities). This is because the correction factor $c_{n \ell}^{n' \ell'}$ is non-trivial only if the modes $\mode{n}{\ell}$ and $\mode{n'}{\ell'}$ are proximal in frequency and the angular degree $s$ of the perturbing flow satisfies $|\ell' - \ell| \leq s$. For modes belonging to different dispersion branches $(n \neq n')$, with either $\ell$ or $\ell'$ moderately large ($> 50$), these conditions are not satisfied, since, for differential rotation, the largest non-negligible angular degree of the perturbation is $s = 5$. As shown in V11, using eigenfunction perturbations as in Eqn.~(\ref{eqn:voront-pert}) and eigenfrequency perturbations $\omega_{\ell} = \tilde{\omega}_{\ell} + \delta \omega_{\ell}$, the wave equation~(\ref{eqn:total_wave_op}) reduces to an eigenvalue problem of the form \begin{equation} \mathbf{Z}\, \boldsymbol{\mathcal{C}}_{\ell} = \delta \omega_{\ell} \, \boldsymbol{\mathcal{C}}_{\ell}, \end{equation} where $\boldsymbol{\mathcal{C}}_{\ell} = \{...,c_{\ell}^{\ell-1},c_{\ell}^{\ell}, c_{\ell}^{\ell+1},...\}$ are eigenvectors corresponding to the $(P \times P)$ self-adjoint matrix ${\mathbf{Z}}$, and $P = \mathrm{max}(|\ell'-\ell|)$ denotes the largest offset of a contributing mode $\ell'$ from $\ell$ according to Eqn.~(\ref{eqn:voront-pert}). From detailed considerations of first- and second-order quasi-degenerate perturbation theory, V11 showed that the following closed-form expression for the correction coefficients is accurate up to angular degrees as high as $\ell = 1000$: \begin{equation} c_{\ell}^{\ell+p} = \tfrac{1}{\pi} \int_0^{\pi} \cos \left[pt - \sum_{k=1,2,...}\tfrac{2}{k}\mathrm{Re}(b_k) \sin{(kt)} \right] \times \exp\left[i \sum_{k=1,2,...} \tfrac{2}{k} \mathrm{Im}(b_k) \cos{(kt)} \right] \mathrm{d}t, \qquad p=0,\pm1,... \label{eqn:clp}, \end{equation} where convenient expressions for the real and imaginary parts of $b_k$ are \begin{eqnarray} \mathrm{Re}(b_k) &=& \ell \left(\frac{\partial \tilde{\omega}}{\partial \ell} \right)^{-1}_n \sum_{s+k = \mathrm{odd}} (-1)^{\frac{s-k+1}{2}} \frac{(s-k)!!(s+k)!!}{(s+k)!} \times P_s^k \left(\frac{m}{\ell} \right) \langle \Omega_s \rangle_{n\ell}, \quad k = 1,2,... \label{eqn: b_k_real} \\ \mathrm{Im}(b_k) &=& k\ell \left(\frac{\partial \tilde{\omega}}{\partial \ell} \right)^{-1}_n \sum_{s+k = \mathrm{even}} (-1)^{\frac{s-k+2}{2}} \left(\frac{2s+1}{4\pi}\right)^{1/2} \frac{(s-k-1)!!(s+k-1)!!}{(s+k)!} \times P_s^k \left(\frac{m}{\ell} \right) \langle \frac{v_s}{r} \rangle_{n\ell}, \quad k = 1,2,... \label{eqn: b_k_imag}. \end{eqnarray} We consider only the odd-$s$ components of $\Omega$. The even-$s$ components correspond to North--South (NS) asymmetry in differential rotation and are estimated to be weak at the surface \citep[NS asymmetry coefficients are estimated to be an order of magnitude smaller than their symmetric counterparts;][]{mdzinarishvili2020}. The contribution of even-$s$ components to the real part of $b_k$ can thus be ignored. In the asymptotic limit of high degrees, \begin{equation} \label{eqn: b_k_odd_s} \mathrm{Re}(b_k) = \ell \left(\frac{\partial \tilde{\omega}}{\partial \ell} \right)_n^{-1} \sum_{s+k = \mathrm{odd}} (-1)^{\frac{k-2}{2}} \frac{s!(s-k)!!(s+k)!!}{(s+k)!s!!s!!} \times a^{n\ell}_s \, P_s^k \left(\frac{m}{\ell} \right), \quad k = 2,4,...
, \end{equation} \begin{equation} a^{n\ell}_s \approx (-1)^{\frac{s-1}{2}} \frac{s!! s!!}{s!} \langle \Omega_s\rangle_{n\ell}, \quad s = 1,3,... \end{equation} Figure~\ref{fig:efn_pert} illustrates the distortion of eigenfunctions due to an equatorially symmetric differential rotation (using frequency-splitting estimates of the $a_3$ and $a_5$ coefficients). It can be seen that differences between distorted eigenfunctions $\vec{\xi}_{n\ell m}$ and their undistorted counterparts $\tilde{\vec{\xi}}_{n\ell m}$ are around the $50\%$ level for some azimuthal orders. The correction coefficients, given by $c^{\ell+p, m}_{\ell, m}$, are shown in the bottom panel of Figure~\ref{fig:efn_pert}. Since the largest contribution to $\boldsymbol{\xi}_{\ell}$ comes from $\tilde{\boldsymbol{\xi}}_{\ell}$, the self-coupling coefficients $c^{\ell, m}_{\ell, m}$ ($\gtrsim 0.8$) are not plotted, in order to highlight the corrections from neighbouring modes with $p \neq 0$. Visual inspection shows that $c^{\ell+p, m}_{\ell, m}$ have non-zero elements at $p = \pm 2, \pm 4$, as expected from the selection rules due to the rotation field $\Omega_s(r)$ for $s=3,5$. High-$\ell$ eigenfunctions are predominantly large close to the surface. Consequently, their distortions are much larger at shallow depths than at greater depths. We choose to plot three cases: low, intermediate, and high $m$. For the extreme cases of $m = 0$ and $m = \ell$, $c^{\ell+p, m}_{\ell, m} \sim 0$, since for odd $s$ and even $k$, $P_s^k(\mu)$ vanishes at $\mu = 0,1$. Thus these eigenfunctions remain undistorted under an equatorially symmetric differential rotation. For the sake of completeness, it may be mentioned that the finite $c^{\ell+p, m}_{\ell, m}$ for $p \neq 0$ seemingly disqualify frequency-splitting measurements, which assume isolated multiplets, i.e., $c^{\ell+p, m}_{\ell, m} = \delta_{p,0}$. However, this does not necessarily imply that the isolated multiplet approximation is poor at these angular degrees. If the eigenfunction error $\delta \boldsymbol{\xi}_k$ incurred on neglecting cross-coupling is of order $\mathcal{O}(\epsilon)$, then it can be shown \citep[see Chapter 8 of][]{freidberg_2014, cutler} that the error in estimating the eigenfrequency $\delta \omega_k$ is at most of order $\mathcal{O}(\epsilon^2)$, where $\epsilon$ is small. To illustrate this further, if the error in estimating the eigenfunction distortion on neglecting cross-coupling $(p \neq 0)$ is written as $\epsilon \, \boldsymbol{\xi}_{\ell + p}$, then from inspecting Eqn.~(\ref{eqn:voront-pert}), we see that $\epsilon \sim |c_{\ell}^{\ell+p}|$. Upon investigating the $(n,\ell) = (2,150)$ case presented in Figure~\ref{fig:efn_pert} for $p \neq 0$, we find $c_{\ell}^{\ell+p} \lesssim \mathcal{O}(10^{-1})$. The equivalent error incurred in eigenfrequency estimation may be computed according to the discussion in Section~\ref{sec:QDPT_vs_DPT}. This yields $\delta \omega / \omega \lesssim \mathcal{O}(10^{-2})$ in the range $150 \leq \ell \leq 250$, thereby confirming the above argument for $\epsilon \sim 10^{-1}$. Given the leakage matrices and Lorentzians, the forward problem of modeling $\langle \varphi^{\ell' m'} \varphi^{\ell m *} \rangle$ requires constructing the eigenfunction corrections $c^{\ell+p}_{\ell}$ using the $a$-coefficients in Eqn.~(\ref{eqn: b_k_odd_s}) and the poloidal flow in Eqn.~(\ref{eqn: b_k_imag}).
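For concreteness, the quadrature in Eqn.~(\ref{eqn:clp}) is straightforward to evaluate numerically. The following is a minimal Python sketch, assuming arrays holding $\mathrm{Re}(b_k)$ and $\mathrm{Im}(b_k)$ for $k = 1, 2, \ldots$ have already been assembled from Eqns.~(\ref{eqn: b_k_real})--(\ref{eqn: b_k_imag}); the function name and discretization are illustrative choices, not part of the V11 formalism.
\begin{verbatim}
import numpy as np

def correction_coeffs(b_re, b_im, p_max, n_t=2048):
    """Evaluate Eqn. (clp) for p = -p_max ... p_max by trapezoidal
    quadrature over t in [0, pi]."""
    t = np.linspace(0.0, np.pi, n_t)
    b_re = np.asarray(b_re, float)[:, None]
    b_im = np.asarray(b_im, float)[:, None]
    k = np.arange(1, b_re.shape[0] + 1)[:, None]          # k = 1, 2, ...
    # Phase and amplitude sums entering the integrand of Eqn. (clp)
    phase = np.sum(2.0 / k * b_re * np.sin(k * t), axis=0)
    amp = np.sum(2.0 / k * b_im * np.cos(k * t), axis=0)
    c = {}
    for p in range(-p_max, p_max + 1):
        integrand = np.cos(p * t - phase) * np.exp(1j * amp)
        c[p] = np.trapz(integrand, t) / np.pi
    return c
\end{verbatim}
A useful sanity check: for a single real coefficient $b_1$ (all others zero), Eqn.~(\ref{eqn:clp}) reduces to the integral representation of Bessel functions, $c_{\ell}^{\ell+p} = J_p(2b_1)$, so the returned coefficients should satisfy $\sum_p |c_{\ell}^{\ell+p}|^2 = 1$ in that limit. With the $c_{\ell}^{\ell+p}$ in hand, the forward model follows.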
Thus, for axisymmetric flows, the cross spectra for moderately large $\ell_1$ and $\ell_2$ from Eqn.~(\ref{eqn: mode_coupling}) may be written more explicitly as \begin{equation} \langle \varphi^{\ell_1, m_1} \, \varphi^{\ell_2, m_1 *} \rangle = \sum_{p,p',\ell,m} \, \tilde{L}_{\ell+p,m}^{\ell_1,m_1} \, \tilde{L}_{\ell+p',m}^{\ell_2, m_1} \, c_{\ell,m}^{\ell+p,m} \, c_{\ell,m}^{\ell+p',m*} \, \langle |\Lambda^{\ell,m}(\omega)|^2 \rangle. \end{equation} The leakage matrices $\tilde{L}_{\ell+p,m}^{\ell_1,m_1}$ impose bounds on the farthest modes that leak into the mode amplitude $\varphi^{\ell m}$. This is because $\tilde{L}_{\ell+p,m}^{\ell_1,m_1}$ is non-zero only when $\ell+p \in [\ell_1 - \delta \ell, \ell_1 + \delta \ell]$ and $m \in [m_1 - \delta m, m_1 + \delta m]$, where $\delta \ell$ and $\delta m$ are the farthest spectral offsets. Thus, for a given $\ell$, we must determine the correction coefficients $c_{\ell,m}^{\ell+p,m}$ such that $p \in [\ell_1 - \delta \ell -\ell,\ell_1 + \delta \ell-\ell]$. Similar bounds on $p'$ in $c_{\ell,m}^{\ell+p',m}$ are imposed by the second leakage matrix $\tilde{L}_{\ell+p',m}^{\ell_2,m_1}$, namely, \begin{equation} \langle \varphi^{\ell_1,m_1} \, \varphi^{\ell_1 + \Delta \ell, m_1 *} \rangle = \sum_{p,p',\ell,m} \, \tilde{L}_{\ell+p,m}^{\ell_1,m_1} \, \tilde{L}_{\ell+p',m}^{\ell_1 + \Delta \ell, m_1} \, c_{\ell,m}^{\ell+p,m} \, c_{\ell,m}^{\ell+p',m*} \, \langle |\Lambda^{\ell,m}(\omega)|^2 \rangle . \label{eqn:mode-coupling} \end{equation} Since meridional circulation is significantly weaker than differential rotation \citep{imada18,gizon2020_sci}, we neglect its contribution to the eigenfunction corrections. \section{Data Analysis} \label{sec:data_analysis} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{csdata_200_202.pdf} \caption{Cross-spectral signal for $\ell = 200$, $\Delta \ell = 2$, and $n=0$. Panels (a, b): Observed cross-spectrum corresponding to $m^+$ and $m^-$. Panels (c, d): Derotated cross-spectrum corresponding to $m^+$ and $m^-$. Panels (e, f): $D^{\ell, \Delta \ell, \pm}_n$. The baseline is indicated by the dashed blue line. The blue dots represent observations from the five 72-day time series and the red curve corresponds to the expectation value of the cross-spectrum.} \label{fig:cs-200-202} \end{figure} We use the full-disk 72-day gap-filled spherical-harmonic time series $\varphi^{\ell m}(t)$, which are recorded at a cadence of 45~seconds by HMI \citep{larson-schou-2015}. The data are available for harmonic degrees in the range $\ell \leq 300$. The time series are transformed to the frequency domain to obtain $\varphi^{\ell m}(\omega)$. The negative-frequency components are associated with the negative-$m$ components using the symmetry relation (Appendix~\ref{apdx:sph-symm}) \begin{equation} \varphi^{\ell, -|m|}(\omega) = (-1)^{|m|}\varphi^{\ell, |m|*}(-\omega). \end{equation} The ensemble average of the cross-spectrum is computed by averaging five contiguous 72-day time series, which corresponds to 360~days of helioseismic data. The eigenfrequencies of the unperturbed model $\tilde{\omega}_{n\ell m}$ are degenerate in~$m$, i.e., $\tilde{\omega}_{n\ell m} = \tilde{\omega}_{n\ell 0}$. Rotation breaks spherical symmetry and lifts the degeneracy in~$m$. As in W13, we show the cross-spectrum for $n=0$, $\ell=200$, and $\Delta \ell=2$ in Figure~\ref{fig:cs-200-202}. The effect of rotation is visible through the inclination of the ridges in the $m-\nu$ spectrum, as seen in Panels~(a, b) of the figure.
The multiple vertical ridges are due to leakage of power. The cross spectra are derotated and stacked about the central frequency, corresponding to~$m=0$, as shown in Panels~(c, d) of Figure~\ref{fig:cs-200-202}. In order to improve the signal-to-noise ratio, the stacked cross-spectrum is summed over the azimuthal order~$m$. This quantity is used to determine the extent of coupling, denoted by $D^{\ell,\Delta \ell, \pm}_n$. The~$-$~($+$) sign indicates summation over negative (positive)~$m$. For notational convenience, we write~$m^+$ when referring to $m \geq 0$ and $m^-$ to denote $m \leq 0$. The combined operation of derotating and stacking the original spectra is denoted by $\mathcal{S}_m$. Since differential rotation affects only the real part of the cross-spectrum (see Eqn.~\ref{eqn:clp}), $D^{\ell, \Delta \ell, \pm}_n$ refers to the real part of the cross-spectrum: \begin{equation} D_n^{\ell, \Delta \ell, \pm}(\omega) = \left \langle \sum_{m^\pm} \mathcal{S}_m \left( \text{Re}\left[\varphi^{\ell m}(\omega) \varphi^{\ell+\Delta \ell, m*}(\omega) \right] \right) \right\rangle . \label{eqn:data-measurement} \end{equation} The cross-spectral model is a combination of Lorentzians and is based on Eqn.~(\ref{eqn:mode-coupling}). The HMI-pipeline analysis provides us with mode amplitudes and linewidths for multiplets $(n, \ell)$. The~$m$ dependence of the frequency, $\omega_{n\ell m} - \omega_{n\ell 0}$, is encoded in 36 frequency-splitting coefficients $(a^{n\ell}_1, a^{n\ell}_2, \ldots, a^{n\ell}_{36})$. These values are used to construct the Lorentzians for the model, which is denoted by $M^{\ell, \Delta \ell, \pm}_n$ and expressed as \begin{equation} M^{\ell, \Delta \ell, \pm}_n(\omega) = \sum_{m^\pm} \mathcal{S}_m \left( \sum_{p,p',\ell',m'} \, \tilde{L}_{\ell'+p,m'}^{\ell,m} \, \tilde{L}_{\ell'+p',m'}^{\ell + \Delta \ell, m} \, c_{\ell',m'}^{\ell'+p,m'} \, c_{\ell',m'}^{\ell'+p',m'*} \, \langle |\Lambda^{\ell',m'}_n(\omega)|^2 \rangle \right) \label{eqn:mode-coupling-model}. \end{equation} As seen in Panels~(e, f) of Figure~\ref{fig:cs-200-202}, the cross spectra sit on a non-zero baseline. This is a non-seismic background and hence is explicitly fitted for before further analysis of the data. The complete model of the cross-spectrum involves leakage from the power spectrum, eigenfunction coupling, as well as the non-seismic background, i.e., the data $D^{\ell, \Delta\ell, \pm}_n(\omega)$ are modelled as \( M^{\ell, \Delta \ell, \pm}_n(\omega) + b^{\ell, \Delta \ell, \pm}_n (\omega) \). The baseline $b_n^{\ell, \Delta \ell, \pm}(\omega)$ is computed by considering 50 frequency bins on either side, far from resonance, and fitting a straight line through them in a least-squares sense. The model $M^{\ell, \Delta \ell, \pm}_n(\omega)$ depends on the $a^{n\ell}_3$ and $a^{n\ell}_5$ splitting coefficients via the eigenfunction-correction coefficients $c^{\ell + p}_{\ell}$. A Bayesian approach is used to estimate the values $(a^{n\ell}_3, a^{n\ell}_5)$ using MCMC, as described in Section~\ref{sec:MCMC}.
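Before defining the misfit, it may help to make the derotate-and-stack measurement concrete. Below is a minimal Python sketch of Eqn.~(\ref{eqn:data-measurement}) for one 72-day realization; the array layout and the implementation of $\mathcal{S}_m$ as an integer frequency-bin shift are our assumptions for illustration, not a description of the HMI pipeline.
\begin{verbatim}
import numpy as np

def derotate_and_stack(cs, freqs, split, m_sign=+1):
    """Sketch of Eqn. (data-measurement) for one realization.

    cs     : complex cross-spectrum, shape (2*ell+1, n_freq),
             rows ordered m = -ell ... +ell
    freqs  : frequency bins, shape (n_freq,)
    split  : omega_{n ell m} - omega_{n ell 0} per row, same
             units as freqs, shape (2*ell+1,)
    m_sign : +1 sums the m^+ branch, -1 the m^- branch
             (m = 0 belongs to both, as in the text)
    """
    d_nu = freqs[1] - freqs[0]
    ell = (cs.shape[0] - 1) // 2
    D = np.zeros(cs.shape[1])
    for row, m in enumerate(range(-ell, ell + 1)):
        if m * m_sign < 0:          # keep only the requested branch
            continue
        # Derotate: shift the resonance of this m back to the
        # m = 0 frequency, then stack (sum) its real part.
        shift = int(round(split[row] / d_nu))
        D += np.roll(cs[row].real, -shift)
    return D
\end{verbatim}
The ensemble average over the five 72-day realizations then yields $D^{\ell,\Delta\ell,\pm}_n(\omega)$.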
The misfit function that quantifies the goodness of a chosen model is given by \begin{equation} \Xi_n = \sum_{\ell, \omega, \pm} \left( \frac{ D^{\ell, \Delta \ell, \pm}_n(\omega) - (M^{\ell, \Delta \ell, \pm}_n (\omega) + b^{\ell, \Delta \ell, \pm}_n(\omega))} {\sigma^{\ell, \Delta \ell, \pm}_n(\omega)} \right)^2 \label{eqn:misfit}, \end{equation} where $[\sigma^{\ell, \Delta \ell, \pm}_n(\omega)]^2$ denotes the variance of the data $D^{\ell, \Delta \ell, \pm}_n(\omega)$ and is given by \begin{equation} [\sigma^{\ell, \Delta \ell, \pm}_n(\omega)]^2 = \left\langle \left( \sum_{m^\pm} \mathcal{S}_m \left( \text{Re}\left[\varphi^{\ell, m}(\omega)\, \varphi^{\ell+\Delta \ell, m*}(\omega)\right] \right) - D^{\ell, \Delta \ell, \pm}_n(\omega) \right)^2 \right\rangle . \label{eqn:variance} \end{equation} \subsection{Bayesian Inference: MCMC} \label{sec:MCMC} Bayesian inference is a statistical method to determine the probability distribution functions (PDFs) of the inferred model parameters. For data $D$ and model parameters $a$, the posterior PDF $p(a|D)$, which is the conditional probability of the model given the data, may be constructed using the likelihood function $p(D|a)$ and a given prior PDF of the model parameters $p(a)$. The prior encapsulates what is already known about the model parameters $a$: \begin{equation} p(a|D) \propto p(D|a) p(a). \end{equation} The constant of proportionality is the normalization factor for the posterior probability distribution, which may be difficult to compute. The sampling of these PDFs is performed using MCMC, which involves performing a biased random walk in parameter space. Starting from an initial guess for the parameters, random moves are proposed, and each move is accepted or rejected based on the ratio of the posterior probability at the two locations. Hence, the normalization factor is superfluous to the MCMC method. Bayesian MCMC analysis has been used quite extensively in astrophysical problems \citep[][and references therein]{saha-williams-1994,christensen-meyer-1998, sharma2017mcmc} and terrestrial seismology \citep[][and references therein]{sambridge2002}. However, the use of MCMC in global helioseismology has been limited as compared to terrestrial seismology \citep{jackiewicz2020}. The aim of the current calculation is the estimation of the $(a_3^{n\ell}, a_5^{n\ell})$ that best reproduce the observed cross-spectra from the model, given by Eqn.~(\ref{eqn:mode-coupling-model}), where it is seen that the coupling coefficients $c^{\ell + p}_{\ell}$ depend on $(a_3^{n\ell}, a_5^{n\ell})$. However, because of leakage, neighbouring $\ell$ also contribute to the cross-spectrum in question. Hence, the spectrum of $(\ell, \Delta \ell)$ depends on $(a_3^{n\ell'}, a_5^{n\ell'})$ for $\ell' \in [\ell-\delta\ell, \ell+\Delta\ell+\delta\ell]$. Since we only consider mode leakage at the same radial order $n$, we are forced to simultaneously estimate all the $(a_3^{n\ell}, a_5^{n\ell})$ for a given $n$. For instance, at $n=0$, we have 52 modes with $\ell < 250$, and 94 spectra corresponding to $\Delta \ell = 2, 4$, for both the $m^+$ and $m^-$ branches. In this case, there are 52 $(a_3^{0\ell}, a_5^{0\ell})$ pairs that need to be estimated and 188 spectra which need to be modelled. Performing inversions on a high-dimensional, jagged landscape is a challenge, as the fine-tuning of regularization is tedious. However, since we have a model which encodes the dependence of the cross-spectrum on the $a$-coefficients, we can ``brute-force'' the estimation of the parameters.
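As an illustration of this brute-force strategy, the following minimal Python sketch sets up the sampling with the \texttt{emcee} package introduced below. The misfit function here is a hypothetical quadratic stand-in (centred on the fiducial surface values quoted below) for the real misfit of Eqn.~(\ref{eqn:misfit}); the walker and step counts are illustrative.
\begin{verbatim}
import numpy as np
import emcee

def log_prob(a, xi, lo, hi):
    # Flat prior: zero probability outside the box [lo, hi]
    if np.any(a < lo) or np.any(a > hi):
        return -np.inf
    return -xi(a)                      # log-likelihood = -misfit

npairs = 40                            # (a3, a5) pairs per chunk
ndim = 2 * npairs
lo = np.tile([15.0, -16.0], npairs)    # flat-prior box in nHz
hi = np.tile([35.0, 0.0], npairs)

# Stand-in misfit: quadratic bowl around a3 = 22 nHz, a5 = -4 nHz
a_true = np.tile([22.0, -4.0], npairs)
xi = lambda a: np.sum((a - a_true) ** 2)

nwalkers = 2 * ndim + 2                # need at least 2k + 1 walkers
p0 = np.random.uniform(lo, hi, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(xi, lo, hi))
sampler.run_mcmc(p0, 5000, progress=True)
chain = sampler.get_chain(discard=500, flat=True)  # drop burn-in
\end{verbatim}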
The utility of MCMC is that it enables us to sample the entire parameter space. Since the inference of the posterior PDF depends strongly on the prior, it is instructive to use an uninformative (flat) prior. For the MCMC simulations, we use the Python package \texttt{emcee} by \cite{emcee}. The package is based on the affine-invariant ensemble sampler of \cite{goodman_weare_2010}. Multiple random walkers are used to sample high-dimensional parameter spaces efficiently. We use a flat prior for all $a_3^{n\ell}$ and $a_5^{n\ell}$ (in nHz), given by \begin{align} p(a_3) = \frac{1}{20} \qquad 15 \le a_3 \le 35 \qquad \text{and} \qquad p(a_5) = \frac{1}{16} \qquad -16 \le a_5 \le 0, \end{align} and zero everywhere else, for all $(\ell, n)$. This is motivated by the results of frequency splittings: for modes near the surface, i.e., for low values of $\nu_{n\ell}/\ell$, $a_3$ has been measured to be nearly $22$~nHz and $a_5$ nearly $-4$~nHz. The likelihood function is defined as \begin{equation} p(D|a) = \exp(-\Xi_n), \label{eqn:likelihood} \end{equation} where~$\Xi_n$ is the misfit given by Eqn.~(\ref{eqn:misfit}). Flat priors enable us to sample the likelihood function in the given region of parameter space. We perform MCMC inversions for $n = 0, 1, \ldots, 8$ and find that the likelihood function is unimodal in all model parameters. For the sake of illustration, a smaller computation is presented in Appendix~\ref{sec:MCMC_demo}. \begin{table}[h!] \begin{varwidth}[b]{0.35\linewidth} \centering \begin{tabular}{| c | c |} \hline Radial order $n$ & Range of $\ell$ for $(a_3, a_5)$\\ \hline\hline 0 & 192--241, 241--281, 271--289 \\ 1 & 80--120, 110--150, 140--183 \\ 2 & 60--100, 90--130, 120--161 \\ 3 & 43--73, 73--113, 103--145 \\ 4 & 40--80, 70--110, 100--140 \\ 5 & 46--86, 76--116, 106--146 \\ 6 & 58--98, 88--128, 118--138 \\ 7 & 64--104, 94--114 \\ 8 & 73--103 \\ \hline \end{tabular} \vspace*{7mm} \caption{List of modes $(n,\ell)$ used in MCMC. These are marked as \textit{black} dots in Figure~\ref{fig:mode-selection}.} \label{tab:modelist} \end{varwidth}% \hfill \begin{minipage}[b]{0.65\linewidth} \centering \includegraphics[width=0.95\textwidth] {modes-list.pdf} \captionof{figure}{Classification of modes.} \label{fig:mode-selection} \end{minipage} \end{table} \section{Results and Discussion} \label{sec:results} The MCMC analysis is performed for each radial order separately. The current model only considers leakage between modes of the same radial order, and hence the ideal way of estimating the parameters would be to estimate all $(a_3, a_5)$ at a given radial order by modelling all the cross-spectra at that radial order. However, this makes the problem computationally very demanding, as the MCMC method used requires at least $2k+1$ random walkers for $k$ different parameters to be fit. To work around this, we break the entire set of parameters into chunks of 40 pairs, while ensuring an overlap of 10 pairs between the chunks. In Table~\ref{tab:modelist}, we list the sets of $\ell$ for which MCMC sampling is performed and parameters are estimated. Figure~\ref{fig:mode-selection} marks the multiplets $(n,\ell)$ available from the HMI pipeline. The multiplets whose modes are used for this study are labelled as black dots. The red dots, which are located at lower $\ell$, correspond to those modes which have contributions from neighbouring radial orders within the temporal-frequency window. This gets worse for $\ell<20$, where contributions from neighbouring radial orders may be seen even near the central peaks.
Modelling these spectra would require including coupling across radial orders, which is beyond the scope of the present analysis. Thus we only use modes corresponding to $\nu_{n\ell}/\ell < 45$. Figure~\ref{fig:mode-selection} also marks unused HMI-resolved modes as blue dots on either side of the black dots (used modes). This is because we consider only modes that may be fully modelled with parameters available from the HMI pipeline. Modelling a given degree $\ell$ requires mode parameters corresponding to modes from $(\ell-\delta\ell)$ to $(\ell+\delta\ell)$. The existence of unresolved modes (with no mode-parameter information from the HMI pipeline) in this region means that the modelling is incomplete, i.e., there would be peaks in the observed spectrum that are missed by the model. Hence, such modes are not considered for the present work. For any given radial order, the first $\delta\ell$ and the last $\delta\ell$ modes cannot be modelled, and thus we see blue points on either side of the set of black dots in Figure~\ref{fig:mode-selection}. The results of the MCMC analysis at all the radial orders are combined and presented in Figure~\ref{fig:error-bars}. We note that the confidence intervals become larger for higher $\nu/\ell$. The reasons for this are discussed in Section~\ref{sec:a-coeff-sens}. Estimates of the $a$-coefficients are largely in agreement with the splitting coefficients --- although the most probable values of the coupling-derived parameters differ from their splitting counterparts, they predominantly lie within the 1-$\sigma$ confidence interval. The confidence intervals of $a_3$ and $a_5$ are nearly the same size. We obtain better results, in terms of the spread in the inferred $a_3$ coefficients, than W13. This may be attributed to the consideration of the data variance as well as the simultaneous fitting for model parameters using a Bayesian approach. For instance, the spread of $a_3$ in the range $0 < \nu/\ell \lesssim 40$ is seen to be in the range 7.5--30~nHz in W13, whereas our estimates are in the range 15--26~nHz. The present method allows us to quantify the 1-$\sigma$ confidence interval around the most probable values of the estimated $a$-coefficients, whereas W13 showed only inverted values of the $a$-coefficients without their respective uncertainties. However, we also note that the estimates of $a_5$ from the Bayesian analysis are comparable to the least-squares inversions of W13. \subsection{Reconstructed power and cross spectra} \label{sec:reconst-spectra} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{00_222.pdf} \caption{Cross-spectrum for $\ell=222$ and $\Delta\ell = 0, 2, 4$. The upper panels correspond to $m^+$ and the lower panels to $m^-$. The black curve shows the observed data. The blue curve is the model before considering eigenfunction coupling and the red curve corresponds to the model constructed using parameters estimated from MCMC.} \label{fig:spectra_00_222} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{04_70.pdf} \caption{Cross-spectrum for $\ell=70$ and $\Delta\ell = 0, 2, 4$. The upper panels correspond to $m^+$ and the lower panels to $m^-$. The black curve shows the observed data.
The blue curve is the model before considering eigenfunction coupling and the red curve corresponds to the model constructed using parameters estimated from MCMC.} \label{fig:spectra_04_70} \end{figure} The $a$-coefficients obtained from the MCMC analysis are used to reconstruct cross-spectra; e.g., Figure~\ref{fig:spectra_00_222} shows the cross-spectrum for $(n=0, \ell=222)$. It may be seen that, before considering eigenfunction corrections (i.e., in the absence of differential rotation), the spectrum shown in blue is considerably different --- in both magnitude and sign --- from the observed data. After including the eigenfunction corrections estimated from MCMC, we see that the model is in close agreement with the data. In the intermediate-$\ell$ range, we show cross-spectra for $(n=4, \ell=70)$ in Figure~\ref{fig:spectra_04_70}. The corrections due to eigenfunction distortion are markedly less significant when compared to $(n=0, \ell=222)$, demonstrating the loss of sensitivity of the model to the coupling coefficients. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{error_bars.pdf} \caption{Inferred $a_3$ and $a_5$ coefficients from MCMC are shown as black dots with 1-$\sigma$ confidence intervals. The values from frequency splitting are shown in red.} \label{fig:error-bars} \end{figure} \subsection{Sensitivity of $a$-coefficients to differential rotation} \label{sec:a-coeff-sens} Mode coupling has diminished sensitivity in estimating $a$-coefficients for low-$\ell$ modes. The coupling coefficients $c_\ell^{\ell+p}$ depend on the real and imaginary parts of $b_k$. Differential rotation contributes only to the real part of $b_k$ (Eqn.~[\ref{eqn: b_k_real}]), and the dependence on $\ell$ appears through the factor $\ell (\partial \omega_{n\ell}/\partial \ell)^{-1}$. The plot of eigenfrequencies $\omega_{n\ell}$ against $\ell$ is known to flatten at higher $\ell$. Hence, $\partial \omega_{n\ell}/\partial \ell$ is large for small $\ell$ and small for large $\ell$ \citep[see Figure~1 in][]{rhodes1997}. This results in $b_k$ being small at low $\ell$, with its magnitude increasing with $\ell$, which causes the decreased sensitivity at low $\ell$. The lower sensitivity implies that the misfit function $\Xi$ is flatter at lower $\ell$. To demonstrate this, we compute $\Xi$ over $\ell=80$--$245$ for a range of values of the $a$-coefficients and determine how wide or flat $\Xi$ is in the neighbourhood of the optimal solution. Figure~\ref{fig:acoef_sens} shows that the misfit is wide for $\ell=80$ and becomes sharper with increasing $\ell$. As the highest-resolved mode for $n=1$ corresponds to $\ell=179$, we consider the radial order $n=0$ in order to study this over an extended range of $\ell$. The first two panels show the colour map of the misfit function. Near the optimal value $a^{n\ell}_s/a^{n\ell}_{\mathrm{FS}} = 1$, the synthetic misfit falls to~$0$. This is possible as the synthetic data are noise-free and can be completely modelled. The misfit increases on either side of the optimal value. The second panel shows the scaled misfit for HMI data, which is close to $1$ at the optimum, increasing on either side of the optimal value. We see the dark patch become wider at lower $\ell$, indicating the flatness of the misfit function for low $\ell$. The likelihood function, which is defined to be $\exp(-\Xi)$, is approximated as a Gaussian in the vicinity of the optimum.
The width of this Gaussian is treated as a measure of the width of the misfit function $\Xi$, with a wider misfit implying lower sensitivity to the $a$-coefficients. This is shown in the third panel of Figure~\ref{fig:acoef_sens}, where we see a decreasing trend in the misfit width, indicating that the sensitivity of mode coupling increases with $\ell$. \begin{figure} \centering \includegraphics[width=\textwidth]{acoeff_sens.pdf} \caption{Sensitivity of spectral fitting to $a$-coefficients as a function of angular degree $\ell$. The \textit{top} panel shows the variation of the misfit between synthetic data calculated using the frequency-splitting $a$-coefficients $a_{s,\mathrm{FS}}^{n\ell}$ and synthetic spectra computed from a scaled set of $a$-coefficients $a_{s}^{n\ell}$. A well-defined minimum along $a_s^{n\ell}/a_{s,\mathrm{FS}}^{n\ell} = 1.0$, which broadens towards smaller $\ell$, shows a drop in the sensitivity of the spectra to variations in the $a$-coefficients, as predicted by theory. The \textit{middle} panel shows the sensitivity to the $a$-coefficients, but now computed using the misfit between HMI spectra and synthetic spectra computed from a scaled set of $a$-coefficients $a_{s}^{n\ell}$. While it shows the same qualitative drop in $a$-coefficient sensitivity for decreasing $\ell$, the ridge of the minimum (darkest patch) is seen to deviate from $a_{s,\mathrm{FS}}^{n\ell}$. The \textit{bottom} panel shows in \textit{black} the effective variance of the misfit for each $\ell$. The narrowing confinement of the data misfit towards higher $\ell$ is seen as a decreasing effective variance with increasing $\ell$. The \textit{red} line shows the increase in the factor $\ell/(\partial \omega_{n\ell}/\partial \ell)$ that enhances sensitivity at higher $\ell$, as predicted by Eqn.~(\ref{eqn: b_k_odd_s}). The areas corresponding to radial orders $n=0, 1$ are indicated on top of each plot. } \label{fig:acoef_sens} \end{figure} \subsection{Scaling factor for synthetic spectra} The model constructed using mode parameters obtained from the HMI pipeline needs to be scaled to match the observations. This scaling factor has to be determined empirically. Since there is no well-accepted convention for estimating this factor, it is worthwhile to explore different methods of estimating it. We employ three different methods to infer the scale factor and show that the results are nearly identical. \begin{compactitem} \item Consider all the power spectra for a given radial order and perform a least-squares fit for the scale factor $N_0$. \item Fit for the scale factor $N_\ell$ as a function of the spherical-harmonic degree $\ell$ by considering all power spectra at a given radial order. \item Include the scale factor as an independent parameter to be estimated in the MCMC analysis. \end{compactitem} Figure~\ref{fig:norm_plot} shows that all the independent ways of estimating the scale factor are within 5\% of each other, indicating robustness. \begin{figure} \centering \includegraphics[width=\textwidth]{Norm_plot.pdf} \caption{The red line corresponds to $N_0$. The gray region corresponds to a 5\% deviation from $N_0$. The gray points correspond to $N_\ell$ and the solid black lines are from each MCMC simulation.
The right-most panel shows the histogram of all the gray points, taken from all radial orders.} \label{fig:norm_plot} \end{figure} \subsection{How good is the isolated multiplet approximation for $\ell \leq 300$?} \label{sec:QDPT_vs_DPT} \begin{figure} \centering \includegraphics[width=\textwidth]{DR_all.pdf} \caption{The relative offset of $L_2^\text{QDPT}$ as compared to $L_2^\text{DPT}$ (see Eqns.~[\ref{eqn: l2_Q}, \ref{eqn: l2_D}]) under the perturbation of an axisymmetric differential rotation $\Omega(r,\theta)$ as observed in the Sun. An increase in the intensity of the color scale indicates a worsening of the isolated multiplet approximation. The measures of offset are plotted for the HMI-resolved multiplets shown in Figure~\ref{fig:mode-selection}.} \label{fig:qdpt_vs_dpt_DR} \end{figure} Estimation of $a$-coefficients through frequency-splitting measurements assumes the validity of the isolated multiplet approximation, based on degenerate perturbation theory (DPT). However, an inspection of the distribution of the multiplets in $\nu-\ell$ space (as shown in Fig.~[\ref{fig:mode-selection}]) shows that it is natural to expect this approximation to worsen with increasing $\ell$. This necessitates carrying out frequency estimation respecting cross-coupling of modes across multiplets, also known as quasi-degenerate perturbation theory (QDPT). A detailed discussion of DPT and QDPT in the context of differential rotation can be found in \cite{ritzwoller} and \cite{lavely92}. In this section we discuss the goodness of the isolated multiplet approximation in estimating $a_{n \ell}$ due to $\Omega(r,\theta)$ for all the HMI-resolved modes shown in Figure~\ref{fig:mode-selection}. A similar result, but for $\ell \leq 30$, was presented in Appendix~G of \cite{sbdas20}. In Figure~\ref{fig:qdpt_vs_dpt_DR} we color code multiplets to indicate the departure of the frequency shifts obtained from QDPT, $\delta{}_{n}\omega{}_{\ell m}^{Q}$, from the shifts obtained from DPT, $\delta{}_{n}\omega{}_{\ell m}^{D}$. Strictly speaking, carrying out the eigenvalue problem in the QDPT formalism causes mixing of the quantum numbers ($n$, $\ell$, and~$m$ are no longer good quantum numbers) and prevents a one-to-one mapping of unperturbed to perturbed modes. This prohibits an explicit comparison of frequency shifts on a singlet-by-singlet basis. However, modes belonging to the same multiplet can still be identified visually and grouped together. So, to quantify the departure of $\delta{}_{n}\omega{}_{\ell m}^{Q}$ from $\delta{}_{n}\omega{}_{\ell m}^{D}$, we calculate the Frobenius norm of these frequency shifts for each multiplet: \begin{eqnarray} L_2^\text{QDPT} &=& \sqrt{\sum_m (\delta {}_{n}\omega{}_{\ell m}^\text{Q})^2} \qquad \text{for cross-coupling,} \label{eqn: l2_Q}\\ L_2^\text{DPT} &=& \sqrt{\sum_m (\delta {}_{n}\omega{}_{\ell m}^\text{D})^2} \qquad \text{for self-coupling.} \label{eqn: l2_D} \end{eqnarray} The intensity of the color scale in Figure~\ref{fig:qdpt_vs_dpt_DR} indicates the relative offset of $L_2^\text{QDPT}$ with respect to $L_2^\text{DPT}$ for a multiplet $(n, \ell)$ marked as an `o'. A larger offset indicates a greater worsening of the isolated multiplet approximation. We find that the largest error incurred using DPT instead of QDPT is 0.27\%, which occurs at $\ell=300$. This clearly shows that even for the $f$ mode (which is the most susceptible to errors) the frequency-splitting $a$-coefficients are exceptionally accurate.
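For reference, the multiplet-wise comparison in Eqns.~(\ref{eqn: l2_Q})--(\ref{eqn: l2_D}) amounts to only a few lines of code. The sketch below is a minimal Python illustration; the definition of the relative offset as $|L_2^\text{QDPT} - L_2^\text{DPT}|/L_2^\text{DPT}$ is one natural choice, stated here as an assumption.
\begin{verbatim}
import numpy as np

def relative_offset(dw_qdpt, dw_dpt):
    """Relative offset of the QDPT frequency-shift norm from the
    DPT norm for one multiplet (n, ell); inputs are the arrays of
    shifts delta omega_{n ell m} over m = -ell ... +ell."""
    l2_q = np.sqrt(np.sum(np.asarray(dw_qdpt) ** 2))
    l2_d = np.sqrt(np.sum(np.asarray(dw_dpt) ** 2))
    return np.abs(l2_q - l2_d) / l2_d
\end{verbatim}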
\section{Conclusion} \label{sec:conclusion} Most of what is currently known about solar differential rotation is derived from $a$-coefficients obtained using frequency-splitting measurements. Inferring these $a$-coefficients involves invoking the isolated multiplet approximation, based on degenerate perturbation theory. Although this approximation works well even for modes with $\ell \leq 300$, two effects motivate investigating the possibility of erroneous $a$-coefficients from frequency-splitting measurements at even higher $\ell$: the increasing proximity of modes (in frequency) along the same radial branch, and spectral leakage from neighbouring modes. Partial visibility of the Sun causes broadening of peaks in the spectral domain, referred to as mode leakage \citep{schou94, hanasoge18}. This causes proximal modes at high $\ell$ to widen and resemble continuous ridges in observed spectra. As a result, spectral-peak identification for frequency-splitting measurements becomes harder and increasingly inaccurate. Moreover, since the $a$-coefficient formalism breaks down for non-axisymmetric perturbations, considering techniques which respect cross-coupling becomes indispensable. Thus, mode coupling becomes more relevant in these regimes, and it is important to investigate the potential of mode-coupling techniques as compared to frequency splittings. Hence, this study was directed towards answering the following broad questions. (i) Can mode coupling via MCMC use the information stored in eigenfunction distortions to constrain differential rotation as accurately as frequency splittings? This would also serve to compare the potential of a Bayesian approach with the least-squares inversion performed in W13. (ii) Can this technique further increase the accuracy of $a_{n\ell}$ at $\ell \geq 150$? We already know from W13 that higher-$\ell$ estimates are increasingly precise and accurate. (iii) What are the uncertainties in estimating $a_{n\ell}$ using mode-coupling theory, and do they fall within 1-$\sigma$ of the frequency-splitting estimates? (iv) Why are mode-coupling results poorer in the low-$\ell$ regime? This is seen in earlier studies, which aimed to go deeper into the convection zone and obtained significantly imprecise and inaccurate results \citep{woodard13,schad20}. The approach in this study is broadly based on the theoretical formulations of V11 and the modelling of W13. However, the novelty of the current work lies in three main aspects. (a) The MCMC analysis enabled exploration of the complete parameter space, and it was found that the chosen misfit function is unimodal for all degrees~$\ell$ and radial orders~$n$. This establishes that the method of normal-mode coupling does return a unique value of $(a_3, a_5)$. (b) Leakage of power occurs for modes in the same radial order $n$, and hence a consistent determination of $(a_3, a_5)$ would involve the simultaneous estimation of splitting coefficients for all~$\ell$ at the same radial order. However, the number of parameters is large, and hence we break it into chunks of 40 pairs of $(a_3, a_5)$ per MCMC, with an overlap of $N_o$ pairs of parameters between two different chunks. To settle on a reasonable value of $N_o$, we perform a simple experiment. From MCMC simulations with different overlap numbers $N_o = \{0, 2, 4, 6, 8, 10\}$, we find that for $N_o > 6$, the inferred $a$-coefficients vary by less than 1-$\sigma$ and are therefore reasonably stable for larger $N_o$.
Hence, we choose the modal overlap number $N_o=10$ for computations at all radial orders. (c) Since a large number of splitting coefficients are determined simultaneously, a correspondingly large number of spectra is used. Hence, estimation of the data variance becomes critical in order to appropriately weight different data points according to their noise levels. These improvements lead to a better estimate of differential rotation using mode coupling. The inference of rotation at lower $\ell$ ($<50$) suffers for two reasons. (a) Low sensitivity of the model to the $a$-coefficients. (b) Proximity of modes of radial orders $(n+1)$ and $(n-1)$ to modes at radial order $n$. Since the current model only accounts for leakage of power within the same radial order, a chosen frequency window in the data would contain peaks from neighbouring radial orders, which are not modelled. Hence, an improvement might be achieved at lower $\ell$ by modelling the interaction of modes of different radial orders. Finally, in this study we also show that even though frequency splitting is much more precise for low $\ell \leq 150$, mode-coupling estimates of differential rotation improve at high $\ell \geq 200$. Therefore, it is expected that mode coupling would be comparable to (or possibly more accurate than) frequency splitting for very high $\ell \geq 300$. This would then allow one to compare mode-coupling estimates of shallow, small-scale structures with results from methods in local helioseismology. Going this high in angular degree for mode coupling, however, introduces some challenges: (a) the computation of leakage matrices for high $\ell$ is very expensive; (b) $\partial \omega/\partial \ell$ decreases as $\ell$ grows and the spectrum becomes a continuous ridge in frequency space, making it harder to resolve the modes completely. In conclusion, there remains scope for improvement and related lines of study. In this study, we have ignored the even-$s$ components of $\Omega_s$, which are the NS-asymmetric components of differential rotation. These components have been estimated to be small at the surface and are anticipated to be small in the interior. However, this assumption may be premature, given that prior estimates of interior rotation asymmetries are based on non-seismic surface measurements. Since the V11 formalism is capable of accommodating the estimation of even-$s$ components as well, this could be the focus of a future investigation. Additionally, the current analysis was performed after summing up the stacked cross-spectrum. Although this was done to improve the signal-to-noise ratio, the spectra at different azimuthal orders $m$ are not identical. Hence, a more complete computation would involve the misfit computed using the full spectrum as a function of $m$. This may possibly lead to better estimates of the $a$-coefficients, as there exists structure in the azimuthal order (see Fig.~[\ref{fig:cs-200-202}]), which is lost after summation. \\ The authors of this study are grateful to Jesper Schou (Max Planck Institute for Solar System Research) for numerous insightful discussions as well as detailed comments that helped us improve the quality of the manuscript. The authors thank the anonymous referee for valuable suggestions that helped improve the text and figures in this manuscript. \begin{appendices} \section{Spherical harmonics symmetry relations}\label{apdx:sph-symm} Consider a time-varying, real-valued scalar field on a sphere, \(\phi(\theta, \phi, t)\).
The spherical harmonic components are given by \begin{equation} \phi^{\ell,|m|}(t) = \int_\Omega {\mathrm{d}}\Omega\, Y^{*\ell, |m|}(\theta, \phi)\, \phi(\theta, \phi, t) = (-1)^{|m|} \int_\Omega {\mathrm{d}}\Omega\, Y^{\ell, -|m|}\, \phi(\theta, \phi, t) = (-1)^{|m|} \phi^{*\ell, -|m|}(t), \end{equation} where $\mathrm{d}\Omega$ is the area element, the integration being performed over the entire surface of the sphere. After performing a temporal Fourier transform, we have \begin{equation} \phi^{\ell,|m|}(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty {\mathrm{d}} t\, e^{-i\omega t}\, \phi^{\ell, |m|}(t) = (-1)^{|m|} \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty {\mathrm{d}} t\, e^{-i\omega t}\, \phi^{* \ell, -|m|}(t), \end{equation} \begin{equation} \phi^{*\ell, -|m|}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty {\mathrm{d}} t\, e^{i\omega t}\, \phi^{*\ell, -|m|}(t) = (-1)^{|m|} \phi^{\ell, |m|}(-\omega) \implies \phi^{\ell, -|m|}(\omega) = (-1)^{|m|} \phi^{*\ell, |m|}(-\omega). \end{equation} \section{MCMC: An illustrative Case} \label{sec:MCMC_demo} We present an MCMC estimation of $a$-coefficients using a smaller set of modes (and hence fewer model parameters). The smaller number of model parameters lets us present all the marginal probabilities in a single plot. The MCMC walkers are shown in Figure~\ref{fig:chain-sample}. In spite of using a flat prior, the likelihood function is sharp enough to bias the walkers towards the region of the optimal solution within $\sim 500$ iterations. It can be seen that different walkers start off randomly at different locations in parameter space and ultimately converge to the same region around the optimal solution. After removing the iterations from the ``burn-in'' period, where the walkers are still exploring a larger parameter space, histograms are plotted and marginal probability distributions are obtained. Figure~\ref{fig:corner-sample} shows one such estimation of $(a_3, a_5)$ for $n=0$ and $\ell$ in the range $200$ to $202$. It can be seen that the marginal posterior probability distributions for each of the parameters are unimodal. This tells us that the currently defined misfit function has a unique minimum. Note that this distribution was obtained using a flat prior, and hence the resulting posterior distributions are essentially samplings of the likelihood function. It is also worth noting that, for the range of $\ell$ chosen, the confidence intervals are $< 1$~nHz. \begin{figure} \centering \includegraphics[width=0.9\textwidth] {chain_00_200_210_maskl_4_00.pdf} \caption{Each parameter is shown in a separate panel, indicating its value as a function of the Markov-chain step number. The first few ``burn-in'' values are discarded and only the values beyond the vertical line are considered to obtain the probability distributions.} \label{fig:chain-sample} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth] {corner_00_200_210_maskl_4_00.pdf} \caption{Cross-correlations of the model parameters and the marginal probabilities of the model parameters.} \label{fig:corner-sample} \end{figure} \end{appendices}
\section*{Acknowledgement} We thank T. Alexiadis and J.~M\'{a}rquez for the data acquisition, B. Pellkofer for hardware support, A. Quiros-Ramires for support with MTurk, A. Osman for support with Tensorflow, and S. Pujades for help with finalizing the paper. We further thank Karras et al. for providing us with a static face mesh for comparison. MJB has received research gift funds from Intel, Nvidia, Adobe, Facebook, and Amazon. While MJB is a part-time employee of Amazon, his research was performed solely at, and funded solely by, MPI. MJB has financial interests in Amazon and Meshcapade GmbH. \textbf{In loving memory of Daniel Cudeiro}. \section{Conclusion} We have presented VOCA\xspace, a simple and generic speech-driven facial animation framework that works across a range of identities. Given an arbitrary speech signal and a static character mesh, VOCA\xspace fully automatically outputs a realistic character animation. VOCA\xspace leverages recent advances in speech processing and 3D face modeling in order to be subject independent. We train our model on a self-captured multi-subject 4D face dataset (VOCASET). The key insights of VOCA\xspace are to factor identity from facial motion, which allows us to animate a wide range of adult faces, and to condition on subject labels, which enables us to train VOCA\xspace across multiple subjects and to synthesize different speaker styles at test time. VOCA\xspace generalizes well across various speech sources, languages, and 3D face templates. We provide optional animation control parameters to vary the speaking style and to alter the identity-dependent shape and head pose during animation. The dataset, trained model, and code are available for research purposes~\cite{VOCA_project}. \section{VOCASET} \label{data} This section introduces VOCASET\xspace and describes the capture setup and data processing. \qheading{VOCASET\xspace:} Our dataset contains a collection of audio-4D scan pairs captured from 6 female and 6 male subjects. For each subject, we collect 40 sequences of spoken English sentences, each of length three to five seconds. The sentences were taken from an array of standard protocols and were selected to maximize phonetic diversity using the method described in \cite{fisher1986}. In particular, each subject spoke 27 sentences from the TIMIT corpus \cite{timit1993}, three pangrams used by \cite{karras2017}, and 10 questions from the Stanford Question Answering Dataset (SQuAD) \cite{squad2016}. The recorded sequences are distributed such that five sentences are shared across all subjects, 15 sentences are spoken by three to five subjects (50 unique sentences), and 20 sentences are spoken only by one or two subjects (200 unique sentences). We make VOCASET\xspace available to the research community.
\begin{figure}[t] \centerline{ \includegraphics[width=0.24\columnwidth]{reg_subj1_00000.png} \includegraphics[width=0.24\columnwidth]{reg_subj1_00023.png} \includegraphics[width=0.24\columnwidth]{reg_subj1_00081.png} \includegraphics[width=0.24\columnwidth]{reg_subj1_00132.png} } \centerline{ \includegraphics[width=0.24\columnwidth]{reg_subj2_00000.png} \includegraphics[width=0.24\columnwidth]{reg_subj2_00013.png} \includegraphics[width=0.24\columnwidth]{reg_subj2_00094.png} \includegraphics[width=0.24\columnwidth]{reg_subj2_00189.png} } \caption{Sample meshes of two VOCASET\xspace subjects.} \label{fig:dataset} \end{figure} \qheading{Capture setup:} We use a multi-camera active stereo system (3dMD LLC, Atlanta) to capture high-quality 3D head scans and audio. The capture system consists of six pairs of gray-scale stereo cameras, six color cameras, five speckle pattern projectors, and six white light LED panels. The system captures 3D meshes at 60fps, each with about 120K vertices. The color images are used to generate UV texture maps for each scan. The audio, synchronized with the scanner, is captured with a sample rate of 22 kHz. \qheading{Data processing:} The raw 3D head scans are registered with a sequential alignment method as described in \cite{flame2017}, using the publicly available generic FLAME model. The image-based landmark prediction method of \cite{BulatTzimiropoulos} is used during alignment to add robustness while tracking fast facial motions. After alignment, each mesh consists of $5023$ 3D vertices. For all scans, we measure the absolute distance between each scan vertex and the closest point on the FLAME alignment surface: median (0.09mm), mean (0.13mm), and standard deviation (0.14mm). Thus, the alignments faithfully represent the raw data. All meshes are then unposed; i.e., the effects of global rotation, translation, and head rotation around the neck are removed. After unposing, all meshes are in ``zero pose''. For each sequence, the neck boundary and the ears are automatically fixed, and the region around the eyes is smoothed using Gaussian filtering to remove capture noise. Note that no smoothing is applied to the mouth region, so as to preserve subtle motions. Figure~\ref{fig:dataset} shows sample alignments of two VOCASET\xspace subjects. The supplementary video shows sequences of all subjects. \section{Discussion} While VOCA\xspace can be used to realistically animate a wide range of adult faces from speech, it still lacks some of the details needed for conversational realism. Upper-face motions (i.e., eyes and eyebrows) are not strongly correlated with the audio~\cite{karras2017}. The causal factor is emotion, which is absent in our data due to the inherent difficulty of simulating emotional speech in a controlled capture environment. Thus, VOCA\xspace learns the causal facial motions from speech, which are mostly present in the lower face. Non-verbal communication cues, such as head motion, are weakly correlated with the audio signal and hence are not modeled well by audio-driven techniques. VOCA\xspace offers animators and developers the possibility to include head motion, but does not infer it from data. A speech-independent model for head motion could be used to simulate realistic results. Application-specific techniques, such as dyadic interactions between animated assistants and humans, require attention mechanisms that consider spatial features, such as eye tracking. Learning richer conversation models with expressive bodies~\cite{SMPLEx2019} is future research.
Conditioning on subject labels is one of the key aspects of VOCA\xspace that allows training across subjects. This allows a user to alter the speaking style during inference. Using data from more subjects to increase the number of different speaking styles remains a task for future work, as do experiments on mitigating or amplifying different speaking styles, or on combining the characteristics of different subjects. \section{Experiments} \label{experiments} Quantitative metrics, such as the norm of the prediction error, are not suitable for evaluating animation quality. This is because facial visemes form many-to-many mappings with speech utterances. A wide range of plausible facial motions exists for the same speech sequence, which makes quantitative evaluation intractable. Instead, we perform perceptual and qualitative evaluations. Further, our trained model is available for research purposes for direct comparisons~\cite{VOCA_project}. \subsection{Perceptual evaluation} \qheading{User study:} We conduct three \ac{amt} blind user studies: i) a binary comparison between held-out test sequences and our model conditioned on all training subjects, ii) an ablation study to assess the effectiveness of the DeepSpeech features, and iii) a study to investigate the correlation between style, content, and identity. All experiments are performed on sequences and subjects fully disjoint from our training and validation sets. For binary comparisons, two videos with the same animated subject and audio clip are shown side by side. For each video pair, the participant is asked to choose the talking head that moves more naturally and in accordance with the audio. To avoid any selection bias, the order (left/right) of all methods for comparison is random for each pair. Style comparisons are used to evaluate the learned speaking styles. Here, Turkers see three videos: one reference and two predictions. The task is to determine which of the two predictions is more similar to the reference video. To ensure the quality of the study and remove potential outliers, we require Turkers to pass a simple qualification test before they are allowed to submit HITs. The qualification task is a simplified version of the following user study, where we show three comparisons with an obvious answer, i.e., one ground-truth sequence and one sequence with completely mismatched video and audio. \qheading{Comparison to recorded performance:} We compare captured and processed test sequences with VOCA\xspace predictions conditioned on all eight speaker styles. In total, Turkers ($400$ HITs) perceived the recorded performance as more natural ($83 \pm 9\%$) than the predictions ($17 \pm 9\%$), across all conditions. While VOCA\xspace results in realistic facial motion for the unseen subjects, it is unable to synthesize the idiosyncrasies of these subjects. These subtle subject-specific details make the recorded sequences look more natural than the predictions. \qheading{Speech feature ablation:} We replace the DeepSpeech features by Mel-filterbank energy features (fbank) and train a model for $50$ epochs (the same as for VOCA\xspace). Turkers ($400$ HITs) perceived the performance of VOCA\xspace with DeepSpeech features as more natural ($78 \pm 16\%$) than with fbank features ($22 \pm 16\%$) across all conditions. This indicates that VOCA\xspace with DeepSpeech features generalizes better to unseen audio sequences than with fbank features.
\qheading{Style comparisons:} Speech-driven facial performance varies greatly across subjects. However, it is difficult to separate style (the facial motion of a subject), identity (the facial shape of a subject), and content (the words being said), and to assess how these different factors influence perception. The goal of this user study is to evaluate the speech-driven facial motion independently from the identity-dependent face shape, in order to understand if people can recognize the styles learned by our model. To accomplish this, we subtract the personalized template (neutral face) from all sequences to obtain ``displacements'', then add these displacements to a single common template (randomly sampled from the FLAME shape space). Then, for several reference sequences from the training data, we compare two VOCA\xspace predictions (on audio from the test set): one conditioned on the reference subject and one conditioned on another randomly selected subject. We ask Turkers to select which predicted sequence is more similar in speaking style to the reference. To explore the influence of content, we perform the experiment twice, once where the reference video and the predictions share the same sentence (spoken by different subjects) and once with different sentences. Figure~\ref{fig:exp3_and_4} shows the results for this experiment. Results varied greatly across conditions. For some conditions, Turkers could consistently pick the sequence with the matching style (e.g. conditions 3, 4, and 5); for others, their choices were no better than chance. The impact of the content was not significant for most conditions. More research is needed to understand which factors are important for people to recognize different speaking styles, and to develop new models that more efficiently disentangle facial shape and motion. \pgfplotsset{compat=1.11, /pgfplots/ybar legend/.style={ /pgfplots/legend image code/.code={% \draw[##1,/tikz/.cd,yshift=-0.25em] (0cm,0cm) rectangle (0.6em,0.8em);}, }, } \begin{figure}[t] \centering \begin{tikzpicture} \begin{axis}[ symbolic x coords={0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, xmin=0, xmax=9, xtick={1, 2, 3, 4, 5, 6, 7, 8}, xtick style={draw=none}, xlabel=Condition, yticklabel={\pgfmathprintnumber[assume math mode=true]{\tick}\%}, ylabel=Percent of reference condition chosen, ymajorgrids=true, ytick={10, 20, 30, 40, 50, 60, 70, 80, 90}, ymin=0, ymax=100, ytick style={draw=none}, legend style={at={(0.5,1.15)}, anchor=north, legend columns=-1, draw=lightgray}, style={draw=lightgray}, ybar=0pt, bar width=7pt ] \addplot[fill=chartblue, draw opacity=0, error bars/.cd, y dir=both, y explicit, error bar style={draw=black, draw opacity=1}] coordinates { (1, 25) +- (0, 13.4) (2, 48) +- (0, 12.6) (3, 87) +- (0, 8.6) (4, 68) +- (0, 14.5) (5, 80) +- (0, 10.1) (6, 48) +- (0, 12.6) (7, 55) +- (0, 12.6) (8, 15) +- (0, 9.0)}; \addplot[fill=chartorange, draw opacity=0, error bars/.cd, y dir=both, y explicit, error bar style={draw=black, draw opacity=1}] coordinates { (1, 45) +- (0, 8.9) (2, 34) +- (0, 9.8) (3, 92) +- (0, 4.7) (4, 67) +- (0, 8.4) (5, 76) +- (0, 7.7) (6, 26) +- (0, 7.8) (7, 55) +- (0, 10.0) (8, 14) +- (0, 6.9)}; \legend{Same sentences, Different sentences} \end{axis} \end{tikzpicture} \caption{AMT study of styles.
The bars show the percentage of Turkers choosing the reference condition when the same sentence was shown for reference and prediction, and when different sentences were shown.} \label{fig:exp3_and_4} \end{figure} \subsection{Qualitative evaluation} \qheading{Generalization across subjects:} Factoring identity from facial motion allows us to animate a wide range of adult faces. To show the generalization capabilities of VOCA\xspace, we select, align and pose-normalize multiple neutral scans with large shape variations from the BU-3DFE database~\cite{BU-3DFE_2006}. Figure~\ref{fig:generalization_across_subj} shows the static template (left) and some VOCA\xspace animation frames, driven by the same audio sequence. \begin{figure}[t] \centerline{ \includegraphics[width=0.24\columnwidth]{F0002_NE00BL_RAW_template.png} \includegraphics[width=0.24\columnwidth]{F0002_NE00BL_RAW_sentence21_00023.png} \includegraphics[width=0.24\columnwidth]{F0002_NE00BL_RAW_sentence21_00168.png} \includegraphics[width=0.24\columnwidth]{F0002_NE00BL_RAW_sentence21_00190.png} } \centerline{ \includegraphics[width=0.24\columnwidth]{M0003_NE00WH_RAW_template.png} \includegraphics[width=0.24\columnwidth]{M0003_NE00WH_RAW_sentence21_00023.png} \includegraphics[width=0.24\columnwidth]{M0003_NE00WH_RAW_sentence21_00168.png} \includegraphics[width=0.24\columnwidth]{M0003_NE00WH_RAW_sentence21_00190.png} } \centerline{ template \hspace{0.26\columnwidth} animation frames \hspace{0.2\columnwidth} } \caption{VOCA\xspace generalizes across face shapes. Each row shows the template of a subject selected from the static BU-3DFE face database~\cite{BU-3DFE_2006} (left), and three randomly selected animation frames, driven by the same audio input (right).} \label{fig:generalization_across_subj} \end{figure} \qheading{Generalization across languages:} The supplementary video shows the VOCA\xspace output for different languages. This indicates that VOCA\xspace can generalize to non-English sentences. \qheading{Speaker styles:} Conditioning on different subjects during inference results in different speaking styles. Stylistic differences include variation in lip articulation. Figure~\ref{fig:conditioning} shows the distance between the lower and upper lip as a function of time for VOCA\xspace predictions on a random audio sequence under different conditions. We generate new intermediate speaking styles by convex combinations of conditions; due to the linearity of the decoder, performing this convex combination in the 3D vertex space or in the 50-dimensional encoding space is equivalent. As Figure~\ref{fig:conditioning} indicates, the convex combination of styles provides a wide range of different mouth amplitudes. The supplementary video shows that combining styles offers animation control to synthesize a range of varying speaking styles. This is potentially useful for matching the speaking performance of a subject not seen during training. In the future, such a style could be estimated from video. \begin{figure} \includegraphics[width=0.95\columnwidth]{convex_range.png} \caption{Distance between lower and upper lip for VOCA\xspace predictions conditioned on different subjects. The shaded region represents the space of convex combinations of the different conditions.} \label{fig:conditioning} \end{figure} \qheading{Robustness to noise:} To demonstrate robustness to noise, we combine a speech signal with different levels of noise and use the noisy signal as VOCA\xspace input.
As a noise source, we use a realistic street noise sequence~\cite{soundible} added with a negative gain of $36$dB (low), $24$dB (medium), $18$dB (slightly high), and $12$dB (high). Only the high noise level leads to damped facial motion; despite the noise, the facial animations remain plausible. \qheading{Comparison to Karras et al.~\cite{karras2017}:} We compare VOCA\xspace to Karras et al.~\cite{karras2017}, the state-of-the-art in realistic subject-specific audio-driven facial animation. The results are shown in the supplementary video. For comparison, the authors provided us with a static mesh, to which we aligned the FLAME topology. We then use eight audio sequences from their supplementary video (including singing, spoken Chinese, an excerpt of a Barack Obama speech, and different sequences of the actor) to animate their static mesh. The supplementary video shows that, while their model produces more natural and detailed results, we can still reproduce similar facial animation without using any of their subject-specific training data. Further, Karras et al. use professional actors capable of simulating emotional speech. This enables them to add more realism in the upper face by modeling motions (i.e. of the eyes and eyebrows) that are correlated more with emotions than with speech. \qheading{Animation control:} Figure~\ref{fig:modified_shape} demonstrates the possibility of changing the identity-dependent shape (top) and head pose (bottom) during animation. Both rows are driven by the same audio sequence. Despite the varying shape or pose, the facial animation looks realistic. \begin{figure}[t] \centerline{ \includegraphics[width=0.24\columnwidth]{shape_variation_00000.png} \includegraphics[width=0.24\columnwidth]{shape_variation_00069.png} \includegraphics[width=0.24\columnwidth]{shape_variation_00138.png} \includegraphics[width=0.24\columnwidth]{shape_variation_00208.png} } \centerline{ \includegraphics[width=0.24\columnwidth]{pose_variation_00000.png} \includegraphics[width=0.24\columnwidth]{pose_variation_00069.png} \includegraphics[width=0.24\columnwidth]{pose_variation_00138.png} \includegraphics[width=0.24\columnwidth]{pose_variation_00208.png} } \caption{Animation control. Top: varying the first identity shape component to plus two (second column) and minus two (last column) standard deviations. Bottom: varying the head pose to minus 30 degrees (second column) and plus 30 degrees (last column).} \label{fig:modified_shape} \end{figure} \section{Preliminaries} Our goal for VOCA\xspace is to generalize well to arbitrary subjects not seen during training. Generalization across subjects involves both (i) generalization across different speakers in terms of the audio (variations in accent, speed, audio source, noise, environment, etc.) and (ii) generalization across different facial shapes and motion. \qheading{DeepSpeech:} To gain robustness to different audio sources, regardless of noise, recording artifacts, or language, we integrate DeepSpeech~\cite{hannun2014deep} into our model. DeepSpeech~\cite{hannun2014deep} is an end-to-end deep learning model for \ac{asr}. DeepSpeech uses a simple architecture consisting of five layers of hidden units, of which the first three layers are non-recurrent fully connected layers with ReLU activations. The fourth layer is a bi-directional \ac{rnn}, and the fifth is a fully connected layer with ReLU activation. The output of this final layer is fed to a softmax function whose output is a probability distribution over characters.
The TensorFlow implementation provided by Mozilla~\cite{mozillaDeepSpeech} slightly differs from the original paper in two ways: (i) the \ac{rnn} units are replaced by \ac{lstm} cells and (ii) 26 \ac{mfcc} audio features are used instead of directly performing inference on the spectrogram. Please see \cite{mozillaDeepSpeech} for more details. \qheading{FLAME:} Facial shape and head motion vary greatly across subjects. Furthermore, different people have different speaking styles. The large variability in facial shape, motion, and speaking style motivates the use of a common learning space. We address this problem by incorporating FLAME, a publicly available statistical head model, as part of our animation pipeline. FLAME uses linear transformations to describe identity- and expression-dependent shape variations, and standard linear blend skinning (LBS) to model neck, jaw, and eyeball rotations. Given a template $\textbf{T} \in \mathbb{R}^{3N}$ in the ``zero pose'', identity, pose, and expression blendshapes are modeled as vertex offsets from $\textbf{T}$. For more details we refer the reader to~\cite{flame2017}. \section{Introduction} Teaching computers to see and understand faces is critical for them to understand human behavior. There is an extensive literature on estimating 3D face shape, facial expressions, and facial motion from images and videos. Less attention has been paid to estimating 3D properties of faces from sound; however, many facial motions are caused directly by the production of speech. Understanding the correlation between speech and facial motion thus provides additional valuable information for analyzing humans, particularly if visual data are noisy, missing, or ambiguous. The relation between speech and facial motion has previously been used to separate audio-visual speech~\cite{Ephrat:Siggraph:2018} and for audio-video driven facial animation~\cite{Liu2015}. Missing to date is a general and robust method that relates the speech of {\em any} person in {\em any} language to the 3D facial motion of {\em any} face shape. Here we present {\em VOCA (Voice Operated Character Animation)}, which takes a step towards this goal. While speech-driven 3D facial animation has been widely studied, speaker-independent modeling remains a challenging, unsolved task for several reasons. First, speech signals and facial motion are strongly correlated but lie in two very different spaces; thus, non-linear regression functions are needed to relate the two. One can exploit deep neural networks to address this problem. However, this means that significant amounts of training data are needed. Second, there exists a many-to-many mapping between phonemes and facial motion. This poses an even greater challenge when training across people and styles. Third, because we are especially sensitive to faces, and particularly to realistic faces, the animation must be realistic to avoid falling into the Uncanny Valley~\cite{Mori1970}. Fourth, there is very limited training data relating speech to the 3D face shape of multiple speakers. Finally, while previous work has shown that models can be trained to create speaker-specific animations~\cite{Cao2005, karras2017}, there are no generic methods that are speaker independent and that capture a variety of speaking styles. \qheading{VOCASET\xspace:} To address this, we collected a new dataset of 4D face scans together with speech.
The dataset has 12 subjects and 480 sequences of about 3-4 seconds each, with sentences chosen from an array of standard protocols that maximize phonetic diversity. The 4D scans are captured at 60fps and we align a common face template mesh to all the scans, bringing them into correspondence. This dataset, called {\em VOCASET\xspace}, is unlike any existing public dataset. It allows training and testing of speech-to-animation models that can generalize to new data. \qheading{VOCA\xspace:} Given such data, we train a deep neural network model, called {\em VOCA\xspace} (Figure~\ref{fig:network-architecture}), which generalizes to new speakers (see Figure~\ref{fig:teaser}). Recent work using deep networks has shown impressive results for the problem of regressing {\em speaker-dependent} facial animation from speech \cite{karras2017}. Their work, however, captures the idiosyncrasies of an individual, making it inappropriate for generalization across characters. While deep learning is advancing the field quickly, even the best recent methods rely on some manual processes or focus only on the mouth \cite{taylor2017deep}, making them inappropriate for truly automatic full facial animation. The key problem with prior work is that facial motion and facial identity are confounded. Our key insight is to factor identity from facial motions and then learn a model relating speech to only the motions. Conditioning on subject labels during training allows us to combine data from many subjects in the training process, which enables the model both to generalize to new subjects not seen during training and to synthesize different speaker styles. Integrating DeepSpeech~\cite{hannun2014deep} for audio feature extraction makes VOCA\xspace robust with respect to different audio sources and noise. Building on top of the expressive FLAME head model~\cite{flame2017} allows us i) to model motions of the full face (i.e. including the neck), ii) to animate a wide range of adult faces, as FLAME can be used to reconstruct subject-specific templates from a scan or image, and iii) to edit identity-dependent shape and head pose during animation. VOCA\xspace and VOCASET\xspace are available for research purposes~\cite{VOCA_project}. \section{VOCA} \label{model} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{network_architecture.png} \caption{VOCA\xspace network architecture.} \label{fig:network-architecture} \end{figure*} \begin{table}[t] \footnotesize \begin{tabular}{lcccc} \bf Type & \bf Kernel & \bf Stride & \bf Output & \bf Activation \\ \hline DeepSpeech & - & - & 16x1x29 & - \\ \hline Identity concat & - & - & 16x1x37 & - \\ Convolution & 3x1 & 2x1 & 8x1x32 & ReLU \\ Convolution & 3x1 & 2x1 & 4x1x32 & ReLU \\ Convolution & 3x1 & 2x1 & 2x1x64 & ReLU \\ Convolution & 3x1 & 2x1 & 1x1x64 & ReLU \\ \hline Identity concat & - & - & 72 & - \\ Fully connected & - & - & 128 & tanh \\ Fully connected & - & - & 50 & linear \\ \hline Fully connected & - & - & 5023x3 & linear \\ \end{tabular} \caption{Model architecture.} \label{tab:architecture} \end{table} This section describes the model architecture and provides details on how the input audio is processed. \qheading{Overview:} VOCA\xspace receives as input a subject-specific template $\textbf{T}$ and the raw audio signal, from which we extract features using DeepSpeech~\cite{hannun2014deep}. The desired output is the target 3D mesh.
VOCA\xspace acts as an encoder-decoder network (see Figure~\ref{fig:network-architecture} and Table~\ref{tab:architecture}) where the encoder learns to transform audio features to a low-dimensional embedding and the decoder maps this embedding into a high-dimensional space of 3D vertex displacements. \qheading{Speech feature extraction:} Given an input audio clip of length $T$ seconds, we use DeepSpeech to extract speech features. The outputs are unnormalized log probabilities of characters for frames of length 0.02 s (50 frames per second); thus, the output is an array of size $50 T \times D$, where $D$ is the number of characters in the alphabet plus one for a blank label. We resample the output to 60 fps using linear interpolation. In order to incorporate temporal information, we convert the audio frames to overlapping windows of size $W \times D$, where $W$ is the window size. The output is a three-dimensional array of dimensions $60 T \times W \times D$. \qheading{Encoder:} The encoder is composed of four convolutional layers and two fully connected layers. The speech features and the final convolutional layer are conditioned on the subject labels to learn subject-specific styles when trained across multiple subjects. For eight training subjects, each subject $j$ is encoded as a one-hot vector $I_j = \left(\delta_{ij}\right)_{1 \leq i \leq 8}$. This vector is concatenated to each $D$-dimensional speech feature vector (i.e. resulting in windows of dimension $W \times (D+8)$), and to the output of the final convolutional layer. To learn temporal features and reduce the dimensionality of the input, each convolutional layer uses a kernel of dimension $3 \times 1$ and stride $2 \times 1$. As the features extracted using DeepSpeech do not have any spatial correlation, we reshape the input window to have dimensions $W \times 1 \times (D+8)$ and perform 1D convolutions over the temporal dimension. To avoid overfitting, we keep the number of parameters small and only learn 32 filters for the first two, and 64 filters for the last two convolutional layers. The concatenation of the final convolutional layer with the subject encoding is followed by two fully connected layers. The first has 128 units and a hyperbolic tangent activation function; the second is a linear layer with 50 units. \qheading{Decoder:} The decoder of VOCA\xspace is a fully connected layer with a linear activation function, outputting the $5023 \times 3$ dimensional array of vertex displacements from $\textbf{T}$. The weights of the layer are initialized with the $50$ PCA components computed over the vertex displacements of the training data; the bias is initialized with zeros. \qheading{Animation control:} During inference, changing the eight-dimensional one-hot vector alters the output speaking style. The output of VOCA\xspace is an expressed 3D face in ``zero pose'' with the same mesh topology as the FLAME face model \cite{flame2017}. VOCA\xspace's compatibility with FLAME allows alteration of the identity-dependent facial shape by adding weighted shape blendshapes from FLAME. The face expression and pose (i.e. head, jaw, and eyeball rotations) can also be changed using the blendweights, joints, and pose blendshapes provided by FLAME. \section{Related work} \label{related} Facial animation has received significant attention in the literature. Related work in this area can be grouped into three categories: speech-based, text-based, and video- or performance-based.
\qheading{Speech-driven facial animation:} Due to the abundance of images and videos, many methods that attempt to realistically animate faces use monocular video~\cite{Brand1999,Bregler1997,Chen2018,Ezzat2002,obama2017,Wang2011,XieLiu2007}. Bregler et al.~\cite{Bregler1997} transcribe speech with a \ac{hmm} into phonetic labels and animate the mouth region in videos with an exemplar-based video warping. Brand~\cite{Brand1999} uses a mix of \ac{lpc} and RASTA-PLP~\cite{Hermansky1994} audio features and an \ac{hmm} to output a sequence of facial motion vectors. Ezzat et al.~\cite{Ezzat2002} perform \ac{pca} on all images and use an example-based mapping between phonemes and trajectories of mouth shape and mouth texture parameters in the \ac{pca} space. Xie and Liu~\cite{XieLiu2007} model facial animation with a dynamic Bayesian network-based model. Wang et al.~\cite{Wang2011} use an \ac{hmm} to learn a mapping between \ac{mfcc} and \ac{pca} model parameters. Zhang et al.~\cite{Zhang2013} combine the \ac{hmm}-based method of \cite{Wang2011}, trained on audio and visual data of one actor, with a deep neural network based encoder, trained on hundreds of hours of speaker-independent speech data, to compute an embedding of the MFCC audio features. Shimba et al.~\cite{shimba2015talking} use a deep \ac{lstm} network to regress \ac{aam} parameters from \ac{mfcc} features. Chen et al.~\cite{Chen2018} correlate audio and image motion to synthesize lip motion of arbitrary identities. Suwajanakorn et al.~\cite{obama2017} use an \ac{rnn} trained on 1.9 million frames of Obama's weekly addresses to synthesize photorealistic mouth texture animations from audio. However, their method does not generalize to unseen faces or viewpoints. In contrast to this, VOCA\xspace is trained across subjects sharing a common topology, which makes it possible to animate new faces from previously unseen viewpoints. Pham et al.~\cite{Pham2017} regress global transformation and blendshape coefficients~\cite{Cao2014_FaceWarehouse} from MFCC audio features using an \ac{lstm} network. While their model, like VOCA\xspace, is trained across subjects, they rely on model parameters regressed from 2D videos rather than on 3D scans, which limits their quality. A few methods use multi-view motion capture data~\cite{Busso2007,Cao2005} or high-resolution 3D scans~\cite{karras2017}. Busso et al.~\cite{Busso2007} synthesize rigid head motion in expressive speech sequences. Cao et al.~\cite{Cao2005} segment the audio into phonemes and use an example-based graph method to select a matching mouth animation. Karras et al.~\cite{karras2017} propose a convolutional model for mapping \ac{lpc} audio features to 3D vertex displacements. However, their model is subject-specific, and animating a new face would require 3D capture and processing of thousands of frames of subject data. Our model, VOCA\xspace, factors identity from facial motion and is trained across subjects, which allows animation of a wide range of adult faces. Several works also aim at animating artist-designed character rigs~\cite{Ding2015,Edwards2016_JALI,Hong2002,kakumanu2001speech,Salvi2009,taylor2016audio,taylor2017deep,taylor2012dynamic,Zhou2018}. Taylor et al.~\cite{taylor2017deep} propose a deep-learning based speech-driven facial animation model using a sliding window approach on transcribed phoneme sequences that outperforms previous LSTM based methods~\cite{fan2015photo, fan2016deep}.
While these models are similar to VOCA\xspace in that they animate a generic face from audio, our focus is animating a realistic face mesh, for which we train our model on high-resolution face scans. \qheading{Text-driven facial animation:} Some methods aim to animate faces directly from text. Sako et al.~\cite{Sako2000} use an \ac{hmm} to animate lips in images from text. Anderson et al.~\cite{Anderson2013} use an extended hidden Markov text-to-speech model to drive a subject-specific \ac{aam}. In a follow-up, they extend this approach to animate the face of an actress in 3D. While our focus is not to animate faces from text, this is possible by animating our model with the output of a text-to-speech (TTS) system (e.g.~\cite{WaveNet2016}), similar to Karras et al.~\cite{karras2017}. \qheading{Performance-based facial animation:} Most methods to animate digital avatars are based on visual data. Alexander et al.~\cite{Alexander2009}, Wu et al.~\cite{Wu2016}, and Laine et al.~\cite{Laine2017} build subject-specific face-rigs from high-resolution face scans and animate these rigs with video-based animation systems. Several methods build personalized face-rigs using generic face models from monocular videos to transfer and reenact facial performance between videos. Tensor-based multilinear face models~\cite{Bolkart2015,Cao2015,Cao2014,Dale2011,Vlasic2005,Yang2012} and linear models~\cite{Thies2016_Face2Face} are widely used to build personalized face-rigs. Cao et al.~\cite{Cao2015,Cao2014} use a regression-based face tracker to animate the face-rig and digital avatars, while Thies et al.~\cite{Thies2016_Face2Face} use a landmark-based face tracker and deformation transfer~\cite{SumnerPopovic2004} to reenact monocular videos. Other methods that animate virtual avatars rely on {RGB-D} videos or 4D sequences to track and retarget facial performance. Li et al.~\cite{Li2010} and Weise et al.~\cite{Weise2011} capture example-based rigs in an offline calibration procedure to build personalized face-rigs, while Bouaziz et al.~\cite{Bouaziz2013} use a generic identity model. Liu et al.~\cite{Liu2015} combine audio and video to robustly animate a generic face model from RGB-D video. Li et al.~\cite{flame2017} capture facial performance with a high-resolution scanner and animate static face meshes using an articulated generic head model. In contrast to these methods, our approach relies solely on audio to animate digital avatars. \qheading{3D face datasets:} Several 3D face datasets have been released that focus on the analysis of static 3D facial shape and expression (e.g.~\cite{Cao2014_FaceWarehouse,Bosphorus2008,BU-3DFE2006}) or dynamic facial expressions (e.g.~\cite{Alashkar2014,Chang2005,D3DFACS2011,CoMA2018,BU-4DFE2008,BP4D-Spontaneous2014,MMSE2016}). Most of these datasets focus on emotional expressions and only a few datasets capture facial dynamics caused by speech. The recently published 4DFAB dataset~\cite{4DFAB2018} contains 4D captures of 180 subjects, but with only nine word utterances per subject and lower mesh quality than VOCASET. The B3D(AC)\^{}2 dataset~\cite{B3D(AC)2010} contains a large set of audio-4D scan pairs of 40 spoken English sentences. In contrast, VOCASET\xspace contains 255 unique sentences in total. To enable training on both a large number of sentences and subjects, some sentences are shared across subjects and some sentences are spoken by only one subject.
The visible artifacts present in the raw B3D(AC)\^{}2 scans (i.e.~holes and capture noise) mean that subtle facial motions may be lost; also, the registered template only covers the face, ignoring speech-related motions in the neck region. VOCASET\xspace, in comparison, provides higher-quality 3D scans as well as alignments of the entire head, including the neck. \section{Model training} \label{training} In this section we describe details relevant to training. \qheading{Training set-up:} We start from a large dataset of audio-4D scan pairs, denoted as $\{(\textbf{x}_{i}, \textbf{y}_{i})\}_{i = 1}^{F}$. Here $\textbf{x}_{i} \in \mathbb{R}^{W \times D}$ is the input audio window centered at the $i$th video frame, and $\textbf{y}_{i} \in \mathbb{R}^{N\times3}$ is the corresponding ground-truth facial mesh. Further, let $\textbf{f}_i \in \mathbb{R}^{N\times3}$ denote the output of VOCA\xspace for $\textbf{x}_{i}$. For training, we split the captured data into a training set (eight subjects), a validation set (two subjects), and a test set (two subjects). The training set consists of all 40 sentences of the eight subjects, i.e. in total 320 sentences. For validation and test data, we only select the 20 unique sentences that are not shared with any other subject, i.e. 40 sentences for validation and testing, respectively. Note that our training, validation, and test sets for all experiments are fully disjoint, i.e. no overlap of subjects or sentences exists. \qheading{Loss function:} Our training loss function consists of two terms, a position term and a velocity term. The position term $E_p = \Vert \textbf{y}_{i} - \textbf{f}_i \Vert_{F}^2$ computes the distance between the predicted outputs and the training vertices. This position term encourages the model to match the ground truth performance. The velocity term $E_v = \Vert (\textbf{y}_{i} - \textbf{y}_{i-1}) - (\textbf{f}_i - \textbf{f}_{i-1})\Vert_{F}^2$ uses backward finite differences. It computes the distance between the differences of consecutive frames between predicted outputs and training vertices. This velocity term induces temporal stability. \qheading{Training parameters:} We perform hyperparameter tuning on the held-out validation set. We train VOCA\xspace for 50 epochs with a constant learning rate of $10^{-4}$. The weights for the position and velocity terms are $1.0$ and $10.0$, respectively. During training, we use batch normalization with a batch size of 64. We use a window size of $W=16$ with $D=29$ speech features. \qheading{Implementation details: } VOCA\xspace is implemented in Python using TensorFlow \cite{Abadi2016}, and trained using Adam \cite{kingma2014adam}. Training one epoch takes about ten minutes on a single NVIDIA Tesla K20. We use a pre-trained DeepSpeech model~\cite{mozillaDeepSpeech} which is kept fixed during training.
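To make the training objective concrete, a minimal sketch of this loss in Python/TensorFlow (the framework named above) is given below; arranging the consecutive frames of a sequence along a time axis is an assumption made for illustration, since the loss above is stated per pair of consecutive frames:
\begin{verbatim}
import tensorflow as tf

def voca_loss(y_true, y_pred, w_pos=1.0, w_vel=10.0):
    # y_true, y_pred: (batch, frames, N, 3) vertex tensors, where
    # consecutive entries along the frame axis are assumed to come
    # from the same sequence, so finite differences are valid.
    # Position term: squared Frobenius distance to the ground truth.
    e_pos = tf.reduce_sum(tf.square(y_true - y_pred))
    # Velocity term: match backward finite differences of
    # consecutive frames to induce temporal stability.
    v_true = y_true[:, 1:] - y_true[:, :-1]
    v_pred = y_pred[:, 1:] - y_pred[:, :-1]
    e_vel = tf.reduce_sum(tf.square(v_true - v_pred))
    return w_pos * e_pos + w_vel * e_vel
\end{verbatim}
The default weights reproduce the values of $1.0$ and $10.0$ for the position and velocity terms reported above.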
\section{Introduction\label{Intro}} Since the 1990s, the {\sl de facto} standard for scientific high performance computations involving the discrete Fourier transform (DFT) has been the FFTW library~\cite{FFTW}. The author has used it throughout his whole research career, from the early 2000s, when it was still version 2. This library provides a self-tuned, highly optimized version of the fast Fourier transform algorithm, which was initially introduced for broad use in~\cite{CT1965}. From the very beginning, the author was fascinated by the elegant design of the software and its convenient application programming interface (API). The author used FFTW for many research codes (e.g.~\cite{DKZ2003cap,DKZ2003grav,DKZ2004,Korotkevich2008PRL,KDZ2016,KL2011,KLR2015,DLK2013,DLK2016}). Some of them involved 2D FFTs, but only for very modest array sizes. For quite a long time, a computational workstation with many central processing unit (CPU) cores was prohibitively expensive, while distributed memory computations for arrays of these sizes were of little use due to communication costs. Fortunately, the situation has changed drastically. Now even SMP configurations with several tens of CPU cores are readily available at a manageable cost. The available amount of memory has also risen to hundreds of gibibytes. This has made some long-standing problems from physics and applied mathematics approachable. For example, turbulence simulations require a wide range of scales, in other words large wavenumber arrays, as in the recent work~\cite{FV2015}. In order to check some theoretical predictions for propagation of laser radiation in a turbulent atmosphere, as in the paper~\cite{KLL2020}, one needs a 2D array of size at least (!) $(43000)^2$. Taking into account the previously described availability of systems with a high number of CPU cores, a shared memory program architecture makes algorithms both simpler to implement and more efficient, as it avoids communication between parallel parts of the program: any part (thread) of the code has direct access to the whole program memory. FFTW has provided multithreaded parallel functions for all kinds of DFTs for practically its entire existence. At the same time, when performance tests were done, the author was disappointed to notice a drastic drop in the performance of multithreaded DFTs with growing array sizes (see Figures~\ref{Thor.size_test}-\ref{Old.size_test} below). It should be noted that the author was mostly interested in 2D DFTs. While parallelization of a 2D DFT looks obvious and one would expect linear or nearly linear growth of performance with the number of parallel threads, the problem is not that simple for small transformation sizes, as the inevitable overhead of thread initialization and synchronization can eliminate all the gain if the amount of work for every parallel part is not large enough. But for large 2D arrays, with tens of thousands of points in every direction, the situation is supposed to be pretty simple and a speedup from parallelization looks inevitable. On the contrary, the performance degradation of parallel FFTW increases with the size of the arrays (as demonstrated later in the paper). A remedy for the situation was found: a nearly linear speedup in the number of used CPU cores was achieved. A library of 2D DFTs (out-of-place, double precision complex) is published~\cite{GitHub_my_fft} under a free software license (GNU GPL v3) for use by the community and/or incorporation into existing software.
\section{Proposed solution\label{Solution}} The author concluded that, most probably, the architecture of FFTW is tuned for efficient memory handling for relatively small transformations. This is why it was decided to use 1D FFTW subroutines together with efficient CPU cache handling techniques in order to try to improve the situation. For one of his previous codes~\cite{KL2011} the author developed a parallel version of block matrix transposition, which avoids unnecessary penalties for cache misses (which occur when accessed memory locations are not contiguous, resulting in accesses to parts of memory (RAM) not mapped into the CPU cache and causing latency delays). As a result, the 2D DFT was implemented in the following way: first, parallel 1D DFTs (using 1D DFTs from FFTW) are computed in the direction of the array which goes ``along the memory'' (in the C programming language matrices are stored in the so-called ``row major format'', meaning that a matrix is stored in memory row after row, unlike FORTRAN's ``column major format''; so we first compute 1D DFTs along rows); then the array is transposed, so that the other dimension lies along the memory (in C, columns become rows); next, parallel 1D DFTs are computed along the new fast dimension (in C this corresponds to 1D DFTs along the rows of the transposed matrix, i.e. the columns of the original one); finally, the array is transposed again. The array transpositions look unnecessary, as one could just perform 1D DFTs along the second dimension without them; but in that case the distance in memory between successive elements of the corresponding 1D array would be large (for large arrays), and nearly every operation on these elements would result in a CPU ``cache miss'', meaning that the CPU waits for data from RAM instead of crunching numbers. As mentioned above, we avoid unnecessary ``cache misses'' by using block array transposition (an illustrative sketch of the whole scheme is given below). For large array sizes (when the array does not fit into the CPU cache), efficient memory handling can easily result in an order-of-magnitude speedup of the transposition operation in comparison with the naive version. Block transposition is a fairly standard technique demonstrating the importance of thoughtful use of memory, especially for large array sizes (a discussion of even more advanced memory handling techniques used internally in FFTW can be found, for example, in~\cite{CacheOblivious}). A simple set of subroutines for thread pool organization and synchronization was used~\cite{GitHub_my_thr} (included with the new library). Because the author created this set of 2D DFT subroutines specifically for his own research tasks, only complex-to-complex out-of-place DFTs were implemented. Other types of DFTs could be implemented on demand or if new research needs arise. The published code was not specifically optimized for speed or memory usage, with the exception of the memory block size for the transposition subroutine, which was tuned on several architectures to provide a slight performance gain with respect to a generic value (only two different values: one for AMD\textregistered\, and one for Intel\textregistered\, CPUs). The application programming interface (API) was made as close to FFTW's as possible. It is described in detail in~\ref{AppendixAPI}, together with a complete description of the benchmark program and the proposed performance test procedure. \section{Performance of the code\label{Performance}} All systems which participated in the benchmark are briefly described in \ref{AppendixSystemsDescr}.
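As a concrete illustration of the scheme described in Section~\ref{Solution}, the following minimal NumPy sketch (in Python; the actual library is written in C on top of FFTW's 1D transforms, so this models only the algorithm, not the implementation) computes a 2D DFT as row-wise 1D DFTs interleaved with blocked transpositions:
\begin{verbatim}
import numpy as np

def blocked_transpose(a, block=64):
    # Copy block by block so that reads and writes stay within
    # cache-sized tiles, avoiding strided cache misses.
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            out[j:j + block, i:i + block] = a[i:i + block, j:j + block].T
    return out

def fft2_via_rows(a, block=64):
    a = np.fft.fft(a, axis=1)        # 1D DFTs along the fast dimension
    a = blocked_transpose(a, block)  # former columns become rows
    a = np.fft.fft(a, axis=1)        # 1D DFTs along the former columns
    return blocked_transpose(a, block)
\end{verbatim}
For any complex matrix \texttt{a} the result coincides with a direct 2D transform, i.e. \texttt{np.allclose(fft2\_via\_rows(a), np.fft.fft2(a))} holds.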
In all tests, version 3.3.8 of FFTW (the latest at the moment of writing this article) was used. The configuration of the library for different CPUs and machines is described in \ref{AppendixFFTWConf}. For the evaluation of performance, the same methodology as in the official FFTW benchmark utility~\cite{FFTW_bench} was used: for complex transforms the number of operations is estimated as $5N\log_2 N$ (here $N$ is the total number of harmonics/points); the time (up to nanoseconds) is measured for a series of DFTs which takes at least several (usually tens or hundreds of) seconds to compute; this procedure is repeated $10$ times, and the smallest time (meaning the best performance), expressed in microseconds, is used to divide the estimated number of operations in order to get the performance in MFLOPS (Mega-FLOPS, or millions of floating point operations per second). Two series of tests were performed: first, fixed array size and variable number of parallel threads; second, fixed number of threads (usually the number of threads which gave the best performance for FFTW in the first test) and variable array size, in order to determine at what size the performance drop happens. \subsection{Fixed array size. Performance scaling with number of threads\label{thr_test}} In all cases a double precision complex array of size $32768\times32768$ was used. Out-of-place (meaning that different input and output arrays are used) complex-to-complex DFTs were evaluated for both the new library and FFTW. The benchmark program is included in the library package. Its usage, together with the methodology of the tests, is explained at the end of~\ref{AppendixAPI}. For all but one system (the exception was {\bf System 2b}), the full test was completed twice, with several days between runs, to avoid the influence of system processes etc. The best results of all runs were used for evaluation. Benchmark results are presented in Figures~\ref{Thor.thr_test}-\ref{Old.thr_test}. \begin{figure}[htbp] \centering \includegraphics[width=10.0cm]{32k.thr_test.Thor.paper.eps} \caption{\label{Thor.thr_test} Array size $(32768)^2$, performance dependence as a function of the number of parallel threads. Red: new library, green: FFTW. {\bf System 1} was equipped with two Intel\textregistered\, Xeon\textregistered\, CPU E5-2680 v3 @ 2.50GHz (Haswell), which results in $2\times12$ CPU cores, $2\times24$ HyperThreading cores.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=6.78cm]{32k.thr_test.joyspride_sonrisa.paper.eps} \includegraphics[width=6.78cm]{32k.thr_test.parma6.paper.eps} \caption{\label{New.thr_test} Array size $(32768)^2$, performance dependence as a function of the number of parallel threads. Red: new library, green: FFTW. {\sl (Left panel)} {\bf System 2a} was equipped with two Intel\textregistered\, Xeon\textregistered\, Silver 4110 CPU @ 2.10GHz (Skylake), which results in $2\times8$ CPU cores, $2\times16$ HyperThreading cores. {\bf System 2b} was equipped with two Intel\textregistered\, Xeon\textregistered\, Gold 6126 CPU @ 2.60GHz (Skylake), which results in $2\times12$ CPU cores, $2\times24$ HyperThreading cores.
{\sl (Right panel)} {\bf System 3} was equipped with two Intel\textregistered\, Xeon\textregistered\, Gold 6242 CPU @ 2.80GHz (Cascade Lake), which results in $2\times16$ CPU cores, $2\times32$ HyperThreading cores.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=6.78cm]{32k.thr_test.Odin.paper.eps} \includegraphics[width=6.78cm]{32k.thr_test.parma5.paper.eps} \caption{\label{Old.thr_test} Array size $(32768)^2$, performance dependence as a function of the number of parallel threads. Red: new library, green: FFTW. {\sl (Left panel)} {\bf System 4} was equipped with two AMD\textregistered\, Opteron\texttrademark\, Processor 6276 @ 2.3GHz (Bulldozer), which results in $2\times8$ CPU cores, $2\times16$ HyperThreading cores. {\sl (Right panel)} {\bf System 5} was equipped with two Intel\textregistered\, Xeon\textregistered\, CPU E5-2670 0 @ 2.60GHz (Sandy Bridge EP), which results in $2\times8$ CPU cores, $2\times16$ HyperThreading cores.} \end{figure} As one can see, for this array size the new library gives a performance boost for all but one system (the exception is {\bf System 3}). See Table~\ref{tab:thr_test} for a summary of results. \begin{table}[htb!] \center \begin{tabular}{|c|c|c|c|c|c|c|} \hline MFLOPS & {\bf Sys 1} & {\bf Sys 2a} & {\bf Sys 2b} & {\bf Sys 3} & {\bf Sys 4} & {\bf Sys 5} \\ \hline FFTW & $42109$ & $17640$ & $24241$ & $44918$ & $9078$ & $15961$ \\ \hline new library & $59175$ & $34292$ & $42130$ & $38628$ & $14986$ & $21993$ \\ \hline boost & $41\%$ & $94\%$ & $74\%$ & $-14\%$ & $65\%$ & $38\%$ \\ \hline best/core/GHz & $986$ & $1021$ & $675$ & $501$ & $407$ & $529$ \\ \hline \end{tabular} \caption{\label{tab:thr_test} Comparison of best performances for a $32768\times32768$ double precision complex array. Numbers for FFTW and the new library are given in MFLOPS. It should be noted that these numbers could be achieved at different numbers of parallel threads for FFTW and the new library. ``Boost'' shows the speedup of the best result of the new library with respect to the best result of FFTW. We also provide an estimate of efficiency for the different systems, computed as the best performance (regardless of the library) divided by the number of real CPU (not HyperThreading) cores and by the base frequency in GHz.} \end{table} For most systems the performance improvement ranges from 38\% to 94\%. The only system which behaved completely differently (see Figure~\ref{New.thr_test}, right panel) is {\bf System 3}. Although the architectures of {\bf Systems 2a,b} and {\bf System 3} are very close, the dependence changes drastically after some number of parallel threads. As one can see, up to a number of threads close to $16$ everything looks pretty similar to the other systems. Some ideas why this can be so are given in the end of~\ref{AppendixSystemsDescr}. Also, one could notice that $16$ is exactly the number of real CPU cores on one of the two processors. As a result, the absence of a speedup when adding more parallel threads can be attributed to some problem of communication between the two CPUs in the system. The source(s) of this problem can range from architecture flaws and/or hardware configuration to the configuration of the Linux kernel or even the motherboard (BIOS) software. The fact that there is no noticeable influence of this issue on FFTW's performance could be explained by its lower memory throughput: slightly lower performance may simply not trigger the issue in the kernel and/or CPUs. In summary, the behavior of {\bf System 3} is still a mystery to the author.
Even on this system, if we limit the number of parallel threads to $16$ (one real CPU), the new library demonstrates an improvement in performance. \FloatBarrier \subsection{Fixed number of threads. Performance scaling with array size\label{size_test}} In order to test the performance of the two libraries under consideration as a function of the array size, the author performed another series of tests. Now the number of parallel threads was fixed and only the size of the problem was varied. Once again, only out-of-place double precision complex DFTs were tested. The number of threads was determined from the tests described in the previous Subsection~\ref{thr_test}. Namely, the author used the number of parallel threads which gave the highest (or nearly the highest) performance for the FFTW library on every architecture. All arrays were square, with the problem size starting from $4096\times4096$ and continuing in integer (natural) multiples of $1024$ (for example, $(20\times1024)^2$). The same testing program included in the library package was used. Results of the tests are presented in Figures~\ref{Thor.size_test}-\ref{Old.size_test}. \begin{figure}[htbp] \centering \includegraphics[width=10.0cm]{Thor_size_test.47thr.paper.eps} \caption{\label{Thor.size_test} Performance dependence as a function of array size $N$; the full 2D array size is $N^2$. The number of threads was chosen as giving the best performance to FFTW in the previous series of tests. Red: new library, green: FFTW. {\bf System 1} was equipped with two Intel\textregistered\, Xeon\textregistered\, CPU E5-2680 v3 @ 2.50GHz (Haswell), which results in $2\times12$ CPU cores, $2\times24$ HyperThreading cores; number of threads is $47$.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=6.78cm]{joyspride_sonrisa_size_test.32thr.paper.eps} \includegraphics[width=6.78cm]{parma6_size_test.33thr.paper.eps} \caption{\label{New.size_test} Fixed number of threads (giving the best performance to FFTW in the previous series of tests), performance dependence as a function of the size (length of the side) of the square array. Red: new library, green: FFTW. {\sl (Left panel)} {\bf System 2a} was equipped with two Intel\textregistered\, Xeon\textregistered\, Silver 4110 CPU @ 2.10GHz (Skylake), which results in $2\times8$ CPU cores, $2\times16$ HyperThreading cores; number of threads is $32$. {\bf System 2b} was equipped with two Intel\textregistered\, Xeon\textregistered\, Gold 6126 CPU @ 2.60GHz (Skylake), which results in $2\times12$ CPU cores, $2\times24$ HyperThreading cores; number of threads is $48$. {\sl (Right panel)} {\bf System 3} was equipped with two Intel\textregistered\, Xeon\textregistered\, Gold 6242 CPU @ 2.80GHz (Cascade Lake), which results in $2\times16$ CPU cores, $2\times32$ HyperThreading cores; number of threads is $33$.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=6.78cm]{Odin_size_test.32thr.paper.eps} \includegraphics[width=6.78cm]{parma5_size_test.32thr.paper.eps} \caption{\label{Old.size_test} Fixed number of threads (giving the best performance to FFTW in the previous series of tests), performance dependence as a function of the size (length of the side) of the square array. Red: new library, green: FFTW. {\sl (Left panel)} {\bf System 4} was equipped with two AMD\textregistered\, Opteron\texttrademark\, Processor 6276 @ 2.3GHz (Bulldozer), which results in $2\times8$ CPU cores, $2\times16$ HyperThreading cores; number of threads is $32$.
{\sl (Right panel)} {\bf System 5} was equipped with two Intel\textregistered\, Xeon\textregistered\, CPU E5-2670 0 @ 2.60GHz (Sandy Bridge EP), which results in $2\times8$ CPU cores, $2\times16$ HyperThreading cores; number of threads is $32$.} \end{figure} As can be seen from the measurements, for every architecture the array size at which FFTW experiences a dramatic drop in performance is slightly different. At the same time, the performance of the new library depends relatively weakly on the size of the problem. The observed oscillations are a consequence of the different efficiency of 1D FFTW DFTs for different problem sizes (``out of the box'' FFTW is most efficient for sizes which can be represented as products of integer (natural) powers of the lowest prime numbers $2$, $3$, $5$, and $7$, although there is a possibility to make FFTW as efficient for larger primes as well~\cite{FFTW_sizes}). As can be noted, the dips and peaks in the performance of the new library coincide perfectly with those of FFTW, which is inevitable as the same 1D DFT subroutines are used. See Table~\ref{tab:size_test} for a summary of results. \begin{table}[htb!] \center \begin{tabular}{|c|c|c|c|c|c|c|} \hline Array sizes & {\bf Sys 1} & {\bf Sys 2a} & {\bf Sys 2b} & {\bf Sys 3} & {\bf Sys 4} & {\bf Sys 5} \\ \hline Threshold & $16\times2^{10}$ & $17\times2^{10}$ & $18\times2^{10}$ & $16\times2^{10}$ & $17\times2^{10}$ & $17\times2^{10}$ \\ & & & & $33\times2^{10}$ & & \\ \hline Perf. drop & $35\%$ & $40\%$ & $45\%$ & $25\%$ & $30\%$ & $40\%$ \\ & & & & $50\%$ & & \\ \hline \end{tabular} \caption{\label{tab:size_test} Dependence of FFTW performance on the problem size, given as $S\times1024=S\times2^{10}$, where $S$ is an integer (natural) number, starting from $4\times1024$. Here by threshold size the author means one side of the square array; e.g., size $17\times1024$ means that the actual problem was a $17408\times17408$ double precision complex array. Performance drops are only approximate, as the DFT performance oscillates with the size of the problem.} \end{table} In all cases there is a significant $25-40\%$ performance degradation when the array size exceeds a threshold close to $16384\times16384$ (slightly different for different systems). Again, {\bf System 3} behaves differently from the other systems. The first drop in performance, by $25\%$, happens as the problem size exceeds $16\times1024$; the second one, twice as strong ($50\%$ in relative terms, but approximately the same amount in MFLOPS), occurs after approximately double the first threshold, i.e. at $33\times1024$. Taking this fact into account, one can see that if the previous performance tests had been done not for array size $32\times1024$ but for $33\times1024$, we would perhaps have obtained results similar to the other systems. \FloatBarrier \section{Conclusion} The significant performance degradation of the current {\sl de facto} standard DFT library, when using multiple parallel threads on SMP systems, motivated the author to perform an extensive set of tests on different platforms. The results of these tests indicated that there is a performance drop for array sizes exceeding some threshold (slightly different for different CPUs and architectures). The author implemented a relatively simple algorithm as a library, using the 1D DFT subroutines from FFTW together with a parallel block transposition subroutine. The new library demonstrated a significant ($35-94\%$) performance boost on all but one configuration.
Even in the exceptional case, a slight increase in performance was achieved for a limited number of parallel computational threads. The new library, together with the testing suite and the results of all tests, is published as a project on the GitHub platform under a free software license (GNU GPL v3). A comprehensive description of the programming interface, as well as of the provided testing programs, is given in~\ref{AppendixAPI}. The author would like to mention that the domination of Intel\textregistered\, CPUs in the list of tested systems was not intentional. AMD\textregistered\, Opteron\texttrademark\, CPUs of the Bulldozer family (and similar) are infamous for their dreadful performance. Newer CPUs from the same company appeared a few years ago. The author tried to contact AMD\textregistered\, through different means of communication and requested remote access to any SMP machine for 24 hours. The author's request was declined; it was proposed to contact vendors instead. Unfortunately, at the time of the tests no high performance computing vendor offered servers equipped with recent AMD\textregistered\, CPUs. The current version of the library provides only 2D out-of-place complex-to-complex DFT subroutines. It comes with some limitations and is not optimized for memory consumption. Some straightforward ways of improvement include separate thread pools for transposition and 1D DFTs, elimination of the current limitations on the array dimensions (currently they have to be multiples of a relatively small number, such as $64$), and optimization of memory usage. Further development will depend on the author's future research needs as well as on demand from potential users. The author would prefer the new library to be included in existing libraries as a possible engine for large arrays, and will try to work in this direction. Another option would be internal tuning of the existing libraries, perhaps based on the tests presented in this paper, in order to eliminate the significant performance degradation for large arrays. Either of these two variants would make the life of users (including the author) much easier, as there would be no need to choose different libraries for different problem sizes. \section*{Acknowledgments} The author has performed this work for the project supported by the Simons Collaboration on Wave Turbulence, and the author is grateful for this support. Some of the systems used for development and testing ({\bf System 1} and {\bf System 4}) were funded by NSF grant OCE 1131791. The author would like to express his gratitude to A.\,O.~Prokofiev, who explained block transposition to him in the early 2000s. The author would also like to thank the developers of FFTW~\cite{FFTW} for this beautiful and free example of software engineering.
\section{Introduction} The aim of this work is to investigate the solutions to some boundary value problems involving the equation \begin{equation} \left( D^{\nu_1}_{t} + D^{\nu_2}_{x} \right) u = 0, \quad x \in \Omega_0,\, t \geq 0, \quad \nu_j \in (0,1) \cup \mathbb{N},\, j=1,2 \label{eqFirst} \end{equation} where $\Omega_0= \Omega \cup \{0\}$ and $\Omega=(0,\infty)$. The symbol $D^{\nu}_{z}$ stands for the Riemann-Liouville fractional operator and will be better characterized further in the text. Here the fractional powers $\nu_j$, $j=1,2$ can take both real and integer values in the set $(0,1) \cup \mathbb{N}$. Thus, we study fractional- and higher-order equations. Time fractional equations have been investigated by many researchers. The context in which such equations have been developed is that of anomalous diffusions or diffusions in porous media. In the works by \citet{Nig86, Wyss86, SWyss89, Koc89} the authors dealt with second-order operators in space and gave the solutions in terms of Wright or Fox functions. More recently, in the paper by \citet{OB09} some interesting results on the explicit form of the solutions have been presented. Moreover, in the paper by \citet{BMN09ann} and the references therein the reader can find some interesting results on abstract Cauchy problems and fractional diffusions on bounded domains. In the present paper we deal with higher-order heat-type equations in which the derivative with respect to $t$ is replaced by a fractional derivative, and therefore we obtain higher-order fractional equations. Such equations have been studied by \citet{beg08}, who presented the solutions in terms of inverse Fourier transforms. In that paper, the author pointed out that such solutions can be viewed as the transition laws of compositions involving pseudo-processes and randomly varying times. Furthermore, we establish some connections between higher-order equations, which lead to pseudo-processes (or higher-order diffusions), and their time-fractional counterparts, which lead to either stable processes or their inverses. In the matter of pseudo-processes (see Section \ref{sez4}) we refer to the papers by \citet{kry60, Hoc78, Funaki79, Ors91, HO94, LCH03}. In Sections \ref{sez2}, \ref{sez3} and \ref{sez4} some preliminary results are presented, whereas the main results of this work are collected in Section \ref{sez5}. In particular, we obtain the solutions to the equation \eqref{eqFirst} in the following cases: \begin{equation*} \begin{array}{lll} a) & \nu_1 \in \mathbb{N},& \nu_2 \in (0,1],\\ b) & \nu_1 \in (0,1], & \nu_2 \in \mathbb{N},\\ c) & \nu_1 \in (0,1], & \nu_2 \in (0,1]. \end{array} \end{equation*} The special cases \begin{equation*} \begin{array}{lll} a1) & \nu_1 =1,& \nu_2 = 1/n, \, n \in \mathbb{N},\\ b1) & \nu_1 = 1/n, \, n \in \mathbb{N}, & \nu_2 =1, \end{array} \end{equation*} represent the fractional counterpart of the higher-order equations of order $n$. Section \ref{sez6} is devoted to the explicit representation of solutions in some particular cases involving fundamental equations of mathematical physics. \section{Inverse processes} \label{sez2} Let $\varphi = \varphi(x,t)$ denote the distribution of a L\'evy process $\mathfrak{X}_t$, $t>0$ on $\mathbb{R}^n$ for which \begin{equation} \mathbb{E} \exp - i\xi \mathfrak{X}_t = \exp -t \Psi_{\mathfrak{X}}(\xi).
\label{cfL} \end{equation} The L\'evy process $\mathfrak{X}_t$, $t>0$ represents the stochastic solution to the equation $$ (D_{0+, t} - \mathcal{A})\varphi = 0, \quad \varphi \in D(\mathcal{A}) $$ where \begin{equation*} D(\mathcal{A}) = \left\lbrace f \in L^1_{loc}(\mathbb{R}^n)\, : \, \int_{\mathbb{R}^n} |\hat{f}(\xi)|^2 (1+ \Psi_{\mathfrak{X}}(\xi)) \, d\xi < \infty \right\rbrace. \end{equation*} We have used the familiar notation in which $\hat{f}(\xi)=\mathcal{F}[f(\cdot)](\xi)$ stands for the Fourier transform of $f$. A stable subordinator, say $\mathfrak{H}_\nu(t)$, $t>0$, is a L\'evy process with non-negative, independent and homogeneous increments (see \citet{Btoi96}), and its $x$-Laplace transform reads \begin{equation} \mathbb{E} \exp -\lambda \mathfrak{H}_\nu(t) = \mathcal{L}[h_{\nu}(\cdot, t)](\lambda) = \exp - t \lambda^\nu \label{laph} \end{equation} where $h_\nu$ is the density law of $\mathfrak{H}_\nu$. Straightforward calculations lead to \begin{equation} \Psi_{\mathfrak{H}}(\xi) = |\xi |^{\nu} \exp\left( -i \frac{\pi \nu}{2} \frac{\xi}{|\xi |} \right). \label{exponent} \end{equation} According to the literature, we define the process $\mathfrak{L}^\nu(t)$, $t>0$ as the inverse of the stable subordinator $\mathfrak{H}_\nu$, for which $Pr\{ \mathfrak{H}_\nu(x) >t \} = Pr\{\mathfrak{L}^{\nu}(t) < x \}$. Such an inverse process has non-negative, non-stationary and non-independent increments (see \citet{MSheff04}). The law of $\mathfrak{L}^\nu$ can be written in terms of the Wright function $$ W_{\alpha, \beta}(z) = \sum_{k=0}^\infty \frac{z^k}{k!\, \Gamma(\alpha k + \beta)}$$ as follows \begin{equation} l_{\nu}(x,t) = t^{-\nu} W_{-\nu, 1- \nu}\left(-x t^{-\nu}\right), \quad x \in \Omega_0, \; t>0,\; \nu \in (0,1). \label{ldist} \end{equation} From \eqref{ldist} we immediately get the Laplace transform \begin{equation} \mathbb{E} \exp -\lambda \mathfrak{L}^{\nu}(t) = \mathcal{L}[l_\nu(\cdot, t)](\xi) = E_{\nu}(- \xi t^\nu) \label{lapL} \end{equation} in terms of the Mittag-Leffler function $E_{\alpha}(z) = E_{\alpha, 1}(z)$, where the entire function \begin{equation*} E_{\alpha, \beta}(z) = \sum_{k \geq 0} \frac{z^k}{\Gamma(\alpha k +\beta)} \end{equation*} is the generalized Mittag-Leffler function, for which \begin{equation} \int_0^\infty e^{-\lambda z} z^{\beta -1} E_{\alpha, \beta}( - \mathfrak{c} z^\alpha) \, dz = \frac{\lambda^{\alpha -\beta}}{\lambda^{\alpha} + \mathfrak{c}}. \label{lapMittagLeffler} \end{equation} The function $l_\nu$ is the density law of the inverse process $\mathfrak{L}^\nu$. The governing equations of both processes introduced so far can be written by means of the fractional operators \eqref{Rfracder} and \eqref{Rfracder2}, as we will show in the next section. In particular, from the symbol \eqref{exponent}, we recognize that $\mathfrak{H}_{\nu}$ is a stable process which is totally (positively) skewed (see e.g. \citet{Zol86}). Thus, as for stable processes, we expect a fractional operator in space as well. We state the following relevant fact. \begin{lem} \label{lemma0} The following holds \begin{equation} \frac{t}{x}\, l_{\nu}(t,x) = h_{\nu}(x,t), \quad x \in \Omega,\; t > 0. \label{assumef} \end{equation} \end{lem} \begin{proof} The $(x,t)$-Laplace transform of $h_{\nu}$ reads \begin{equation} \int_{0}^\infty e^{-\lambda t} \int_{0}^\infty e^{-\xi x} h_{\nu}(x,t)\, dx \, dt= \int_{0}^\infty e^{-\lambda t} e^{-t \xi^\nu} dt = \frac{1}{\lambda + \xi^\nu}.
\section{Fractional equations on $\Omega_0$} \label{sez3} It is well-known that the law $h_\nu$ of the stable subordinator $\mathfrak{H}_{\nu}$ satisfies the fractional equation \begin{align} \left( D_{0+, t} + D^{\nu}_{0+, x}\right) h_{\nu} = & 0, \quad x \in \Omega,\; t> 0\label{pdehnu} \end{align} with initial condition $h_{\nu}(x, 0) =\delta(x)$ whereas, for the inverse process $\mathfrak{L}^{\nu}$, we have that \begin{align} \left( D^{\nu}_{0+,t} + D_{0+,x} \right) l_{\nu} = & \delta(x)\, \frac{t^{-\nu}}{\Gamma(1-\nu)}, \quad x \in \Omega_0, \; t > 0 \label{fracProbL} \end{align} with initial condition $l_{\nu}(x, 0) =\delta(x)$. The fractional operators appearing in \eqref{pdehnu} and \eqref{fracProbL} are the Riemann-Liouville fractional derivatives defined in \eqref{Rfracder} and \eqref{Rfracder2}. The initial and boundary conditions for the equation \eqref{pdehnu} can be written as \begin{equation} \left\lbrace \begin{array}{l} h_{\nu}(x, 0) = \delta(x), \quad x \in \Omega, \\ h_{\nu}(0, t) = 0, \quad t>0, \end{array} \right . \label{conditionsh} \end{equation} whereas the equation \eqref{fracProbL} can be rewritten as the homogeneous equation \begin{align} \left( D^{\nu}_{0+,t} + D_{0+,x} \right) l_{\nu} = & 0, \quad x \in \Omega, \; t \geq 0 \label{pdelnu} \end{align} subject to the initial and boundary conditions \begin{equation} \left\lbrace \begin{array}{l} l_{\nu}(x, 0) = \delta(x), \quad x \in \Omega, \\ l_{\nu}(0, t) = \Phi_{\nu}(t), \quad t>0. \end{array} \right . \label{conditionsl} \end{equation} In \eqref{conditionsl} we considered the function $$\Phi_{\alpha}(z) = \frac{1}{\Gamma(1-\alpha)} \, z_{+}^{-\alpha}$$ where $z_{+}^{-\alpha} = H(z)\, z^{-\alpha}$ with $\alpha \neq 1, 2, \ldots $ and $H(z)$ is the Heaviside function. Since $\Phi_{\alpha}(z) \in L^1_{loc}(\mathbb{R})$ we have that \begin{equation} \mathcal{L}[\Phi_{\alpha}(\cdot)](\zeta) = \zeta^{\alpha -1}. \label{lapHz} \end{equation} In the light of the discussion made so far, the problem of finding solutions to \eqref{pdelnu} can be traced back to the study of the boundary values of $l_\nu$. This approach will turn out to be useful when we study the solutions to higher-order equations which represent the higher-order counterparts of both \eqref{pdelnu} and \eqref{pdehnu}.
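\begin{os}
\normalfont
The Laplace transform \eqref{lapHz} can be checked by direct quadrature. The following Python snippet is a minimal numerical sketch (the values of $\alpha$ and $\zeta$ are illustrative and the routines of \texttt{scipy} are assumed available).
\begin{verbatim}
# Quadrature check of (lapHz): the Laplace transform of Phi_alpha is zeta^(alpha-1).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, zeta = 0.4, 2.0
phi = lambda z: z**(-alpha) / gamma(1 - alpha)

val, err = quad(lambda z: np.exp(-zeta*z) * phi(z), 0, np.inf, limit=200)
print(val, zeta**(alpha - 1))   # both approximately 0.66
\end{verbatim}
\end{os}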
The problem of solving \eqref{eqFirst} with fractional powers $\nu_j \in (0,1)$, $j=1,2$ can be approached by introducing the process $\mathfrak{H}^{\nu_2}(\mathfrak{L}^{\nu_1}(t))$, $t>0$ driven by the law \begin{equation} \mathfrak{f}_{\nu_1, \nu_2}(x,t) = \langle h_{\nu_2}(x, \cdot), \, l_{\nu_1}(\cdot, t) \rangle , \quad x \in \Omega_{0},\; t>0,\; \nu_j \in (0,1), \; j=1,2 \label{lawf} \end{equation} (see e.g. \cite{Dov2, Lanc, MLP01} and the references therein). For $\nu_1=\nu_2=\nu$ the law \eqref{lawf} takes the form $\mathfrak{f}_{\nu, \nu}(x,t)=t^{-1}f_{\nu}(t^{-1}x)$ where \begin{equation*} f_{\nu}(x) = \frac{1}{\pi} \frac{x^{\nu -1} \sin \pi \nu}{1+ 2 x^{\nu}\cos \pi \nu + x^{2\nu}}, \quad x \in \Omega_{0},\; t>0, \quad \nu \in (0,1) \end{equation*} and $\mathfrak{H}^{\nu}(\mathfrak{L}^{\nu}(t)) \stackrel{law}{=} t \times \, _1\mathfrak{H}^{\nu}(t) / \, _2\mathfrak{H}^{\nu}(t)$, $t>0$, where $\, _j\mathfrak{H}^{\nu}(t)$, $j=1,2$ are independent stable subordinators (see for example \cite{ChYor03, Dov2, Lamp63}). We notice that the law of the ratio $\, _1\mathfrak{H}^{\nu}(t) / \, _2\mathfrak{H}^{\nu}(t)$ does not depend on $t$. The governing equation of the density \eqref{lawf} (see e.g. \cite{Dov3}) is written as \begin{equation} (D^{\nu_1}_{0+,t} + D^{\nu_2}_{0+,x})\mathfrak{f}_{\nu_1, \nu_2}(x,t)=\delta(x)\frac{t^{-\nu_1}}{\Gamma(1-\nu_1)}, \quad x \in \Omega_{0},\; t>0 \label{eq1lamp} \end{equation} with $\mathfrak{f}_{\nu_1,\nu_2}(x,t) =0$ for $x \in \partial \Omega_{0}$ and $\mathfrak{f}_{\nu_1, \nu_2}(x,0)=\delta(x)$ or, by considering \eqref{RCfracder}, as \begin{equation} \left( \frac{\partial^{\nu_1}}{\partial t^{\nu_1}} + D^{\nu_2}_{0+,x}\right)\mathfrak{f}_{\nu_1, \nu_2}(x,t)= 0, \quad x \in \Omega_{0},\; t>0. \label{eq2lamp} \end{equation} In the next sections we will study the remaining cases in which the powers $\nu_j$, $j=1,2$ can also take integer values. For this reason we first give a short introduction to pseudo-processes.

\section{Pseudo processes} \label{sez4} According to the literature, we define the pseudo-process $X^{(n)}_t$, $t>0$, $n>2$, as the Markov pseudo-process whose law satisfies the higher-order heat equation \begin{equation} \left( \frac{\partial}{\partial t} - \kappa_{n} \frac{\partial^n}{\partial x^n}\right) v_{n}=0, \quad x \in \mathbb{R},\; t>0, \; n >2 \label{eqNord} \end{equation} where $\kappa_n = (-1)^{p+1}$ for $n=2p$ or $\kappa_n = \pm 1$ for $n=2p+1$. Such a process is termed ``pseudo-process'' since the driving measure is a signed measure. We refer to the interesting work by \citet{LCH03} and the references therein for an exhaustive discussion on this topic. Here, we only recall that, for a given order $n > 2$, the solution to \eqref{eqNord} can be expressed in terms of its inverse Fourier transform \begin{equation} v_n(x,t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{-i \zeta x + \kappa_n (-i \zeta)^n t} d\zeta \label{soleqNord} \end{equation} when it exists. For $n=3$ we obtain a solution in closed form. Indeed, the solution to the third-order heat equation with $\kappa_3=\pm 1$ can be written in terms of the well-known Airy function (see, for example, \citet{LE} for information on this function) and we obtain that \begin{equation} v_3(x,t)=\frac{1}{\sqrt[3]{3t}}Ai\left( \frac{\mp x}{\sqrt[3]{3t}} \right), \quad x \in \mathbb{R},\; t>0. \end{equation} The reader can consult the paper by \citet{Ors91} for further details on this case.
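\begin{os}
\normalfont
For $\kappa_3 = +1$ the kernel $v_3$ can be evaluated through standard Airy routines. The snippet below is a numerical sketch (the value of $t$ is illustrative) checking that $v_3(\cdot, t)$ has total mass one and matches the exact value $Ai(0) = 3^{-2/3}/\Gamma(2/3)$ at the origin.
\begin{verbatim}
# Closed-form kernel v3 for kappa_3 = +1: v3(x,t) = Ai(-x/(3t)^(1/3)) / (3t)^(1/3).
import numpy as np
from scipy.integrate import quad
from scipy.special import airy, gamma

t = 0.7
c = (3*t)**(1/3)
v3 = lambda x: airy(-x/c)[0] / c        # airy() returns (Ai, Ai', Bi, Bi')

mass, _ = quad(v3, -80, 80, limit=1000)
print(mass)                             # roughly 1 (the left Airy tail decays slowly)
print(v3(0.0), 3**(-2/3)/gamma(2/3)/c)  # exact: Ai(0) = 3^(-2/3)/Gamma(2/3)
\end{verbatim}
\end{os}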
The following result can be viewed as the higher-order counterpart of Lemma \ref{lemma0}. \begin{lem} \label{lemma1} Let $w_2$ be the solution \eqref{soleqNord} to the equation \eqref{eqNord}. Then \begin{equation} w_1(x,t) = \frac{x}{t} w_2(x,t), \quad x \in \mathbb{R}, \; t>0, \label{idNordQ} \end{equation} is again a solution to \eqref{eqNord}. \end{lem} \begin{proof} First, for the solution \eqref{soleqNord}, we prove that \begin{equation} \left( D_{0+, x}^{n -1} + \frac{\kappa_n}{n} \frac{x}{t} \right) \, v_n = 0. \label{eighN} \end{equation} Let us write $$ v_n(\zeta, t) = \mathcal{F}[v_{n}(\cdot, t)](\zeta) = \int_{\mathbb{R}} e^{i \zeta x} v_n(x,t)\, dx .$$ We have that \begin{equation*} \mathcal{F}[D_{0+, x}^{n -1}\, v_{n}(\cdot, t)](\zeta) = (-i \zeta)^{n-1} v_{n}(\zeta, t) \end{equation*} and \begin{equation*} \mathcal{F}[x\, v_{n}(\cdot, t)](\zeta) = \frac{1}{i} \frac{d}{d\zeta} v_n(\zeta, t) = - n \kappa_n t (-i \zeta)^{n-1} v_n(\zeta, t). \end{equation*} By collecting all pieces together (and recalling that $\kappa_n^2 = 1$) we prove the identity \eqref{eighN}. The next step is to evaluate the following derivatives: \begin{equation*} \frac{\partial^n w_1}{\partial x^n} = \frac{n}{t} \frac{\partial^{n-1} w_2}{\partial x^{n-1}} + \frac{x}{t} \frac{\partial^n w_2}{\partial x^n} \end{equation*} and \begin{equation*} \frac{\partial w_1}{\partial t} = -\frac{x}{t^2}w_2 + \kappa_n \frac{x}{t} \frac{\partial^n w_2}{\partial x^n} \end{equation*} where we used the fact that $w_2$ solves \eqref{eqNord}. By summing up such derivatives we get that \begin{align*} \frac{\partial w_1}{\partial t} - \kappa_n \frac{\partial^n w_1}{\partial x^n} = & -\frac{x}{t^2}w_2 - \kappa_n \frac{n}{t} \frac{\partial^{n-1} w_2}{\partial x^{n-1}} \\ = & - \frac{1}{t} \left( n \kappa_n \frac{\partial^{n-1} w_2}{\partial x^{n-1}} + \frac{x}{t}w_2 \right) \end{align*} and the term in brackets vanishes by the identity \eqref{eighN} (multiplied by $n \kappa_n$, with $\kappa_n^2=1$) applied to $w_2$. We obtain that \begin{equation*} \frac{\partial w_1}{\partial t} - \kappa_n \frac{\partial^n w_1}{\partial x^n} =0 \end{equation*} and therefore $w_1$ solves the higher-order equation \eqref{eqNord}. \end{proof} \begin{os} \normalfont From Lemma \ref{lemma1}, by formally applying the identity \eqref{idNordQ} $m$ times we obtain that \begin{equation*} w(x,t) = \frac{x^m}{t^m} v_n(x,t), \quad x \in \mathbb{R}, \; t>0, \; m \in \mathbb{N} \end{equation*} solves the equation \eqref{eqNord}. \end{os}
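\begin{os}
\normalfont
Lemma \ref{lemma1} can be spot-checked numerically for $n=3$, $\kappa_3 = +1$, by means of the closed-form Airy solution: both $v_3$ and $(x/t)\,v_3$ should annihilate the operator $\partial_t - \partial^3_x$ up to discretization error. The following finite-difference sketch is illustrative only (step sizes and sample points are arbitrary).
\begin{verbatim}
# Spot check of the identity (idNordQ) for n = 3, kappa_3 = +1: if v3 solves
# u_t = u_xxx, then so does (x/t) v3. Residuals are O(hx^2) by construction.
import numpy as np
from scipy.special import airy

def v3(x, t):
    c = (3*t)**(1/3)
    return airy(-x/c)[0] / c

def w1(x, t):
    return x/t * v3(x, t)

def residual(f, x, t, hx=1e-2, ht=1e-5):
    u_t  = (f(x, t + ht) - f(x, t - ht)) / (2*ht)
    u_3x = (f(x + 2*hx, t) - 2*f(x + hx, t)
            + 2*f(x - hx, t) - f(x - 2*hx, t)) / (2*hx**3)
    return u_t - u_3x

for f in (v3, w1):
    print([residual(f, x, 0.8) for x in (0.4, 1.0, 2.2)])  # all entries near zero
\end{verbatim}
\end{os}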
\section{Fractional higher-order equations on $\Omega_0$} \label{sez5} The solutions to the fractional problem involving higher-order operators in space of a general order $n \in \mathbb{N}$ on the whole real line have been investigated by \citet{beg08}. In that paper the solutions are presented as the transition functions of pseudo-processes with randomly varying time $\mathcal{T}_{\alpha}$. In particular, see \cite[Theorem 2.3]{beg08}, the solutions to \begin{equation} \left\lbrace \begin{array}{l} \frac{\partial^\alpha}{\partial t^\alpha}u(x,t) = \kappa_n \frac{\partial^n}{\partial x^n}u(x,t), \quad x \in \mathbb{R},\; t>0\\ u(x,0) = \delta(x) \end{array} \right . \end{equation} coincide with $$u_{\alpha}(x,t) = \int_{0}^\infty p_n(x,s)\, \bar{v}_{2\alpha}(s,t)\, ds$$ where $\bar{v}_{2\alpha}$ is the Wright function appearing in \eqref{ldist}. In such problems only the initial conditions are required and the solutions are presented in terms of the inverse Fourier transform \eqref{soleqNord}. Indeed, the functions $p_n$ are the solutions to \eqref{eqNord}.

Our aim is to investigate fractional higher-order equations on the semi-infinite interval $\Omega_0$ with suitable boundary conditions. Since $l_{\nu}(0^{+},t) = \Phi_{\nu}(t) < \infty$ for $t>0$, if we look for symmetric solutions, then we can extend these results to the whole real line without effort: it suffices to consider the function $l_{\nu}(|x|, t)$. We state the following result. \begin{te} For $\nu \in (0,1]$, $n \in \mathbb{N}$, we have that \begin{equation} (D^{n}_{0-,x} - D^{\nu}_{0+, t})l_{\frac{\nu}{n}}=\, 0, \quad x \in \Omega,\; t>0 \label{eqlFH} \end{equation} with conditions \eqref{conditionsl} and \begin{equation} D^k_{0-, x} l_{\frac{\nu}{n}}(x,t)\Big|_{x=0^{+}} = \Phi_{\frac{\nu (k+1)}{n}}(t), \quad 0 \leq k < n. \label{derHy} \end{equation} \end{te} \begin{proof} We write the double Laplace transform $$ \int_{0}^\infty e^{-\xi x} \int_0^\infty e^{-\lambda t} \, l_{\frac{\nu}{n}}(x,t)\, dt\, dx = l_{\frac{\nu}{n}}(\xi, \lambda).$$ Keeping in mind formulae \eqref{DDsign} and \eqref{propLint} we evaluate the Laplace transform \begin{align*} \mathcal{L}[D^{n}_{0-,x} l_{\frac{\nu}{n}}(\cdot, t)](\xi) = & (-1)^n \mathcal{L}[D^{n}_{0+,x} l_{\frac{\nu}{n}}(\cdot, t)](\xi)\\ = & (-1)^n \left[ \xi^n l_{\frac{\nu}{n}}(\xi, t) - \xi^{n-1} \sum_{k=0}^{n-1} (-1/\xi)^{k} \Phi_{\frac{\nu(k+1)}{n}}(t) \right] \end{align*} where, once again from \eqref{DDsign}, formula \eqref{derHy} has been rewritten as $$D^k_{0+,x} l_{\frac{\nu}{n}}(x,t) \big|_{x=0^{+}} = \, (-1)^k\, \Phi_{\frac{\nu(k+1)}{n}}(t) .$$ From \eqref{lapHz} and the linearity of the Laplace transform we get that \begin{align} & \int_{0}^{\infty} e^{-\lambda t} \sum_{k=0}^{n-1} (-1/\xi)^{k} \Phi_{\frac{\nu(k+1)}{n}}(t) \, dt \label{teL} \\ = & \sum_{k=0}^{n-1} (-1/\xi)^{k} \lambda^{\frac{\nu(k+1)}{n} -1} = \lambda^{\frac{\nu}{n} - 1} \sum_{k=0}^{n -1} \left( -\lambda^{\frac{\nu}{n}} / \xi \right)^k \nonumber \\ = & \lambda^{\frac{\nu}{n} -1} \xi^{1-n} \frac{\xi^n - (-\lambda^{\frac{\nu}{n}})^{n}}{\xi + \lambda^\frac{\nu}{n}} = \xi^{1-n} \frac{\lambda^{\frac{\nu}{n} - 1}}{\xi + \lambda^{\frac{\nu}{n}}} \left( \xi^n - (-1)^n \lambda^{\nu} \right). \nonumber \end{align} Thus, from \eqref{lapHH} and \eqref{teL}, the equation \eqref{eqlFH} takes the form \begin{align*} 0 = & - \lambda^{\nu} l_{\frac{\nu}{n}}(\xi, \lambda) + (-1)^n \left[ \xi^n l_{\frac{\nu}{n}}(\xi, \lambda) - \frac{\lambda^{\frac{\nu}{n} - 1}}{\xi + \lambda^{\frac{\nu}{n}}} \left( \xi^n - (-1)^n \lambda^{\nu} \right) \right]\\ = & \left( (-1)^n \xi^n - \lambda^{\nu} \right)l_{\frac{\nu}{n}}(\xi, \lambda) - \frac{\lambda^{\frac{\nu}{n}-1}}{\xi + \lambda^{\frac{\nu}{n}}} \left( (-1)^n \xi^n - \lambda^{\nu} \right) \end{align*} and immediately we get that \begin{align*} l_{\frac{\nu}{n}}(\xi, \lambda) = & \frac{\lambda^{\frac{\nu}{n}-1}}{\xi + \lambda^{\frac{\nu}{n}}} \end{align*} which is in accord with \eqref{accordFl}. This concludes the proof. \end{proof} \begin{coro} For $\nu=1/n$, $n \in \mathbb{N}$, the equation \begin{align} \left( D_{0-,t} + D^{n}_{0-, x} \right) l_{\nu} = & \, 0, \quad x \in \Omega,\, t > 0, \end{align} subject to the conditions \eqref{conditionsl} and \begin{equation} D^k_{0-,x} l_{\nu}(x,t) \big|_{x=0^{+}} = \, \Phi_{\nu(k+1)}(t), \quad 0 < k < n, \label{boundcondF} \end{equation} is the higher-order counterpart of the equation \eqref{pdelnu}. \end{coro}
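\begin{os}
\normalfont
For $\nu = 1$ and $n = 2$, equation \eqref{eqlFH} reduces to the heat equation, $l_{1/2}$ is the Gaussian kernel recalled in Section \ref{sez2}, and the conditions \eqref{derHy} read $l_{1/2}(0^{+},t) = \Phi_{1/2}(t)$ and $D_{0-,x}\, l_{1/2}(x,t)|_{x=0^{+}} = \Phi_{1}(t) = 0$. A direct numerical sketch (parameter values are illustrative):
\begin{verbatim}
# nu = 1, n = 2 in (derHy): the Gaussian kernel l_{1/2} at x = 0+ returns
# Phi_{1/2}(t) for k = 0 and Phi_1(t) = 0 for k = 1 (recall 1/Gamma(0) = 0).
import numpy as np
from scipy.special import gamma

t, h = 2.0, 1e-6
l = lambda x: np.exp(-x*x/(4*t)) / np.sqrt(np.pi*t)

print(l(0.0), t**(-0.5)/gamma(0.5))   # k = 0: both equal 1/sqrt(pi t)
print(-(l(h) - l(0.0))/h)             # k = 1: D_{0-,x} l at 0+ vanishes
\end{verbatim}
\end{os}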
\begin{os} \normalfont We notice that $\Phi_{1}(t) = 0$ and thus, for $\nu=1/n$, we have that $$D^{n-1}_{0-,x} l_{\nu}(x,t) \big|_{x=0^{+}} = 0. $$ Furthermore, for $0 < k < n$, \begin{equation*} D^k_{0-,x} l_{\nu}(x,t) \big|_{x=0^{+}} = D_{0+, t}^{\nu k} \Phi_{\nu}(t) = D_{0+, t}^{\nu k} l_{\nu}(0,t). \end{equation*} \end{os} We now pass to the study of the equations involving the laws of stable subordinators. \begin{te} For $\nu \in (0,1]$, $n \in \mathbb{N}$, we have that \begin{equation} (D^{n}_{0-, t} - D^{\nu}_{0+,x})h_{\frac{\nu}{n}}=0, \quad x \in \Omega,\; t>0 \label{pdehnuHigher} \end{equation} with conditions \eqref{conditionsh} and \begin{equation} D^{k}_{0-, t} h_{\frac{\nu}{n}}(x,t) \Big|_{t=0^{+}} = \, \Phi_{\frac{\nu k}{n}+1}(x), \quad 0 < k < n. \end{equation} \end{te} \begin{proof} First, we write $$ \int_0^\infty e^{-\lambda t} \int_0^\infty e^{-\xi x} h_{\frac{\nu}{n}}(x,t)\, dx\, dt = h_{\frac{\nu}{n}}(\xi, \lambda) .$$ By evaluating the $x$-Laplace transform of the equation \eqref{pdehnuHigher} with the boundary condition $h_{\frac{\nu}{n}}(0,t) = 0$ we get that $$\left( D^n_{0-, t} - \xi^{\nu} \right) h_{\frac{\nu}{n}}(\xi, t) = 0.$$ By applying the standard Laplace transform technique, in particular from \eqref{DDsign} and \eqref{propLint}, we obtain that \begin{equation*} \int_{0}^{\infty} e^{-\lambda t} D^{n}_{0+,t}h_{\frac{\nu}{n}}(x,t) \,dt = \lambda^n h_{\frac{\nu}{n}}(x, \lambda) - \lambda^{n-1}\delta(x) - \lambda^{n-1} \sum_{k=1}^{n-1} (-1/\lambda)^{k} \, \Phi_{\frac{\nu k}{n}+1}(x). \end{equation*} From \eqref{lapHz} we get \begin{align*} \int_0^\infty e^{-\xi x} \sum_{k=1}^{n-1} (-1/\lambda)^{k} \, \Phi_{\frac{\nu k}{n}+1}(x)\, dx = & \sum_{k=1}^{n-1} \left( -\xi^{\frac{\nu}{n}} /\lambda \right)^{k} \end{align*} and thus the $(x,t)$-Laplace transforms of the equation \eqref{pdehnuHigher} lead to \begin{align*} 0 = & (-1)^{n} \left[ \lambda^n h_{\frac{\nu}{n}}(\xi, \lambda) - \lambda^{n-1} - \lambda^{n-1} \sum_{k=1}^{n-1} (-\xi^{\frac{\nu}{n}}/\lambda)^k \right] - \xi^{\nu} h_{\frac{\nu}{n}}(\xi, \lambda)\\ = & (-1)^{n} \left[ \lambda^n h_{\frac{\nu}{n}}(\xi, \lambda) - \lambda^{n-1} \sum_{k=0}^{n-1} (-\xi^{\frac{\nu}{n}}/\lambda)^k \right] - \xi^{\nu} h_{\frac{\nu}{n}}(\xi, \lambda)\\ = & (-\lambda)^n h_{\frac{\nu}{n}}(\xi, \lambda) - \xi^{\nu} h_{\frac{\nu}{n}}(\xi, \lambda) - (-1)^n \lambda^{n-1} \sum_{k=0}^{n-1} (-\xi^{\frac{\nu}{n}}/\lambda)^k \\ = & \left( (-\lambda)^n - \xi^{\nu} \right) h_{\frac{\nu}{n}}(\xi, \lambda) - (-1)^n \lambda^{n-1} \frac{1- (-\xi^{\frac{\nu}{n}}/\lambda)^n}{ 1+ \xi^{\frac{\nu}{n}}/\lambda}\\ = & \left( (-\lambda)^n - \xi^{\nu} \right) h_{\frac{\nu}{n}}(\xi, \lambda) - \frac{(-\lambda)^n - \xi^{\nu}}{\lambda + \xi^{\frac{\nu}{n}}}. \end{align*} Finally, we obtain that \begin{equation} h_{\frac{\nu}{n}}(\xi, \lambda) = \frac{1}{\lambda + \xi^{\frac{\nu}{n}}} \end{equation} which is in accord with \eqref{llppboth} and this concludes the proof. \end{proof} From \eqref{pdehnuHigher}, for $n=1$, we immediately recover the equation \eqref{pdehnu}. Furthermore, a direct consequence of the previous Theorem is the following \begin{coro} For $\nu=1/n$, $n \in \mathbb{N}$, the equation \begin{equation} \left( D^n_{0-, t} + D_{0-, x} \right) h_{\nu} = \, 0, \quad x \in \Omega,\, t > 0, \end{equation} subject to the conditions \eqref{conditionsh} and \begin{equation} D^{k}_{0-, t}\, h_{\nu}(x,t) \big|_{t=0^{+}} = \, \Phi_{\nu k+1}(x), \quad 0 < k < n, \end{equation} is the higher-order counterpart of the equation \eqref{pdehnu}. \end{coro}
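\begin{os}
\normalfont
The resolvent identity underlying the proof can be tested for $\nu/n = 1/2$, where $h_{1/2}$ has the closed form recalled in Section \ref{sez2}: its $x$-Laplace transform is $e^{-t\sqrt{\xi}}$ and a further $t$-transform returns $(\lambda + \xi^{1/2})^{-1}$, in accordance with \eqref{llppboth}. A two-step quadrature sketch (parameter values are illustrative):
\begin{verbatim}
# Two-step check of the resolvent identity for nu/n = 1/2:
# the x-Laplace transform of h_{1/2} is exp(-t sqrt(xi)); transforming in t as
# well gives 1/(lambda + xi^(1/2)), in accordance with (llppboth).
import numpy as np
from scipy.integrate import quad

lam, xi, t = 1.3, 0.8, 1.0
h = lambda x: t * x**(-1.5) * np.exp(-t*t/(4*x)) / (2*np.sqrt(np.pi))

step1, _ = quad(lambda x: np.exp(-xi*x) * h(x), 0, np.inf)
print(step1, np.exp(-t*np.sqrt(xi)))       # x-transform at fixed t

step2, _ = quad(lambda s: np.exp(-(lam + np.sqrt(xi))*s), 0, np.inf)
print(step2, 1.0/(lam + np.sqrt(xi)))      # then the t-transform
\end{verbatim}
\end{os}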
\begin{os} \normalfont The functions $l_{\nu}$ and $h_{\nu}$, for $\nu=1/n$, $n \in \mathbb{N}$, can be written as Mellin convolutions of generalized gamma functions or, for $n \in 2\mathbb{N} +1$, in terms of Mellin convolutions involving the modified Bessel function $K_{\alpha}$ (see \cite{Dov2, Dov3}). \end{os} \section{Higher-order equations: further directions} \label{sez6} Let $v(x, \cdot)$ be in $C(\Omega_0)$ as a function of its second variable. We have that \begin{equation*} \langle v(x,\cdot),\, D_{0+, \cdot }\, h_{\frac{1}{n}}(\cdot ,t)\rangle = \mathcal{N}_n(x,t) - \langle h_{\frac{1}{n}}(\cdot ,t),\, D_{0+,\cdot }\, v(x,\cdot)\rangle \end{equation*} where \begin{equation*} \mathcal{N}_n(x,t) = v(x,s)\, h_{\frac{1}{n}}(s,t)\Big|_{s \in \partial \Omega_0}. \end{equation*} For $n>1$, the conditions \eqref{conditionsh} hold true and thus we get that $\mathcal{N}_n(x,t) \equiv 0$ and \begin{equation*} \langle v(x, \cdot),\, D_{0+, \cdot}\, h_{\frac{1}{n}}(\cdot,t)\rangle = \langle h_{\frac{1}{n}}(\cdot,t),\, D_{0-,\cdot}\, v(x,\cdot)\rangle. \end{equation*} Thus, the stochastic solution to the equation \begin{equation} (D^{n}_{0-,t} + D^{m}_{0-,x}) \mathfrak{u}_{m,n} =0, \quad x \in \Omega,\; t>0, \quad m, n >1 \label{eqHordmn} \end{equation} is given by the process $\mathfrak{L}^{\frac{1}{m}}(\mathfrak{H}^{\frac{1}{n}}(t))$, $t>0$ with law $$\mathfrak{u}_{m,n}(x,t) = \langle l_{\frac{1}{m}}(x,\cdot),\, h_{\frac{1}{n}}(\cdot,t)\rangle.$$ From the fact that $l_{\nu}(x,s) \stackrel{\nu \to 1}{\longrightarrow} \delta(x-s)$ and $h_{\nu}(s,t) \stackrel{\nu \to 1}{\longrightarrow} \delta(s - t)$ we obtain that $\mathfrak{u}_{m, 1}=l_{1/m}$ and $\mathfrak{u}_{1,n}=h_{1/n}$. As a direct consequence we also obtain that \begin{equation*} \lim_{n \to 1} \mathcal{N}_{n}(x,t) = \psi(x,t), \quad x \in \Omega_{0},\; t \geq 0 \end{equation*} where the limit $\psi$, which depends on $m$, must be understood in the sense of distributions (see the case $m=4$ below). \subsection{$m=2$} The case $m=2$ leads to the process $|B(\mathfrak{H}^{\frac{1}{n}}(t))|$, $t>0$, where $|B|$ is the reflecting Brownian motion. From the subordination principle we have that the law of $|B(\mathfrak{H}^{\frac{1}{n}})|$ coincides with the folded law of a $2/n$-stable process with characteristic function \eqref{cfL} and $\Psi(\xi) = |\xi |^{2/n}$. Thus, for $n=2$ (and $m=2$), we obtain the folded Cauchy process driven by the Laplace equation.
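\begin{os}
\normalfont
The case $m=n=2$ can be verified by direct subordination: with the normalization $\mathbb{E} \exp i\xi B_s = \exp -s \xi^2$ (consistent with $\Psi(\xi) = |\xi|^{2/n}$ above), integrating the folded Gaussian kernel against $h_{1/2}$ returns the folded Cauchy density $2t/\pi(x^2+t^2)$. A quadrature sketch (the values of $x$ and $t$ are illustrative):
\begin{verbatim}
# m = n = 2: subordinating the folded Gaussian kernel (E exp(i xi B_s) =
# exp(-s xi^2), so B_s has density exp(-y^2/4s)/sqrt(4 pi s)) by h_{1/2}
# yields the folded Cauchy law.
import numpy as np
from scipy.integrate import quad

t, x = 1.1, 0.9
gauss  = lambda y, s: np.exp(-y*y/(4*s)) / np.sqrt(4*np.pi*s)
h_half = lambda s: t * s**(-1.5) * np.exp(-t*t/(4*s)) / (2*np.sqrt(np.pi))

val, _ = quad(lambda s: 2*gauss(x, s)*h_half(s), 0, np.inf)
print(val, 2*t/(np.pi*(x*x + t*t)))   # both give the folded Cauchy density at x
\end{verbatim}
\end{os}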
\subsection{$m=3$} For $m=3$ we focus on the following few cases: \begin{align*} (D_{0+,t} + D^3_{0+,x})\, u_1 = 0,\\ (D^2_{0+,t} - D^3_{0+,x})\, u_2=0,\\ (D^{3}_{0+,t} + D^{3}_{0+,x})\, u_3=0. \end{align*} The explicit solutions to the equations above are listed below: \begin{equation*} u_1(x,t) = \frac{1}{\pi}\sqrt{\frac{x}{t}} K_{\frac{1}{3}}\left( \frac{2}{3^{3/2}} \frac{x^{3/2}}{\sqrt{t}}\right) \end{equation*} (which is the solution to \eqref{eqNord} for $n=3$ on the positive real line) where (see \citet[formula 5.7.2]{LE}) $$K_{\nu}(z) = \frac{\pi}{2} \frac{I_{-\nu}(z) - I_{\nu}(z)}{\sin \nu \pi}$$ is the modified Bessel function of the second kind (or Macdonald's function) and $$ I_{\nu}(z) = \sum_{k=0}^\infty \frac{(z/2)^{2k + \nu}}{k!\, \Gamma(k + \nu + 1)}$$ is the modified Bessel function of the first kind (see \cite[formula 5.7.1]{LE}); \begin{equation*} u_2(x,t) = \frac{1}{4} \sqrt{\frac{3}{x \pi}} \exp\left( \frac{x^3}{3^{3}t^2} \right) \mathcal{W}_{-\frac{1}{2}, \frac{1}{6}}\left( \frac{2 x^3}{3^{3}t^2} \right) \end{equation*} where (see \citet[formula 9.13.16]{LE}) $$\mathcal{W}_{\alpha, \beta}(z) = z^{\beta + 1/2} e^{-\frac{z}{2}} U(1/2 - \alpha + \beta, 2\beta +1; z)$$ is the Whittaker function ($U$ is the confluent hypergeometric function of the second kind); \begin{equation*} u_3(x,t) = \frac{3^{3/2}}{2 \pi} \frac{t\,(x-t)}{x^3 - t^3}. \end{equation*} The solution $u_2$ can be obtained by considering the integral \begin{align*} u_2(x,t) = \int_{0}^{\infty} u_1(x,s)\, \frac{t\, e^{-\frac{t^2}{4s}}}{\sqrt{4\pi s^3}}\, ds \end{align*} and formula 6.631 of \citet{GR}. The solution $u_3$ follows from the integral \begin{equation*} u_3(x,t) = \int_{0}^{\infty} u_1(x,s)\, \frac{1}{3\pi} \frac{t^{3/2}}{s^{3/2}}K_{\frac{1}{3}}\left(\frac{2}{3^{3/2}} \frac{t^{3/2}}{\sqrt{s}} \right)\, ds \end{equation*} and the fact that \begin{equation*} \int_0^\infty s \, K_{\nu}(ys)\,K_{\nu}(zs)\, ds = \frac{\pi (yz)^{-\nu}(y^{2\nu} - z^{2\nu})}{2\sin \pi \nu\, (y^2 - z^2)}, \quad \Re\{ y+z\}>0,\; |\Re\{ \nu \}|<1 \end{equation*} (see \cite[formula 6.521]{GR}). We observe that $$u_3(x,t) = \frac{3^{3/2}}{2 \pi} \frac{t}{x^2 + xt + t^2}$$ which agrees, in some sense, with \eqref{lawf} for $\nu_1=\nu_2$. \subsection{$m=4$} For $n=1$ (understood as the limiting case $\mathfrak{u}_{4,1}$ discussed above), the solution to \eqref{eqHordmn} is the law of the folded iterated Brownian motion $|B(|B(t)|)|$, $t>0$. This follows from the fact that $l_{1/4} = l_{1/2} \circ l_{1/2}$ is the density law of $\mathfrak{L}^{1/2}(\mathfrak{L}^{1/2}(t))$, $t>0$. The solution $y(x,t) = \mathfrak{u}_{4,1}(|x|, t)$, $x \in \mathbb{R}$, $t>0$ has been thoroughly studied (see e.g. \citet{AZ01}) and solves the generalized equation \begin{equation*} \frac{\partial y}{\partial t} = \frac{\partial^4 y}{\partial x^4} + \frac{t^{-1/2}}{\sqrt{\pi}} \delta^{\prime \prime}, \quad x \in \mathbb{R},\; t\geq 0 \end{equation*} where $\langle \delta^{\prime \prime}, \phi \rangle = \phi^{\prime \prime}(0)$ for $\phi \in C_c^{\infty}(\mathbb{R})$. In this case we have that $$\mathcal{N}_{n}(x,t) \stackrel{n \to 1}{\longrightarrow} \delta^{\prime \prime}(x) / \sqrt{\pi t}.$$ For $n=2$ we have that $r(x,t) = \mathfrak{u}_{4,2}(|x|, t)$ solves the equation of the vibration of rods on $\mathbb{R}$ and the corresponding process is $B(|S_{1/4}(t)|)$, where $B$ is the standard Brownian motion and $S_{\alpha}$ is the L\'evy process with $\Psi_{S}(\xi) = |\xi |^{\alpha}$, $\alpha \in (0,2]$ (that is, an $\alpha$-stable symmetric process).
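\begin{os}
\normalfont
The constant appearing in $u_3$ can be double-checked numerically: with $3^{3/2}/2\pi$ the function $u_3(\cdot,t)$ integrates to one, as it must for the law of $\mathfrak{L}^{1/3}(\mathfrak{H}^{1/3}(t))$, and the $K$-Bessel integral representation above returns the same value pointwise. A sketch (parameters are illustrative; \texttt{scipy.special.kv} evaluates $K_{\nu}$):
\begin{verbatim}
# Two checks on u3: (i) with the constant 3^(3/2)/(2 pi), u3(.,t) integrates
# to one; (ii) the K-Bessel integral representation returns the same value.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

t = 1.7
u3 = lambda x: 3**1.5/(2*np.pi) * t/(x*x + x*t + t*t)  # (x-t)/(x^3-t^3) simplified

mass, _ = quad(u3, 0, np.inf)
print(mass)                                            # = 1 up to quadrature error

u1  = lambda x, s: np.sqrt(x/s)/np.pi * kv(1/3, 2*x**1.5/(3**1.5*np.sqrt(s)))
ker = lambda s: t**1.5/(3*np.pi*s**1.5) * kv(1/3, 2*t**1.5/(3**1.5*np.sqrt(s)))

x0 = 0.8
val, _ = quad(lambda s: u1(x0, s)*ker(s), 0, np.inf)
print(val, u3(x0))                                     # approximately equal
\end{verbatim}
\end{os}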